Sample records for ensemble averaging procedure

  1. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.
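The ensemble-averaged dynamic procedure summarized above admits a compact sketch: average the numerator and denominator of the dynamic-model coefficient over the ensemble of simultaneous realizations before taking their ratio, so that no spatial averaging is needed. The fields below are random stand-ins for the contracted model tensors (hypothetical, not LES output); only the averaging structure is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the contracted tensors of the dynamic model:
# one (N x N) field per LES realization in the statistical ensemble.
R, N = 16, 64                                 # ensemble size, grid size
num = rng.normal(size=(R, N, N))              # numerator samples, e.g. L_ij M_ij
den = 1.0 + 0.1 * rng.normal(size=(R, N, N))  # denominator samples, e.g. M_ij M_ij

# Single-realization dynamic coefficient: a noisy pointwise ratio.
C_single = num[0] / den[0]

# Ensemble-averaged dynamic procedure: average numerator and denominator
# over the ensemble first, giving a smooth, purely local coefficient.
C_ensemble = num.mean(axis=0) / den.mean(axis=0)

print(C_single.std(), C_ensemble.std())
```

With 16 realizations the pointwise fluctuations of the ensemble-averaged coefficient are strongly reduced relative to the single-realization ratio, consistent with the reported insensitivity beyond roughly 16 members.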

  2. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
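A minimal sketch of the equivalent-ensemble idea above, assuming a synthetic stationary signal in place of a turbulent velocity record: segment one long history into equal sample records, average across the records at each instant, and check that this equivalent-ensemble average is time invariant to within sampling error.

```python
import numpy as np

rng = np.random.default_rng(1)

# One long, stationary record; white noise stands in for a measured
# turbulent velocity signal.
x = rng.normal(size=60_000)

# Equivalent ensemble: K equal, (assumed) statistically independent
# sample records of length N cut from the single history.
K, N = 60, 1000
records = x.reshape(K, N)

# Equivalent-ensemble average at each instant: mean across the K records.
ens_avg = records.mean(axis=0)

# Weak-stationarity heuristic: the ensemble average should be invariant in
# time, i.e. fluctuate only within the sampling error ~ sigma / sqrt(K).
sampling_err = x.std() / np.sqrt(K)
print(ens_avg.std(), sampling_err)
```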

  3. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
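As a small illustration of the mass-density weighting underlying the APDF (with synthetic numbers, not flow data), the Favre average <rho*phi>/<rho> can be compared against the plain ensemble average:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative samples at a single point of a compressible flow: density rho
# and a transported scalar phi that is (by construction) correlated with rho.
# Values are synthetic; only the averaging definitions matter here.
R = 10_000
rho = 1.0 + 0.2 * rng.random(R)              # density samples
phi = 300.0 + 10.0 * rng.normal(size=R) + 50.0 * (rho - 1.1)

# Plain ensemble average vs. the mass-density-weighted (Favre) average that
# underlies the APDF definition: phi_tilde = <rho phi> / <rho>.
phi_mean = phi.mean()
phi_favre = (rho * phi).mean() / rho.mean()

print(phi_mean, phi_favre)  # the two averages differ when phi and rho correlate
```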

  4. Characterizing RNA ensembles from NMR data with kinematic models

    PubMed Central

    Fonseca, Rasmus; Pachov, Dimitar V.; Bernauer, Julie; van den Bedem, Henry

    2014-01-01

    Functional mechanisms of biomolecules often manifest themselves precisely in transient conformational substates. Researchers have long sought to structurally characterize dynamic processes in non-coding RNA, combining experimental data with computer algorithms. However, adequate exploration of conformational space for these highly dynamic molecules, starting from static crystal structures, remains challenging. Here, we report a new conformational sampling procedure, KGSrna, which can efficiently probe the native ensemble of RNA molecules in solution. We found that KGSrna ensembles accurately represent the conformational landscapes of 3D RNA encoded by NMR proton chemical shifts. KGSrna resolves motionally averaged NMR data into structural contributions; when coupled with residual dipolar coupling data, a KGSrna ensemble revealed a previously uncharacterized transient excited state of the HIV-1 trans-activation response element stem–loop. Ensemble-based interpretations of averaged data can aid in formulating and testing dynamic, motion-based hypotheses of functional mechanisms in RNAs with broad implications for RNA engineering and therapeutic intervention. PMID:25114056

  5. An ensemble forecast of the South China Sea monsoon

    NASA Astrophysics Data System (ADS)

    Krishnamurti, T. N.; Tewari, Mukul; Bensman, Ed; Han, Wei; Zhang, Zhan; Lau, William K. M.

    1999-05-01

    This paper presents a generalized ensemble forecast procedure for the tropical latitudes. Here we propose an empirical orthogonal function-based procedure for the definition of a seven-member ensemble. The wind and temperature fields are perturbed over the global tropics. Although the forecasts are made over the global belt with a high-resolution model, the emphasis of this study is on the South China Sea monsoon. The South China Sea domain includes the passage of Tropical Storm Gary, which moved eastward north of the Philippines. The ensemble forecast handled the precipitation of this storm reasonably well. A global model at the resolution of Triangular Truncation 126 waves is used to carry out these seven forecasts. The evaluation of the ensemble of forecasts is carried out via standard root mean square errors of the precipitation and the wind fields. The ensemble average is shown to have higher skill compared to a control experiment, which was a first analysis based on operational data sets over both the global tropical and South China Sea domains. All of these experiments were subjected to physical initialization, which provides a spin-up of the model rain close to that obtained from satellite and gauge-based estimates. The results furthermore show that inherently much higher skill resides in the forecast precipitation fields if they are averaged over area elements of the order of 4° latitude by 4° longitude squares.
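The skill advantage of the ensemble average reported above can be illustrated with a toy verification (synthetic fields, not the T126 forecasts): members with independent errors are averaged, and the ensemble mean attains a lower RMSE than a typical member.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic verification: a "true" field and a seven-member ensemble whose
# members carry independent forecast errors of equal magnitude.
truth = rng.normal(size=(50, 50))
members = truth + 0.5 * rng.normal(size=(7, 50, 50))

def rmse(forecast, observed):
    return float(np.sqrt(np.mean((forecast - observed) ** 2)))

avg_member_rmse = float(np.mean([rmse(m, truth) for m in members]))
ens_mean_rmse = rmse(members.mean(axis=0), truth)

print(avg_member_rmse, ens_mean_rmse)  # the ensemble mean scores lower
```

For independent member errors the ensemble-mean error shrinks roughly as 1/sqrt(ensemble size), which is the statistical basis of the higher skill of the averaged forecast.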

  6. Estimation of Uncertainties in the Global Distance Test (GDT_TS) for CASP Models.

    PubMed

    Li, Wenlin; Schaeffer, R Dustin; Otwinowski, Zbyszek; Grishin, Nick V

    2016-01-01

    The Critical Assessment of techniques for protein Structure Prediction (or CASP) is a community-wide blind test experiment to reveal the best accomplishments of structure modeling. Assessors have been using the Global Distance Test (GDT_TS) measure to quantify prediction performance since CASP3 in 1998. However, identifying significant score differences between close models is difficult because of the lack of uncertainty estimations for this measure. Here, we utilized the atomic fluctuations caused by structure flexibility to estimate the uncertainty of GDT_TS scores. Structures determined by nuclear magnetic resonance are deposited as ensembles of alternative conformers that reflect the structural flexibility, whereas standard X-ray refinement produces the static structure averaged over time and space for the dynamic ensembles. To recapitulate the structurally heterogeneous ensemble in the crystal lattice, we performed time-averaged refinement for X-ray datasets to generate structural ensembles for our GDT_TS uncertainty analysis. Using those generated ensembles, our study demonstrates that the time-averaged refinements produced structure ensembles in better agreement with the experimental datasets than the averaged X-ray structures with B-factors. The uncertainty of the GDT_TS scores, quantified by their standard deviations (SDs), increases for scores lower than 50 and 70, with maximum SDs of 0.3 and 1.23 for X-ray and NMR structures, respectively. We also applied our procedure to the high-accuracy version of the GDT-based score and produced similar results with slightly higher SDs. To facilitate score comparisons by the community, we developed a user-friendly web server that produces structure ensembles for NMR and X-ray structures and is accessible at http://prodata.swmed.edu/SEnCS. Our work helps to identify the significance of GDT_TS score differences, as well as to provide structure ensembles for estimating the SDs of any scores.
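A toy version of the score-uncertainty procedure, with a simplified GDT-style score and synthetic per-atom distances (all values assumed, not from the SEnCS server): score each conformer of a generated ensemble and take the SD over the ensemble.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simplified GDT-style score: mean fraction of atoms within 1, 2, 4 and 8 A
# of the reference, times 100 (a stand-in for the real GDT_TS definition).
def gdt_like(dists):
    return 100.0 * float(np.mean([(dists <= c).mean() for c in (1.0, 2.0, 4.0, 8.0)]))

# Synthetic ensemble of 20 conformers: per-atom distances fluctuate around a
# common profile, mimicking structural flexibility from ensemble refinement.
n_atoms = 150
profile = 3.0 * rng.random(n_atoms)
scores = np.array([gdt_like(np.abs(profile + 0.4 * rng.normal(size=n_atoms)))
                   for _ in range(20)])

# The score uncertainty is quantified as the SD over the ensemble.
print(round(scores.mean(), 1), round(scores.std(), 2))
```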

  7. Using Bayes Model Averaging for Wind Power Forecasts

    NASA Astrophysics Data System (ADS)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might, however, be the case that some sites tend to more accurately reflect the total output of the region, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2], the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power, it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper, the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. 
This solves the problem with longer consecutive periods where the input data does not contain information, but it has the disadvantage of nearly doubling the number of model parameters to be estimated. Second, the BMA procedure is run with group mean wind power as the response variable instead of group mean wind speed. This also solves the problem with longer consecutive periods without information in the input data, but it leaves the power curve to also be estimated from the data. [1] Raftery, A. E., et al. (2005). Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Monthly Weather Review, 133, 1155-1174. [2] Revheim, P. P. and H. G. Beyer (2013). Using Bayesian Model Averaging for wind farm group forecasts. EWEA Wind Power Forecasting Technology Workshop, Rotterdam, 4-5 December 2013. [3] Sloughter, J. M., T. Gneiting and A. E. Raftery (2010). Probabilistic Wind Speed Forecasting Using Ensembles and Bayesian Model Averaging. Journal of the American Statistical Association, Vol. 105, No. 489, 25-35.
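The BMA predictive PDF described in the abstract, a weighted average of member PDFs, can be sketched directly (the weights, spread and forecast values below are assumed numbers; in practice they are estimated by maximum likelihood via EM over a training period):

```python
import math

# Sketch of the BMA predictive density: a weighted average of per-member
# PDFs centred on the member forecasts. Weights, spread and forecasts are
# assumed values for illustration only.
forecasts = [8.2, 9.1, 7.5]   # member forecasts of, e.g., mean wind speed (m/s)
weights = [0.5, 0.3, 0.2]     # posterior model weights (sum to 1)
sigma = 1.0                   # common member spread

def normal_pdf(y, mu, sd):
    return math.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def bma_pdf(y):
    # Weighted mixture of the ensemble members' PDFs.
    return sum(w * normal_pdf(y, f, sigma) for w, f in zip(weights, forecasts))

print(bma_pdf(8.2))
```

For wind speed the mixture components are typically gamma rather than normal densities, as in Sloughter, Gneiting and Raftery [3]; the normal kernel here is a simplifying assumption.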

  8. Efficient and Unbiased Sampling of Biomolecular Systems in the Canonical Ensemble: A Review of Self-Guided Langevin Dynamics

    PubMed Central

    Wu, Xiongwu; Damjanovic, Ana; Brooks, Bernard R.

    2013-01-01

    This review provides a comprehensive description of the self-guided Langevin dynamics (SGLD) and the self-guided molecular dynamics (SGMD) methods and their applications. Example systems are included to provide guidance on optimal application of these methods in simulation studies. SGMD/SGLD has enhanced ability to overcome energy barriers and accelerate rare events to affordable time scales. It has been demonstrated that with moderate parameters, SGLD can routinely cross energy barriers of 20 kT at a rate that molecular dynamics (MD) or Langevin dynamics (LD) crosses 10 kT barriers. The core of these methods is the use of local averages of forces and momenta in a direct manner that can preserve the canonical ensemble. The use of such local averages results in methods where low frequency motion “borrows” energy from high frequency degrees of freedom when a barrier is approached and then returns that excess energy after a barrier is crossed. This self-guiding effect also results in an accelerated diffusion to enhance conformational sampling efficiency. The resulting ensemble with SGLD deviates in a small way from the canonical ensemble, and that deviation can be corrected with either an on-the-fly or a post processing reweighting procedure that provides an excellent canonical ensemble for systems with a limited number of accelerated degrees of freedom. Since reweighting procedures are generally not size extensive, a newer method, SGLDfp, uses local averages of both momenta and forces to preserve the ensemble without reweighting. The SGLDfp approach is size extensive and can be used to accelerate low frequency motion in large systems, or in systems with explicit solvent where solvent diffusion is also to be enhanced. Since these methods are direct and straightforward, they can be used in conjunction with many other sampling methods or free energy methods by simply replacing the integration of degrees of freedom that are normally sampled by MD or LD. PMID:23913991
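A minimal 1-D sketch of the self-guiding idea, assuming illustrative parameters rather than the published SGLD settings: an exponential moving (local) average of the momentum is fed back as a guiding force on top of ordinary Langevin dynamics in a harmonic well.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative parameters (assumed, not the SGLD values used in practice).
dt, gamma, kT, mass = 0.01, 1.0, 1.0, 1.0
t_loc = 0.5      # local-averaging time
lam = 0.2        # guiding factor

x, p, p_avg = 0.0, 0.0, 0.0
xs = []
for _ in range(100_000):
    f = -x                               # force from U(x) = x^2 / 2
    p_avg += (dt / t_loc) * (p - p_avg)  # local average of the momentum
    noise = np.sqrt(2.0 * gamma * kT * mass * dt) * rng.normal()
    p += (f + lam * gamma * p_avg - gamma * p) * dt + noise
    x += (p / mass) * dt
    xs.append(x)

xs = np.asarray(xs)
print(xs.mean(), xs.var())  # near 0; variance moderately above kT for weak guiding
```

The weak guiding term pumps energy into the slow (locally averaged) motion, which is the barrier-crossing enhancement described above; the resulting small deviation from the canonical ensemble is what the reweighting or force-matching (SGLDfp) variants correct.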

  9. On the v-representability of ensemble densities of electron systems

    NASA Astrophysics Data System (ADS)

    Gonis, A.; Däne, M.

    2018-05-01

    Analogously to the case at zero temperature, where the density of the ground state of an interacting many-particle system determines uniquely (within an arbitrary additive constant) the external potential acting on the system, the thermal average of the density over an ensemble defined by the Boltzmann distribution at the minimum of the thermodynamic potential, or the free energy, determines the external potential uniquely (and not just modulo a constant) acting on a system described by this thermodynamic potential or free energy. The paper describes a formal procedure that generates the domain of a constrained search over general ensembles (at zero or elevated temperatures) that lead to a given density, including as a special case a density thermally averaged at a given temperature, and in the case of a v-representable density determines the external potential leading to the ensemble density. As an immediate consequence of the general formalism, the concept of v-representability is extended beyond the hitherto discussed case of ground state densities to encompass excited states as well. Specific application to thermally averaged densities solves the v-representability problem in connection with the Mermin functional in a manner analogous to that in which this problem was recently settled with respect to the Hohenberg and Kohn functional. The main formalism is illustrated with numerical results for ensembles of one-dimensional, non-interacting systems of particles under a harmonic potential.
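The thermally averaged densities used in the paper's numerical illustrations can be sketched for the non-interacting harmonic case (units hbar = m = omega = 1 assumed): Boltzmann-average |psi_n|^2 over the oscillator eigenstates.

```python
import numpy as np
from math import factorial, pi, sqrt, exp
from numpy.polynomial.hermite import hermval

# Thermally averaged density of one particle in a 1-D harmonic well:
# rho(x) = sum_n w_n |psi_n(x)|^2 with Boltzmann weights w_n.
x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]

def density_n(n):
    """Normalized |psi_n(x)|^2 for the n-th oscillator eigenstate."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    psi = hermval(x, coeffs) * np.exp(-x**2 / 2.0)
    psi /= sqrt(sqrt(pi) * 2.0**n * factorial(n))
    return psi**2

beta = 1.0                                   # inverse temperature (assumed)
energies = [n + 0.5 for n in range(15)]      # E_n = n + 1/2
weights = np.array([exp(-beta * e) for e in energies])
weights /= weights.sum()

# Ensemble (thermally) averaged density at temperature 1/beta.
rho = sum(w * density_n(n) for n, w in enumerate(weights))

print((rho * dx).sum())  # integrates to one particle
```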

  10. On the v-representability of ensemble densities of electron systems

    DOE PAGES

    Gonis, A.; Dane, M.

    2017-12-30

    Analogously to the case at zero temperature, where the density of the ground state of an interacting many-particle system determines uniquely (within an arbitrary additive constant) the external potential acting on the system, the thermal average of the density over an ensemble defined by the Boltzmann distribution at the minimum of the thermodynamic potential, or the free energy, determines the external potential uniquely (and not just modulo a constant) acting on a system described by this thermodynamic potential or free energy. The study describes a formal procedure that generates the domain of a constrained search over general ensembles (at zero or elevated temperatures) that lead to a given density, including as a special case a density thermally averaged at a given temperature, and in the case of a v-representable density determines the external potential leading to the ensemble density. As an immediate consequence of the general formalism, the concept of v-representability is extended beyond the hitherto discussed case of ground state densities to encompass excited states as well. Specific application to thermally averaged densities solves the v-representability problem in connection with the Mermin functional in a manner analogous to that in which this problem was recently settled with respect to the Hohenberg and Kohn functional. Finally, the main formalism is illustrated with numerical results for ensembles of one-dimensional, non-interacting systems of particles under a harmonic potential.

  11. Decadal climate predictions improved by ocean ensemble dispersion filtering

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.

    Plain Language Summary: Decadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. The ocean's memory, due to its large heat capacity, holds substantial potential skill. In recent years, more precise initialization techniques of coupled Earth system models (including atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: applying slightly perturbed predictions to trigger the famous butterfly effect results in an ensemble. 
Evaluating the whole ensemble with its ensemble average, rather than a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Our study shows that the climate prediction skill of an Earth system model can be improved by shifting the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, which applies the average during the model run and is called the ensemble dispersion filter, yields more accurate results than the standard prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution.

  12. On averaging aspect ratios and distortion parameters over ice crystal population ensembles for estimating effective scattering asymmetry parameters

    PubMed Central

    van Diedenhoven, Bastiaan; Ackerman, Andrew S.; Fridlind, Ann M.; Cairns, Brian

    2017-01-01

    The use of ensemble-average values of aspect ratio and distortion parameter of hexagonal ice prisms for the estimation of ensemble-average scattering asymmetry parameters is evaluated. Using crystal aspect ratios greater than unity generally leads to ensemble-average values of aspect ratio that are inconsistent with the ensemble-average asymmetry parameters. 
When a definition of aspect ratio is used that limits the aspect ratio to below unity (α≤1) for both hexagonal plates and columns, the effective asymmetry parameters calculated using ensemble-average aspect ratios are generally consistent with ensemble-average asymmetry parameters, especially if aspect ratios are geometrically averaged. Ensemble-average distortion parameters generally also yield effective asymmetry parameters that are largely consistent with ensemble-average asymmetry parameters. In the case of mixtures of plates and columns, it is recommended to geometrically average the α≤1 aspect ratios and to subsequently calculate the effective asymmetry parameter using a column or plate geometry when the contribution by columns to a given mixture’s total projected area is greater or lower than 50%, respectively. In addition, we show that ensemble-average aspect ratios, distortion parameters and asymmetry parameters can generally be retrieved accurately from simulated multi-directional polarization measurements based on mixtures of varying columns and plates. However, such retrievals tend to be somewhat biased toward yielding column-like aspect ratios. Furthermore, generally large retrieval errors can occur for mixtures with approximately equal contributions of columns and plates and for ensembles with strong contributions of thin plates. 
    PMID:28983127

  13. Bayesian Ensemble Trees (BET) for Clustering and Prediction in Heterogeneous Data

    PubMed Central

    Duan, Leo L.; Clancy, John P.; Szczesniak, Rhonda D.

    2016-01-01

    We propose a novel "tree-averaging" model that utilizes the ensemble of classification and regression trees (CART). Each constituent tree is estimated with a subset of similar data. We treat this grouping of subsets as Bayesian Ensemble Trees (BET) and model them as a Dirichlet process. We show that BET determines the optimal number of trees by adapting to the data heterogeneity. Compared with the other ensemble methods, BET requires much fewer trees and shows equivalent prediction accuracy using weighted averaging. 
Moreover, each tree in BET provides a variable selection criterion and an interpretation for each subset. We developed an efficient estimating procedure with improved estimation strategies in both CART and mixture models. We demonstrate these advantages of BET with simulations and illustrate the approach with a real-world data example involving regression of lung function measurements obtained from patients with cystic fibrosis. Supplemental materials are available online. PMID:27524872

  14. On estimating attenuation from the amplitude of the spectrally whitened ambient seismic field

    NASA Astrophysics Data System (ADS)

    Weemstra, Cornelis; Westra, Willem; Snieder, Roel; Boschi, Lapo

    2014-06-01

    Measuring attenuation on the basis of interferometric, receiver-receiver surface waves is a non-trivial task: the amplitude, more than the phase, of ensemble-averaged cross-correlations is strongly affected by non-uniformities in 
the ambient wavefield. In addition, ambient noise data are typically pre-processed in ways that affect the amplitude itself. Some authors have recently attempted to measure attenuation in receiver-receiver cross-correlations obtained after the usual pre-processing of seismic ambient-noise records, including, most notably, spectral whitening. Spectral whitening replaces the cross-spectrum with a unit amplitude spectrum. It is generally assumed that cross-terms have cancelled each other prior to spectral whitening. Cross-terms are peaks in the cross-correlation due to simultaneously acting noise sources, that is, spurious traveltime delays due to constructive interference of signal coming from different sources. Cancellation of these cross-terms is a requirement for the successful retrieval of interferometric receiver-receiver signal and results from ensemble averaging. In practice, ensemble averaging is replaced by integrating over sufficiently long time or averaging over several cross-correlation windows. Contrary to the general assumption, we show in this study that cross-terms are not required to cancel each other prior to spectral whitening, but may also cancel each other after the whitening procedure. Specifically, we derive an analytic approximation for the amplitude difference associated with the reversed order of cancellation and normalization. Our approximation shows that an amplitude decrease results from the reversed order. 
This decrease is predominantly non-linear at small receiver-receiver distances: at distances smaller than approximately two wavelengths, whitening prior to ensemble averaging causes a significantly stronger decay of the cross-spectrum.

  15. Individual differences in ensemble perception reveal multiple, independent levels of ensemble representation.

    PubMed

    Haberman, Jason; Brady, Timothy F; Alvarez, George A

    2015-04-01

    Ensemble perception, including the ability to "see the average" from a group of items, operates in numerous feature domains (size, orientation, speed, facial expression, etc.). Although the ubiquity of ensemble representations is well established, the large-scale cognitive architecture of this process remains poorly defined. We address this using an individual differences approach. In a series of experiments, observers saw groups of objects and reported either a single item from the group or the average of the entire group. High-level ensemble representations (e.g., average facial expression) showed complete independence from low-level ensemble representations (e.g., average orientation). In contrast, low-level ensemble representations (e.g., orientation and color) were correlated with each other, but not with high-level ensemble representations (e.g., facial expression and person identity). These results suggest that there is not a single domain-general ensemble mechanism, and that the relationship among various ensemble representations depends on how proximal they are in representational space. 
    (c) 2015 APA, all rights reserved.

  16. Upgrades to the REA method for producing probabilistic climate change projections

    NASA Astrophysics Data System (ADS)

    Xu, Ying; Gao, Xuejie; Giorgi, Filippo

    2010-05-01

    We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. 
KEY WORDS: REA method, Climate change, CMIP3</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008JCoPh.227.6249C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008JCoPh.227.6249C"><span>Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.</p> <p>2008-06-01</p> <p>An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called DSMC rapid ensemble averaging method (DREAM) is developed to reduce the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near equilibrium flows (DREAM-I) or instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over the single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. 
The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by a factor of 2.5-3.3, based on the limited number of cases in the present study.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013PhRvE..87e2713K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013PhRvE..87e2713K"><span>Improved estimation of anomalous diffusion exponents in single-particle tracking experiments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kepten, Eldad; Bronshtein, Irena; Garini, Yuval</p> <p>2013-05-01</p> <p>The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short-time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. 
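The time-average-then-ensemble-average procedure described here can be sketched as follows. Ordinary Brownian trajectories are used as a hypothetical stand-in (true exponent 1); the paper's corrections for measurement noise and heterogeneity are not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_averaged_msd(traj, max_lag):
    """Time-averaged MSD of one trajectory for lags 1..max_lag."""
    return np.array([np.mean((traj[lag:] - traj[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

# Ensemble of ordinary Brownian trajectories (anomalous exponent alpha = 1);
# a subdiffusive walker would need e.g. fractional Brownian motion instead.
trajs = np.cumsum(rng.normal(size=(200, 1000)), axis=1)
lags = np.arange(1, 51)
ta_msd = np.array([time_averaged_msd(t, 50) for t in trajs])
ea_ta_msd = ta_msd.mean(axis=0)   # ensemble average of the time averages

# Estimate the anomalous exponent from the log-log slope: MSD ~ t**alpha.
alpha = np.polyfit(np.log(lags), np.log(ea_ta_msd), 1)[0]
```

For Brownian motion the fitted exponent should come out close to 1; measurement noise would bias it at short lags, which is the first systematic error the abstract discusses.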
The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_1 --> <div id="page_2" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="21"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMNH31B0224Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMNH31B0224Z"><span>Data Assimilation by Ensemble Kalman Filter during One-Dimensional Nonlinear Consolidation in Randomly Heterogeneous Highly Compressible Aquitards</span></a></p> <p><a target="_blank" rel="noopener noreferrer" 
href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zapata Norberto, B.; Morales-Casique, E.; Herrera, G. S.</p> <p>2017-12-01</p> <p>Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation where the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. We explore the effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards by means of 1-D Monte Carlo numerical simulations. A total of 2000 realizations are generated for each of the following parameters: hydraulic conductivity (K), compression index (Cc) and void ratio (e). The correlation structure, the mean and the variance for each parameter were obtained from a literature review of field studies in the lacustrine sediments of Mexico City. The results indicate that among the parameters considered, random K has the largest effect on the ensemble average behavior of the system. Random K leads to the largest variance (and therefore largest uncertainty) of total settlement, groundwater flux and time to reach steady state conditions. We further propose a data assimilation scheme by means of an ensemble Kalman filter to estimate the ensemble mean distribution of K, pore-pressure and total settlement. We consider the case where pore-pressure measurements are available at given time intervals. We test our approach by generating a 1-D realization of K with exponential spatial correlation, and solving the nonlinear flow and consolidation problem. These results are taken as our "true" solution. We take pore-pressure "measurements" at different times from this "true" solution. 
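A generic stochastic-EnKF analysis step of the kind used in such a scheme can be sketched as follows. This is a toy linear observation operator on a three-variable state, not the authors' nonlinear consolidation model.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, H, y_obs, obs_var):
    """Stochastic EnKF analysis step.
    ensemble: (n_state, n_members) forecast states
    H: (n_obs, n_state) linear observation operator
    y_obs: (n_obs,) observations (e.g. pore pressures)"""
    n_obs, n_mem = H.shape[0], ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)              # obs-space anomalies
    P_yy = HA @ HA.T / (n_mem - 1) + obs_var * np.eye(n_obs)
    K = (X @ HA.T / (n_mem - 1)) @ np.linalg.inv(P_yy)    # Kalman gain
    # Perturbed observations, one draw per member.
    perturbed = y_obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (n_obs, n_mem))
    return ensemble + K @ (perturbed - HX)

# Toy example: three state variables, only the first one is observed.
ens = rng.normal(5.0, 2.0, size=(3, 100))
H = np.array([[1.0, 0.0, 0.0]])
analysis = enkf_update(ens, H, np.array([4.0]), obs_var=0.1)
```

Each assimilation cycle pulls the ensemble mean toward the observation and shrinks the ensemble spread, which is how sequential pore-pressure measurements constrain the estimated fields.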
The ensemble Kalman filter method is then employed to estimate the ensemble mean distribution of K, pore-pressure and total settlement based on the sequential assimilation of these pore-pressure measurements. The ensemble-mean estimates from this procedure closely approximate those from the "true" solution. This procedure can be easily extended to other random variables such as compression index and void ratio.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22525639-cosmological-ensemble-directional-averages-observables','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22525639-cosmological-ensemble-directional-averages-observables"><span>Cosmological ensemble and directional averages of observables</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Bonvin, Camille; Clarkson, Chris; Durrer, Ruth</p> <p></p> <p>We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. 
For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1185896-ensemble-sampling-vs-time-sampling-molecular-dynamics-simulations-thermal-conductivity','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1185896-ensemble-sampling-vs-time-sampling-molecular-dynamics-simulations-thermal-conductivity"><span>Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Gordiz, Kiarash; Singh, David J.; Henry, Asegun</p> <p>2015-01-29</p> <p>In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first-principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. 
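The sequential-versus-parallel contrast drawn here can be illustrated with a toy correlated "observable". An AR(1) series stands in for MD output under stated assumptions; this is not the report's thermal-conductivity workflow.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1(n, rho, rng):
    """Correlated toy 'observable' time series (AR(1) stand-in for MD output)."""
    x = np.empty(n)
    x[0] = rng.normal()
    for i in range(1, n):
        x[i] = rho * x[i - 1] + np.sqrt(1.0 - rho ** 2) * rng.normal()
    return x

# Time sampling: one long trajectory, evaluated step by step (sequential).
time_avg = ar1(40000, 0.95, rng).mean()

# Ensemble sampling: many short, independent trajectories with the same
# total number of samples; each one could run on a separate processor.
ens_avg = np.mean([ar1(400, 0.95, rng).mean() for _ in range(100)])
```

Both estimators converge to the same mean (zero here); the ensemble version spreads the sequential cost across independent runs, which is the parallelism argument made in the abstract.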
On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times at similar overall computational effort.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JChPh.148l3329Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JChPh.148l3329Z"><span>Inferring properties of disordered chains from FRET transfer efficiencies</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zheng, Wenwei; Zerze, Gül H.; Borgia, Alessandro; Mittal, Jeetain; Schuler, Benjamin; Best, Robert B.</p> <p>2018-03-01</p> <p>Förster resonance energy transfer (FRET) is a powerful tool for elucidating both structural and dynamic properties of unfolded or disordered biomolecules, especially in single-molecule experiments. However, the key observables, namely, the mean transfer efficiency and fluorescence lifetimes of the donor and acceptor chromophores, are averaged over a broad distribution of donor-acceptor distances. The inferred average properties of the ensemble therefore depend on the form of the model distribution chosen to describe the distance, as has been widely recognized. In addition, while the distribution for one type of polymer model may be appropriate for a chain under a given set of physico-chemical conditions, it may not be suitable for the same chain in a different environment so that even an apparently consistent application of the same model over all conditions may distort the apparent changes in chain dimensions with variation of temperature or solution composition. 
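The sensitivity of the ensemble-averaged efficiency to the assumed distance distribution can be illustrated with a minimal Monte Carlo sketch. A Gaussian-chain distribution and all parameter values here are hypothetical, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_fret_efficiency(scale, R0=5.0, n=200000):
    """Ensemble-averaged transfer efficiency <E> = <1 / (1 + (r/R0)**6)>
    for a Gaussian-chain donor-acceptor distance distribution (r is the
    norm of a 3-D Gaussian with per-axis standard deviation `scale`).
    Real analyses integrate P(r)E(r) dr; Monte Carlo is used for brevity."""
    r = np.linalg.norm(rng.normal(0.0, scale, size=(n, 3)), axis=1)
    return float(np.mean(1.0 / (1.0 + (r / R0) ** 6)))

# A more expanded chain (larger scale) yields a lower mean efficiency,
# so the distribution assumed when inverting <E> matters.
e_compact = mean_fret_efficiency(2.0)
e_expanded = mean_fret_efficiency(4.0)
```

Because different assumed P(r) map the same measured efficiency to different chain dimensions, letting the scaling exponent vary with solution conditions, as the paper proposes, changes the inferred radius of gyration.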
Here, we present an alternative and straightforward approach to determining ensemble properties from FRET data, in which the polymer scaling exponent is allowed to vary with solution conditions. In its simplest form, it requires either the mean FRET efficiency or fluorescence lifetime information. In order to test the accuracy of the method, we have utilized both synthetic FRET data from implicit and explicit solvent simulations for 30 different protein sequences, and experimental single-molecule FRET data for an intrinsically disordered and a denatured protein. In all cases, we find that the inferred radii of gyration are within 10% of the true values, thus providing higher accuracy than simpler polymer models. In addition, the scaling exponents obtained by our procedure are in good agreement with those determined directly from the molecular ensemble. Our approach can in principle be generalized to treating other ensemble-averaged functions of intramolecular distances from experimental data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20160007028','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20160007028"><span>Multi-Model Ensemble Wake Vortex Prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.</p> <p>2015-01-01</p> <p>Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. 
The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25510166','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25510166"><span>Reduced set averaging of face identity in children and adolescents with autism.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Rhodes, Gillian; Neumann, Markus F; Ewing, Louise; Palermo, Romina</p> <p>2015-01-01</p> <p>Individuals with autism have difficulty abstracting and updating average representations from their diet of faces. These averages function as perceptual norms for coding faces, and poorly calibrated norms may contribute to face recognition difficulties in autism. Another kind of average, known as an ensemble representation, can be abstracted from briefly glimpsed sets of faces. Here we show for the first time that children and adolescents with autism also have difficulty abstracting ensemble representations from sets of faces. On each trial, participants saw a study set of four identities and then indicated whether a test face was present. The test face could be a set average or a set identity, from either the study set or another set. Recognition of set averages was reduced in participants with autism, relative to age- and ability-matched typically developing participants. This difference, which actually represents more accurate responding, indicates weaker set averaging and thus weaker ensemble representations of face identity in autism. Our finding adds to the growing evidence for atypical abstraction of average face representations from experience in autism. 
Weak ensemble representations may have negative consequences for face processing in autism, given the importance of ensemble representations in dealing with processing capacity limitations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017IJT....38..149S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017IJT....38..149S"><span>Establishment of a New National Reference Ensemble of Water Triple Point Cells</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Senn, Remo</p> <p>2017-10-01</p> <p>The results of the Bilateral Comparison EURAMET.T-K3.5 (w/VSL, The Netherlands), conducted to link Switzerland's ITS-90 realization (Ar to Al) to the latest key comparisons, gave strong indications of a discrepancy in the realization of the triple point of water. Given the cells' age of about twenty years, it was decided to replace the complete reference ensemble with new "state-of-the-art" cells. Three new water triple point cells from three different suppliers were purchased, as well as a new maintenance bath for an additional improvement of the realization. Measurements were taken in several loops, each cell of both ensembles was intercompared, and the deviations and characteristics were determined. The measurements show a significantly lower average value for the old ensemble, by 0.59 ± 0.25 mK (k=2), in comparison with the new one. Moreover, the old cells behave very unstably, with a drift downward during the realization of the triple point. Based on these results the impact of the new ensemble on the ITS-90 realization from Ar to Al was calculated and set in the context of past calibrations and their associated uncertainties. 
This paper presents the instrumentation, cells, measurement procedure, results, uncertainties and impact of the new national reference ensemble of water triple point cells on the current ITS-90 realization in Switzerland.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012JGRD..117.5309L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012JGRD..117.5309L"><span>Simultaneous assimilation of AIRS Xco2 and meteorological observations in a carbon climate model with an ensemble Kalman filter</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Junjie; Fung, Inez; Kalnay, Eugenia; Kang, Ji-Sun; Olsen, Edward T.; Chen, Luke</p> <p>2012-03-01</p> <p>This study is our first step toward the generation of 6-hourly 3-D CO2 fields that can be used to validate CO2 forecast models by combining CO2 observations from multiple sources using ensemble Kalman filtering. We discuss a procedure to assimilate Atmospheric Infrared Sounder (AIRS) column-averaged dry-air mole fraction of CO2 (Xco2) in conjunction with meteorological observations with the coupled Local Ensemble Transform Kalman Filter (LETKF)-Community Atmospheric Model version 3.5. We examine the impact of assimilating AIRS Xco2 observations on CO2 fields by comparing the results from the AIRS-run, which assimilates both AIRS Xco2 and meteorological observations, to those from the meteor-run, which only assimilates meteorological observations. We find that assimilating AIRS Xco2 results in a surface CO2 seasonal cycle and N-S surface gradient that are closer to the observations. When taking into account the CO2 uncertainty estimation from the LETKF, the CO2 analysis brackets the observed seasonal cycle. 
Verification against independent aircraft observations shows that assimilating AIRS Xco2 improves the accuracy of the CO2 vertical profiles by about 0.5-2 ppm depending on location and altitude. The results show that the CO2 analysis ensemble spread in AIRS Xco2 space is between 0.5 and 2 ppm, and the CO2 analysis ensemble spread around the peak level of the averaging kernels is between 1 and 2 ppm. This uncertainty estimation is consistent with the magnitude of the CO2 analysis error verified against AIRS Xco2 observations and the independent aircraft CO2 vertical profiles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.A51I0194E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.A51I0194E"><span>Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Erfanian, A.; Fomenko, L.; Wang, G.</p> <p>2016-12-01</p> <p>The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates. It has been a primary reference for drawing conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at a tremendous computational cost, which is especially prohibitive for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling that achieves a similar level of bias reduction at a fraction of the cost compared with the conventional MME approach. 
The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs to conduct a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions with the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4748182','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4748182"><span>Unveiling Inherent Degeneracies in Determining Population-weighted Ensembles of Inter-domain Orientational Distributions Using NMR Residual Dipolar Couplings: Application to RNA Helix Junction Helix Motifs</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yang, Shan; Al-Hashimi, Hashim M.</p> <p>2016-01-01</p> <p>A growing number of studies employ time-averaged experimental data to determine dynamic ensembles of biomolecules. 
While it is well known that different ensembles can satisfy experimental data to within error, the extent and nature of these degeneracies, and their impact on the accuracy of the ensemble determination, remain poorly understood. Here, we use simulations and a recently introduced metric for assessing ensemble similarity to explore degeneracies in determining ensembles using NMR residual dipolar couplings (RDCs) with specific application to A-form helices in RNA. Various target ensembles were constructed representing different domain-domain orientational distributions that are confined to a topologically restricted (<10%) conformational space. Five independent sets of ensemble averaged RDCs were then computed for each target ensemble and a ‘sample and select’ scheme was used to identify degenerate ensembles that satisfy RDCs to within experimental uncertainty. We find that ensembles of different sizes, which can differ significantly from the target ensemble (by as much as ΣΩ ~ 0.4, where ΣΩ varies between 0 and 1 for maximum and minimum ensemble similarity, respectively), can satisfy the ensemble averaged RDCs. These deviations increase with the number of unique conformers and breadth of the target distribution, and result in significant uncertainty in determining conformational entropy (as large as 5 kcal/mol at T = 298 K). Nevertheless, the RDC-degenerate ensembles are biased towards populated regions of the target ensemble, and capture other essential features of the distribution, including the shape. Our results identify ensemble size as a major source of uncertainty in determining ensembles and suggest that NMR interactions such as RDCs and spin relaxation, on their own, do not carry the necessary information needed to determine conformational entropy at a useful level of precision. The framework introduced here provides a general approach for exploring degeneracies in ensemble determination for different types of experimental data. 
PMID:26131693</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140011180','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140011180"><span>Hybrid Data Assimilation without Ensemble Filtering</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Todling, Ricardo; Akkraoui, Amal El</p> <p>2014-01-01</p> <p>The Global Modeling and Assimilation Office is preparing to upgrade its three-dimensional variational system to a hybrid approach in which the ensemble is generated using a square-root ensemble Kalman filter (EnKF) and the variational problem is solved using the Grid-point Statistical Interpolation system. As in most EnKF applications, we found it necessary to employ a combination of multiplicative and additive inflations to compensate for sampling and modeling errors, respectively, and to maintain the small-member ensemble solution close to the variational solution; we also found it necessary to re-center the members of the ensemble about the variational analysis. During tuning of the filter we have found re-centering and additive inflation to play a considerably larger role than expected, particularly in a dual-resolution context when the variational analysis is run at higher resolution than the ensemble. This led us to consider a hybrid strategy in which the members of the ensemble are generated by simply converting the variational analysis to the resolution of the ensemble and applying additive inflation, thus bypassing the EnKF. 
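The filter-free strategy described here amounts to perturbing the (coarsened) variational analysis and re-centering. A minimal sketch follows; the Gaussian additive-inflation amplitude is a hypothetical stand-in for draws from a climatological perturbation library.

```python
import numpy as np

rng = np.random.default_rng(4)

def filter_free_ensemble(var_analysis, n_members, additive_std):
    """Generate ensemble members by additively perturbing the variational
    analysis, bypassing an EnKF. `additive_std` is a hypothetical stand-in
    for the amplitude of climatological additive-inflation perturbations."""
    perturbations = rng.normal(0.0, additive_std,
                               size=(n_members, var_analysis.size))
    members = var_analysis + perturbations
    # Re-center so the ensemble mean matches the variational analysis exactly.
    members += var_analysis - members.mean(axis=0)
    return members

analysis = np.array([288.0, 290.5, 285.2])   # e.g. temperatures on a tiny grid
ens = filter_free_ensemble(analysis, n_members=32, additive_std=0.5)
```

By construction the ensemble is centered on the variational analysis with spread set entirely by the additive perturbations, which is why re-centering and additive inflation carry the weight in this scheme.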
Comparisons of this so-called filter-free hybrid procedure with an EnKF-based hybrid procedure and a traditional non-hybrid control scheme show that both hybrid strategies provide equally significant improvement over the control; more interestingly, the filter-free procedure was found to give qualitatively similar results to the EnKF-based procedure.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27874263','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27874263"><span>Ensemble perception of color in autistic adults.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Maule, John; Stanworth, Kirstie; Pellicano, Elizabeth; Franklin, Anna</p> <p>2017-05-01</p> <p>Dominant accounts of visual processing in autism posit that autistic individuals have enhanced access to details of scenes [e.g., weak central coherence] which is reflected in a general bias toward local processing. Furthermore, the attenuated priors account of autism predicts that the updating and use of summary representations is reduced in autism. Ensemble perception describes the extraction of global summary statistics of a visual feature from a heterogeneous set (e.g., of faces, sizes, colors), often in the absence of local item representation. The present study investigated ensemble perception in autistic adults using a rapidly presented (500 msec) ensemble of four, eight, or sixteen elements representing four different colors. We predicted that autistic individuals would be less accurate when averaging the ensembles, but more accurate in recognizing individual ensemble colors. The results were consistent with the predictions. Averaging was impaired in autism, but only when ensembles contained four elements. Ensembles of eight or sixteen elements were averaged equally accurately across groups. 
The autistic group also showed a corresponding advantage in rejecting colors that were not originally seen in the ensemble. The results demonstrate the local processing bias in autism, but also suggest that the global perceptual averaging mechanism may be compromised under some conditions. The theoretical implications of the findings and future avenues for research on summary statistics in autism are discussed. Autism Res 2017, 10: 839-851. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5484362','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5484362"><span>Ensemble perception of color in autistic adults</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Stanworth, Kirstie; Pellicano, Elizabeth; Franklin, Anna</p> <p>2016-01-01</p> <p>Dominant accounts of visual processing in autism posit that autistic individuals have an enhanced access to details of scenes [e.g., weak central coherence] which is reflected in a general bias toward local processing. Furthermore, the attenuated priors account of autism predicts that the updating and use of summary representations is reduced in autism. Ensemble perception describes the extraction of global summary statistics of a visual feature from a heterogeneous set (e.g., of faces, sizes, colors), often in the absence of local item representation. The present study investigated ensemble perception in autistic adults using a rapidly presented (500 msec) ensemble of four, eight, or sixteen elements representing four different colors. 
We predicted that autistic individuals would be less accurate when averaging the ensembles, but more accurate in recognizing individual ensemble colors. The results were consistent with the predictions. Averaging was impaired in autism, but only when ensembles contained four elements. Ensembles of eight or sixteen elements were averaged equally accurately across groups. The autistic group also showed a corresponding advantage in rejecting colors that were not originally seen in the ensemble. The results demonstrate the local processing bias in autism, but also suggest that the global perceptual averaging mechanism may be compromised under some conditions. The theoretical implications of the findings and future avenues for research on summary statistics in autism are discussed. Autism Res 2017, 10: 839–851. © 2016 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research PMID:27874263</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29725108','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29725108"><span>Fitting a function to time-dependent ensemble averaged data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias</p> <p>2018-05-03</p> <p>Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. 
We show that the only available general-purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20170009122&hterms=vortex&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dvortex','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20170009122&hterms=vortex&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dvortex"><span>Multimodel Ensemble Methods for Prediction of Wake-Vortex Transport and Decay</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Korner, Stephan; Ahmad, Nashat N.; Holzapfel, Frank; VanValkenburg, Randal L.</p> <p>2017-01-01</p> <p>Several multimodel ensemble methods are selected and further developed to improve the deterministic and probabilistic prediction skills of individual wake-vortex transport and decay models. 
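The covariance-aware ("sandwich") error estimate advocated in the WLS-ICE entry above can be sketched in a few lines. This is a minimal illustration, not the authors' released software: the design matrix (a single-parameter linear fit of an MSD-like curve), the Brownian-style covariance model, and the diffusion constant are all invented for the example.

```python
import numpy as np

# Fit y(t) = theta * t by weighted least squares, then estimate the error on
# theta two ways: the naive formula that ignores off-diagonal correlations,
# and a sandwich formula that uses the full covariance matrix of the data.
rng = np.random.default_rng(0)
t = np.arange(1.0, 21.0)                    # lag times (assumed grid)
X = t[:, None]                              # design matrix, one parameter

# Assumed covariance of the averaged data: correlated, growing with lag
C = 0.01 * np.minimum.outer(t, t)           # Brownian-like covariance (made up)
y = 1.0 * t + rng.multivariate_normal(np.zeros_like(t), C)

W = np.diag(1.0 / np.diag(C))               # weights from the variances only
A = np.linalg.inv(X.T @ W @ X)
theta = A @ X.T @ W @ y                     # weighted least-squares estimate

naive_var = A                               # correct only if C were diagonal
sandwich_var = A @ X.T @ W @ C @ W @ X @ A  # rigorously includes correlations

print("estimate:", theta[0])
print("naive std:", np.sqrt(naive_var[0, 0]))
print("sandwich std:", np.sqrt(sandwich_var[0, 0]))
```

With positively correlated residuals, as here, the naive error bar is too optimistic and the sandwich estimate is larger.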
The different multimodel ensemble methods are introduced, and their suitability for wake applications is demonstrated. The selected methods include direct ensemble averaging, Bayesian model averaging, and Monte Carlo simulation. The different methodologies are evaluated employing data from wake-vortex field measurement campaigns conducted in the United States and Germany.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvE..96f2122M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvE..96f2122M"><span>Scale-invariant Green-Kubo relation for time-averaged diffusivity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Meyer, Philipp; Barkai, Eli; Kantz, Holger</p> <p>2017-12-01</p> <p>In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacement are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by 〈δ²〉 ∼ 2D_ν t^β Δ^(ν-β), where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements, 〈x²〉 ∼ t^ν, while β ≥ -1 marks the growth or decline of the kinetic energy, 〈v²〉 ∼ t^β. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function, and similarly for the transport constant D_ν. 
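As a generic illustration of the quantity discussed above (not code from the paper), the time-averaged mean-squared displacement of a single trajectory can be computed directly from its definition; the ballistic test trajectory and its parameters are invented for the example.

```python
import numpy as np

def tamsd(x, lags):
    """Time-averaged MSD of one trajectory sampled at unit intervals:
    delta2(Delta) = average over t of (x[t + Delta] - x[t])**2."""
    return np.array([np.mean((x[d:] - x[:-d]) ** 2) for d in lags])

# Sanity check with a ballistic trajectory x(t) = v * t, for which the
# time-averaged MSD is exactly (v * Delta)**2 at every lag.
v = 0.5
x = v * np.arange(1000)
lags = np.arange(1, 11)
print(tamsd(x, lags))
```

For anomalous-diffusion data one would average this estimator over many trajectories and compare its Δ-scaling with the ensemble-averaged 〈x²〉 ∼ t^ν.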
We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β = 0, the time scalings of 〈δ²〉 and 〈x²〉 are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EGUGA..16.6931B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EGUGA..16.6931B"><span>Creating "Intelligent" Ensemble Averages Using a Process-Based Framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Baker, Noel; Taylor, Patrick</p> <p>2014-05-01</p> <p>The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. 
The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to advise better methods for ensemble averaging models and create better climate predictions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014GMDD....7.7525M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014GMDD....7.7525M"><span>Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.</p> <p>2014-11-01</p> <p>Irrigation agriculture plays an increasingly important role in food supply. 
Many evapotranspiration models are used today to estimate the water demand for irrigation. They account for the different stages of crop growth through empirical crop coefficients that adapt evapotranspiration throughout the vegetation period. We investigate the relative importance of model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty for estimating irrigation water requirements. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit of 400 mm imposed by a water right, would be exceeded less frequently by the REA ensemble average (45%) than by the equally weighted ensemble average (66%). 
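The contrast between an equally weighted and a performance-weighted ensemble average can be sketched in a toy example. This is only the basic idea; the full REA procedure also accounts for model convergence, and every number below is invented for illustration.

```python
import numpy as np

# Hypothetical observation and model predictions (e.g. irrigation demand, mm).
obs = 400.0
models = np.array([350.0, 420.0, 500.0])

# Performance-based weights: inverse absolute bias, normalized to sum to one.
bias = np.abs(models - obs)
w = (1.0 / bias) / np.sum(1.0 / bias)

equal_avg = models.mean()      # every model counts the same
weighted_avg = w @ models      # better-performing models count more
print(equal_avg, weighted_avg)
```

Here the inverse-bias weighting pulls the consensus toward the models that best match the observation, so the weighted average lands closer to 400 mm than the equal-weight average does.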
We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.emc.ncep.noaa.gov/gmb/ens/NAEFS/NAEFS-eval.html','SCIGOVWS'); return false;" href="http://www.emc.ncep.noaa.gov/gmb/ens/NAEFS/NAEFS-eval.html"><span>EMC Global Climate And Weather Modeling Branch Personnel</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.science.gov/aboutsearch.html">Science.gov Websites</a></p> <p></p> <p></p> <p>Comparison Statistics which includes: NCEP <em>Raw</em> and Bias-Corrected Ensemble Domain Averaged Bias NCEP <em>Raw</em> and Bias-Corrected Ensemble Domain Averaged Bias Reduction (Percents) CMC <em>Raw</em> and Bias-Corrected Control Forecast Domain Averaged Bias CMC <em>Raw</em> and Bias-Corrected Control Forecast Domain Averaged Bias Reduction</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1922l0007M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1922l0007M"><span>xEMD procedures as a data - Assisted filtering method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Machrowska, Anna; Jonak, Józef</p> <p>2018-01-01</p> <p>The article presents the possibility of using Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD), Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Improved Complete Ensemble Empirical Mode Decomposition (ICEEMD) algorithms for mechanical system condition monitoring applications. 
The results of applying the xEMD procedures to vibration signals from a system in different states of wear are presented.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_2 --> <div id="page_3" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="41"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016IJSyS..47..406C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016IJSyS..47..406C"><span>MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Lei; Kamel, Mohamed S.</p> <p>2016-01-01</p> <p>In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 
'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015GMD.....8.1233M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015GMD.....8.1233M"><span>Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.</p> <p>2015-04-01</p> <p>Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. 
They account for the different stages of crop growth through empirical crop coefficients that adapt evapotranspiration throughout the vegetation period. We investigate the relative importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural uncertainty among the reference ET models is far more important than the parametric uncertainty introduced by crop coefficients. These crop coefficients are used to estimate irrigation water requirements following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit of 400 mm imposed by a water right, would be exceeded less frequently by the REA ensemble average (45%) than by the equally weighted ensemble average (66%). 
We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28972674','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28972674"><span>Quantifying rapid changes in cardiovascular state with a moving ensemble average.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cieslak, Matthew; Ryan, William S; Babenko, Viktoriya; Erro, Hannah; Rathbun, Zoe M; Meiring, Wendy; Kelsey, Robert M; Blascovich, Jim; Grafton, Scott T</p> <p>2018-04-01</p> <p>MEAP, the moving ensemble analysis pipeline, is a new open-source tool designed to perform multisubject preprocessing and analysis of cardiovascular data, including electrocardiogram (ECG), impedance cardiogram (ICG), and continuous blood pressure (BP). In addition to traditional ensemble averaging, MEAP implements a moving ensemble averaging method that allows for the continuous estimation of indices related to cardiovascular state, including cardiac output, preejection period, heart rate variability, and total peripheral resistance, among others. Here, we define the moving ensemble technique mathematically, highlighting its differences from fixed-window ensemble averaging. We describe MEAP's interface and features for signal processing, artifact correction, and cardiovascular-based fMRI analysis. We demonstrate the accuracy of MEAP's novel B point detection algorithm on a large collection of hand-labeled ICG waveforms. As a proof of concept, two subjects completed a series of four physical and cognitive tasks (cold pressor, Valsalva maneuver, video game, random dot kinetogram) on 3 separate days while ECG, ICG, and BP were recorded. 
Critically, the moving ensemble method reliably captures the rapid cyclical cardiovascular changes related to the baroreflex during the Valsalva maneuver and the classic cold pressor response. Cardiovascular measures were seen to vary considerably within repetitions of the same cognitive task for each individual, suggesting that a carefully designed paradigm could be used to capture fast-acting event-related changes in cardiovascular state. © 2017 Society for Psychophysiological Research.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/56275','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/56275"><span>Impact of Bias-Correction Type and Conditional Training on Bayesian Model Averaging over the Northeast United States</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Michael J. Erickson; Brian A. Colle; Joseph J. Charney</p> <p>2012-01-01</p> <p>The performance of a multimodel ensemble over the northeast United States is evaluated before and after applying bias correction and Bayesian model averaging (BMA). 
The 13-member Stony Brook University (SBU) ensemble at 0000 UTC is combined with the 21-member National Centers for Environmental Prediction (NCEP) Short-Range Ensemble Forecast (SREF) system at 2100 UTC....</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29399270','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29399270"><span>The Weighted-Average Lagged Ensemble.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>DelSole, T; Trenary, L; Tippett, M K</p> <p>2017-11-01</p> <p>A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range forecasts included in the lagged ensemble. 
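The sum-to-one optimal weights discussed in the lagged-ensemble entry above follow from minimizing the variance of the combined forecast subject to unbiasedness, giving w = C⁻¹1 / (1ᵀC⁻¹1) for an error covariance C across lead times. A minimal sketch, with an error-covariance matrix invented for illustration:

```python
import numpy as np

# Assumed error covariance between forecasts at three lead times: errors grow
# with lead time and are correlated across leads (all values made up).
C = np.array([[1.0, 0.9, 0.5],
              [0.9, 1.5, 0.8],
              [0.5, 0.8, 2.5]])

# Minimum-variance unbiased weights: w = C^{-1} 1 / (1^T C^{-1} 1).
ones = np.ones(3)
Cinv1 = np.linalg.solve(C, ones)
w = Cinv1 / (ones @ Cinv1)
print(w)   # shortest lead (smallest error) gets the largest weight here
```

With strongly correlated, rapidly growing errors, the same formula can produce negative weights, which is the regime the abstract analyzes.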
An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFMSA33B..01S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFMSA33B..01S"><span>Ionospheric Storm Reconstructions with a Multimodel Ensemble Prediction System (MEPS) of Data Assimilation Models: Mid and Low Latitude Dynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Schunk, R. W.; Scherliess, L.; Eccles, V.; Gardner, L. C.; Sojka, J. J.; Zhu, L.; Pi, X.; Mannucci, A. J.; Komjathy, A.; Wang, C.; Rosen, G.</p> <p>2016-12-01</p> <p>As part of the NASA-NSF Space Weather Modeling Collaboration, we created a Multimodel Ensemble Prediction System (MEPS) for the Ionosphere-Thermosphere-Electrodynamics system that is based on Data Assimilation (DA) models. MEPS is composed of seven physics-based data assimilation models that cover the globe. Ensemble modeling can be conducted for the mid-low latitude ionosphere using the four GAIM data assimilation models, including the Gauss Markov (GM), Full Physics (FP), Band Limited (BL) and 4DVAR DA models. These models can assimilate Total Electron Content (TEC) from a constellation of satellites, bottom-side electron density profiles from digisondes, in situ plasma densities, occultation data and ultraviolet emissions. The four GAIM models were run for the March 16-17, 2013, geomagnetic storm period with the same data, but we also systematically added new data types and re-ran the GAIM models to see how the different data types affected the GAIM results, with the emphasis on elucidating differences in the underlying ionospheric dynamics and thermospheric coupling. 
Also, for each scenario the outputs from the four GAIM models were used to produce an ensemble mean for TEC, NmF2, and hmF2. A simple average of the models was used in the ensemble averaging to see if there was an improvement of the ensemble average over the individual models. For the scenarios considered, the ensemble average yielded better specifications than the individual GAIM models. The model differences and averages, and the consequent differences in ionosphere-thermosphere coupling and dynamics will be discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19850019472','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19850019472"><span>An interplanetary magnetic field ensemble at 1 AU</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Matthaeus, W. H.; Goldstein, M. L.; King, J. H.</p> <p>1985-01-01</p> <p>A method for calculating ensemble averages from magnetic field data is described. A data set comprising approximately 16 months of nearly continuous ISEE-3 magnetic field data is used in this study. Individual subintervals of this data, ranging from 15 hours to 15.6 days, comprise the ensemble. The sole condition for including each subinterval in the averages is the degree to which it represents a weakly time-stationary process. Averages obtained by this method are appropriate for a turbulence description of the interplanetary medium. The ensemble average correlation length obtained from all subintervals is found to be 4.9 × 10^11 cm. The average values of the variances of the magnetic field components are in the approximate ratio 8:9:10, where the third component is the local mean field direction. 
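The subinterval idea above (building an "equivalent ensemble" by segmenting one long record, as in the stationarity-testing entry at the top of this page) can be sketched generically. The record here is synthetic white noise and all lengths are arbitrary choices for the example; real data would of course be correlated.

```python
import numpy as np

rng = np.random.default_rng(1)

def acf(x, max_lag):
    """Normalized autocorrelation of one record at lags 0..max_lag-1."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag)])

signal = rng.normal(size=8000)        # stand-in for one measured field component
records = signal.reshape(16, 500)     # 16 equal, non-overlapping subrecords
ens_acf = np.mean([acf(r, 50) for r in records], axis=0)  # ensemble average

# Crude integral correlation length (in samples): sum up to first zero crossing.
first_zero = np.argmax(ens_acf <= 0)
corr_length = np.sum(ens_acf[:first_zero])
print(corr_length)
```

For white noise the averaged autocorrelation drops to zero almost immediately, so the estimated correlation length is about one sample; for a turbulent record it would be much longer and would vary with subrecord duration, as the abstract notes.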
The correlation lengths and variances are found to have a systematic variation with subinterval duration, reflecting the important role of low-frequency fluctuations in the interplanetary medium.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29257722','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29257722"><span>Perceived Average Orientation Reflects Effective Gist of the Surface.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cha, Oakyoon; Chong, Sang Chul</p> <p>2018-03-01</p> <p>The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. 
This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28986784','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28986784"><span>Capturing Three-Dimensional Genome Organization in Individual Cells by Single-Cell Hi-C.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nagano, Takashi; Wingett, Steven W; Fraser, Peter</p> <p>2017-01-01</p> <p>Hi-C is a powerful method to investigate genome-wide, higher-order chromatin and chromosome conformations averaged from a population of cells. To expand the potential of Hi-C for single-cell analysis, we developed single-cell Hi-C. Similar to the existing "ensemble" Hi-C method, single-cell Hi-C detects proximity-dependent ligation events between cross-linked and restriction-digested chromatin fragments in cells. A major difference between the single-cell Hi-C and ensemble Hi-C protocol is that the proximity-dependent ligation is carried out in the nucleus. This allows the isolation of individual cells in which nearly the entire Hi-C procedure has been carried out, enabling the production of a Hi-C library and data from individual cells. With this new method, we studied genome conformations and found evidence for conserved topological domain organization from cell to cell, but highly variable interdomain contacts and chromosome folding genome wide. 
In addition, we found that the single-cell Hi-C protocol provided cleaner results with less technical noise, suggesting it could be used to improve the ensemble Hi-C technique.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/115985-thermostatted-molecular-dynamics-how-avoid-toda-demon-hidden-nose-hoover-dynamics','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/115985-thermostatted-molecular-dynamics-how-avoid-toda-demon-hidden-nose-hoover-dynamics"><span>Thermostatted molecular dynamics: How to avoid the Toda demon hidden in Nose-Hoover dynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Holian, B.L.; Voter, A.F.; Ravelo, R.</p> <p></p> <p>The Nose-Hoover thermostat, which is often used in the hope of modifying molecular dynamics trajectories in order to achieve canonical-ensemble averages, has hidden in it a Toda "demon," which can give rise to unwanted, noncanonical undulations in the instantaneous kinetic temperature. 
We show how these long-lived oscillations arise from insufficient coupling of the thermostat to the atoms, and give straightforward, practical procedures for avoiding this weak-coupling pathology in isothermal molecular dynamics simulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006JChPh.125u4905L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006JChPh.125u4905L"><span>Simulation studies of the fidelity of biomolecular structure ensemble recreation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lätzer, Joachim; Eastwood, Michael P.; Wolynes, Peter G.</p> <p>2006-12-01</p> <p>We examine the ability of Bayesian methods to recreate structural ensembles for partially folded molecules from averaged data. Specifically, we test the ability of various algorithms to recreate different transition state ensembles for folding proteins using a multiple-replica simulation algorithm with input from "gold standard" reference ensembles that were first generated with a Gō-like Hamiltonian having nonpairwise additive terms. A set of low-resolution data, which functions as the "experimental" ϕ values, was first constructed from this reference ensemble. The resulting ϕ values were then treated as one would treat laboratory experimental data and were used as input in the replica reconstruction algorithm. The resulting ensembles of structures obtained by the replica algorithm were compared to the gold standard reference ensemble, from which those "data" were, in fact, obtained. It is found that for a unimodal transition state ensemble with a low barrier, the multiple replica algorithm does recreate the reference ensemble fairly successfully when no experimental error is assumed. 
The Kolmogorov-Smirnov test as well as principal component analysis show that the overlap of the recovered and reference ensembles is significantly enhanced when multiple replicas are used. Reduction of the multiple replica ensembles by clustering successfully yields subensembles with close similarity to the reference ensembles. On the other hand, for a high barrier transition state with two distinct transition state ensembles, the single replica algorithm only samples a few structures of one of the reference ensemble basins. This is due to the fact that the ϕ values are intrinsically ensemble averaged quantities. The replica algorithm with multiple copies does sample both reference ensemble basins. In contrast to the single replica case, the multiple replicas are constrained to reproduce the average ϕ values, but allow fluctuations in ϕ for each individual copy. These fluctuations facilitate a more faithful sampling of the reference ensemble basins. Finally, we test how robustly the reconstruction algorithm can function by introducing errors in ϕ comparable in magnitude to those suggested by some authors. In this circumstance we observe that the chances of ensemble recovery with the replica algorithm are poor using a single replica, but are improved when multiple copies are used. 
A multimodal transition state ensemble, however, turns out to be more sensitive to large errors in ϕ (if appropriately gauged) and attempts at successful recreation of the reference ensemble with simple replica algorithms can fall short.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007AdWR...30.1371D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007AdWR...30.1371D"><span>Multi-model ensemble hydrologic prediction using Bayesian model averaging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh</p> <p>2007-05-01</p> <p>The multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighting individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows).
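The BMA combination described in this abstract can be sketched as follows. This is a minimal illustration with hypothetical numbers; in BMA proper the weights (and per-model variances) are fitted by maximizing the predictive likelihood, typically with an EM algorithm, whereas here each model is simply weighted by its inverse mean squared error as a stand-in:

```python
import numpy as np

# Hypothetical streamflow predictions from three models over five days.
preds = np.array([
    [10.0, 12.0, 9.0, 14.0, 11.0],   # model A
    [11.0, 13.0, 8.5, 15.0, 10.5],   # model B
    [ 9.5, 11.5, 9.5, 13.5, 11.5],   # model C
])
obs = np.array([10.5, 12.5, 9.0, 14.5, 11.0])

# Stand-in for likelihood-based weight fitting: inverse mean squared error,
# normalized so the weights sum to one.
inv_mse = 1.0 / ((preds - obs) ** 2).mean(axis=1)
w = inv_mse / inv_mse.sum()

# The BMA point forecast is the weighted average of the member forecasts; the
# full BMA predictive PDF would be the corresponding mixture of per-model
# distributions, each centered on its member forecast.
bma_mean = w @ preds
```

Better-performing members receive larger weights, so the consensus forecast leans toward the more skillful models, as the abstract describes.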
Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JChPh.148j4114N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JChPh.148j4114N"><span>Implicit ligand theory for relative binding free energies</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nguyen, Trung Hai; Minh, David D. L.</p> <p>2018-03-01</p> <p>Implicit ligand theory enables noncovalent binding free energies to be calculated based on an exponential average of the binding potential of mean force (BPMF)—the binding free energy between a flexible ligand and rigid receptor—over a precomputed ensemble of receptor configurations. In the original formalism, receptor configurations were drawn from or reweighted to the apo ensemble.
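The exponential (Boltzmann) average at the heart of implicit ligand theory is easy to sketch numerically. The BPMF values below are hypothetical, and the shift by the minimum before exponentiating is a standard numerical-stability trick rather than part of the theory itself:

```python
import numpy as np

kT = 0.593  # kcal/mol at ~298 K

# Hypothetical BPMF values (kcal/mol) over an ensemble of receptor snapshots.
bpmf = np.array([-8.1, -7.5, -9.0, -8.4, -7.9])

def exponential_average(b, kT):
    """Binding free energy as -kT * ln <exp(-B/kT)> over receptor snapshots."""
    b_min = b.min()  # subtract the minimum before exponentiating, for stability
    return b_min - kT * np.log(np.mean(np.exp(-(b - b_min) / kT)))

dG = exponential_average(bpmf, kT)
```

Because the exponential average is dominated by the most favorable receptor configurations, the result lies between the minimum BPMF and the arithmetic mean of the BPMFs.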
Here we show that BPMFs averaged over a holo ensemble yield binding free energies relative to the reference ligand that specifies the ensemble. When using receptor snapshots from an alchemical simulation with a single ligand, the new statistical estimator outperforms the original.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29350933','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29350933"><span>Reproducing the Ensemble Average Polar Solvation Energy of a Protein from a Single Structure: Gaussian-Based Smooth Dielectric Function for Macromolecular Modeling.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chakravorty, Arghya; Jia, Zhe; Li, Lin; Zhao, Shan; Alexov, Emil</p> <p>2018-02-13</p> <p>Typically, the ensemble average polar component of the solvation energy (ΔG_polar_solv) of a macromolecule is computed by using molecular dynamics (MD) or Monte Carlo (MC) simulations to generate a conformational ensemble, and then a single/rigid-conformation solvation energy calculation is performed on each snapshot. The primary objective of this work is to demonstrate that the Poisson-Boltzmann (PB)-based approach using a Gaussian-based smooth dielectric function for macromolecular modeling previously developed by us (Li et al. J. Chem. Theory Comput. 2013, 9 (4), 2126-2136) can reproduce the ensemble average ΔG_polar_solv of a protein from a single structure. We show that the Gaussian-based dielectric model reproduces the ensemble average ⟨ΔG_polar_solv⟩ from an energy-minimized structure of a protein regardless of the minimization environment (structure minimized in vacuo, implicit or explicit waters, or crystal structure); the best case, however, is when it is paired with an in vacuo-minimized structure.
In the other minimization environments (implicit or explicit waters, or the crystal structure), the traditional two-dielectric model can still be selected, with which correct solvation energies are also produced. Our observations from this work reflect how the ability to appropriately mimic the motion of residues, especially the salt-bridge residues, influences a dielectric model's ability to reproduce the ensemble average value of the polar solvation free energy from a single in vacuo-minimized structure.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013PhRvL.110j0603P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013PhRvL.110j0603P"><span>Ergodicity Breaking in Geometric Brownian Motion</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Peters, O.; Klein, W.</p> <p>2013-03-01</p> <p>Geometric Brownian motion (GBM) is a model for systems as varied as financial instruments and populations. The statistical properties of GBM are complicated by nonergodicity, which can lead to ensemble averages exhibiting exponential growth while any individual trajectory collapses according to its time average. A common tactic for bringing time averages closer to ensemble averages is diversification.
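The split between the ensemble average and the time average in GBM is simple to reproduce numerically. A brief sketch, with parameters chosen only so that the drift mu is positive while the time-average growth rate mu - sigma**2/2 is negative:

```python
import numpy as np

rng = np.random.default_rng(0)

# GBM parameters: the ensemble average grows like exp(mu * T), but the typical
# trajectory tracks exp((mu - sigma**2/2) * T), which decays for these values.
mu, sigma, dt = 0.05, 0.45, 0.01
n_paths, n_steps = 2000, 2000  # total time T = n_steps * dt = 20

# Exact log-space update of GBM, accumulated over time for every path.
increments = (mu - sigma**2 / 2) * dt \
    + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
x_final = np.exp(np.cumsum(increments, axis=1))[:, -1]

ensemble_avg = x_final.mean()   # dominated by a few lucky trajectories
typical = np.median(x_final)    # the fate of almost every individual trajectory
```

The ensemble mean is pulled up by rare, extreme paths, while the median trajectory has decayed below its starting value of one: the nonergodicity the abstract describes.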
In this Letter, we study the effects of diversification using the concept of ergodicity breaking.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015EGUGA..17.4171K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015EGUGA..17.4171K"><span>A New Multivariate Approach in Generating Ensemble Meteorological Forcings for Hydrological Forecasting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Khajehei, Sepideh; Moradkhani, Hamid</p> <p>2015-04-01</p> <p>Producing reliable and accurate hydrologic ensemble forecasts is subject to various sources of uncertainty, including meteorological forcing, initial conditions, model structure, and model parameters. Producing reliable and skillful precipitation ensemble forecasts is one approach to reduce the total uncertainty in hydrological applications. Currently, Numerical Weather Prediction (NWP) models are developing ensemble forecasts for various temporal ranges, but raw products from NWP models are known to be biased in both mean and spread. There is therefore a need for methods that can generate reliable ensemble forecasts for hydrological applications. One common technique is to apply statistical procedures that generate an ensemble forecast from NWP-generated single-value forecasts. The procedure is based on the bivariate probability distribution between the observation and the single-value precipitation forecast. However, the current method assumes that the marginal distributions of the observed and modeled climate variables are Gaussian. Here, we describe and evaluate a Bayesian approach based on copula functions to develop an ensemble precipitation forecast from the conditional distribution of single-value precipitation forecasts.
Copula functions join univariate marginal distributions into a multivariate joint distribution, and are presented here as an alternative procedure for capturing the uncertainties related to meteorological forcing. Copulas are capable of modeling the joint distribution of two variables with any level of correlation and dependency. This study is conducted over a sub-basin in the Columbia River Basin in the USA, using the monthly precipitation forecasts from the Climate Forecast System (CFS) at 0.5° x 0.5° spatial resolution to reproduce the observations. The verification is conducted on a separate period, and the procedure is compared with the Ensemble Pre-Processor approach currently used by the National Weather Service River Forecast Centers in the USA.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MAP...130..107E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MAP...130..107E"><span>Prediction of dosage-based parameters from the puff dispersion of airborne materials in urban environments using the CFD-RANS methodology</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.</p> <p>2018-02-01</p> <p>One of the key issues of recent research on dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters related to the puff dispersion.
RANS simulations with the ADREA-HF code were therefore performed, with a single puff released in each case. The method is validated against the data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released, from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model performed better for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of the validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the code ADREA-HF is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. It is also able to predict the ensemble-average dosage, although the dosage results should be treated with some caution, as in one case the observed ensemble-average dosage was underestimated by slightly more than the acceptance criteria allow. The ensemble-average peak concentration was systematically underpredicted by the model, to a degree higher than allowed by the acceptance criteria, in one of the two wind-tunnel experiments. The model performance depended on the positions of the examined sensors in relation to the emission source and the building configuration.
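The three validation metrics named in this abstract have standard definitions in the dispersion-model evaluation literature; a compact sketch (the observed and modeled dosages below are made-up numbers, not the wind-tunnel data):

```python
import numpy as np

def fractional_bias(obs, mod):
    # FB = 2 * (mean_obs - mean_mod) / (mean_obs + mean_mod); 0 means no mean bias.
    return 2.0 * (obs.mean() - mod.mean()) / (obs.mean() + mod.mean())

def nmse(obs, mod):
    # Normalized mean square error; 0 for a perfect model.
    return ((obs - mod) ** 2).mean() / (obs.mean() * mod.mean())

def fac2(obs, mod):
    # Fraction of predictions within a factor of two of the observations.
    ratio = mod / obs
    return float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))

# Hypothetical ensemble-average dosages at four sensors (arbitrary units).
obs = np.array([1.0, 2.0, 4.0, 8.0])
mod = np.array([1.2, 1.8, 3.0, 9.0])

fb, err, f2 = fractional_bias(obs, mod), nmse(obs, mod), fac2(obs, mod)
```

Acceptance limits quoted for urban dispersion (e.g., |FB| <= 0.67, NMSE <= 6, FAC2 >= 0.3 in commonly cited criteria) can then be checked directly against these values.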
The work presented in this paper was carried out (partly) within the scope of COST Action ES1006 "Evaluation, improvement, and guidance for the use of local-scale emergency prediction and response tools for airborne hazards in built environments".</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1919222R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1919222R"><span>Analyzing the impact of changing size and composition of a crop model ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rodríguez, Alfredo</p> <p>2017-04-01</p> <p>The use of an ensemble of crop growth simulation models is a practice recently adopted to quantify uncertainties in model simulations. Yet, while the climate modelling community has extensively investigated the properties of model ensembles and their implications, this has hardly been investigated for crop model ensembles (Wallach et al., 2016). In their ensemble of 27 wheat models, Martre et al. (2015) found that the accuracy of the multi-model ensemble average increases only up to an ensemble size of ca. 10 and does not improve when more models are included in the analysis. However, even when this number of members is reached, questions arise about the impact of adding a member to, or removing one from, the ensemble. When selecting ensemble members, identifying members that perform poorly or give implausible results can make a large difference to the outcome. The objective of this study is to set up a methodology that defines indicators to show the effects of changing the ensemble composition and size on simulation results, when a selection procedure of ensemble members is applied. The ensemble mean or median and the variance are among the measures used to depict ensemble results.
We are utilizing simulations from an ensemble of wheat models that have been used to construct impact response surfaces (Pirttioja et al., 2015) (IRSs). These show the response of an impact variable (e.g., crop yield) to systematic changes in two explanatory variables (e.g., precipitation and temperature). Using these, we compare different sub-ensembles in terms of the mean, median and spread, and also by comparing IRSs. The methodology developed here allows comparing an ensemble before and after applying any procedure that changes the ensemble composition and size by measuring the impact of this decision on the ensemble central tendency measures. The methodology could also be further developed to compare the effect of changing ensemble composition and size on IRS features. References Martre, P., Wallach, D., Asseng, S., Ewert, F., Jones, J.W., Rötter, R.P., Boote, K.J., Ruane, A.C., Thorburn, P.J., Cammarano, D., Hatfield, J.L., Rosenzweig, C., Aggarwal, P.K., Angulo, C., Basso, B., Bertuzzi, P., Biernath, C., Brisson, N., Challinor, A.J., Doltra, J., Gayler, S., Goldberg, R., Grant, R.F., Heng, L., Hooker, J., Hunt, L.A., Ingwersen, J., Izaurralde, R.C., Kersebaum, K.C., Muller, C., Kumar, S.N., Nendel, C., O'Leary, G., Olesen, J.E., Osborne, T.M., Palosuo, T., Priesack, E., Ripoche, D., Semenov, M.A., Shcherbak, I., Steduto, P., Stockle, C.O., Stratonovitch, P., Streck, T., Supit, I., Tao, F.L., Travasso, M., Waha, K., White, J.W., Wolf, J., 2015. Multimodel ensembles of wheat growth: many models are better than one. Glob. Change Biol. 21, 911-925. Pirttioja N., Carter T., Fronzek S., Bindi M., Hoffmann H., Palosuo T., Ruiz-Ramos, M., Tao F., Trnka M., Acutis M., Asseng S., Baranowski P., Basso B., Bodin P., Buis S., Cammarano D., Deligios P., Destain M.-F., Doro L., Dumont B., Ewert F., Ferrise R., Francois L., Gaiser T., Hlavinka P., Jacquemin I., Kersebaum K.-C., Kollas C., Krzyszczak J., Lorite I. J., Minet J., Minguez M. 
I., Montesion M., Moriondo M., Müller C., Nendel C., Öztürk I., Perego A., Rodriguez, A., Ruane A.C., Ruget F., Sanna M., Semenov M., Slawinski C., Stratonovitch P., Supit I., Waha K., Wang E., Wu L., Zhao Z., Rötter R.P., 2015. A crop model ensemble analysis of temperature and precipitation effects on wheat yield across a European transect using impact response surfaces. Clim. Res., 65:87-105, doi:10.3354/cr01322 Wallach, D., Mearns, L.O., Ruane, A.C., Rötter, R.P., Asseng, S. (2016). Lessons from climate modeling on the design and use of ensembles for crop modeling. Climatic Change (in press), doi:10.1007/s10584-016-1803-1.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29475799','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29475799"><span>Reprint of "Investigating ensemble perception of emotions in autistic and typical children and adolescents".</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth</p> <p>2018-01-01</p> <p>Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task.
Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent, but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28160619','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28160619"><span>Ensemble perception of emotions in autistic and typical children and adolescents.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth</p> <p>2017-04-01</p> <p>Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults.
Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent, but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_3 --> <div id="page_4" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="61"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EGUGA..1514098W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EGUGA..1514098W"><span>Supermodeling With A Global Atmospheric Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wiegerinck, Wim; Burgers, Willem; Selten, Frank</p> <p>2013-04-01</p> <p>In weather and climate prediction studies, the multi-model ensemble mean prediction often has the best prediction skill scores. One possible explanation is that the major part of the model error is random and is averaged out in the ensemble mean. In the standard multi-model ensemble approach, the models are integrated in time independently and the predicted states are combined a posteriori. Recently, an alternative ensemble prediction approach has been proposed in which the models exchange information during the simulation and synchronize on a common solution that is closer to the truth than any of the individual model solutions in the standard multi-model ensemble approach, or a weighted average of these. This approach is called the supermodeling approach (SUMO). The potential of the SUMO approach has been demonstrated in the context of simple, low-order, chaotic dynamical systems. The information exchange takes the form of linear nudging terms in the dynamical equations that nudge the solution of each model toward the solutions of all other models in the ensemble. With a suitable choice of the connection strengths, the models synchronize on a common solution that is indeed closer to the true system than any of the individual model solutions without nudging.
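The nudging mechanism can be illustrated with a toy system. In the sketch below, two copies of the Lorenz-63 system start from different states and are nudged toward the ensemble mean; with a sufficiently strong connection they synchronize on a common trajectory (identical models synchronize essentially exactly, whereas imperfect SUMO members with differing parameters would synchronize only approximately). All parameter values here are illustrative:

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Tendency of the Lorenz-63 system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

K, dt = 5.0, 0.002           # connection (nudging) strength and Euler time step
states = [np.array([1.0, 1.0, 20.0]), np.array([-1.0, 2.0, 25.0])]

for _ in range(20000):       # integrate to t = 40
    mean = sum(states) / len(states)
    # Each member follows its own dynamics plus a linear nudge toward the mean.
    states = [s + dt * (lorenz(s) + K * (mean - s)) for s in states]

spread = np.linalg.norm(states[0] - states[1])  # shrinks as the members synchronize
```

The nudging term contracts the difference between members faster than the chaotic dynamics can amplify it, so the ensemble collapses onto a single common solution.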
This approach is called connected SUMO. An alternative is to integrate a weighted-average model, called weighted SUMO. At each time step, all models in the ensemble calculate their tendencies; these tendencies are combined in a weighted average, and the state is integrated one time step into the future with this weighted-average tendency. It was shown that when the connected SUMO synchronizes perfectly, it follows the weighted-average trajectory, and both approaches yield the same solution. In this study we pioneer both approaches in the context of a global, quasi-geostrophic, three-level atmosphere model that is capable of simulating the extra-tropical circulation in the Northern Hemisphere winter quite realistically.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFM.A41A3010B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFM.A41A3010B"><span>Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Baker, N. C.; Taylor, P. C.</p> <p>2014-12-01</p> <p>The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than other models. Should models be weighted based on performance?
Unequal ensemble averages have previously been constructed using a variety of mean-state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables (e.g., outgoing longwave radiation and surface temperature). Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and the Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. The weighted ensemble for this metric indicates lower simulated precipitation (up to 0.7 mm/day) in tropical regions than the unweighted ensemble: since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models that simulate more realistic precipitation.
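At its simplest, metric-based weighting replaces the equal-weight ensemble mean with a mean weighted by normalized skill scores. A minimal sketch with hypothetical projections and skill values (not CMIP5 output):

```python
import numpy as np

# Hypothetical projected warming (K) from four models, and a skill score in
# [0, 1] for each model from some performance metric; all numbers illustrative.
proj = np.array([3.1, 2.4, 2.9, 3.6])
skill = np.array([0.9, 0.4, 0.8, 0.3])

# Normalize the skill scores into weights and form the weighted ensemble mean.
w = skill / skill.sum()
weighted = w @ proj
unweighted = proj.mean()
```

The weighted projection shifts toward the better-scoring models; whether that shift is an improvement depends entirely on whether the metric actually constrains the projected quantity, which is the question the abstract's framework is built to test.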
Ultimately, the goal of the framework is to identify performance metrics that advise better methods for ensemble averaging and lead to better climate predictions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28600677','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28600677"><span>Ensemble coding remains accurate under object and spatial visual working memory load.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Epstein, Michael L; Emmanouil, Tatiana A</p> <p>2017-10-01</p> <p>A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However, the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual-task design to test the effect of object and spatial visual working memory load on size-averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy.
Overall, our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from individual object processing.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19890005750','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19890005750"><span>Determination of longitudinal aerodynamic derivatives using flight data from an icing research aircraft</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ranaudo, R. J.; Batterson, J. G.; Reehorst, A. L.; Bond, T. H.; Omara, T. M.</p> <p>1989-01-01</p> <p>A flight test was performed with the NASA Lewis Research Center's DH-6 icing research aircraft. The purpose was to employ a flight test procedure and data analysis method to determine the accuracy with which the effects of ice on aircraft stability and control could be measured. For simplicity, flight testing was restricted to the short-period longitudinal mode. Two flights were flown in a clean (baseline) configuration, and two flights were flown with simulated horizontal tail ice. Forty-five repeat doublet maneuvers were performed in each of four test configurations, at a given trim speed, to determine the ensemble variation of the estimated stability and control derivatives. Additional maneuvers were also performed in each configuration to determine the variation in the longitudinal derivative estimates over a wide range of trim speeds. Stability and control derivatives were estimated by a Modified Stepwise Regression (MSR) technique.
A measure of the confidence in the derivative estimates was obtained by comparing the standard error for the ensemble of repeat maneuvers to the average of the estimated standard errors predicted by the MSR program. A multiplicative relationship was determined between the ensemble standard error and the averaged program standard errors. In addition, a 95 percent confidence interval analysis was performed for the elevator effectiveness estimates, C_m_δe. This analysis identified the speed range where changes in C_m_δe could be attributed to icing effects. The magnitude of icing effects on the derivative estimates was strongly dependent on flight speed and aircraft wing flap configuration. With wing flaps up, the estimated derivatives were degraded most at the lower speeds corresponding to that configuration. With wing flaps extended to 10 degrees, the estimated derivatives were degraded most at the higher corresponding speeds. The effects of icing on the changes in longitudinal stability and control derivatives were adequately determined by the flight test procedure and the MSR analysis method discussed herein.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.A21F2211K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.A21F2211K"><span>Can decadal climate predictions be improved by ocean ensemble dispersion filtering?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.</p> <p>2017-12-01</p> <p>Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. 
While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. The ocean's memory, due to its heat capacity, holds large potential skill on the decadal scale. In recent years, more precise initialization techniques for coupled Earth system models (incl. atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect. Applying slightly perturbed predictions results in an ensemble. Using and evaluating the whole ensemble, or its ensemble average, instead of a single prediction improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean. 
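The core operation of the ensemble dispersion filter, relaxing each member's ocean state toward the ensemble mean at regular (e.g., seasonal) intervals, can be sketched in a few lines. This is a toy illustration on plain arrays with a hypothetical relaxation weight, not the MPI-ESM implementation:

```python
import numpy as np

def dispersion_filter(members, alpha=0.5):
    """Relax every ensemble member toward the ensemble mean.

    members: array of shape (n_members, ...), one ocean-state field per
    member (a toy stand-in for a full model state). alpha is the
    relaxation weight: 0 leaves members unchanged, 1 collapses them
    onto the mean. The value 0.5 is illustrative, not MiKlip's.
    """
    mean = members.mean(axis=0)
    return members + alpha * (mean - members)

# Five toy members of a 1-D "sea surface temperature" field.
rng = np.random.default_rng(0)
states = 20.0 + rng.normal(scale=1.0, size=(5, 10))
filtered = dispersion_filter(states, alpha=0.5)

# The filter preserves the ensemble mean and shrinks the spread,
# which is what keeps the initialized signal from dispersing.
assert np.allclose(filtered.mean(axis=0), states.mean(axis=0))
assert filtered.std(axis=0).mean() < states.std(axis=0).mean()
```

In the study itself this nudge is applied to the coupled model's ocean state between forecast segments; the sketch only captures the relax-toward-the-mean step.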
This study is part of MiKlip (fona-miklip.de), a major project on decadal climate prediction in Germany. We focus on the Max-Planck-Institute Earth System Model using the low-resolution version (MPI-ESM-LR) and MiKlip's basic initialization strategy, as in the decadal climate forecast published in 2017: http://www.fona-miklip.de/decadal-forecast-2017-2026/decadal-forecast-for-2017-2026/ More information about this study is available in JAMES: DOI: 10.1002/2016MS000787</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvF...2a4703E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvF...2a4703E"><span>Equilibrium energy spectrum of point vortex motion with remarks on ensemble choice and ergodicity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Esler, J. G.</p> <p>2017-01-01</p> <p>The dynamics and statistical mechanics of N chaotically evolving point vortices in the doubly periodic domain are revisited. The selection of the correct microcanonical ensemble for the system is first investigated. The numerical results of Weiss and McWilliams [Phys. Fluids A 3, 835 (1991), 10.1063/1.858014], who argued that the point vortex system with N = 6 is nonergodic because of an apparent discrepancy between ensemble averages and dynamical time averages, are shown to be due to an incorrect ensemble definition. When the correct microcanonical ensemble is sampled, accounting for the vortex momentum constraint, time averages obtained from direct numerical simulation agree with ensemble averages within the sampling error of each calculation, i.e., there is no numerical evidence for nonergodicity. 
Further, in the N →∞ limit it is shown that the vortex momentum no longer constrains the long-time dynamics and therefore that the correct microcanonical ensemble for statistical mechanics is that associated with the entire constant energy hypersurface in phase space. Next, a recently developed technique is used to generate an explicit formula for the density of states function for the system, including for arbitrary distributions of vortex circulations. Exact formulas for the equilibrium energy spectrum, and for the probability density function of the energy in each Fourier mode, are then obtained. Results are compared with a series of direct numerical simulations with N =50 and excellent agreement is found, confirming the relevance of the results for interpretation of quantum and classical two-dimensional turbulence.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29454895','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29454895"><span>Ensemble coding of face identity is present but weaker in congenital prosopagnosia.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Robson, Matthew K; Palermo, Romina; Jeffery, Linda; Neumann, Markus F</p> <p>2018-03-01</p> <p>Individuals with congenital prosopagnosia (CP) are impaired at identifying individual faces but do not appear to show impairments in extracting the average identity from a group of faces (known as ensemble coding). However, possible deficits in ensemble coding in a previous study (CPs n = 4) may have been masked because CPs relied on pictorial (image) cues rather than identity cues. Here we asked whether a larger sample of CPs (n = 11) would show intact ensemble coding of identity when availability of image cues was minimised. 
Participants viewed a "set" of four faces and then judged whether a subsequent individual test face, either an exemplar or a "set average", was in the preceding set. Ensemble coding occurred when matching (vs. mismatching) averages were mistakenly endorsed as set members. We assessed both image- and identity-based ensemble coding, by varying whether test faces were either the same or different images of the identities in the set. CPs showed significant ensemble coding in both tasks, indicating that their performance was independent of image cues. As a group, CPs' ensemble coding was weaker than controls in both tasks, consistent with evidence that perceptual processing of face identity is disrupted in CP. This effect was driven by CPs (n= 3) who, in addition to having impaired face memory, also performed particularly poorly on a measure of face perception (CFPT). Future research, using larger samples, should examine whether deficits in ensemble coding may be restricted to CPs who also have substantial face perception deficits. Copyright © 2018 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26723635','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26723635"><span>Bayesian ensemble refinement by replica simulations and reweighting.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hummer, Gerhard; Köfinger, Jürgen</p> <p>2015-12-28</p> <p>We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. 
We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JChPh.143x3150H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JChPh.143x3150H"><span>Bayesian ensemble refinement by replica simulations and reweighting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hummer, Gerhard; Köfinger, Jürgen</p> <p>2015-12-01</p> <p>We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. 
With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. 
The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23006350','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23006350"><span>Practical experimental certification of computational quantum gates using a twirling procedure.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Moussa, Osama; da Silva, Marcus P; Ryan, Colm A; Laflamme, Raymond</p> <p>2012-08-17</p> <p>Because of the technical difficulty of building large quantum computers, it is important to be able to estimate how faithful a given implementation is to an ideal quantum computer. The common approach of completely characterizing the computation process via quantum process tomography requires an exponential amount of resources, and thus is not practical even for relatively small devices. We solve this problem by demonstrating that twirling experiments previously used to characterize the average fidelity of quantum memories efficiently can be easily adapted to estimate the average fidelity of the experimental implementation of important quantum computation processes, such as unitaries in the Clifford group, in a practical and efficient manner with applicability in current quantum devices. Using this procedure, we demonstrate state-of-the-art coherent control of an ensemble of magnetic moments of nuclear spins in a single crystal solid by implementing the encoding operation for a 3-qubit code with only a 1% degradation in average fidelity discounting preparation and measurement errors. 
We also highlight one of the advances that was instrumental in achieving such high fidelity control.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.A24D..02Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.A24D..02Y"><span>Decadal climate prediction in the large ensemble limit</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yeager, S. G.; Rosenbloom, N. A.; Strand, G.; Lindsay, K. T.; Danabasoglu, G.; Karspeck, A. R.; Bates, S. C.; Meehl, G. A.</p> <p>2017-12-01</p> <p>In order to quantify the benefits of initialization for climate prediction on decadal timescales, two parallel sets of historical simulations are required: one "initialized" ensemble that incorporates observations of past climate states and one "uninitialized" ensemble whose internal climate variations evolve freely and without synchronicity. In the large ensemble limit, ensemble averaging isolates potentially predictable forced and internal variance components in the "initialized" set, but only the forced variance remains after averaging the "uninitialized" set. The ensemble size needed to achieve this variance decomposition, and to robustly distinguish initialized from uninitialized decadal predictions, remains poorly constrained. We examine a large ensemble (LE) of initialized decadal prediction (DP) experiments carried out using the Community Earth System Model (CESM). This 40-member CESM-DP-LE set of experiments represents the "initialized" complement to the CESM large ensemble of 20th century runs (CESM-LE) documented in Kay et al. (2015). Both simulation sets share the same model configuration, historical radiative forcings, and large ensemble sizes. 
The twin experiments afford an unprecedented opportunity to explore the sensitivity of DP skill assessment, and in particular the skill enhancement associated with initialization, to ensemble size. This talk will highlight the benefits of a large ensemble size for initialized predictions of seasonal climate over land in the Atlantic sector as well as predictions of shifts in the likelihood of climate extremes that have large societal impact.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1814908S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1814908S"><span>Single-ping ADCP measurements in the Strait of Gibraltar</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo</p> <p>2016-04-01</p> <p>In most Acoustic Doppler Current Profiler (ADCP) user manuals, it is widely recommended to apply ensemble averaging of the single-ping measurements in order to obtain reliable observations of the current speed. The random error of a single-ping measurement is typically too high for the measurement to be used directly, while the averaging operation reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored in the western exit of the Strait of Gibraltar, included in the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. The ensemble averaging has been disabled, while maintaining the internal coordinate conversion made by the instrument, and a series of single-ping measurements has been collected every 36 seconds over a period of approximately 5 months. 
The instrument handled the huge amount of data smoothly, and no abnormal battery consumption was recorded. On the other hand, a long and unique series of very-high-frequency current measurements has been collected. Results of this novel approach have been exploited in a dual way. From a statistical point of view, the availability of single-ping measurements allows a real (a posteriori) estimate of the ensemble-average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ˜2 cm s-1 for a 50-ping ensemble, the value obtained by a posteriori averaging is ˜15 cm s-1, with asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g., turbulence) of higher magnitude than the internal sources (the ADCP's intrinsic precision), which cannot be reduced by ensemble averaging. On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulence parameters, some hints of the turbulent structure of the flow can be obtained from the empirical computation of the zonal Reynolds stress (along the predominant direction of the current) and the rates of production and dissipation of turbulent kinetic energy. 
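The a posteriori check described in this record, regrouping the stored single pings into ensembles of increasing size and examining the scatter of the ensemble means, can be sketched as follows. The noise level and record length are illustrative values, not the moored instrument's; with purely independent instrument noise the error follows the σ/√N law, whereas the real record plateaus because of external geophysical variability:

```python
import numpy as np

def ensemble_error(pings, n):
    """Empirical (a posteriori) error of the ensemble average:
    the standard deviation of the means of consecutive n-ping blocks."""
    m = len(pings) // n
    means = pings[: m * n].reshape(m, n).mean(axis=1)
    return means.std(ddof=1)

# Synthetic single-ping record: a steady current plus independent
# instrument noise (illustrative values, not the Gibraltar mooring's).
rng = np.random.default_rng(1)
sigma = 14.0                      # single-ping noise, cm/s
pings = 30.0 + rng.normal(scale=sigma, size=20_000)

# With independent noise the empirical error tracks sigma / sqrt(N);
# the moored record instead levels off, pointing to external sources
# of variability that averaging cannot remove.
for n in (5, 10, 50):
    print(n, round(ensemble_error(pings, n), 2), round(sigma / np.sqrt(n), 2))
```

Adding a slowly varying signal shared by neighboring pings to this toy record would reproduce the observed plateau, since correlated variability is not reduced by block averaging.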
All the parameters show a clear correlation with the tidal fluctuations of the current, with maximum values coinciding with flood tides during the maxima of the Mediterranean outflow current.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22676006-interpolation-property-values-between-electron-numbers-inconsistent-ensemble-averaging','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22676006-interpolation-property-values-between-electron-numbers-inconsistent-ensemble-averaging"><span>Interpolation of property-values between electron numbers is inconsistent with ensemble averaging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Miranda-Quintana, Ramón Alain; Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1; Ayers, Paul W.</p> <p>2016-06-28</p> <p>In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer numbers of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with ensemble averaging. 
This emphasizes the role of derivative discontinuities and the critical role of a subsystem’s surroundings in determining its properties.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19730037315&hterms=Coding+decoding&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3DCoding%2Bdecoding','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19730037315&hterms=Coding+decoding&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3DCoding%2Bdecoding"><span>The random coding bound is tight for the average code.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Gallager, R. G.</p> <p>1973-01-01</p> <p>The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. 
The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMNG31A0145S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMNG31A0145S"><span>On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.</p> <p>2017-12-01</p> <p>Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. 
In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AtmRe.207..155A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AtmRe.207..155A"><span>An ensemble-ANFIS based uncertainty assessment model for forecasting multi-scalar standardized precipitation index</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ali, Mumtaz; Deo, Ravinesh C.; Downs, Nathan J.; Maraseni, Tek</p> <p>2018-07-01</p> <p>Forecasting drought by means of the World Meteorological Organization-approved Standardized Precipitation Index (SPI) is considered to be a fundamental task to support socio-economic initiatives and effectively mitigating the climate-risk. This study aims to develop a robust drought modelling strategy to forecast multi-scalar SPI in drought-rich regions of Pakistan where statistically significant lagged combinations of antecedent SPI are used to forecast future SPI. With ensemble-Adaptive Neuro Fuzzy Inference System ('ensemble-ANFIS') executed via a 10-fold cross-validation procedure, a model is constructed by randomly partitioned input-target data. Resulting in 10-member ensemble-ANFIS outputs, judged by mean square error and correlation coefficient in the training period, the optimal forecasts are attained by the averaged simulations, and the model is benchmarked with M5 Model Tree and Minimax Probability Machine Regression (MPMR). 
The results show that the proposed ensemble-ANFIS model's accuracy was notably better (in terms of the root mean square and mean absolute error, including Willmott's, Nash-Sutcliffe and Legates-McCabe's indices) for the 6- and 12-month forecasts compared to the 3-month forecasts, as verified by the largest proportion of errors registering in the smallest error band. Applying the 10-member simulations, the ensemble-ANFIS model was validated for its ability to forecast the severity (S), duration (D) and intensity (I) of drought (including the error bound). This enabled the uncertainty between the models to be rationalized more efficiently, leading to a reduction in forecast error caused by stochasticity in drought behaviours. Through cross-validations at diverse sites, a geographic signature in the modelled uncertainties was also identified. Considering the superiority of the ensemble-ANFIS approach and its ability to generate uncertainty-based information, the study advocates the versatility of a multi-model approach for drought-risk forecasting and its prime importance for estimating drought properties over confidence intervals to generate better information for strategic decision-making.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1395035-ensemble-based-parameter-estimation-coupled-gcm-using-adaptive-spatial-average-method','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1395035-ensemble-based-parameter-estimation-coupled-gcm-using-adaptive-spatial-average-method"><span>Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Liu, Y.; Liu, Z.; Zhang, S.; ...</p> <p>2014-05-29</p> <p>Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. 
For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter could vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and enhanced signal-to-noise ratio.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EGUGA..16..247G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EGUGA..16..247G"><span>Application Bayesian Model Averaging method for ensemble system for Poland</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Guzikowski, Jakub; Czerwinska, Agnieszka</p> <p>2014-05-01</p> <p>The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting Model and calibrating these data by means of the Bayesian Model Averaging (WRF BMA) approach. We are constructing high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF models. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members used have different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate an ensemble forecast we use the Bayesian Model Averaging (BMA) approach. 
The BMA predictive Probability Density Function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test we chose a case with heat-wave and convective weather conditions in the Poland area from 23 July to 1 August 2013. From 23 July to 29 July 2013 the temperature oscillated below or above 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in patients hospitalized with cardiovascular system problems was registered. On 29 July 2013 an advection of moist tropical air masses was recorded over the area of Poland, causing a strong convection event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to the transport infrastructure, destroyed buildings and trees, injuries, and a direct threat to life. A comparison of the meteorological data from the ensemble system with the data recorded at 74 weather stations located in Poland is made. We prepare a set of model-observation pairs. Then, the data obtained from the single ensemble members and the median from the WRF BMA system are evaluated on the basis of the deterministic statistical errors Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). To evaluate the probabilistic data, the Brier Score (BS) and Continuous Ranked Probability Score (CRPS) were used. 
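The BMA predictive density described in this record, a weighted average of member PDFs, can be sketched directly. Gaussian member PDFs with a shared spread are assumed here for simplicity; the member forecasts, weights, and sigma below are hypothetical stand-ins for quantities that a real system would fit by maximum likelihood on training data:

```python
import math

def bma_pdf(y, forecasts, weights, sigma):
    """BMA predictive density at y: a weighted average of Gaussian
    member PDFs centred on the member forecasts (shared spread sigma)."""
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return sum(
        w * norm * math.exp(-0.5 * ((y - f) / sigma) ** 2)
        for f, w in zip(forecasts, weights)
    )

# Hypothetical example: three members with skill-based weights. In a
# real WRF BMA system the weights and sigma would be fitted by maximum
# likelihood (typically via EM) on training forecast-observation pairs.
forecasts = [29.1, 30.4, 31.0]   # member temperature forecasts, deg C
weights = [0.5, 0.3, 0.2]        # relative skill; must sum to 1
print(bma_pdf(30.0, forecasts, weights, sigma=1.2))
```

Because the mixture is a proper density, probabilistic scores such as the CRPS can be computed from it directly, which is what makes the calibrated ensemble comparable to the raw members.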
Finally, a comparison between the BMA-calibrated data and the data from the individual ensemble members is presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20197040','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20197040"><span>Determination of ensemble-average pairwise root mean-square deviation from experimental B-factors.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kuzmanic, Antonija; Zagrovic, Bojan</p> <p>2010-03-03</p> <p>Root mean-square deviation (RMSD) after roto-translational least-squares fitting is a commonly used measure of global structural similarity of macromolecules. On the other hand, experimental x-ray B-factors are frequently used to study local structural heterogeneity and dynamics in macromolecules by providing direct information about root mean-square fluctuations (RMSF) that can also be calculated from molecular dynamics simulations. We provide a mathematical derivation showing that, given a set of conservative assumptions, a root mean-square ensemble-average of an all-against-all distribution of pairwise RMSD for a single molecular species, ⟨RMSD²⟩^(1/2), is directly related to average B-factors (⟨B⟩) and ⟨RMSF²⟩^(1/2). We show this relationship and explore its limits of validity on a heterogeneous ensemble of structures taken from molecular dynamics simulations of villin headpiece generated using distributed-computing techniques and the Folding@Home cluster. Our results provide a basis for quantifying global structural diversity of macromolecules in crystals directly from x-ray experiments, and we show this on a large set of structures taken from the Protein Data Bank. 
In particular, we show that the ensemble-average pairwise backbone RMSD for a microscopic ensemble underlying a typical protein x-ray structure is approximately 1.1 Å, under the assumption that the principal contribution to experimental B-factors is conformational variability. 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2830444','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2830444"><span>Determination of Ensemble-Average Pairwise Root Mean-Square Deviation from Experimental B-Factors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kuzmanic, Antonija; Zagrovic, Bojan</p> <p>2010-01-01</p> <p>Root mean-square deviation (RMSD) after roto-translational least-squares fitting is a commonly used measure of global structural similarity of macromolecules. On the other hand, experimental x-ray B-factors are frequently used to study local structural heterogeneity and dynamics in macromolecules by providing direct information about root mean-square fluctuations (RMSF) that can also be calculated from molecular dynamics simulations. We provide a mathematical derivation showing that, given a set of conservative assumptions, a root mean-square ensemble-average of an all-against-all distribution of pairwise RMSD for a single molecular species, ⟨RMSD²⟩^(1/2), is directly related to average B-factors (⟨B⟩) and ⟨RMSF²⟩^(1/2). We show this relationship and explore its limits of validity on a heterogeneous ensemble of structures taken from molecular dynamics simulations of villin headpiece generated using distributed-computing techniques and the Folding@Home cluster. 
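The chain from B-factors to RMSF to ensemble-average pairwise RMSD described in this abstract can be sketched numerically, assuming the standard isotropic crystallographic relation B = (8π²/3)·RMSF² and the independent-draw identity ⟨RMSD²⟩ = 2⟨RMSF²⟩; a mean B-factor near 16 Å² then reproduces the roughly 1.1 Å figure quoted above:

```python
import numpy as np

def rmsf_from_b(b):
    """Per-atom RMSF (Angstrom) from an isotropic B-factor (Angstrom^2),
    assuming B = (8*pi^2/3) * RMSF^2."""
    return np.sqrt(3.0 * np.asarray(b) / (8.0 * np.pi ** 2))

def ensemble_avg_rmsd(b):
    """<RMSD^2>^(1/2) implied by a set of B-factors, assuming that for
    independent draws from one ensemble <RMSD^2> = 2 <RMSF^2>."""
    msf = rmsf_from_b(b) ** 2
    return float(np.sqrt(2.0 * msf.mean()))

# A hypothetical uniform B-factor profile typical of well-ordered crystals.
b_factors = np.full(100, 16.0)
```

The factor of 2 arises because the squared distance between two independent draws from the same distribution has twice the variance of a single draw about the mean.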
Our results provide a basis for quantifying global structural diversity of macromolecules in crystals directly from x-ray experiments, and we show this on a large set of structures taken from the Protein Data Bank. In particular, we show that the ensemble-average pairwise backbone RMSD for a microscopic ensemble underlying a typical protein x-ray structure is ∼1.1 Å, under the assumption that the principal contribution to experimental B-factors is conformational variability. PMID:20197040</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_2");'>2</a></li> <li><a href="#" onclick='return showDiv("page_3");'>3</a></li> <li class="active"><span>4</span></li> <li><a href="#" onclick='return showDiv("page_5");'>5</a></li> <li><a href="#" onclick='return showDiv("page_6");'>6</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_4 --> <div id="page_5" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_3");'>3</a></li> <li><a href="#" onclick='return showDiv("page_4");'>4</a></li> <li class="active"><span>5</span></li> <li><a href="#" onclick='return showDiv("page_6");'>6</a></li> <li><a href="#" onclick='return showDiv("page_7");'>7</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="81"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2000PhRvE..62.6126L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2000PhRvE..62.6126L"><span>Variety and volatility in financial markets</span></a></p> <p><a 
target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lillo, Fabrizio; Mantegna, Rosario N.</p> <p>2000-11-01</p> <p>We study the price dynamics of stocks traded in a financial market by considering the statistical properties of both a single time series and an ensemble of stocks traded simultaneously. We use the n stocks traded on the New York Stock Exchange to form a statistical ensemble of daily stock returns. For each trading day of our database, we study the ensemble return distribution. We find that a typical ensemble return distribution exists on most trading days, with the exception of crash and rally days and the days following these extreme events. We analyze each ensemble return distribution by extracting its first two central moments. We observe that these moments fluctuate in time and are themselves stochastic processes. We characterize the statistical properties of ensemble return distribution central moments by investigating their probability density functions and temporal correlation properties. In general, time-averaged and portfolio-averaged price returns have different statistical properties. We infer from these differences information about the relative strength of correlation between stocks and between different trading days. 
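The two cross-sectional moments described above (the daily ensemble mean and the "variety", i.e. the cross-sectional spread) can be computed per trading day as in this sketch, with synthetic returns standing in for the NYSE database:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the stock panel: daily returns of n_stocks stocks
# over n_days trading days (the abstract's real database is not reproduced).
n_stocks, n_days = 200, 250
returns = rng.normal(0.0005, 0.02, (n_days, n_stocks))

# For each trading day, the ensemble (cross-sectional) return distribution
# is summarised by its first two central moments.
daily_mean = returns.mean(axis=1)        # first moment, one value per day
variety = returns.std(axis=1, ddof=1)    # second central moment per day

# Both moments are themselves time series whose fluctuations and temporal
# correlations can then be studied, as the abstract describes.
```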
Last, we compare our empirical results with those predicted by the single-index model and we conclude that this simple model cannot explain the statistical properties of the second moment of the ensemble return distribution.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AtmRe.176...75F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AtmRe.176...75F"><span>Applications of Bayesian Procrustes shape analysis to ensemble radar reflectivity nowcast verification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fox, Neil I.; Micheas, Athanasios C.; Peng, Yuqiang</p> <p>2016-07-01</p> <p>This paper introduces the use of Bayesian full Procrustes shape analysis in object-oriented meteorological applications. In particular, the Procrustes methodology is used to generate mean forecast precipitation fields from a set of ensemble forecasts. This approach has advantages over other ensemble averaging techniques in that it can produce a forecast that retains the morphological features of the precipitation structures and present the range of forecast outcomes represented by the ensemble. The production of the ensemble mean avoids the problems of smoothing that result from simple pixel or cell averaging, while producing credible sets that retain information on ensemble spread. Also in this paper, the full Bayesian Procrustes scheme is used as an object verification tool for precipitation forecasts. This is an extension of a previously presented Procrustes shape analysis based verification approach into a full Bayesian format designed to handle the verification of precipitation forecasts that match objects from an ensemble of forecast fields to a single truth image. 
The methodology is tested on radar reflectivity nowcasts produced in the Warning Decision Support System - Integrated Information (WDSS-II) by varying parameters in the K-means cluster tracking scheme.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29679837','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29679837"><span>Development of a stacked ensemble model for forecasting and analyzing daily average PM2.5 concentrations in Beijing, China.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhai, Binxu; Chen, Jianguo</p> <p>2018-04-18</p> <p>A stacked ensemble model is developed for forecasting and analyzing the daily average concentrations of fine particulate matter (PM2.5) in Beijing, China. Special feature extraction procedures, including those of simplification, polynomial, transformation and combination, are conducted before modeling to identify potentially significant features based on an exploratory data analysis. Stability feature selection and tree-based feature selection methods are applied to select important variables and evaluate the degrees of feature importance. Single models including LASSO, Adaboost, XGBoost and multi-layer perceptron optimized by the genetic algorithm (GA-MLP) are established in the level 0 space and are then integrated by support vector regression (SVR) in the level 1 space via stacked generalization. A feature importance analysis reveals that nitrogen dioxide (NO2) and carbon monoxide (CO) concentrations measured in the city of Zhangjiakou are identified as the most important pollution-related predictors for forecasting PM2.5 concentrations. Local extreme wind speeds and maximal wind speeds are found to exert the strongest meteorological influence on the cross-regional transport of contaminants. 
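The level-0/level-1 structure of stacked generalization described above can be sketched with numpy only; ridge regressions and a linear combiner stand in here for the paper's LASSO/Adaboost/XGBoost/GA-MLP base learners and SVR meta-learner:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic regression task standing in for the PM2.5 data.
X = rng.normal(size=(300, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0, 0.1, 300)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression weights."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

# Level 0: out-of-fold predictions, so the meta-learner never sees
# in-sample fits -- the core idea of stacked generalization.
lams = [0.1, 10.0]                      # two base learners, different shrinkage
folds = np.array_split(np.arange(len(y)), 5)
Z = np.zeros((len(y), len(lams)))       # stacked level-0 predictions
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
    for j, lam in enumerate(lams):
        w = ridge_fit(X[train_idx], y[train_idx], lam)
        Z[test_idx, j] = X[test_idx] @ w

# Level 1: a linear combiner fitted on the stacked predictions.
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
stacked_pred = Z @ beta
rmse_stacked = float(np.sqrt(np.mean((stacked_pred - y) ** 2)))
```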
Pollutants found in the cities of Zhangjiakou and Chengde have a stronger impact on air quality in Beijing than other surrounding factors. Our model evaluation shows that the ensemble model generally performs better than a single nonlinear forecasting model when applied to new data with a coefficient of determination (R²) of 0.90 and a root mean squared error (RMSE) of 23.69 μg/m³. For single pollutant grade recognition, the proposed model performs better when applied to days characterized by good air quality than when applied to days registering high levels of pollution. The overall classification accuracy level is 73.93%, with most misclassifications made among adjacent categories. The results demonstrate the interpretability and generalizability of the stacked ensemble model. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22264078-weak-ergodicity-breaking-irreproducibility-ageing-anomalous-diffusion-processes','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22264078-weak-ergodicity-breaking-irreproducibility-ageing-anomalous-diffusion-processes"><span>Weak ergodicity breaking, irreproducibility, and ageing in anomalous diffusion processes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Metzler, Ralf</p> <p>2014-01-14</p> <p>Single particle traces are standardly evaluated in terms of time averages of the second moment of the position time series r(t). For ergodic processes, one can interpret such results in terms of the known theories for the corresponding ensemble averaged quantities. 
In anomalous diffusion processes, which are widely observed in nature over many orders of magnitude, the equivalence between (long) time and ensemble averages may be broken (weak ergodicity breaking), and these time averages may no longer be interpreted in terms of ensemble theories. Here we detail some recent results on weakly non-ergodic systems with respect to the time averaged mean squared displacement, the inherent irreproducibility of individual measurements, and methods to determine the exact underlying stochastic process. We also address the phenomenon of ageing, the dependence of physical observables on the time span between initial preparation of the system and the start of the measurement.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016NHESS..16.1821K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016NHESS..16.1821K"><span>Ensemble flood simulation for a small dam catchment in Japan using 10 and 2 km resolution nonhydrostatic model rainfalls</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kobayashi, Kenichiro; Otsuka, Shigenori; Apip; Saito, Kazuo</p> <p>2016-08-01</p> <p>This paper presents a study on short-term ensemble flood forecasting specifically for small dam catchments in Japan. Numerical ensemble simulations of rainfall from the Japan Meteorological Agency nonhydrostatic model (JMA-NHM) are used as the input data to a rainfall-runoff model for predicting river discharge into a dam. The ensemble weather simulations use conventional 10 km and high-resolution 2 km spatial resolutions. A distributed rainfall-runoff model is constructed for the Kasahori dam catchment (approx. 70 km²) and applied with the ensemble rainfalls. 
The results show that the hourly maximum and cumulative catchment-average rainfalls of the 2 km resolution JMA-NHM ensemble simulation are more appropriate than the 10 km resolution rainfalls. All the simulated inflows based on the 2 and 10 km rainfalls become larger than the flood discharge of 140 m³ s⁻¹, a threshold value for flood control. The inflows with the 10 km resolution ensemble rainfall are all considerably smaller than the observations, while at least one simulated discharge out of 11 ensemble members with the 2 km resolution rainfalls reproduces the first peak of the inflow at the Kasahori dam with similar amplitude to observations, although there are spatiotemporal lags between simulation and observation. To take positional lags into account in the ensemble discharge simulation, the rainfall distribution in each ensemble member is shifted so that the catchment-averaged cumulative rainfall of the Kasahori dam is maximized. The runoff simulation with the position-shifted rainfalls shows much better results than the original ensemble discharge simulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1544316','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1544316"><span>Toward an Accurate Theoretical Framework for Describing Ensembles for Proteins under Strongly Denaturing Conditions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Tran, Hoang T.; Pappu, Rohit V.</p> <p>2006-01-01</p> <p>Our focus is on an appropriate theoretical framework for describing highly denatured proteins. In high concentrations of denaturants, proteins behave like polymers in a good solvent and ensembles for denatured proteins can be modeled by ignoring all interactions except excluded volume (EV) effects. 
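The ensemble threshold comparison in the flood-forecasting abstract above can be sketched as follows, with synthetic hydrographs standing in for the simulated Kasahori dam inflows:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hourly inflow hydrographs for an 11-member ensemble
# (synthetic gamma-distributed values, not the JMA-NHM-driven output).
FLOOD_THRESHOLD = 140.0                               # m^3 s^-1
n_members, n_hours = 11, 48
inflow = rng.gamma(2.0, 40.0, (n_members, n_hours))

# Peak inflow per member, and the fraction of members whose peak exceeds
# the flood-control discharge -- the kind of probabilistic statement an
# ensemble affords that a single deterministic run cannot.
peak_per_member = inflow.max(axis=1)
exceedance_prob = float((peak_per_member > FLOOD_THRESHOLD).mean())
```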
To assay conformational preferences of highly denatured proteins, we quantify a variety of properties for EV-limit ensembles of 23 two-state proteins. We find that modeled denatured proteins can be best described as follows. Average shapes are consistent with prolate ellipsoids. Ensembles are characterized by large correlated fluctuations. Sequence-specific conformational preferences are restricted to local length scales that span five to nine residues. Beyond local length scales, chain properties follow well-defined power laws that are expected for generic polymers in the EV limit. The average available volume is filled inefficiently, and cavities of all sizes are found within the interiors of denatured proteins. All properties characterized from simulated ensembles match predictions from rigorous field theories. We use our results to resolve between conflicting proposals for structure in ensembles for highly denatured states. PMID:16766618</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010PhRvL.104s0601B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010PhRvL.104s0601B"><span>Enhanced Sampling in the Well-Tempered Ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bonomi, M.; Parrinello, M.</p> <p>2010-05-01</p> <p>We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. 
Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi, J. Comput. Chem. 30, 1615 (2009), doi:10.1002/jcc.21305]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20866953','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20866953"><span>Enhanced sampling in the well-tempered ensemble.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bonomi, M; Parrinello, M</p> <p>2010-05-14</p> <p>We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi, J. Comput. Chem. 30, 1615 (2009)]. 
We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhRvE..94e2142B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhRvE..94e2142B"><span>Inhomogeneous diffusion and ergodicity breaking induced by global memory effects</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Budini, Adrián A.</p> <p>2016-11-01</p> <p>We introduce a class of discrete random-walk models driven by global memory effects. At any time, the right-left transitions depend on the whole previous history of the walker, being defined by an urnlike memory mechanism. The characteristic function is calculated in an exact way, which allows us to demonstrate that the ensemble of realizations is ballistic. Asymptotically, each realization is equivalent to that of a biased Markovian diffusion process with transition rates that strongly differ from one trajectory to another. Using this "inhomogeneous diffusion" feature, the ergodic properties of the dynamics are analytically studied through the time-averaged moments. Even in the long-time regime, they remain random objects. While their average over realizations recovers the corresponding ensemble averages, departure between time and ensemble averages is explicitly shown through their probability densities. For the density of the second time-averaged moment, an ergodic limit and the limit of infinite lag times do not commute. All these effects are induced by the memory effects. 
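The distinction between time-averaged and ensemble-averaged moments discussed above can be illustrated on ordinary (ergodic) Brownian motion, where the two averages agree at small lags; for the memory-driven walk of the abstract they would not:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate many unit-diffusivity random-walk trajectories.
n_traj, n_steps = 200, 1000
x = np.cumsum(rng.normal(0, 1, (n_traj, n_steps)), axis=1)

def tamsd(traj, lag):
    """Time-averaged mean squared displacement of one trajectory at one lag."""
    disp = traj[lag:] - traj[:-lag]
    return float(np.mean(disp ** 2))

lag = 10
# Time average: one number per trajectory (a random object in general).
time_avg = np.array([tamsd(t, lag) for t in x])
# Ensemble average: one number across trajectories at the same lag.
ensemble_avg = float(np.mean((x[:, lag] - x[:, 0]) ** 2))
```

For this ergodic reference case both quantities scatter around the same value (the lag itself, in these units); weak ergodicity breaking means the per-trajectory values stay broadly distributed even for long trajectories.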
A generalized Einstein fluctuation-dissipation relation is also obtained for the time-averaged moments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EGUGA..15..521A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EGUGA..15..521A"><span>Constructing optimal ensemble projections for predictive environmental modelling in Northern Eurasia</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Anisimov, Oleg; Kokorev, Vasily</p> <p>2013-04-01</p> <p>Large uncertainties in climate impact modelling are associated with the forcing climate data. This study is targeted at the evaluation of the quality of GCM-based climatic projections in the specific context of predictive environmental modelling in Northern Eurasia. To accomplish this task, we used the output from 36 CMIP5 GCMs from the IPCC AR-5 database for the control period 1975-2005 and calculated several climatic characteristics and indexes that are most often used in the impact models, i.e. the summer warmth index, duration of the vegetation growth period, precipitation sums, dryness index, thawing degree-day sums, and the annual temperature amplitude. We used data from 744 weather stations in Russia and neighbouring countries to analyze the spatial patterns of modern climatic change and to delineate 17 large regions with coherent temperature changes in the past few decades. GCM results and observational data were averaged over the coherent regions and compared with each other. Ultimately, we evaluated the skills of individual models, ranked them in the context of regional impact modelling and identified top-end GCMs that "better than average" reproduce modern regional changes of the selected meteorological parameters and climatic indexes. 
Selected top-end GCMs were used to compose several ensembles, each combining results from a different number of models. Ensembles were ranked using the same algorithm and outliers were eliminated. We then used data from top-end ensembles for the 2000-2100 period to construct the climatic projections that are likely to be "better than average" in predicting climatic parameters that govern the state of environment in Northern Eurasia. The ultimate conclusions of our study are the following. • High-end GCMs that demonstrate excellent skills in conventional atmospheric model intercomparison experiments are not necessarily the best in replicating climatic characteristics that govern the state of environment in Northern Eurasia, and independent model evaluation at the regional level is necessary to identify "better than average" GCMs. • Each of the ensembles combining results from several "better than average" models replicates selected meteorological parameters and climatic indexes better than any single GCM. The ensemble skills are parameter-specific and depend on the models it consists of. The best results are not necessarily those based on the ensemble composed of all "better than average" models. 
• Comprehensive evaluation of climatic scenarios using specific criteria narrows the range of uncertainties in environmental projections.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28268487','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28268487"><span>Optimal weighted averaging of event related activity from acquisitions with artifacts.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Vollero, Luca; Petrichella, Sara; Innello, Giulio</p> <p>2016-08-01</p> <p>In several biomedical applications that require the signal processing of biological data, the starting procedure for noise reduction is the ensemble averaging of multiple repeated acquisitions (trials). This method is based on the assumption that each trial is composed of two additive components: (i) a time-locked activity related to some sensitive/stimulation phenomenon (ERA, Event Related Activity in the following) and (ii) a sum of several other non time-locked background activities. The averaging aims at estimating the ERA under a very low Signal to Noise and Interference Ratio (SNIR). Although averaging is a well established tool, its performance can be improved in the presence of high-power disturbances (artifacts) by a trial classification and removal stage. In this paper we propose, model and evaluate a new approach that avoids trial removal, managing trials classified as artifact-free and artifact-prone with two different weights. 
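The two-weight trial averaging described above can be sketched as follows; the artifact model and the weight pair are illustrative assumptions, not the paper's optimal values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic trials: a time-locked ERA buried in background noise, with a
# subset of trials corrupted by high-power artifacts.
n_trials, n_samples = 60, 256
era = np.sin(np.linspace(0, 2 * np.pi, n_samples))       # time-locked component
trials = era + rng.normal(0, 1.0, (n_trials, n_samples))
artifact_prone = rng.random(n_trials) < 0.2              # flagged trials
trials[artifact_prone] += rng.normal(0, 5.0, (artifact_prone.sum(), n_samples))

# Instead of discarding flagged trials, down-weight them (hypothetical pair).
w_clean, w_artifact = 1.0, 0.1
weights = np.where(artifact_prone, w_artifact, w_clean)
weighted_avg = (weights[:, None] * trials).sum(axis=0) / weights.sum()
plain_avg = trials.mean(axis=0)                          # classical ensemble average
```

Down-weighting keeps the residual information in artifact-prone trials while limiting their variance contribution, which is the intuition the paper's optimal tuning formalises.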
Based on the model, the weights can be tuned, and through modeling and simulations we show that, when optimally configured, the proposed solution outperforms classical approaches.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018NJPh...20c1001S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018NJPh...20c1001S"><span>The power of a single trajectory</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Schnellbächer, Nikolas D.; Schwarz, Ulrich S.</p> <p>2018-03-01</p> <p>Random walks are often evaluated in terms of their mean squared displacements, either for a large number of trajectories or for one very long trajectory. An alternative evaluation is based on the power spectral density, but here it is less clear which information can be extracted from a single trajectory. For continuous-time Brownian motion, Krapf et al have now mathematically proven that the one property that can be reliably extracted from a single trajectory is the frequency dependence of the ensemble-averaged power spectral density (Krapf et al 2018 New J. Phys. 20 023029). Their mathematical analysis also identifies the appropriate frequency window for this procedure and shows that the diffusion coefficient can be extracted by averaging over a small number of trajectories. 
The authors have verified their analytical results both by computer simulations and experiments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhyA..482....1M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhyA..482....1M"><span>Quantum canonical ensemble: A projection operator approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Magnus, Wim; Lemmens, Lucien; Brosens, Fons</p> <p>2017-09-01</p> <p>Knowing the exact number of particles N, and taking this knowledge into account, the quantum canonical ensemble imposes a constraint on the occupation number operators. The constraint particularly hampers the systematic calculation of the partition function and any relevant thermodynamic expectation value for arbitrary but fixed N. On the other hand, fixing only the average number of particles, one may remove the above constraint and simply factorize the traces in Fock space into traces over single-particle states. As is well known, that would be the strategy of the grand-canonical ensemble which, however, comes with an additional Lagrange multiplier to impose the average number of particles. The appearance of this multiplier can be avoided by invoking a projection operator that enables a constraint-free computation of the partition function and its derived quantities in the canonical ensemble, at the price of an angular or contour integration. Introduced in the recent past to handle various issues related to particle-number projected statistics, the projection operator approach proves beneficial to a wide variety of problems in condensed matter physics for which the canonical ensemble offers a natural and appropriate environment. 
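The projection idea described above can be made concrete for fermions: the grand-canonical trace factorises over single-particle levels, and an exact discrete Fourier sum over the angle variable extracts the fixed-N canonical partition function. The level energies below are hypothetical, and the result is checked against direct enumeration:

```python
import numpy as np
from itertools import combinations

beta = 1.0
eps = np.array([0.0, 0.3, 0.7, 1.2, 2.0])   # hypothetical single-particle levels
M = len(eps)

def z_canonical_projected(N):
    """Z_N = (1/L) sum_m e^{-i phi_m N} prod_k (1 + e^{i phi_m} e^{-beta eps_k}).
    L = M + 1 angle points suffice because Xi(z) is a degree-M polynomial in z."""
    L = M + 1
    phi = 2 * np.pi * np.arange(L) / L
    xi = np.prod(1.0 + np.exp(1j * phi)[:, None] * np.exp(-beta * eps)[None, :],
                 axis=1)
    return float(np.real(np.mean(np.exp(-1j * phi * N) * xi)))

def z_canonical_brute(N):
    """Direct sum over all N-particle level occupations (fermions: 0/1 each)."""
    return float(sum(np.exp(-beta * sum(c)) for c in combinations(eps, N)))
```

The angular sum plays the role of the contour integration mentioned in the abstract: it filters out exactly the coefficient of z^N in the grand-canonical generating function, with no chemical-potential Lagrange multiplier.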
In this light, we present a systematic treatment of the canonical ensemble that embeds the projection operator into the formalism of second quantization while explicitly fixing N, the very number of particles rather than the average. Being applicable to both bosonic and fermionic systems in arbitrary dimensions, transparent integral representations are provided for the partition function ZN and the Helmholtz free energy FN as well as for two- and four-point correlation functions. The chemical potential is not a Lagrange multiplier regulating the average particle number but can be extracted from FN+1 -FN, as illustrated for a two-dimensional fermion gas.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25904973','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25904973"><span>A virtual pebble game to ensemble average graph rigidity.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J</p> <p>2015-01-01</p> <p>The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. 
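Maxwell constraint counting as described above reduces to simple arithmetic; a sketch for a body-bar network, assuming 6 degrees of freedom per body, at most one removed per bar, and 6 trivial rigid-body motions:

```python
def maxwell_dof_lower_bound(n_bodies, n_bars):
    """MCC lower bound on internal degrees of freedom of a body-bar network,
    floored at zero: F >= 6*N - 6 - C."""
    return max(0, 6 * n_bodies - 6 - n_bars)

def is_over_constrained(n_bodies, n_bars):
    """True when global counting says bars outnumber the available DOF."""
    return n_bars > 6 * n_bodies - 6
```

This is the global test the abstract refers to: it ignores where the constraints sit, which is exactly the spatial information the PG and VPG algorithms add back.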
MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. 
The utility of the VPG falls in between the most accurate but slowest method of ensemble averaging over hundreds to thousands of independent PG runs, and the fastest but least accurate MCC.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22558094','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22558094"><span>A stochastic Markov chain model to describe lung cancer growth and metastasis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila A; Nieva, Jorge; Kuhn, Peter</p> <p>2012-01-01</p> <p>A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities) interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. 
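The steady-state condition at the heart of the search described in the metastasis abstract can be sketched with a small hypothetical row-stochastic matrix (4 sites instead of the paper's 50):

```python
import numpy as np

# Hypothetical 4-site transition matrix; each row sums to 1.
P = np.array([
    [0.10, 0.50, 0.30, 0.10],
    [0.20, 0.20, 0.40, 0.20],
    [0.30, 0.30, 0.20, 0.20],
    [0.25, 0.25, 0.25, 0.25],
])

def stationary_distribution(P, n_iter=500):
    """Power-iterate a row-stochastic matrix from a uniform start; the fixed
    point pi = pi @ P is the long-time (steady-state) site distribution."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(n_iter):
        pi = pi @ P
    return pi

pi = stationary_distribution(P)
```

The paper's search runs in the opposite direction: it adjusts the entries of P until this fixed point matches the metastatic distribution observed in the autopsy data.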
Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady-state is found (up to a numerical threshold). Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices calculated using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble averaged transition probabilities as (approximately) normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung to other sites, and we highlight several key findings based on the model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017CG....104...75V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017CG....104...75V"><span>Ensemble predictive model for more accurate soil organic carbon spectroscopic estimation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vašát, Radim; Kodešová, Radka; Borůvka, Luboš</p> <p>2017-07-01</p> <p>A myriad of signal pre-processing strategies and multivariate calibration techniques have been explored in an attempt to improve the spectroscopic prediction of soil organic carbon (SOC) over the last few decades. Developing a novel, more powerful, and more accurate predictive approach has therefore become a challenging task. One promising route is to combine several individual predictions into a single final one, following ensemble learning theory.
As this approach performs best when combining predictive algorithms of an intrinsically different nature that are calibrated with structurally different predictor variables, we tested predictors of two different kinds: 1) reflectance values (or transforms) at each wavelength and 2) absorption feature parameters. Consequently, we applied four different calibration techniques, two for each type of predictor: a) partial least squares regression and support vector machines for type 1, and b) multiple linear regression and random forest for type 2. The weights assigned to individual predictions within the ensemble model (constructed as a weighted average) were determined by an automated procedure that ensured the best solution among all possible ones was selected. The approach was tested on soil samples taken from the surface horizons of four sites differing in the prevailing soil units. By employing the ensemble predictive model, the prediction accuracy of SOC improved at all four sites. The coefficient of determination in cross-validation (R2cv) increased from 0.849, 0.611, 0.811 and 0.644 (the best individual predictions) to 0.864, 0.650, 0.824 and 0.698 for Sites 1, 2, 3 and 4, respectively. Generally, the ensemble model affected the final prediction so that the maximal deviations of predicted vs.
observed values of the individual predictions were reduced, and thus the correlation cloud became thinner, as desired.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013SPIE.8895E..06K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013SPIE.8895E..06K"><span>Creation of the BMA ensemble for SST using a parallel processing technique</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kim, Kwangjin; Lee, Yang Won</p> <p>2013-10-01</p> <p>Although they serve the same purpose, satellite products differ in value because of their inescapable uncertainties. These products have also accumulated over long periods, and their variety and volume are enormous, so efforts to reduce the uncertainty and to handle such large data volumes are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) using MODIS Aqua, MODIS Terra and COMS (Communication Ocean and Meteorological Satellite). We used Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density functions (PDFs) using posterior probabilities as weights; the posterior probabilities are estimated using the EM algorithm, and the BMA PDF is obtained as a weighted average. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA to satellite data ensembles.
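The weighted-average combination at the heart of BMA can be sketched as follows. The weights and kernel spread here are assumed placeholders, not the EM estimates from the study:

```python
import numpy as np

# Hypothetical SST forecasts (K) from three sensors, with assumed posterior
# weights (in the paper these come from the EM algorithm) and an assumed
# common Gaussian kernel spread.
members = np.array([290.1, 291.4, 289.7])
weights = np.array([0.5, 0.3, 0.2])      # must sum to 1
sigma = 0.6

def bma_pdf(x):
    """BMA predictive PDF: weighted mixture of member-centred Gaussians."""
    kernels = np.exp(-0.5 * ((x - members) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(weights @ kernels)

# BMA point forecast: posterior-weighted average of the member forecasts.
bma_mean = float(weights @ members)       # → 290.41
```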
As future work, parallel processing techniques using the Hadoop framework will be adopted for more efficient computation of very big satellite data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUSMGC22A..03C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUSMGC22A..03C"><span>How well the Reliable Ensemble Averaging Method (REA) for 15 CMIP5 GCMs simulations works for Mexico?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Colorado, G.; Salinas, J. A.; Cavazos, T.; de Grau, P.</p> <p>2013-05-01</p> <p>Precipitation simulations from 15 CMIP5 GCMs were combined in a weighted ensemble using the Reliable Ensemble Averaging (REA) method, which yields a weight for each model. This was done for a historical period (1961-2000) and for future emissions based on low (RCP4.5) and high (RCP8.5) radiative forcing for the period 2075-2099. The annual cycles of the simple ensemble average of the historical GCM simulations, the historical REA average, and the Climatic Research Unit (CRU TS3.1) database were compared in four zones of Mexico. For precipitation, the REA method brings clear improvements, especially in the two northern zones of Mexico, where the REA average is closer to the observations (CRU) than the simple average. In the southern zones there is also an improvement, but it is smaller than in the north: in the southeast, although the REA average reproduces the annual cycle with the mid-summer drought qualitatively well, precipitation is greatly underestimated. The main reason is that precipitation is underestimated by all the models, and the mid-summer drought does not even exist in some of them.
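The reliability-weighting step described above can be sketched in a simplified form. All numbers are invented, and only a bias-based weight is used; the full REA method also includes a model-convergence criterion:

```python
import numpy as np

# Hypothetical climatological precipitation (mm): one observed value and the
# corresponding historical climatology and future projection of four models.
obs_clim = 55.0
model_clim = np.array([40.0, 52.0, 70.0, 56.0])
model_proj = np.array([35.0, 48.0, 60.0, 50.0])

# REA-style reliability weights (bias term only): smaller historical bias
# earns a larger weight in the ensemble average.
bias = np.abs(model_clim - obs_clim)
weights = 1.0 / np.maximum(bias, 1e-6)
weights /= weights.sum()

rea_avg = float(weights @ model_proj)      # reliability-weighted average
simple_avg = float(model_proj.mean())      # unweighted average, for contrast
```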
In the REA average of the future scenarios, as expected, the most drastic decrease in precipitation is simulated under RCP8.5, especially in the monsoon area and in the south of Mexico in summer and winter. In central and southern Mexico, however, the same scenario simulates an increase in precipitation in autumn.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhRvE..94b2214D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhRvE..94b2214D"><span>Quantifying nonergodicity in nonautonomous dissipative dynamical systems: An application to climate change</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Drótos, Gábor; Bódai, Tamás; Tél, Tamás</p> <p>2016-08-01</p> <p>In nonautonomous dynamical systems, like in climate dynamics, an ensemble of trajectories initiated in the remote past defines a unique probability distribution, the natural measure of a snapshot attractor, for any instant of time, but this distribution typically changes in time. In cases with an aperiodic driving, temporal averages taken along a single trajectory would differ from the corresponding ensemble averages even in the infinite-time limit: ergodicity does not hold. It is worth considering this difference, which we call the nonergodic mismatch, by taking time windows of finite length for temporal averaging. We point out that the probability distribution of the nonergodic mismatch is qualitatively different in ergodic and nonergodic cases: its average is zero and typically nonzero, respectively. A main conclusion is that the difference of the average from zero, which we call the bias, is a useful measure of nonergodicity, for any window length.
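A toy numerical experiment shows a nonzero bias of the nonergodic mismatch. The dynamics are assumed (a linearly drifting noisy observable standing in for an aperiodically driven system, not the authors' climate model): the finite-window temporal average along any single realization lags the ensemble average at the window's end.

```python
import numpy as np

# Drift rate, end time, averaging-window length, number of realizations.
rng = np.random.default_rng(0)
a, T, W, n_real = 0.1, 200, 50, 2000

t = np.arange(T)
x = a * t + rng.normal(0.0, 1.0, size=(n_real, T))   # ensemble of trajectories

ensemble_avg = x[:, -1].mean()            # ensemble average at the final time
temporal_avg = x[:, -W:].mean(axis=1)     # window average along each realization
mismatch = temporal_avg - ensemble_avg    # nonergodic mismatch per realization

bias = mismatch.mean()                    # ≈ -a*(W-1)/2 = -2.45: clearly nonzero
```

For a system without drift the same estimate would fluctuate around zero, which is the qualitative distinction the bias measure captures.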
In contrast, the standard deviation of the nonergodic mismatch, which characterizes the spread between different realizations, exhibits a power-law decrease with increasing window length in both ergodic and nonergodic cases, implying that for any finite window length the temporal average of an individual realization differs from the ensemble average. It is the average modulus of the nonergodic mismatch, which we call the ergodicity deficit, that represents the expected deviation from fulfilling the equality of temporal and ensemble averages. As an important finding, we demonstrate that the ergodicity deficit cannot be reduced arbitrarily in nonergodic systems. We illustrate via a conceptual climate model that the nonergodic framework may be useful in Earth system dynamics, within which we propose the measure of nonergodicity, i.e., the bias, as an order-parameter-like quantifier of climate change.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23376135','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23376135"><span>Ensemble representations: effects of set size and item heterogeneity on average size perception.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W</p> <p>2013-02-01</p> <p>Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases.
However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean size become less accurate when set size increases and the heterogeneity of the item sizes increases. This pattern can be explained by assuming that average size judgments are computed using a limited-capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display. Copyright © 2012 Elsevier B.V. All rights reserved.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_5 --> <div id="page_6" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="101"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013ClDy...40.1841F','NASAADS'); return false;"
href="http://adsabs.harvard.edu/abs/2013ClDy...40.1841F"><span>Assessment of a stochastic downscaling methodology in generating an ensemble of hourly future climate time series</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fatichi, S.; Ivanov, V. Y.; Caporali, E.</p> <p>2013-04-01</p> <p>This study extends a stochastic downscaling methodology to generation of an ensemble of hourly time series of meteorological variables that express possible future climate conditions at a point-scale. The stochastic downscaling uses general circulation model (GCM) realizations and an hourly weather generator, the Advanced WEather GENerator (AWE-GEN). Marginal distributions of factors of change are computed for several climate statistics using a Bayesian methodology that can weight GCM realizations based on the model relative performance with respect to a historical climate and a degree of disagreement in projecting future conditions. A Monte Carlo technique is used to sample the factors of change from their respective marginal distributions. As a comparison with traditional approaches, factors of change are also estimated by averaging GCM realizations. With either approach, the derived factors of change are applied to the climate statistics inferred from historical observations to re-evaluate parameters of the weather generator. The re-parameterized generator yields hourly time series of meteorological variables that can be considered to be representative of future climate conditions. In this study, the time series are generated in an ensemble mode to fully reflect the uncertainty of GCM projections, climate stochasticity, as well as uncertainties of the downscaling procedure. 
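The factor-of-change step described above can be sketched as follows. The distribution parameters and the precipitation statistic are invented, and the re-parameterization of the weather generator (AWE-GEN) is not reproduced:

```python
import random

# A climate statistic inferred from historical observations (hypothetical:
# a mean hourly precipitation statistic in mm).
random.seed(42)
hist_mean_precip = 2.4

# Monte Carlo sample of multiplicative factors of change, drawn from an
# assumed marginal distribution (in the paper this marginal comes from
# Bayesian-weighted GCM realizations).
factors = [random.gauss(mu=0.9, sigma=0.05) for _ in range(1000)]

# Ensemble of future values of the statistic; each member would then
# re-parameterize the stochastic weather generator.
future_stats = [f * hist_mean_precip for f in factors]
ensemble_mean = sum(future_stats) / len(future_stats)   # ≈ 0.9 * 2.4
```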
Applications of the methodology in reproducing future climate conditions for the periods of 2000-2009, 2046-2065 and 2081-2100, using the period of 1962-1992 as the historical baseline are discussed for the location of Firenze (Italy). The inferences of the methodology for the period of 2000-2009 are tested against observations to assess reliability of the stochastic downscaling procedure in reproducing statistics of meteorological variables at different time scales.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19930072240&hterms=balance+sheet&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dbalance%2Bsheet','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19930072240&hterms=balance+sheet&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dbalance%2Bsheet"><span>Characteristics of ion flow in the quiet state of the inner plasma sheet</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Angelopoulos, V.; Kennel, C. F.; Coroniti, F. V.; Pellat, R.; Spence, H. E.; Kivelson, M. G.; Walker, R. J.; Baumjohann, W.; Feldman, W. C.; Gosling, J. T.</p> <p>1993-01-01</p> <p>We use AMPTE/IRM and ISEE 2 data to study the properties of the high beta plasma sheet, the inner plasma sheet (IPS). Bursty bulk flows (BBFs) are excised from the two databases, and the average flow pattern in the non-BBF (quiet) IPS is constructed. At local midnight this ensemble-average flow is predominantly duskward; closer to the flanks it is mostly earthward. The flow pattern agrees qualitatively with calculations based on the Tsyganenko (1987) model (T87), where the earthward flow is due to the ensemble-average cross tail electric field and the duskward flow is the diamagnetic drift due to an inward pressure gradient. 
The IPS is on average in pressure equilibrium with the lobes. Because of its large variance, the average flow does not represent the instantaneous flow field. Case studies also show that the non-BBF flow is highly irregular and inherently unsteady, a reason why earthward convection can avoid a pressure balance inconsistency with the lobes. The ensemble distribution of velocities is a fundamental observable of the quiet plasma sheet flow field.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008AIPC.1084..293G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008AIPC.1084..293G"><span>Transient Macroscopic Chemistry in the DSMC Method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Goldsworthy, M. J.; Macrossan, M. N.; Abdel-Jawad, M.</p> <p>2008-12-01</p> <p>In the Direct Simulation Monte Carlo method, a combination of statistical and deterministic procedures applied to a finite number of `simulator' particles is used to model rarefied gas-kinetic processes. Traditionally, chemical reactions are modelled using information from specific colliding particle pairs. In the Macroscopic Chemistry Method (MCM), the reactions are decoupled from the specific particle pairs selected for collisions. Information from all of the particles within a cell is used to determine a reaction rate coefficient for that cell. MCM has previously been applied to steady flow DSMC simulations. Here we show how MCM can be used to model chemical kinetics in DSMC simulations of unsteady flow. Results are compared with a collision-based chemistry procedure for two binary reactions in a 1-D unsteady shock-expansion tube simulation and during the unsteady development of 2-D flow through a cavity.
For the shock tube simulation, close agreement is demonstrated between the two methods for instantaneous, ensemble-averaged profiles of temperature and species mole fractions. For the cavity flow, a high degree of thermal non-equilibrium is present and non-equilibrium reaction rate correction factors are employed in MCM. Very close agreement is demonstrated for ensemble averaged mole fraction contours predicted by the particle and macroscopic methods at three different flow-times. A comparison of the accumulated number of net reactions per cell shows that both methods compute identical numbers of reaction events. For the 2-D flow, MCM required similar CPU and memory resources to the particle chemistry method. The Macroscopic Chemistry Method is applicable to any general DSMC code using any viscosity or non-reacting collision models and any non-reacting energy exchange models. MCM can be used to implement any reaction rate formulations, whether these be from experimental or theoretical studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010IJMPB..24.5309F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010IJMPB..24.5309F"><span>Application of Generalized Feynman-Hellmann Theorem in Quantization of LC Circuit in Thermo Bath</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fan, Hong-Yi; Tang, Xu-Bing</p> <p></p> <p>For the quantized LC electric circuit, when taking the Joule thermal effect into account, we think that physical observables should be evaluated in the context of ensemble average. We then use the generalized Feynman-Hellmann theorem for ensemble average to calculate them, which seems convenient. 
Fluctuations of observables in various LC electric circuits in the presence of a thermal bath are shown to grow with temperature.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22383947','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22383947"><span>Calculating ensemble averaged descriptions of protein rigidity without sampling.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J</p> <p>2012-01-01</p> <p>Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by its ensemble average. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties.
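The mean-field replacement can be illustrated with simple Maxwell constraint counting standing in as a linear proxy for the pebble game, which is not reproduced here. Body counts, bar numbers, and probabilities are invented:

```python
import random

# A tiny body-bar network: each edge can carry up to 5 bars, and each bar is
# present with some probability (fluctuating constraints).
random.seed(1)
n_bodies = 4
dof = 6 * n_bodies - 6                  # internal degrees of freedom
edges = [(5, 0.9), (5, 0.4), (5, 0.7)]  # (max bars, probability a bar is present)

# Mean-field estimate (VPG spirit): use the *expected* number of bars per edge.
f_meanfield = dof - sum(bars * p for bars, p in edges)

# Brute-force ensemble average over sampled binary constraint topologies.
samples = []
for _ in range(20000):
    n_constraints = sum(sum(random.random() < p for _ in range(bars))
                        for bars, p in edges)
    samples.append(dof - n_constraints)
f_sampled = sum(samples) / len(samples)

# For this linear count the two agree exactly in expectation; the VPG's
# contribution is achieving similar agreement for the nonlinear pebble game.
```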
This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JHyd..546..476K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JHyd..546..476K"><span>Towards an improved ensemble precipitation forecast: A probabilistic post-processing approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Khajehei, Sepideh; Moradkhani, Hamid</p> <p>2017-03-01</p> <p>Recently, ensemble post-processing (EPP) has become a commonly used approach for reducing the uncertainty in forcing data and hence hydrologic simulation. The procedure was introduced to build ensemble precipitation forecasts based on the statistical relationship between observations and forecasts. More specifically, the approach relies on a transfer function that is developed based on a bivariate joint distribution between the observations and the simulations in the historical period. The transfer function is used to post-process the forecast. In this study, we propose a Bayesian EPP approach based on copula functions (COP-EPP) to improve the reliability of the precipitation ensemble forecast. Evaluation of the copula-based method is carried out by comparing the performance of the generated ensemble precipitation with the outputs from an existing procedure, i.e. mixed type meta-Gaussian distribution. Monthly precipitation from Climate Forecast System Reanalysis (CFS) and gridded observation from Parameter-Elevation Relationships on Independent Slopes Model (PRISM) have been employed to generate the post-processed ensemble precipitation. Deterministic and probabilistic verification frameworks are utilized in order to evaluate the outputs from the proposed technique. 
Distributions of seasonal precipitation for the generated ensemble from the copula-based technique are compared to the observations and raw forecasts for three sub-basins located in the Western United States. Results show that both techniques are successful in producing reliable and unbiased ensemble forecasts; however, the COP-EPP demonstrates considerable improvement in the ensemble forecast in both deterministic and probabilistic verification, in particular in characterizing the extreme events in wet seasons.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29683661','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29683661"><span>Plasticity of the Binding Site of Renin: Optimized Selection of Protein Structures for Ensemble Docking.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Strecker, Claas; Meyer, Bernd</p> <p>2018-05-29</p> <p>Protein flexibility poses a major challenge to docking of potential ligands in that the binding site can adopt different shapes. Docking algorithms usually keep the protein rigid and only allow the ligand to be treated as flexible. However, a wrong assessment of the shape of the binding pocket can prevent a ligand from adopting a correct pose. Ensemble docking is a simple yet promising method to solve this problem: ligands are docked into multiple structures, and the results are subsequently merged. Selection of protein structures is a significant factor for this approach. In this work we perform a comprehensive and comparative study evaluating the impact of structure selection on ensemble docking. We perform ensemble docking with several crystal structures and with structures derived from molecular dynamics simulations of renin, an attractive target for antihypertensive drugs.
Here, 500 ns of MD simulations revealed binding site shapes not found in any available crystal structure. We evaluate the importance of structure selection for ensemble docking by comparing binding pose prediction, ability to rank actives above nonactives (screening utility), and scoring accuracy. As a result, for ensemble definition k-means clustering appears to be better suited than hierarchical clustering with average linkage. The best performing ensemble consists of four crystal structures and is able to reproduce the native ligand poses better than any individual crystal structure. Moreover, this ensemble outperforms 88% of all individual crystal structures in terms of screening utility as well as scoring accuracy. Similarly, ensembles of MD-derived structures perform on average better than 75% of the individual crystal structures in terms of scoring accuracy at all inspected ensemble sizes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010APJAS..46..135E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010APJAS..46..135E"><span>Predictability of tropical cyclone events on intraseasonal timescales with the ECMWF monthly forecast model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Elsberry, Russell L.; Jordan, Mary S.; Vitart, Frederic</p> <p>2010-05-01</p> <p>The objective of this study is to provide evidence of predictability on intraseasonal time scales (10-30 days) for western North Pacific tropical cyclone formation and subsequent tracks using the 51-member ECMWF 32-day forecasts made once a week from 5 June through 25 December 2008.
Ensemble storms are defined by grouping ensemble member vortices whose positions are within a specified separation distance that is equal to 180 n mi at the initial forecast time t and increases linearly to 420 n mi at Day 14 and then is constant. The 12-h track segments are calculated with a Weighted-Mean Vector Motion technique in which the weighting factor is inversely proportional to the distance from the endpoint of the previous 12-h motion vector. Seventy-six percent of the ensemble storms had five or fewer member vortices. On average, the ensemble storms begin 2.5 days before the first entry of the Joint Typhoon Warning Center (JTWC) best-track file, tend to translate too slowly in the deep tropics, and persist for longer periods over land. A strict objective matching technique with the JTWC storms is combined with a second subjective procedure that is then applied to identify nearby ensemble storms that would indicate a greater likelihood of a tropical cyclone developing in that region with that track orientation. The ensemble storms identified in the ECMWF 32-day forecasts provided guidance on intraseasonal timescales of the formations and tracks of the three strongest typhoons and two other typhoons, but not for two early season typhoons and the late season Dolphin. Four strong tropical storms were predicted consistently over Week-1 through Week-4, as was one weak tropical storm. Two other weak tropical storms, three tropical cyclones that developed from precursor baroclinic systems, and three other tropical depressions were not predicted on intraseasonal timescales. 
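The separation-distance grouping criterion described above reduces to a simple threshold function; this sketch covers only the threshold test, with the rest of the grouping logic omitted:

```python
def separation_threshold(forecast_day: float) -> float:
    """Allowed vortex separation: 180 n mi at the initial forecast time,
    growing linearly to 420 n mi at Day 14, then constant."""
    return 180.0 + (420.0 - 180.0) * min(forecast_day, 14.0) / 14.0

def same_ensemble_storm(dist_nmi: float, forecast_day: float) -> bool:
    """Two member vortices are grouped into one ensemble storm when their
    separation is within the day-dependent threshold."""
    return dist_nmi <= separation_threshold(forecast_day)
```

For example, two vortices 250 n mi apart at Day 7 (threshold 300 n mi) would be grouped, while the same pair at the initial time (threshold 180 n mi) would not.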
At least for the strongest tropical cyclones during the peak season, the ECMWF 32-day ensemble provides guidance of formation and tracks on 10-30 day timescales.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011JChPh.134m4108W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011JChPh.134m4108W"><span>Toward canonical ensemble distribution from self-guided Langevin dynamics simulation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wu, Xiongwu; Brooks, Bernard R.</p> <p>2011-04-01</p> <p>This work derives a quantitative description of the conformational distribution in self-guided Langevin dynamics (SGLD) simulations. SGLD simulations employ guiding forces calculated from local average momentums to enhance low-frequency motion. This enhancement in low-frequency motion dramatically accelerates conformational search efficiency, but also induces certain perturbations in conformational distribution. Through the local averaging, we separate properties of molecular systems into low-frequency and high-frequency portions. The guiding force effect on the conformational distribution is quantitatively described using these low-frequency and high-frequency properties. This quantitative relation provides a way to convert between a canonical ensemble and a self-guided ensemble. Using example systems, we demonstrated how to utilize the relation to obtain canonical ensemble properties and conformational distributions from SGLD simulations. 
This development makes SGLD not only an efficient approach for conformational searching, but also an accurate means for conformational sampling.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JBO....23b5003L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JBO....23b5003L"><span>Topography and refractometry of sperm cells using spatial light interference microscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Lina; Kandel, Mikhail E.; Rubessa, Marcello; Schreiber, Sierra; Wheeler, Mathew B.; Popescu, Gabriel</p> <p>2018-02-01</p> <p>Characterization of spermatozoon viability is a common test in treating infertility. Recently, it has been shown that label-free, phase-sensitive imaging can provide a valuable alternative for this type of assay. We employ spatial light interference microscopy (SLIM) to perform high-accuracy single-cell phase imaging and decouple the average thickness and refractive index information for the population. This procedure was enabled by quantitative-phase imaging cells on media of two different refractive indices and using a numerical tool to remove the curvature from the cell tails. This way, we achieved ensemble averaging of topography and refractometry of 100 cells in each of the two groups. 
The results show that the thickness profile of the cell tail goes down to 150 nm and the refractive index can reach values of 1.6 close to the head.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.7172S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.7172S"><span>Post-processing method for wind speed ensemble forecast using wind speed and direction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sofie Eide, Siri; Bjørnar Bremnes, John; Steinsland, Ingelin</p> <p>2017-04-01</p> <p>Statistical methods are widely applied to enhance the quality of both deterministic and ensemble NWP forecasts. In many situations, like wind speed forecasting, most of the predictive information is contained in one variable in the NWP models. However, in statistical calibration of deterministic forecasts it is often seen that including more variables can further improve forecast skill. For ensembles, this is rarely taken advantage of, mainly because it is generally not straightforward to include multiple variables. In this study, it is demonstrated how multiple variables can be included in Bayesian model averaging (BMA) by using a flexible regression method for estimating the conditional means. The method is applied to wind speed forecasting at 204 Norwegian stations based on wind speed and direction forecasts from the ECMWF ensemble system. At about 85 % of the sites the ensemble forecasts were improved in terms of CRPS by adding wind direction as a predictor compared to only using wind speed. On average, the improvements were about 5 %, but mainly for moderate to strong wind situations. 
For weak wind speeds, adding wind direction had a more or less neutral impact.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AnGeo..29.1295S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AnGeo..29.1295S"><span>Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Soltanzadeh, I.; Azadi, M.; Vakili, G. A.</p> <p>2011-07-01</p> <p>Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that the application of BMA improved the reliability of the raw ensemble. 
Using the weighted ensemble mean as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhRvE..94a2109M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhRvE..94a2109M"><span>Langevin equation with fluctuating diffusivity: A two-state model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Miyaguchi, Tomoshige; Akimoto, Takuma; Yamamoto, Eiji</p> <p>2016-07-01</p> <p>Recently, anomalous subdiffusion, aging, and scatter of the diffusion coefficient have been reported in many single-particle-tracking experiments, though the origins of these behaviors are still elusive. Here, as a model to describe such phenomena, we investigate a Langevin equation with diffusivity fluctuating between a fast and a slow state. Namely, the diffusivity follows a dichotomous stochastic process. We assume that the sojourn time distributions of these two states are given by power laws. It is shown that, for a nonequilibrium ensemble, the ensemble-averaged mean-square displacement (MSD) shows transient subdiffusion. In contrast, the time-averaged MSD shows normal diffusion, but an effective diffusion coefficient transiently shows aging behavior. The propagator is non-Gaussian for short time and converges to a Gaussian distribution in a long-time limit; this convergence to Gaussian is extremely slow for some parameter values. For equilibrium ensembles, both ensemble-averaged and time-averaged MSDs show only normal diffusion and thus we cannot detect any traces of the fluctuating diffusivity with these MSDs. 
Therefore, as an alternative approach to characterizing the fluctuating diffusivity, the relative standard deviation (RSD) of the time-averaged MSD is utilized, and it is shown that the RSD exhibits slow relaxation as a signature of the long-time correlation in the fluctuating diffusivity. Furthermore, it is shown that the RSD is related to a non-Gaussian parameter of the propagator. To obtain these theoretical results, we develop a two-state renewal theory as an analytical tool.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFMGC21A1057R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFMGC21A1057R"><span>Cloudy Windows: What GCM Ensembles, Reanalyses and Observations Tell Us About Uncertainty in Greenland's Future Climate and Surface Melting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Reusch, D. B.</p> <p>2016-12-01</p> <p>Any analysis that wants to use a GCM-based scenario of future climate benefits from knowing how much uncertainty the GCM's inherent variability adds to the development of climate change predictions. This is especially relevant in the polar regions due to the potential of global impacts (e.g., sea level rise) from local (ice sheet) climate changes such as more frequent/intense surface melting. High-resolution, regional-scale models using GCMs for boundary/initial conditions in future scenarios inherit a measure of GCM-derived externally-driven uncertainty. We investigate these uncertainties for the Greenland ice sheet using the 30-member CESM1.0-CAM5-BGC Large Ensemble (CESMLE) for recent (1981-2000) and future (2081-2100, RCP 8.5) decades. Recent simulations are skill-tested against the ERA-Interim reanalysis and AWS observations, with results informing future scenarios. 
We focus on key variables influencing surface melting through decadal climatologies, nonlinear analysis of variability with self-organizing maps (SOMs), regional-scale modeling (Polar WRF), and simple melt models. Relative to the ensemble average, spatially averaged climatological July temperature anomalies over a Greenland ice-sheet/ocean domain are mostly between +/- 0.2 °C. The spatial average hides larger local anomalies of up to +/- 2 °C. The ensemble average itself is 2 °C cooler than ERA-Interim. SOMs extend our diagnostics by providing a concise, objective summary of model variability as a set of generalized patterns. For CESMLE, the SOM patterns summarize the variability of multiple realizations of climate. Changes in pattern frequency by ensemble member show the influence of initial conditions. For example, basic statistical analysis of pattern frequency yields interquartile ranges of 2-4% for individual patterns across the ensemble. In climate terms, this tells us about climate state variability through the range of the ensemble, a potentially significant source of melt-prediction uncertainty. SOMs can also capture the different trajectories of climate due to intramodel variability over time. Polar WRF provides higher resolution regional modeling with improved, polar-centric model physics. 
Simple melt models allow us to characterize impacts of the upstream uncertainties on estimates of surface melting.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27575073','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27575073"><span>Mixed-order phase transition in a minimal, diffusion-based spin model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fronczak, Agata; Fronczak, Piotr</p> <p>2016-07-01</p> <p>In this paper we exactly solve, within the grand canonical ensemble, a minimal spin model with the hybrid phase transition. We call the model diffusion based because its Hamiltonian can be recovered from a simple dynamic procedure, which can be seen as an equilibrium statistical mechanics representation of a biased random walk. We outline the derivation of the phase diagram of the model, in which the triple point has the hallmarks of the hybrid transition: discontinuity in the average magnetization and algebraically diverging susceptibilities. 
At this point, two second-order transition curves meet in equilibrium with the first-order curve, resulting in a prototypical mixed-order behavior.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016GMD.....9.1697P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016GMD.....9.1697P"><span>Large ensemble modeling of the last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert</p> <p>2016-05-01</p> <p>A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~20,000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. 
Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29630571','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29630571"><span>Training set extension for SVM ensemble in P300-speller with familiar face paradigm.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Qi; Shi, Kaiyang; Gao, Ning; Li, Jian; Bai, Ou</p> <p>2018-03-27</p> <p>P300-spellers are brain-computer interface (BCI)-based character input systems. Support vector machine (SVM) ensembles are trained with large-scale training sets and used as classifiers in these systems. However, the required large-scale training data necessitate a prolonged collection time for each subject, which results in data collected toward the end of the period being contaminated by the subject's fatigue. This study aimed to develop a method for acquiring more training data based on a collected small training set. A new method was developed in which two corresponding training datasets in two sequences are superposed and averaged to extend the training set. The proposed method was tested offline on a P300-speller with the familiar face paradigm. The SVM ensemble with extended training set achieved 85% classification accuracy for the averaged results of four sequences, and 100% for 11 sequences in the P300-speller. In contrast, the conventional SVM ensemble with non-extended training set achieved only 65% accuracy for four sequences, and 92% for 11 sequences. 
The SVM ensemble with extended training set achieves higher classification accuracies than the conventional SVM ensemble, which verifies that the proposed method effectively improves the classification performance of BCI P300-spellers, thus enhancing their practicality.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ThApC.129..243O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ThApC.129..243O"><span>Comparison of projection skills of deterministic ensemble methods using pseudo-simulation data generated from multivariate Gaussian distribution</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Oh, Seok-Geun; Suh, Myoung-Seok</p> <p>2017-07-01</p> <p>The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods improved generally as compared with the best member for each category. However, their projection skills are significantly affected by the simulation skills of the ensemble member. 
The weighted ensemble methods showed better projection skills than non-weighted methods, in particular, for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular, for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both the accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvE..96b2156R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvE..96b2156R"><span>Self-averaging and weak ergodicity breaking of diffusion in heterogeneous media</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Russian, Anna; Dentz, Marco; Gouze, Philippe</p> <p>2017-08-01</p> <p>Diffusion in natural and engineered media is quantified in terms of stochastic models for the heterogeneity-induced fluctuations of particle motion. However, fundamental properties such as ergodicity and self-averaging and their dependence on the disorder distribution are often not known. Here, we investigate these questions for diffusion in quenched disordered media characterized by spatially varying retardation properties, which account for particle retention due to physical or chemical interactions with the medium. 
We link self-averaging and ergodicity to the disorder sampling efficiency Rn, which quantifies the number of disorder realizations a noise ensemble may sample in a single disorder realization. Diffusion for disorder scenarios characterized by a finite mean transition time is ergodic and self-averaging for any dimension. The strength of the sample to sample fluctuations decreases with increasing spatial dimension. For an infinite mean transition time, particle motion is weakly ergodicity breaking in any dimension because single particles cannot sample the heterogeneity spectrum in finite time. However, even though the noise ensemble is not representative of the single-particle time statistics, subdiffusive motion in q ≥2 dimensions is self-averaging, which means that the noise ensemble in a single realization samples a representative part of the heterogeneity spectrum.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JHyd..497...80R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JHyd..497...80R"><span>Short-term ensemble streamflow forecasting using operationally-produced single-valued streamflow forecasts - A Hydrologic Model Output Statistics (HMOS) approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Regonda, Satish Kumar; Seo, Dong-Jun; Lawrence, Bill; Brown, James D.; Demargne, Julie</p> <p>2013-08-01</p> <p>We present a statistical procedure for generating short-term ensemble streamflow forecasts from single-valued, or deterministic, streamflow forecasts produced operationally by the U.S. National Weather Service (NWS) River Forecast Centers (RFCs). 
The resulting ensemble streamflow forecast provides an estimate of the predictive uncertainty associated with the single-valued forecast to support risk-based decision making by the forecasters and by the users of the forecast products, such as emergency managers. Forced by single-valued quantitative precipitation and temperature forecasts (QPF, QTF), the single-valued streamflow forecasts are produced at a 6-h time step nominally out to 5 days into the future. The single-valued streamflow forecasts reflect various run-time modifications, or "manual data assimilation", applied by the human forecasters in an attempt to reduce error from various sources in the end-to-end forecast process. The proposed procedure generates ensemble traces of streamflow from a parsimonious approximation of the conditional multivariate probability distribution of future streamflow given the single-valued streamflow forecast, QPF, and the most recent streamflow observation. For parameter estimation and evaluation, we used a multiyear archive of the single-valued river stage forecast produced operationally by the NWS Arkansas-Red River Basin River Forecast Center (ABRFC) in Tulsa, Oklahoma. As a by-product of parameter estimation, the procedure provides a categorical assessment of the effective lead time of the operational hydrologic forecasts for different QPF and forecast flow conditions. To evaluate the procedure, we carried out hindcasting experiments in dependent and cross-validation modes. The results indicate that the short-term streamflow ensemble hindcasts generated from the procedure are generally reliable within the effective lead time of the single-valued forecasts and well capture the skill of the single-valued forecasts. 
For smaller basins, however, the effective lead time is significantly reduced by short basin memory and reduced skill in the single-valued QPF.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_6 --> <div id="page_7" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="121"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4274230','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4274230"><span>Applying an Ensemble Classification Tree Approach to the Prediction of Completion of a 12-Step Facilitation Intervention with Stimulant Abusers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Doyle, Suzanne R.; Donovan, Dennis M.</p> <p>2014-01-01</p> 
<p>Aims: The purpose of this study was to explore the selection of predictor variables in the evaluation of drug treatment completion using an ensemble approach with classification trees. The basic methodology is reviewed and the subagging procedure of random subsampling is applied. Methods: Among 234 individuals with stimulant use disorders randomized to a 12-Step facilitative intervention shown to increase stimulant use abstinence, 67.52% were classified as treatment completers. A total of 122 baseline variables were used to identify factors associated with completion. Findings: The number of types of self-help activity involvement prior to treatment was the predominant predictor. Other effective predictors included better coping self-efficacy for substance use in high-risk situations, more days of prior meeting attendance, greater acceptance of the Disease model, higher confidence for not resuming use following discharge, lower ASI Drug and Alcohol composite scores, negative urine screens for cocaine or marijuana, and fewer employment problems. Conclusions: The application of an ensemble subsampling regression tree method utilizes the fact that classification trees are unstable but, on average, produce an improved prediction of the completion of drug abuse treatment. The results support the notion that there are early indicators of treatment completion that may allow for modification of approaches more tailored to the needs of individuals and potentially provide more successful treatment engagement and improved outcomes. 
PMID:25134038</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20070023651&hterms=ensemble&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Densemble','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20070023651&hterms=ensemble&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Densemble"><span>Ensemble Weight Enumerators for Protograph LDPC Codes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Divsalar, Dariush</p> <p>2006-01-01</p> <p>Recently, LDPC codes with projected-graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes that have a minimum distance that grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. 
In this paper, the derived ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20070031775&hterms=Database+uses&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3DDatabase%2Buses','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20070031775&hterms=Database+uses&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3DDatabase%2Buses"><span>New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Laher, Russ; Rector, John</p> <p>2004-01-01</p> <p>Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images, in order to generate high-level science and calibration products. The images are grouped into ensembles according to well documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored-procedures written in Informix SPL (SQL programming language). The new software is also more flexible because the ensemble creation rules are now stored in and read from newly defined database tables. 
This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA486216','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA486216"><span>General Procedure for Protective Cooling and Equipment Evaluations Relative to Heat and Cold Stress</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2008-09-01</p> <p>climatic chamber housing the manikin. The most widely accepted test procedures for the operation of a TM are published by the American Society for... describes measurement of the clo value of a complete clothing ensemble. It requires a TM surface temperature of 35°C and a climatic chamber controlled... "Clothing Using a Sweating Manikin" (1) measures the im of a complete clothing ensemble.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ThApC.tmp..465W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ThApC.tmp..465W"><span>Effect of land model ensemble versus coupled model ensemble on the simulation of precipitation climatology and variability</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wei, Jiangfeng; Dirmeyer, Paul A.; Yang, Zong-Liang; Chen, Haishan</p> <p>2017-10-01</p> <p>Through a series of model simulations with an atmospheric general circulation model coupled to three different land surface models, this study investigates the impacts of land model ensembles and coupled model ensemble on precipitation simulation. 
It is found that coupling an ensemble of land models to an atmospheric model has a very minor impact on the improvement of precipitation climatology and variability, but a simple ensemble average of the precipitation from three individually coupled land-atmosphere models produces better results, especially for precipitation variability. The generally weak impact of land processes on precipitation should be the main reason that the land model ensembles do not improve precipitation simulation. However, if there are large biases in the land surface model or land surface data set, correcting them could improve the simulated climate, especially for well-constrained regional climate simulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2001Natur.409..641V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2001Natur.409..641V"><span>Three key residues form a critical contact network in a protein folding transition state</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vendruscolo, Michele; Paci, Emanuele; Dobson, Christopher M.; Karplus, Martin</p> <p>2001-02-01</p> <p>Determining how a protein folds is a central problem in structural biology. The rate of folding of many proteins is determined by the transition state, so that knowledge of its structure is essential for understanding the protein folding reaction. Here we use mutation measurements, which determine the role of individual residues in stabilizing the transition state, as restraints in a Monte Carlo sampling procedure to determine the ensemble of structures that make up the transition state. 
We apply this approach to the experimental data for the 98-residue protein acylphosphatase, and obtain a transition-state ensemble with the native-state topology and an average root-mean-square deviation of 6 Å from the native structure. Although about 20 residues with small positional fluctuations form the structural core of this transition state, the native-like contact network of only three of these residues is sufficient to determine the overall fold of the protein. This result reveals how a nucleation mechanism involving a small number of key residues can lead to folding of a polypeptide chain to its unique native-state structure.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4916247','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4916247"><span>Distinct Fos-Expressing Neuronal Ensembles in the Ventromedial Prefrontal Cortex Mediate Food Reward and Extinction Memories</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Warren, Brandon L.; Mendoza, Michael P.; Cruz, Fabio C.; Leao, Rodrigo M.; Caprioli, Daniele; Rubio, F. Javier; Whitaker, Leslie R.; McPherson, Kylie B.; Bossert, Jennifer M.; Shaham, Yavin</p> <p>2016-01-01</p> <p>In operant learning, initial reward-associated memories are thought to be distinct from subsequent extinction-associated memories. Memories formed during operant learning are thought to be stored in “neuronal ensembles.” Thus, we hypothesize that different neuronal ensembles encode reward- and extinction-associated memories. Here, we examined prefrontal cortex neuronal ensembles involved in the recall of reward and extinction memories of food self-administration. 
We first trained rats to lever press for palatable food pellets for 7 d (1 h/d) and then exposed them to 0, 2, or 7 daily extinction sessions in which lever presses were not reinforced. Twenty-four hours after the last training or extinction session, we either exposed the rats to a short 15 min extinction test session or left them in their homecage (a control condition). We found maximal Fos (a neuronal activity marker) immunoreactivity in the ventral medial prefrontal cortex of rats that previously received 2 extinction sessions, suggesting that neuronal ensembles in this area encode extinction memories. We then used the Daun02 inactivation procedure to selectively disrupt ventral medial prefrontal cortex neuronal ensembles that were activated during the 15 min extinction session following 0 (no extinction) or 2 prior extinction sessions to determine the effects of inactivating the putative food reward and extinction ensembles, respectively, on subsequent nonreinforced food seeking 2 d later. Inactivation of the food reward ensembles decreased food seeking, whereas inactivation of the extinction ensembles increased food seeking. Our results indicate that distinct neuronal ensembles encoding operant reward and extinction memories intermingle within the same cortical area. SIGNIFICANCE STATEMENT A current popular hypothesis is that neuronal ensembles in different prefrontal cortex areas control reward-associated versus extinction-associated memories: the dorsal medial prefrontal cortex (mPFC) promotes reward seeking, whereas the ventral mPFC inhibits reward seeking. In this paper, we use the Daun02 chemogenetic inactivation procedure to demonstrate that Fos-expressing neuronal ensembles mediating both food reward and extinction memories intermingle within the same ventral mPFC area.
PMID:27335401</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27335401','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27335401"><span>Distinct Fos-Expressing Neuronal Ensembles in the Ventromedial Prefrontal Cortex Mediate Food Reward and Extinction Memories.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Warren, Brandon L; Mendoza, Michael P; Cruz, Fabio C; Leao, Rodrigo M; Caprioli, Daniele; Rubio, F Javier; Whitaker, Leslie R; McPherson, Kylie B; Bossert, Jennifer M; Shaham, Yavin; Hope, Bruce T</p> <p>2016-06-22</p> <p>In operant learning, initial reward-associated memories are thought to be distinct from subsequent extinction-associated memories. Memories formed during operant learning are thought to be stored in "neuronal ensembles." Thus, we hypothesize that different neuronal ensembles encode reward- and extinction-associated memories. Here, we examined prefrontal cortex neuronal ensembles involved in the recall of reward and extinction memories of food self-administration. We first trained rats to lever press for palatable food pellets for 7 d (1 h/d) and then exposed them to 0, 2, or 7 daily extinction sessions in which lever presses were not reinforced. Twenty-four hours after the last training or extinction session, we either exposed the rats to a short 15 min extinction test session or left them in their homecage (a control condition). We found maximal Fos (a neuronal activity marker) immunoreactivity in the ventral medial prefrontal cortex of rats that previously received 2 extinction sessions, suggesting that neuronal ensembles in this area encode extinction memories.
We then used the Daun02 inactivation procedure to selectively disrupt ventral medial prefrontal cortex neuronal ensembles that were activated during the 15 min extinction session following 0 (no extinction) or 2 prior extinction sessions to determine the effects of inactivating the putative food reward and extinction ensembles, respectively, on subsequent nonreinforced food seeking 2 d later. Inactivation of the food reward ensembles decreased food seeking, whereas inactivation of the extinction ensembles increased food seeking. Our results indicate that distinct neuronal ensembles encoding operant reward and extinction memories intermingle within the same cortical area. A current popular hypothesis is that neuronal ensembles in different prefrontal cortex areas control reward-associated versus extinction-associated memories: the dorsal medial prefrontal cortex (mPFC) promotes reward seeking, whereas the ventral mPFC inhibits reward seeking. In this paper, we use the Daun02 chemogenetic inactivation procedure to demonstrate that Fos-expressing neuronal ensembles mediating both food reward and extinction memories intermingle within the same ventral mPFC area. Copyright © 2016 the authors 0270-6474/16/366691-13$15.00/0.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1357499','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1357499"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Ortoleva, Peter J.</p> <p></p> <p>Illustrative embodiments of systems and methods for the deductive multiscale simulation of macromolecules are disclosed. 
In one illustrative embodiment, a deductive multiscale simulation method may include (i) constructing a set of order parameters that model one or more structural characteristics of a macromolecule, (ii) simulating an ensemble of atomistic configurations for the macromolecule using instantaneous values of the set of order parameters, (iii) simulating thermal-average forces and diffusivities for the ensemble of atomistic configurations, and (iv) evolving the set of order parameters via Langevin dynamics using the thermal-average forces and diffusivities.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.ars.usda.gov/research/publications/publication/?seqNo115=313464','TEKTRAN'); return false;" href="http://www.ars.usda.gov/research/publications/publication/?seqNo115=313464"><span>Optimal averaging of soil moisture predictions from ensemble land surface model simulations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ars.usda.gov/research/publications/find-a-publication/">USDA-ARS's Scientific Manuscript database</a></p> <p></p> <p></p> <p>The correct interpretation of ensemble information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance.
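Step (iv) of the multiscale method summarized above, evolving order parameters by Langevin dynamics under thermal-average forces, can be sketched in a few lines. Everything below (function names, the harmonic toy force, all parameter values) is a hypothetical illustration, not code from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_order_parameters(phi, force_fn, D, kT, dt, n_steps):
    """Euler-Maruyama integration of overdamped Langevin dynamics:
    dPhi = (D / kT) * f(Phi) * dt + sqrt(2 * D * dt) * xi."""
    traj = [phi.copy()]
    for _ in range(n_steps):
        f = force_fn(phi)                       # thermal-average force
        noise = rng.standard_normal(phi.shape)  # Gaussian random kick
        phi = phi + (D / kT) * f * dt + np.sqrt(2.0 * D * dt) * noise
        traj.append(phi.copy())
    return np.array(traj)

# Toy harmonic free-energy surface: f = -k * Phi pulls the order
# parameters toward zero (purely illustrative).
traj = evolve_order_parameters(
    phi=np.array([1.0, -2.0]), force_fn=lambda p: -1.0 * p,
    D=0.1, kT=1.0, dt=0.01, n_steps=1000)
```

In the actual method the force and diffusivity would come from step (iii)'s atomistic ensemble averages rather than a closed-form expression.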
Here we propose a new technique for obtaining such information using an instrumental variabl...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1421334','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1421334"><span>Investigation of short-term effective radiative forcing of fire aerosols over North America using nudged hindcast ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Liu, Yawen; Zhang, Kai; Qian, Yun</p> <p></p> <p>Aerosols from fire emissions can potentially have a large impact on clouds and radiation. However, fire aerosol sources are often intermittent, and their effect on weather and climate is difficult to quantify. Here we investigated the short-term effective radiative forcing of fire aerosols using the global aerosol–climate model Community Atmosphere Model version 5 (CAM5). Unlike previous studies, we used nudged hindcast ensembles to quantify the forcing uncertainty due to the chaotic response to small perturbations in the atmosphere state. Daily mean emissions from three fire inventories were used to consider the uncertainty in emission strength and injection heights. The simulated aerosol optical depth (AOD) and mass concentrations were evaluated against in situ measurements and reanalysis data. Overall, the results show that the model has reasonably good predictive skill. Short (10-day) nudged ensemble simulations were then performed with and without fire emissions to estimate the effective radiative forcing. Results show that fire aerosols have large effects on both liquid and ice clouds over the two selected regions in April 2009. Ensemble mean results show strong negative shortwave cloud radiative effect (SCRE) over almost the entirety of southern Mexico, with a 10-day regional mean value of –3.0 W m⁻².
Over the central US, the SCRE is positive in the north but negative in the south, and the regional mean SCRE is small (–0.56 W m⁻²). For the 10-day average, we found a large ensemble spread of regional mean shortwave cloud radiative effect over southern Mexico (15.6 % of the corresponding ensemble mean) and the central US (64.3 %), despite the regional mean AOD time series being almost indistinguishable during the 10-day period. Moreover, the ensemble spread is much larger when using daily averages instead of 10-day averages. In conclusion, this demonstrates the importance of using a large ensemble of simulations to estimate the short-term aerosol effective radiative forcing.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1421334-investigation-short-term-effective-radiative-forcing-fire-aerosols-over-north-america-using-nudged-hindcast-ensembles','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1421334-investigation-short-term-effective-radiative-forcing-fire-aerosols-over-north-america-using-nudged-hindcast-ensembles"><span>Investigation of short-term effective radiative forcing of fire aerosols over North America using nudged hindcast ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Liu, Yawen; Zhang, Kai; Qian, Yun; ...</p> <p>2018-01-03</p> <p>Aerosols from fire emissions can potentially have a large impact on clouds and radiation. However, fire aerosol sources are often intermittent, and their effect on weather and climate is difficult to quantify. Here we investigated the short-term effective radiative forcing of fire aerosols using the global aerosol–climate model Community Atmosphere Model version 5 (CAM5).
Unlike previous studies, we used nudged hindcast ensembles to quantify the forcing uncertainty due to the chaotic response to small perturbations in the atmosphere state. Daily mean emissions from three fire inventories were used to consider the uncertainty in emission strength and injection heights. The simulated aerosol optical depth (AOD) and mass concentrations were evaluated against in situ measurements and reanalysis data. Overall, the results show that the model has reasonably good predictive skill. Short (10-day) nudged ensemble simulations were then performed with and without fire emissions to estimate the effective radiative forcing. Results show that fire aerosols have large effects on both liquid and ice clouds over the two selected regions in April 2009. Ensemble mean results show strong negative shortwave cloud radiative effect (SCRE) over almost the entirety of southern Mexico, with a 10-day regional mean value of –3.0 W m⁻². Over the central US, the SCRE is positive in the north but negative in the south, and the regional mean SCRE is small (–0.56 W m⁻²). For the 10-day average, we found a large ensemble spread of regional mean shortwave cloud radiative effect over southern Mexico (15.6 % of the corresponding ensemble mean) and the central US (64.3 %), despite the regional mean AOD time series being almost indistinguishable during the 10-day period. Moreover, the ensemble spread is much larger when using daily averages instead of 10-day averages.
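The ensemble spread quoted in the fire-aerosol study above, expressed as a percentage of the ensemble mean, is simply the standard deviation of a regional-mean quantity across ensemble members divided by the mean. A minimal sketch, with made-up member values standing in for the paper's actual SCRE data:

```python
import numpy as np

def relative_ensemble_spread(members):
    """Spread (sample std across ensemble members) as a percentage
    of the absolute ensemble mean."""
    members = np.asarray(members, dtype=float)
    mean = members.mean()
    spread = members.std(ddof=1)  # sample standard deviation
    return 100.0 * abs(spread / mean)

# Hypothetical regional-mean SCRE values (W m^-2) from five members:
scre_members = [-3.2, -2.9, -3.1, -2.8, -3.0]
spread_pct = relative_ensemble_spread(scre_members)  # ~5.3 %
```

The same statistic computed from daily rather than 10-day averages would generally be larger, as the abstract notes, because short-window means retain more of the chaotic member-to-member variability.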
In conclusion, this demonstrates the importance of using a large ensemble of simulations to estimate the short-term aerosol effective radiative forcing.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25571123','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25571123"><span>Performance analysis of a Principal Component Analysis ensemble classifier for Emotiv headset P300 spellers.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M</p> <p>2014-01-01</p> <p>The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to the performance of traditional feature extraction and classifier methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3% that is significantly higher than that achieved using traditional methods.
Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFMSH43A4178M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFMSH43A4178M"><span>Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the Wsa-Enlil+Cone Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.</p> <p>2014-12-01</p> <p>Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimate of the spread, or uncertainty, in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits), and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half).
The average arrival time prediction was computed for each of the 28 ensembles predicting hits; compared with the actual arrival times, this gave an average absolute error of 10.0 hours (RMSE = 11.4 hours) across all 28 ensembles, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, prediction errors caused by the tested CME input parameters can still be ruled out. Prediction errors can also arise from ambient model parameters such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to the free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70035772','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70035772"><span>Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM) III: Scenario analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.</p> <p>2009-01-01</p> <p>An ensemble of 10 hydrological models was applied to the same set of land use change scenarios.
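The arrival-time error statistics reported for the CME ensembles above (average absolute error and RMSE between ensemble-average predictions and observed arrivals) reduce to a short computation. The hour values below are hypothetical, not data from the study:

```python
import numpy as np

def arrival_time_errors(predicted_hours, observed_hours):
    """Mean absolute error and RMSE of arrival-time predictions."""
    errors = np.asarray(predicted_hours, float) - np.asarray(observed_hours, float)
    mae = np.abs(errors).mean()
    rmse = np.sqrt((errors ** 2).mean())
    return mae, rmse

predicted = [10.0, 34.5, 52.0]   # ensemble-average predicted arrivals (h)
observed  = [ 4.0, 40.5, 60.0]   # observed arrivals (h)
mae, rmse = arrival_time_errors(predicted, observed)
```

Since RMSE weights large errors more heavily than MAE, the gap between the two (10.0 vs 11.4 hours in the study) hints at how uneven the per-event errors are.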
There was general agreement about the direction of changes in the mean annual discharge and 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was evident. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method based on a trimmed mean resulted in a single, somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29488366','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29488366"><span>Topography and refractometry of sperm cells using spatial light interference microscopy.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Lina; Kandel, Mikhail E; Rubessa, Marcello; Schreiber, Sierra; Wheeler, Mathew B; Popescu, Gabriel</p> <p>2018-02-01</p> <p>Characterization of spermatozoon viability is a common test in treating infertility. Recently, it has been shown that label-free, phase-sensitive imaging can provide a valuable alternative for this type of assay. We employ spatial light interference microscopy (SLIM) to perform high-accuracy single-cell phase imaging and decouple the average thickness and refractive index information for the population.
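The trimmed-mean deterministic ensemble method used in the hydrology study above simply discards the most extreme member predictions on each side before averaging. A minimal sketch with hypothetical member values (a real application would use the 10 models' discharge predictions):

```python
import numpy as np

def trimmed_ensemble_mean(predictions, k=1):
    """Mean of the ensemble after removing the k lowest and k highest
    member predictions (a symmetric trimmed mean)."""
    ordered = np.sort(np.asarray(predictions, dtype=float))
    if 2 * k >= ordered.size:
        raise ValueError("trimming would remove every ensemble member")
    return ordered[k:-k].mean() if k else ordered.mean()

# Hypothetical predicted discharge changes (%) from six models:
member_predictions = [2.0, 3.5, 4.0, 4.5, 5.0, 11.0]
estimate = trimmed_ensemble_mean(member_predictions, k=1)  # outlier 11.0 dropped
```

Trimming makes the combined estimate robust to a single outlier model, which is why it can be "somewhat more reliable" than a plain ensemble mean.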
This procedure was enabled by quantitative-phase imaging of cells in media of two different refractive indices and by using a numerical tool to remove the curvature from the cell tails. This way, we achieved ensemble averaging of topography and refractometry of 100 cells in each of the two groups. The results show that the thickness profile of the cell tail goes down to 150 nm and the refractive index can reach values of 1.6 close to the head. © 2018 Society of Photo-Optical Instrumentation Engineers (SPIE).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19289033','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19289033"><span>Fluorescence correlation spectroscopy: the case of subdiffusion.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lubelski, Ariel; Klafter, Joseph</p> <p>2009-03-18</p> <p>The theory of fluorescence correlation spectroscopy is revisited here for the case of subdiffusing molecules. Subdiffusion is assumed to stem from a continuous-time random walk process with a fat-tailed distribution of waiting times and can therefore be formulated in terms of a fractional diffusion equation (FDE). The FDE plays the central role in developing the fluorescence correlation spectroscopy expressions, analogous to the role played by the simple diffusion equation for regular systems. Due to the nonstationary nature of the continuous-time random walk/FDE, some interesting properties emerge that are amenable to experimental verification and may help in discriminating among subdiffusion mechanisms.
In particular, the current approach predicts (1) a strong dependence of correlation functions on the initial time (aging); (2) sensitivity of correlation functions to the averaging procedure, ensemble versus time averaging (ergodicity breaking); and (3) that the basic mean-squared displacement observable depends on how the mean is taken.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26068738','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26068738"><span>Quantifying Nucleic Acid Ensembles with X-ray Scattering Interferometry.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shi, Xuesong; Bonilla, Steve; Herschlag, Daniel; Harbury, Pehr</p> <p>2015-01-01</p> <p>The conformational ensemble of a macromolecule is the complete description of the macromolecule's solution structures and can reveal important aspects of macromolecular folding, recognition, and function. However, most experimental approaches determine an average or predominant structure, or follow transitions between states that each can only be described by an average structure. Ensembles have been extremely difficult to experimentally characterize. We present the unique advantages and capabilities of a new biophysical technique, X-ray scattering interferometry (XSI), for probing and quantifying structural ensembles. XSI measures the interference of scattered waves from two heavy metal probes attached site specifically to a macromolecule. A Fourier transform of the interference pattern gives the fractional abundance of different probe separations directly representing the multiple conformation states populated by the macromolecule. These probe-probe distance distributions can then be used to define the structural ensemble of the macromolecule.
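The contrast drawn in the subdiffusion paper above between ensemble and time averaging of the mean-squared displacement can be illustrated numerically. The toy below uses ordinary Brownian trajectories, for which the two estimators agree (the ergodic case); for a CTRW with fat-tailed waiting times, which this sketch does not implement, they would not:

```python
import numpy as np

rng = np.random.default_rng(1)
steps = rng.standard_normal((500, 1000))  # 500 unit-variance random walks
paths = np.cumsum(steps, axis=1)

def ensemble_msd(paths, lag):
    """Ensemble average of squared displacement over a fixed lag,
    taken across trajectories from a common starting time."""
    return np.mean((paths[:, lag] - paths[:, 0]) ** 2)

def time_averaged_msd(path, lag):
    """Time average of squared displacement over sliding windows
    of a single trajectory."""
    return np.mean((path[lag:] - path[:-lag]) ** 2)

lag = 50
ens = ensemble_msd(paths, lag)          # ~ lag for unit-variance steps
tav = time_averaged_msd(paths[0], lag)  # ~ lag too, since BM is ergodic
```

For the subdiffusive CTRW case the paper discusses, the time-averaged estimator remains a random quantity that scatters from trajectory to trajectory and disagrees with the ensemble average even for long measurements.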
XSI provides accurate, calibrated distances in a model-independent fashion with angstrom-scale sensitivity. XSI data can be compared in a straightforward manner to atomic coordinates determined experimentally or predicted by molecular dynamics simulations. We describe the conceptual framework for XSI and provide a detailed protocol for carrying out an XSI experiment. © 2015 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27028235','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27028235"><span>Probing RNA Native Conformational Ensembles with Structural Constraints.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fonseca, Rasmus; van den Bedem, Henry; Bernauer, Julie</p> <p>2016-05-01</p> <p>Noncoding ribonucleic acids (RNA) play a critical role in a wide variety of cellular processes, ranging from regulating gene expression to post-translational modification and protein synthesis. Their activity is modulated by highly dynamic exchanges between three-dimensional conformational substates, which are difficult to characterize experimentally and computationally. Here, we present an innovative, entirely kinematic computational procedure to efficiently explore the native ensemble of RNA molecules. Our procedure projects degrees of freedom onto a subspace of conformation space defined by distance constraints in the tertiary structure. The dimensionality reduction enables efficient exploration of conformational space. We show that the conformational distributions obtained with our method broadly sample the conformational landscape observed in NMR experiments.
Compared to normal mode analysis-based exploration, our procedure diffuses faster through the experimental ensemble while also accessing conformational substates with greater precision. Our results suggest that conformational sampling with a highly reduced but fully atomistic representation of noncoding RNA expresses key features of their dynamic nature.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFM.B33C0194K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFM.B33C0194K"><span>Large Scale Crop Classification in Ukraine using Multi-temporal Landsat-8 Images with Missing Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kussul, N.; Skakun, S.; Shelestov, A.; Lavreniuk, M. S.</p> <p>2014-12-01</p> <p>At present, there are no globally available Earth observation (EO) derived products on crop maps. This issue is being addressed within the Sentinel-2 for Agriculture initiative where a number of test sites (including from JECAM) participate to provide coherent protocols and best practices for various global agriculture systems, and subsequently crop maps from Sentinel-2. One of the problems in dealing with optical images for large territories (more than 10,000 sq. km) is the presence of clouds and shadows that result in missing values in the data sets. In this abstract, a new approach to classification of multi-temporal optical satellite imagery with missing data due to clouds and shadows is proposed. First, self-organizing Kohonen maps (SOMs) are used to restore missing pixel values in a time series of satellite imagery. SOMs are trained for each spectral band separately using non-missing values. Missing values are restored through a special procedure that substitutes an input sample's missing components with the neuron's weight coefficients.
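The restoration step just described, substituting a sample's missing components with the weights of its best-matching SOM neuron, can be sketched as follows. The tiny "trained map" below is hypothetical; in the actual approach a separate SOM is trained per spectral band on non-missing values:

```python
import numpy as np

def restore_missing(sample, som_weights):
    """Fill NaNs in `sample` with the corresponding weights of the
    best-matching SOM neuron, matched on non-missing components only."""
    sample = np.asarray(sample, dtype=float)
    observed = ~np.isnan(sample)
    # Distance to each neuron, computed over observed components only.
    dists = np.sqrt(((som_weights[:, observed] - sample[observed]) ** 2).sum(axis=1))
    bmu = som_weights[np.argmin(dists)]   # best-matching unit
    restored = sample.copy()
    restored[~observed] = bmu[~observed]  # substitute neuron weights
    return restored

# Hypothetical 2-neuron map over 3 acquisition dates for one band:
som = np.array([[0.1, 0.2, 0.3],
                [0.8, 0.9, 0.7]])
out = restore_missing([0.75, np.nan, 0.72], som)  # cloud-masked middle date
```

Matching on the observed components only is what lets a partially cloud-covered pixel still find a plausible temporal profile to borrow from.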
After missing data restoration, a supervised classification is performed for multi-temporal satellite images. For this, an ensemble of neural networks, in particular multilayer perceptrons (MLPs), is proposed. Ensembling of the neural networks is done by an average committee, i.e., the class probabilities are averaged over the classifiers and the class with the highest average posterior probability is selected for the given input sample. The proposed approach is applied to large-scale crop classification using multi-temporal Landsat-8 images for the JECAM test site in Ukraine [1-2]. It is shown that an ensemble of MLPs provides better performance than a single neural network in terms of overall classification accuracy and kappa coefficient. The obtained classification map is also validated through estimated crop and forest areas and comparison to official statistics. 1. A.Yu. Shelestov et al., "Geospatial information system for agricultural monitoring," Cybernetics Syst. Anal., vol. 49, no. 1, pp. 124-132, 2013. 2. J. Gallego et al., "Efficiency Assessment of Different Approaches to Crop Classification Based on Satellite and Ground Observations," J. Autom. Inform. Sci., vol. 44, no. 5, pp.
67-80, 2012.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_7 --> <div id="page_8" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="141"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.ars.usda.gov/research/publications/publication/?seqNo115=315197','TEKTRAN'); return false;" href="http://www.ars.usda.gov/research/publications/publication/?seqNo115=315197"><span>Optimal averaging of soil moisture predictions from ensemble land surface model simulations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ars.usda.gov/research/publications/find-a-publication/">USDA-ARS's Scientific Manuscript database</a></p> <p></p> <p></p> <p>The correct interpretation of ensemble soil moisture information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s
mutual error covariance. Here we propose a new technique for obtaining such information using an inst...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21531475','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21531475"><span>Classifier ensemble construction with rotation forest to improve medical diagnosis performance of machine learning algorithms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ozcift, Akin; Gulten, Arif</p> <p>2011-12-01</p> <p>Improving the accuracies of machine learning algorithms is vital in designing high-performance computer-aided diagnosis (CADx) systems. Research has shown that base classifier performance can be enhanced by ensemble classification strategies. In this study, we construct rotation forest (RF) ensemble classifiers of 30 machine learning algorithms to evaluate their classification performances using Parkinson's, diabetes and heart diseases from the literature. In the experiments, first the feature dimension of the three datasets is reduced using the correlation-based feature selection (CFS) algorithm. Second, the classification performances of the 30 machine learning algorithms are calculated for the three datasets. Third, 30 classifier ensembles are constructed based on the RF algorithm to assess the performances of the respective classifiers with the same disease data. All the experiments are carried out with a leave-one-out validation strategy and the performances of the 60 algorithms are evaluated using three metrics: classification accuracy (ACC), kappa error (KE) and area under the receiver operating characteristic (ROC) curve (AUC). The base classifiers achieved average accuracies of 72.15%, 77.52% and 84.43% for the diabetes, heart and Parkinson's datasets, respectively.
The RF classifier ensembles produced average accuracies of 74.47%, 80.49%, and 87.13% for the respective diseases. RF, a newly proposed classifier ensemble algorithm, might be used to improve the accuracy of various machine learning algorithms in the design of advanced CADx systems. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5048093','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5048093"><span>Ensemble Deep Learning for Biomedical Time Series Classification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>2016-01-01</p> <p>Ensemble learning has been shown, in both theory and practice, to improve generalization ability effectively. In this paper, we first briefly outline the current status of research on it. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.
PMID:27725828</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29495774','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29495774"><span>Equipartition terms in transition path ensemble: Insights from molecular dynamics simulations of alanine dipeptide.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Wenjin</p> <p>2018-02-28</p> <p>The transition path ensemble consists of reactive trajectories and possesses all the information necessary for understanding the mechanism and dynamics of important condensed phase processes. However, a quantitative description of the properties of the transition path ensemble is far from being established. Here, with numerical calculations on a model system, the equipartition terms defined in thermal equilibrium were estimated for the first time in the transition path ensemble. It was not surprising to observe that the energy was not equally distributed among all the coordinates. However, the energies distributed on a pair of conjugated coordinates remained equal. Higher energies were observed to be distributed on several coordinates, which are highly coupled to the reaction coordinate, while the rest were almost equally distributed. In addition, the ensemble-averaged energy on each coordinate as a function of time was also quantified.
These quantitative analyses on energy distributions provided new insights into the transition path ensemble.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27918894','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27918894"><span>Perception of ensemble statistics requires attention.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jackson-Nielsen, Molly; Cohen, Michael A; Pitts, Michael A</p> <p>2017-02-01</p> <p>To overcome inherent limitations in perceptual bandwidth, many aspects of the visual world are represented as summary statistics (e.g., average size, orientation, or density of objects). Here, we investigated the relationship between summary (ensemble) statistics and visual attention. Recently, it was claimed that one ensemble statistic in particular, color diversity, can be perceived without focal attention. However, a broader debate exists over the attentional requirements of conscious perception, and it is possible that some form of attention is necessary for ensemble perception. To test this idea, we employed a modified inattentional blindness paradigm and found that multiple types of summary statistics (color and size) often go unnoticed without attention. In addition, we found attentional costs in dual-task situations, further implicating a role for attention in statistical perception. Overall, we conclude that while visual ensembles may be processed efficiently, some amount of attention is necessary for conscious perception of ensemble statistics. Copyright © 2016 Elsevier Inc. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25810748','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25810748"><span>Genetic programming based ensemble system for microarray data classification.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Kun-Hong; Tong, Muchenxuan; Xie, Shu-Tong; Yee Ng, Vincent To</p> <p>2015-01-01</p> <p>Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a genetic programming (GP) based new ensemble system (named GPES), which can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework with three operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and they become more and more accurate in the evolutionary process. The feature selection technique and balanced subsampling technique are applied to increase the diversity in each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting data automatically. The performance of GPES is evaluated using five binary class and six multiclass microarray datasets, and results show that the algorithm can achieve better results in most cases compared with some other ensemble systems. 
By using elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4355811','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4355811"><span>Genetic Programming Based Ensemble System for Microarray Data Classification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Liu, Kun-Hong; Tong, Muchenxuan; Xie, Shu-Tong; Yee Ng, Vincent To</p> <p>2015-01-01</p> <p>Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a genetic programming (GP) based new ensemble system (named GPES), which can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework with three operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and they become more and more accurate in the evolutionary process. The feature selection technique and balanced subsampling technique are applied to increase the diversity in each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting data automatically. The performance of GPES is evaluated using five binary class and six multiclass microarray datasets, and results show that the algorithm can achieve better results in most cases compared with some other ensemble systems. By using elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved. 
PMID:25810748</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=ensemble&pg=3&id=EJ1020561','ERIC'); return false;" href="https://eric.ed.gov/?q=ensemble&pg=3&id=EJ1020561"><span>Evaluative and Behavioral Correlates to Intrarehearsal Achievement in High School Bands</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Montemayor, Mark</p> <p>2014-01-01</p> <p>The purpose of this study was to investigate relationships of teaching effectiveness, ensemble performance quality, and selected rehearsal procedures to various measures of intrarehearsal achievement (i.e., musical improvement exhibited by an ensemble during the course of a single rehearsal). Twenty-nine high school bands were observed in two…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AIPC.1323....6B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AIPC.1323....6B"><span>Fidelity decay of the two-level bosonic embedded ensembles of random matrices</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.</p> <p>2010-12-01</p> <p>We study the fidelity decay of the k-body embedded ensembles of random matrices for bosons distributed over two single-particle states. Fidelity is defined in terms of a reference Hamiltonian, which is a purely diagonal matrix consisting of a fixed one-body term and includes the diagonal of the perturbing k-body embedded ensemble matrix, and the perturbed Hamiltonian which includes the residual off-diagonal elements of the k-body interaction. 
This choice mimics the typical mean-field basis used in many calculations. We study separately the cases k = 2 and 3. We compute the ensemble-averaged fidelity decay as well as the fidelity of typical members with respect to an initial random state. Average fidelity displays a revival at the Heisenberg time, t = tH = 1, and a freeze in the fidelity decay, during which periodic revivals of period tH are observed. We obtain the relevant scaling properties with respect to the number of bosons and the strength of the perturbation. For certain members of the ensemble, we find that the period of the revivals during the freeze of fidelity occurs at fractional times of tH. These fractional periodic revivals are related to the dominance of specific k-body terms in the perturbation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22416042-canonical-ensemble-state-averaged-complete-active-space-self-consistent-field-sa-casscf-strategy-problems-more-diabatic-than-adiabatic-states-charge-bond-resonance-monomethine-cyanines','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22416042-canonical-ensemble-state-averaged-complete-active-space-self-consistent-field-sa-casscf-strategy-problems-more-diabatic-than-adiabatic-states-charge-bond-resonance-monomethine-cyanines"><span>Canonical-ensemble state-averaged complete active space self-consistent field (SA-CASSCF) strategy for problems with more diabatic than adiabatic states: Charge-bond resonance in monomethine cyanines</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Olsen, Seth, E-mail: seth.olsen@uq.edu.au</p> <p>2015-01-28</p> <p>This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. 
It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed (“microcanonical”) SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with “more diabatic than adiabatic” states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse “temperature,” unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint. The method is illustrated with a complete active space valence-bond (CASVB) analysis of the charge/bond resonance electronic structure of a monomethine cyanine: Michler’s hydrol blue.
The diabatic CASVB representation is shown to vary weakly for “temperatures” corresponding to visible photon energies. Canonical-ensemble SA-CASSCF enables the resolution of energies and couplings for all covalent and ionic CASVB structures contributing to the SA-CASSCF ensemble. The CASVB solution describes resonance of charge- and bond-localized electronic structures interacting via bridge resonance superexchange. The resonance couplings can be separated into channels associated with either covalent charge delocalization or chemical bonding interactions, with the latter significantly stronger than the former.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25637978','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25637978"><span>Canonical-ensemble state-averaged complete active space self-consistent field (SA-CASSCF) strategy for problems with more diabatic than adiabatic states: charge-bond resonance in monomethine cyanines.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Olsen, Seth</p> <p>2015-01-28</p> <p>This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix.
For uniformly distributed ("microcanonical") SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with "more diabatic than adiabatic" states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse "temperature," unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint. The method is illustrated with a complete active space valence-bond (CASVB) analysis of the charge/bond resonance electronic structure of a monomethine cyanine: Michler's hydrol blue. The diabatic CASVB representation is shown to vary weakly for "temperatures" corresponding to visible photon energies. Canonical-ensemble SA-CASSCF enables the resolution of energies and couplings for all covalent and ionic CASVB structures contributing to the SA-CASSCF ensemble. The CASVB solution describes resonance of charge- and bond-localized electronic structures interacting via bridge resonance superexchange. 
The resonance couplings can be separated into channels associated with either covalent charge delocalization or chemical bonding interactions, with the latter significantly stronger than the former.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20852898','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20852898"><span>Determining optimal clothing ensembles based on weather forecasts, with particular reference to outdoor winter military activities.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Morabito, Marco; Pavlinic, Daniela Z; Crisci, Alfonso; Capecchi, Valerio; Orlandini, Simone; Mekjavic, Igor B</p> <p>2011-07-01</p> <p>Military and civil defense personnel are often involved in complex activities in a variety of outdoor environments. The choice of appropriate clothing ensembles represents an important strategy to establish the success of a military mission. The main aim of this study was to compare the known clothing insulation of the garment ensembles worn by soldiers during two winter outdoor field trials (hike and guard duty) with the estimated optimal clothing thermal insulations recommended to maintain thermoneutrality, assessed by using two different biometeorological procedures. The overall aim was to assess the applicability of such biometeorological procedures to weather forecast systems, thereby developing a comprehensive biometeorological tool for military operational forecast purposes. Military trials were carried out during winter 2006 in Pokljuka (Slovenia) by Slovene Armed Forces personnel. Gastrointestinal temperature, heart rate and environmental parameters were measured with portable data acquisition systems. 
The thermal characteristics of the clothing ensembles worn by the soldiers, namely thermal resistance, were determined with a sweating thermal manikin. Results showed that the clothing ensemble worn by the military was appropriate during guard duty but generally inappropriate during the hike. A general under-estimation of the biometeorological forecast model in predicting the optimal clothing insulation value was observed and an additional post-processing calibration might further improve forecast accuracy. This study represents the first step in the development of a comprehensive personalized biometeorological forecast system aimed at improving recommendations regarding the optimal thermal insulation of military garment ensembles for winter activities.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFMSH53A2143M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFMSH53A2143M"><span>Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.</p> <p>2013-12-01</p> <p>Ensemble forecasting of coronal mass ejections (CMEs) provides significant information in that it provides an estimation of the spread or uncertainty in CME arrival time predictions due to uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL cone model available at the Community Coordinated Modeling Center (CCMC). 
SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters are generated using the CCMC Stereo CME Analysis Tool (StereoCAT) which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits), and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real-time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real-time. The observed CME arrival was within the range of ensemble arrival time predictions for 5 out of the 12 ensemble runs containing hits. The average arrival time prediction was computed for each of the twelve ensembles predicting hits and using the actual arrival time an average absolute error of 8.20 hours was found for all twelve ensembles, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by tested CME input parameters. Prediction errors can also arise from ambient model parameters such as the accuracy of the solar wind background, and other limitations. 
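The arrival-time bookkeeping described above (ensemble spread, hit-within-range checks, and the absolute error of the mean prediction) can be sketched in a few lines. This is an illustrative example with hypothetical numbers, not the study's data:

```python
import numpy as np

# Hypothetical ensemble of predicted CME shock arrival times (hours after
# CME onset) and the observed arrival time for one event.
predicted_hours = np.array([52.0, 55.5, 58.0, 61.0, 49.5])
observed_hours = 57.0

ensemble_mean = predicted_hours.mean()
spread = predicted_hours.max() - predicted_hours.min()        # range of predictions
# Was the observed arrival within the ensemble's predicted range?
hit_within_range = predicted_hours.min() <= observed_hours <= predicted_hours.max()
# Absolute error of the mean prediction, analogous to the per-event errors
# averaged in the text to obtain the reported 8.20-hour figure.
abs_error = abs(ensemble_mean - observed_hours)
```

Averaging `abs_error` over all predicted events gives the kind of ensemble-wide average absolute error quoted in the abstract.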
Additionally the ensemble modeling setup was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to free parameters for ambient solar wind model and CME.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27329703','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27329703"><span>Robustness of the far-field response of nonlocal plasmonic ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Tserkezis, Christos; Maack, Johan R; Liu, Zhaowei; Wubs, Martijn; Mortensen, N Asger</p> <p>2016-06-22</p> <p>Contrary to classical predictions, the optical response of few-nm plasmonic particles depends on particle size due to effects such as nonlocality and electron spill-out. Ensembles of such nanoparticles are therefore expected to exhibit a nonclassical inhomogeneous spectral broadening due to size distribution. For a normal distribution of free-electron nanoparticles, and within the simple nonlocal hydrodynamic Drude model, both the nonlocal blueshift and the plasmon linewidth are shown to be considerably affected by ensemble averaging. Size-variance effects tend however to conceal nonlocality to a lesser extent when the homogeneous size-dependent broadening of individual nanoparticles is taken into account, either through a local size-dependent damping model or through the Generalized Nonlocal Optical Response theory. The role of ensemble averaging is further explored in realistic distributions of isolated or weakly-interacting noble-metal nanoparticles, as encountered in experiments, while an analytical expression to evaluate the importance of inhomogeneous broadening through measurable quantities is developed. 
Our findings are independent of the specific nonclassical theory used, thus providing important insight into a large range of experiments on nanoscale and quantum plasmonics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EGUGA..1613434Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EGUGA..1613434Y"><span>A variational ensemble scheme for noisy image data assimilation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne</p> <p>2014-05-01</p> <p>Data assimilation techniques aim at recovering the trajectory of the system state variables, denoted X, over time from partially observed noisy measurements of the system, denoted Y. These procedures, which couple dynamics and noisy measurements of the system, indeed fulfill a twofold objective. On one hand, they provide a denoising - or reconstruction - procedure of the data through a given model framework and on the other hand, they provide estimation procedures for unknown parameters of the dynamics. A standard variational data assimilation problem can be formulated as the minimization of the following objective function with respect to the initial discrepancy, η, from the background initial guess: J(η(x)) = (1/2)‖X_b(x) − X(t_0, x)‖²_B + (1/2) ∫_{t_0}^{t_f} ‖H(X(t, x)) − Y(t, x)‖²_R dt, (1) where the observation operator H links the state variable and the measurements. The cost function can be interpreted as the log-likelihood function associated with the a posteriori distribution of the state given the past history of measurements and the background. In this work, we aim at studying ensemble based optimal control strategies for data assimilation. Such a formulation nicely combines the ingredients of ensemble Kalman filters and variational data assimilation (4DVar).
It is also formulated as the minimization of the objective function (1), but similarly to ensemble filters, it introduces in its objective function an empirical ensemble-based background-error covariance defined as: B ≡ ⟨(X_b − ⟨X_b⟩)(X_b − ⟨X_b⟩)ᵀ⟩, (2) where ⟨·⟩ denotes the ensemble average. Thus, it works in an off-line smoothing mode rather than on the fly like sequential filters. The resulting ensemble variational data assimilation technique corresponds to a relatively new family of methods [1,2,3]. It presents two main advantages: first, it no longer requires constructing the adjoint of the dynamics tangent linear operator, which is a considerable advantage with respect to the method's implementation, and second, it enables the handling of a flow-dependent background error covariance matrix that can be consistently adjusted to the background error. These nice advantages come however at the cost of a reduced rank modeling of the solution space. The B matrix is at most of rank N - 1 (N is the size of the ensemble), which is considerably lower than the dimension of state space. This rank deficiency may introduce spurious correlation errors, which particularly impact the quality of results associated with a high resolution computing grid. The common strategy to suppress these distant correlations for ensemble Kalman techniques is through localization procedures. In this paper we present key theoretical properties associated with different choices of methods involved in this setup, and experimentally compare the performance of several variations of an ensemble technique of interest with an incremental 4DVar method. The comparisons were carried out on the basis of a shallow-water model, with both synthetic data and real observations. We particularly addressed the potential pitfalls and advantages of the different methods. The results indicate an advantage in favor of the ensemble technique both in quality and computational cost when dealing with incomplete observations.
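The empirical ensemble-based covariance B defined in (2), and the rank deficiency discussed alongside it, can be sketched with NumPy. This is a minimal illustration under assumed dimensions, not the authors' implementation:

```python
import numpy as np

# Hypothetical setup: N ensemble members of an n-dimensional state, n >> N.
rng = np.random.default_rng(0)
n, N = 100, 8                       # state dimension, ensemble size
Xb = rng.standard_normal((n, N))    # background ensemble, one member per column

# Anomalies about the ensemble mean: Xb - <Xb>
A = Xb - Xb.mean(axis=1, keepdims=True)

# Empirical background-error covariance, B = <(Xb - <Xb>)(Xb - <Xb>)^T>
B = (A @ A.T) / (N - 1)

# The anomaly columns sum to zero, so rank(B) <= N - 1, far below the
# state dimension n: the rank deficiency noted in the abstract.
rank = np.linalg.matrix_rank(B)
```

In practice this low rank is why spurious long-range correlations appear and why localization procedures are applied.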
We highlight, as a premise of using ensemble variational assimilation, that the initial perturbation used to build the ensemble has to fit the physics of the observed phenomenon. We also apply the method to a stochastic shallow-water model which incorporates an uncertainty expression for the subgrid stress tensor related to the ensemble spread. References [1] A. C. Lorenc, The potential of the ensemble Kalman filter for NWP - a comparison with 4D-Var, Quart. J. Roy. Meteor. Soc., Vol. 129, pp. 3183-3203, 2003. [2] C. Liu, Q. Xiao, and B. Wang, An Ensemble-Based Four-Dimensional Variational Data Assimilation Scheme. Part I: Technical Formulation and Preliminary Test, Mon. Wea. Rev., Vol. 136(9), pp. 3363-3373, 2008. [3] M. Buehner, Ensemble-derived stationary and flow-dependent background-error covariances: Evaluation in a quasi-operational NWP setting, Quart. J. Roy. Meteor. Soc., Vol. 131(607), pp. 1013-1043, April 2005.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JSV...403..152B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JSV...403..152B"><span>An approach for the assessment of the statistical aspects of the SEA coupling loss factors and the vibrational energy transmission in complex aircraft structures: Experimental investigation and methods benchmark</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bouhaj, M.; von Estorff, O.; Peiffer, A.</p> <p>2017-09-01</p> <p>In the application of Statistical Energy Analysis "SEA" to complex assembled structures, a purely predictive model often exhibits errors. These errors are mainly due to a lack of accurate modelling of the power transmission mechanism described through the Coupling Loss Factors (CLF).
Experimental SEA (ESEA) is practically used by the automotive and aerospace industry to verify and update the model or to derive the CLFs for use in an SEA predictive model when analytical estimates cannot be made. This work is particularly motivated by the lack of procedures that allow an estimate to be made of the variance and confidence intervals of the statistical quantities when using the ESEA technique. The aim of this paper is to introduce procedures enabling a statistical description of measured power input, vibration energies and the derived SEA parameters. Particular emphasis is placed on the identification of structural CLFs of complex built-up structures comparing different methods. By adopting a Stochastic Energy Model (SEM), the ensemble average in ESEA is also addressed. For this purpose, expressions are obtained to randomly perturb the energy matrix elements and generate individual samples for the Monte Carlo (MC) technique applied to derive the ensemble averaged CLF. From results of ESEA tests conducted on an aircraft fuselage section, the SEM approach provides a better performance of estimated CLFs compared to classical matrix inversion methods. The expected range of CLF values and the synthesized energy are used as quality criteria of the matrix inversion, allowing to assess critical SEA subsystems, which might require a more refined statistical description of the excitation and the response fields. 
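The Monte Carlo idea described above, randomly perturbing the measured energy matrix and averaging the loss factors recovered from each sample, can be sketched as follows. The matrix, power vector, and 5% uncertainty level are hypothetical stand-ins, not the paper's SEM formulation:

```python
import numpy as np

# Illustrative Monte Carlo sketch: perturb a measured subsystem energy
# matrix and average the loss-factor-like solutions over the samples.
rng = np.random.default_rng(1)

E = np.array([[2.0, 0.3],           # hypothetical energy matrix (two subsystems)
              [0.4, 1.5]])
P = np.array([1.0, 0.8])            # hypothetical measured input powers

n_samples = 2000
rel_std = 0.05                      # assumed 5% relative uncertainty on E
samples = []
for _ in range(n_samples):
    # Perturb each energy matrix element with multiplicative Gaussian noise.
    E_pert = E * (1.0 + rel_std * rng.standard_normal(E.shape))
    samples.append(np.linalg.solve(E_pert, P))

eta_avg = np.mean(samples, axis=0)  # ensemble-averaged estimate
eta_std = np.std(samples, axis=0)   # spread, usable for confidence intervals
```

The sample spread `eta_std` is what supplies the variance and confidence-interval information that a single matrix inversion cannot.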
Moreover, the impact of the variance of the normalized vibration energy on the uncertainty of the derived CLFs is outlined.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70197818','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70197818"><span>A model ensemble for projecting multi‐decadal coastal cliff retreat during the 21st century</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Limber, Patrick; Barnard, Patrick; Vitousek, Sean; Erikson, Li</p> <p>2018-01-01</p> <p>Sea cliff retreat rates are expected to accelerate with rising sea levels during the 21st century. Here we develop an approach for a multi‐model ensemble that efficiently projects time‐averaged sea cliff retreat over multi‐decadal time scales and large (>50 km) spatial scales. The ensemble consists of five simple 1‐D models adapted from the literature that relate sea cliff retreat to wave impacts, sea level rise (SLR), historical cliff behavior, and cross‐shore profile geometry. Ensemble predictions are based on Monte Carlo simulations of each individual model, which account for the uncertainty of model parameters. The consensus of the individual models also weights uncertainty, such that uncertainty is greater when predictions from different models do not agree. A calibrated, but unvalidated, ensemble was applied to the 475 km‐long coastline of Southern California (USA), with 4 SLR scenarios of 0.5, 0.93, 1.5, and 2 m by 2100. Results suggest that future retreat rates could increase relative to mean historical rates by more than two‐fold for the higher SLR scenarios, causing an average total land loss of 19 – 41 m by 2100. However, model uncertainty ranges from +/‐ 5 – 15 m, reflecting the inherent difficulties of projecting cliff retreat over multiple decades.
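The pooled Monte Carlo ensemble idea above can be illustrated with a deliberately simplified two-model version (the paper uses five calibrated 1-D models; the models, rates, and sensitivities below are invented). Because parameter samples from all models are pooled, model disagreement automatically widens the reported percentile range.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs (illustrative only, not calibrated values):
slr = 1.0            # sea-level rise by 2100 [m]
hist_rate = 0.2      # historical cliff retreat rate [m/yr]
years = 80           # projection horizon [yr]

def model_extrapolate(rate, years, **_):
    # Model 1: simple extrapolation of the historical rate.
    return rate * years

def model_slr_scaled(rate, years, slr, sens):
    # Model 2: historical rate amplified by a SLR sensitivity factor.
    return rate * years * (1.0 + sens * slr)

def ensemble_projection(n=5000):
    samples = []
    for _ in range(n):
        r = rng.normal(hist_rate, 0.05)   # uncertain historical rate
        s = rng.uniform(0.5, 2.0)         # uncertain SLR sensitivity
        if rng.integers(2) == 0:          # pick a model at random
            samples.append(model_extrapolate(r, years))
        else:
            samples.append(model_slr_scaled(r, years, slr, s))
    x = np.array(samples)
    # Pooling across models folds model disagreement into the spread.
    return x.mean(), np.percentile(x, [5, 95])

mean, (p05, p95) = ensemble_projection()
```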
To enhance ensemble performance, future work could include weighting each model by its skill in matching observations in different morphological settings.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ESD.....9..153E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ESD.....9..153E"><span>Reliability ensemble averaging of 21st century projections of terrestrial net primary productivity reduces global and regional uncertainties</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Exbrayat, Jean-François; Bloom, A. Anthony; Falloon, Pete; Ito, Akihiko; Smallman, T. Luke; Williams, Mathew</p> <p>2018-02-01</p> <p>Multi-model averaging techniques provide opportunities to extract additional information from large ensembles of simulations. In particular, present-day model skill can be used to evaluate their potential performance in future climate simulations. Multi-model averaging methods have been used extensively in climate and hydrological sciences, but they have not been used to constrain projected plant productivity responses to climate change, which is a major uncertainty in Earth system modelling. Here, we use three global observationally orientated estimates of current net primary productivity (NPP) to perform a reliability ensemble averaging (REA) method using 30 global simulations of the 21st century change in NPP based on the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) <q>business as usual</q> emissions scenario. We find that the three REA methods support an increase in global NPP by the end of the 21st century (2095-2099) compared to 2001-2005, which is 2-3 % stronger than the ensemble ISIMIP mean value of 24.2 Pg C y-1.
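The core of REA-style weighting is simple: each model's projection is weighted by its agreement with present-day observations, and the weighted spread replaces the raw ensemble spread. A minimal sketch, with invented model values (the full REA method also includes a convergence criterion not shown here):

```python
import numpy as np

# Hypothetical present-day NPP values from 5 models, one observational
# estimate, and projected 21st-century changes (Pg C / yr; illustrative).
obs_npp = 55.0
model_npp_present = np.array([48.0, 54.0, 57.0, 62.0, 70.0])
model_npp_change = np.array([3.0, 6.0, 5.0, 8.0, -2.0])

def rea_average(change, present, obs, eps=1.0):
    # Performance-based weight: inverse distance to observations,
    # floored by eps so a near-perfect model does not get infinite weight.
    w = 1.0 / np.maximum(np.abs(present - obs), eps)
    w /= w.sum()
    mean = np.sum(w * change)
    # Weighted spread serves as the REA uncertainty estimate.
    spread = np.sqrt(np.sum(w * (change - mean) ** 2))
    return mean, spread

rea_mean, rea_spread = rea_average(model_npp_change, model_npp_present, obs_npp)
naive_spread = model_npp_change.std()
```

Down-weighting the poorly performing outlier model is what produces the uncertainty reduction reported in the abstract: here `rea_spread` comes out smaller than the unweighted `naive_spread`.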
Using REA also leads to a 45-68 % reduction in the global uncertainty of 21st century NPP projection, which strengthens confidence in the resilience of the CO2 fertilization effect to climate change. This reduction in uncertainty is especially clear for boreal ecosystems although it may be an artefact due to the lack of representation of nutrient limitations on NPP in most models. Conversely, the large uncertainty that remains on the sign of the response of NPP in semi-arid regions points to the need for better observations and model development in these regions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015EGUGA..1712188O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015EGUGA..1712188O"><span>The total probabilities from high-resolution ensemble forecasting of floods</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian</p> <p>2015-04-01</p> <p>Ensemble forecasting has for a long time been used in meteorological modelling, to give an indication of the uncertainty of the forecasts. As meteorological ensemble forecasts often show some bias and dispersion errors, there is a need for calibration and post-processing of the ensembles. Typical methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). To make optimal predictions of floods along the stream network in hydrology, we can easily use the ensemble members as input to the hydrological models. 
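The EMOS idea mentioned above — a single Gaussian predictive distribution whose mean and variance are affine functions of the ensemble mean and ensemble variance — can be sketched on synthetic data. This is a simplified, moment-based fit for illustration; operational EMOS estimates the coefficients by minimizing the CRPS (Gneiting et al., 2005). All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training set: 200 past cases with a 10-member ensemble
# that is biased (+1.5) and underdispersive, a common situation.
truth = rng.normal(10.0, 3.0, size=200)
ens = truth[:, None] + 1.5 + rng.normal(0.0, 1.0, size=(200, 10))

x_mean, x_var = ens.mean(axis=1), ens.var(axis=1, ddof=1)

# Simplified EMOS fit: a, b by least squares of truth on the ensemble
# mean; c, d by regressing squared residuals on the ensemble variance.
A = np.column_stack([np.ones_like(x_mean), x_mean])
(a, b), *_ = np.linalg.lstsq(A, truth, rcond=None)
resid2 = (truth - (a + b * x_mean)) ** 2
B = np.column_stack([np.ones_like(x_var), x_var])
(c, d), *_ = np.linalg.lstsq(B, resid2, rcond=None)

def emos_predict(members):
    """Calibrated predictive mean and variance for one forecast case."""
    m, v = members.mean(), members.var(ddof=1)
    return a + b * m, max(c + d * v, 1e-6)

mu, sigma2 = emos_predict(ens[0])
```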
However, some of the post-processing methods will need modifications when regionalizing the forecasts outside the calibration locations, as done by Hemri et al. (2013). We present a method for spatial regionalization of the post-processed forecasts based on EMOS and top-kriging (Skøien et al., 2006). We will also look into different methods for handling the non-normality of runoff and the effect on forecasts skills in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005. Skøien, J. O., Merz, R. and Blöschl, G.: Top-kriging - Geostatistics on stream networks, Hydrol. Earth Syst. 
Sci., 10(2), 277-287, 2006.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1421971-ensemble-averaged-structurefunction-relationship-nanocrystals-effective-superparamagnetic-fe-clusters-catalytically-active-pt-skin-ensemble-averaged-structure-function-relationship-composite-nanocrystals-magnetic-bcc-fe-clusters-catalytically-active-fcc-pt-skin','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1421971-ensemble-averaged-structurefunction-relationship-nanocrystals-effective-superparamagnetic-fe-clusters-catalytically-active-pt-skin-ensemble-averaged-structure-function-relationship-composite-nanocrystals-magnetic-bcc-fe-clusters-catalytically-active-fcc-pt-skin"><span>Ensemble averaged structure–function relationship for nanocrystals: effective superparamagnetic Fe clusters with catalytically active Pt skin [Ensemble averaged structure-function relationship for composite nanocrystals: magnetic bcc Fe clusters with catalytically active fcc Pt skin]</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Petkov, Valeri; Prasai, Binay; Shastri, Sarvjit</p> <p></p> <p>Practical applications require the production and usage of metallic nanocrystals (NCs) in large ensembles. Moreover, due to their cluster-bulk solid duality, metallic NCs exhibit a large degree of structural diversity. This poses the question as to what atomic-scale basis is to be used when the structure–function relationship for metallic NCs is to be quantified precisely. In this paper, we address the question by studying bi-functional Fe core-Pt skin type NCs optimized for practical applications. In particular, the cluster-like Fe core and skin-like Pt surface of the NCs exhibit superparamagnetic properties and a superb catalytic activity for the oxygen reduction reaction, respectively.
We determine the atomic-scale structure of the NCs by non-traditional resonant high-energy X-ray diffraction coupled to atomic pair distribution function analysis. Using the experimental structure data we explain the observed magnetic and catalytic behavior of the NCs in a quantitative manner. Lastly, we demonstrate that NC ensemble-averaged 3D positions of atoms obtained by advanced X-ray scattering techniques are a very proper basis for not only establishing but also quantifying the structure–function relationship for the increasingly complex metallic NCs explored for practical applications.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_8 --> <div id="page_9" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="161"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JChPh.148x1731M','NASAADS'); return false;"
href="http://adsabs.harvard.edu/abs/2018JChPh.148x1731M"><span>Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Matsunaga, Y.; Sugita, Y.</p> <p>2018-06-01</p> <p>A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states more robustly than learning from ensemble-averaged data, although there are limitations in estimating the transition probabilities between minor states.
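The two-step scheme — build an MSM from trajectory data, then refine it against an ensemble-averaged measurement — can be illustrated with a toy three-state chain. This is not the authors' machine learning procedure; the refinement step shown is a common maximum-entropy-style reweighting of the stationary populations, with an invented per-state observable and target average.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic discrete trajectory over 3 conformational states,
# standing in for a clustered MD trajectory.
T_true = np.array([[0.90, 0.08, 0.02],
                   [0.10, 0.85, 0.05],
                   [0.05, 0.10, 0.85]])
traj = [0]
for _ in range(20000):
    traj.append(rng.choice(3, p=T_true[traj[-1]]))
traj = np.array(traj)

# Step 1: MSM transition matrix from transition counts.
C = np.zeros((3, 3))
np.add.at(C, (traj[:-1], traj[1:]), 1.0)
T_est = C / C.sum(axis=1, keepdims=True)

# Stationary populations = left eigenvector of T with eigenvalue 1.
w, v = np.linalg.eig(T_est.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# Step 2: refine populations against an ensemble-averaged observable
# via reweighting p_i ∝ pi_i * exp(lam * o_i); lam is found by
# bisection so the reweighted average matches the measured value.
o = np.array([0.0, 1.0, 2.0])   # hypothetical per-state observable
o_exp = 1.1                     # hypothetical measured ensemble average

def reweighted_avg(lam):
    p = pi * np.exp(lam * o)
    p /= p.sum()
    return p @ o

lo, hi = -50.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if reweighted_avg(mid) < o_exp else (lo, mid)
lam = 0.5 * (lo + hi)
```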
We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27739015','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27739015"><span>Summary statistics in the attentional blink.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>McNair, Nicolas A; Goodbourn, Patrick T; Shone, Lauren T; Harris, Irina M</p> <p>2017-01-01</p> <p>We used the attentional blink (AB) paradigm to investigate the processing stage at which extraction of summary statistics from visual stimuli ("ensemble coding") occurs. Experiment 1 examined whether ensemble coding requires attentional engagement with the items in the ensemble. Participants performed two sequential tasks on each trial: gender discrimination of a single face (T1) and estimating the average emotional expression of an ensemble of four faces (or of a single face, as a control condition) as T2. Ensemble coding was affected by the AB when the tasks were separated by a short temporal lag. In Experiment 2, the order of the tasks was reversed to test whether ensemble coding requires more working-memory resources, and therefore induces a larger AB, than estimating the expression of a single face. Each condition produced a similar magnitude AB in the subsequent gender-discrimination T2 task. Experiment 3 additionally investigated whether the previous results were due to participants adopting a subsampling strategy during the ensemble-coding task. Contrary to this explanation, we found different patterns of performance in the ensemble-coding condition and a condition in which participants were instructed to focus on only a single face within an ensemble. 
Taken together, these findings suggest that ensemble coding emerges automatically as a result of the deployment of attentional resources across the ensemble of stimuli, prior to information being consolidated in working memory.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1225583','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1225583"><span>Random-matrix approach to the statistical compound nuclear reaction at low energies using the Monte-Carlo technique [PowerPoint]</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Kawano, Toshihiko</p> <p>2015-11-10</p> <p>This theoretical treatment of low-energy compound nucleus reactions begins with the Bohr hypothesis, with corrections, and various statistical theories. The author investigates the statistical properties of the scattering matrix containing a Gaussian Orthogonal Ensemble (GOE) Hamiltonian in the propagator. The following conclusions are reached: For all parameter values studied, the numerical average of MC-generated cross sections coincides with the result of the Verbaarschot, Weidenmueller, Zirnbauer triple-integral formula. Energy average and ensemble average agree reasonably well when the width Γ is one or two orders of magnitude larger than the average resonance spacing d. In the strong-absorption limit, the channel degree of freedom ν_a is 2.
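The agreement between ensemble averages and spectral ("energy") averages noted above can be demonstrated numerically for GOE matrices. A hedged sketch: compare the mean consecutive-spacing ratio computed across many small GOE matrices (ensemble average) with the same statistic computed within the spectrum of one large matrix (the analogue of an energy average). Matrix sizes and sample counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def goe(n):
    # Gaussian Orthogonal Ensemble: real symmetric matrix with Gaussian
    # entries, off-diagonal variance half the diagonal variance.
    a = rng.normal(size=(n, n))
    return (a + a.T) / np.sqrt(2.0 * n)

def spacing_ratio(eigs):
    # Mean consecutive-spacing ratio r = min(s_i, s_{i+1}) / max(...),
    # a standard spectral statistic (about 0.53 for GOE).
    s = np.diff(np.sort(eigs))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

# Ensemble average: many small matrices.
ens_avg = np.mean([spacing_ratio(np.linalg.eigvalsh(goe(100)))
                   for _ in range(200)])

# Spectral ("energy") average: one large matrix.
spec_avg = spacing_ratio(np.linalg.eigvalsh(goe(2000)))
```

The two averages coincide to within sampling noise, mirroring the ergodic behaviour exploited in the compound-nucleus analysis.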
The direct reaction increases the inelastic cross sections while the elastic cross section is reduced.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3970899','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3970899"><span>Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Xue, Yi; Skrynnikov, Nikolai R</p> <p>2014-01-01</p> <p>Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of a protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is an orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin.
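The key property of an ensemble-average restraint — that it perturbs each replica only weakly — follows directly from the chain rule: a harmonic penalty on the mean coordinates of N copies exerts only a 1/N share of the force on each copy. A minimal sketch (not the authors' protocol; spring constant and geometry are invented):

```python
import numpy as np

def ensemble_restraint_forces(coords, ref, k):
    """coords: (n_copies, n_atoms, 3) ensemble coordinates.
    Restraint energy U = 0.5 * k * |mean(coords) - ref|^2 acts on the
    ensemble AVERAGE, so each copy feels only a 1/n_copies share of the
    force and its individual dynamics are barely perturbed."""
    n = coords.shape[0]
    dev = coords.mean(axis=0) - ref
    f_each = -(k / n) * dev              # -dU/dx_i = -(k/n) * dev
    return np.broadcast_to(f_each, coords.shape), 0.5 * k * (dev ** 2).sum()

rng = np.random.default_rng(7)
ref = rng.normal(size=(5, 3))                       # 5-atom toy reference
coords = ref + rng.normal(0.0, 0.5, size=(8, 5, 3)) # 8 replicas
forces, energy = ensemble_restraint_forces(coords, ref, k=100.0)
```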
The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMNG41A0126R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMNG41A0126R"><span>Long-time Dynamics of Stochastic Wave Breaking</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Restrepo, J. M.; Ramirez, J. M.; Deike, L.; Melville, K.</p> <p>2017-12-01</p> <p>A stochastic parametrization is proposed for the dynamics of wave breaking of progressive water waves. The model is shown to agree with transport estimates, derived from the Lagrangian path of fluid parcels. These trajectories are obtained numerically and are shown to agree well with theory in the non-breaking regime. Of special interest is the impact of wave breaking on transport, momentum exchanges and energy dissipation, as well as dispersion of trajectories. 
The proposed model, ensemble averaged to larger time scales, is compared to ensemble averages of the numerically generated parcel dynamics, and is then used to capture energy dissipation and path dispersion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4623768','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4623768"><span>Systemic Risk Analysis on Reconstructed Economic and Financial Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Cimini, Giulio; Squartini, Tiziano; Garlaschelli, Diego; Gabrielli, Andrea</p> <p>2015-01-01</p> <p>We address a fundamental problem that is systematically encountered when modeling real-world complex systems of societal relevance: the limitedness of the information available. In the case of economic and financial networks, privacy issues severely limit the information that can be accessed and, as a consequence, the possibility of correctly estimating the resilience of these systems to events such as financial shocks, crises and cascade failures. Here we present an innovative method to reconstruct the structure of such partially-accessible systems, based on the knowledge of intrinsic node-specific properties and of the number of connections of only a limited subset of nodes. This information is used to calibrate an inference procedure based on fundamental concepts derived from statistical physics, which allows one to generate ensembles of directed weighted networks intended to represent the real system—so that the real network properties can be estimated as their average values within the ensemble. We test the method both on synthetic and empirical networks, focusing on the properties that are commonly used to measure systemic risk.
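The reconstruction idea — calibrate a connection probability from node-specific properties and an aggregate constraint, then estimate properties as averages over the sampled network ensemble — can be sketched as follows. This is a generic fitness-model illustration under invented data, not the authors' calibration: `x` stands in for node-specific properties (e.g., firm sizes) and `L_obs` for the known total number of links.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical node "fitness" values and known total link count.
x = rng.lognormal(0.0, 1.0, size=30)
L_obs = 120

def expected_links(z):
    # Fitness-model connection probability p_ij = z*x_i*x_j / (1 + z*x_i*x_j).
    p = z * np.outer(x, x) / (1.0 + z * np.outer(x, x))
    np.fill_diagonal(p, 0.0)   # no self-links
    return p.sum(), p

# Calibrate z by bisection (in log space) to match the observed link count.
lo, hi = 1e-9, 1e3
for _ in range(200):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if expected_links(mid)[0] < L_obs else (lo, mid)
z = np.sqrt(lo * hi)
_, p = expected_links(z)

def ensemble_average_property(n=500):
    # Estimate a topological property (here: largest in-degree) as its
    # average over sampled members of the network ensemble.
    vals = []
    for _ in range(n):
        A = (rng.random(p.shape) < p).astype(float)
        vals.append(A.sum(axis=0).max())
    return np.mean(vals)

avg_max_indeg = ensemble_average_property()
```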
Indeed, the method shows a remarkable robustness with respect to the limitedness of the information available, thus representing a valuable tool for gaining insights on privacy-protected economic and financial systems. PMID:26507849</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015NatSR...515758C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015NatSR...515758C"><span>Systemic Risk Analysis on Reconstructed Economic and Financial Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cimini, Giulio; Squartini, Tiziano; Garlaschelli, Diego; Gabrielli, Andrea</p> <p>2015-10-01</p> <p>We address a fundamental problem that is systematically encountered when modeling real-world complex systems of societal relevance: the limitedness of the information available. In the case of economic and financial networks, privacy issues severely limit the information that can be accessed and, as a consequence, the possibility of correctly estimating the resilience of these systems to events such as financial shocks, crises and cascade failures. Here we present an innovative method to reconstruct the structure of such partially-accessible systems, based on the knowledge of intrinsic node-specific properties and of the number of connections of only a limited subset of nodes. This information is used to calibrate an inference procedure based on fundamental concepts derived from statistical physics, which allows one to generate ensembles of directed weighted networks intended to represent the real system—so that the real network properties can be estimated as their average values within the ensemble. We test the method both on synthetic and empirical networks, focusing on the properties that are commonly used to measure systemic risk.
Indeed, the method shows a remarkable robustness with respect to the limitedness of the information available, thus representing a valuable tool for gaining insights on privacy-protected economic and financial systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013IJBm...57...91H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013IJBm...57...91H"><span>A respiratory alert model for the Shenandoah Valley, Virginia, USA</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hondula, David M.; Davis, Robert E.; Knight, David B.; Sitka, Luke J.; Enfield, Kyle; Gawtry, Stephen B.; Stenger, Phillip J.; Deaton, Michael L.; Normile, Caroline P.; Lee, Temple R.</p> <p>2013-01-01</p> <p>Respiratory morbidity (particularly COPD and asthma) can be influenced by short-term weather fluctuations that affect air quality and lung function. We developed a model to evaluate meteorological conditions associated with respiratory hospital admissions in the Shenandoah Valley of Virginia, USA. We generated ensembles of classification trees based on six years of respiratory-related hospital admissions (64,620 cases) and a suite of 83 potential environmental predictor variables. As our goal was to identify short-term weather linkages to high admission periods, the dependent variable was formulated as a binary classification of five-day moving average respiratory admission departures from the seasonal mean value. Accounting for seasonality removed the long-term apparent inverse relationship between temperature and admissions. We generated eight total models specific to the northern and southern portions of the valley for each season. All eight models demonstrate predictive skill (mean odds ratio = 3.635) when evaluated using a randomization procedure. 
The predictor variables selected by the ensembling algorithm vary across models, and both meteorological and air quality variables are included. In general, the models indicate complex linkages between respiratory health and environmental conditions that may be difficult to identify using more traditional approaches.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015EGUGA..1711630G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015EGUGA..1711630G"><span>Assessment of Mediterranean cyclones in the multi-ensemble EC-Earth</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gil, Victoria; Liberato, Margarida L. R.; Trigo, Isabel F.; Trigo, Ricardo M.</p> <p>2015-04-01</p> <p>The geographical location and characteristics of the Mediterranean basin make this a particularly active region in terms of cyclone forming and re-development (Trigo et al., 2002). The area is affected by moving depressions, most originated over the North Atlantic, which may later be forced by the orography surrounding the Mediterranean Sea and enhanced by the local source of moisture and heat fluxes over the Sea itself. The present work analyses the response of Mediterranean cyclones to climate change by means of 7 ensemble members of EC-EARTH model from CMIP5 (Fifth Coupled Model Intercomparison Project). We restrict the analysis to a relatively small subset (7 members) of the total number of ensemble members available in order to take into account only the members present in the three selected experiments for robust detection of extra-tropical cyclones in the Mediterranean (Trigo, 2006). 
We have applied the standard procedure by comparing a common 25-year period of the historical, present-day simulations (1980-2004) and the future climate simulations (2074-2098) forced by the RCP4.5 and RCP8.5 scenarios. The study area corresponds to the window between 10°W-42°E and 27°N-48°N. The analysis is performed with a focus on the spatial density distribution and main characteristics of the detected cyclones for the winter (DJF) and summer (JJA) seasons. Despite discrepancies in cyclone numbers when compared with the ERA-Interim common period (totals reduced to only 72% in DJF and 78% in JJA), the ensemble average matches the main spatial patterns relatively well. Results indicate that the ensemble average is characterized by a small decrease in winter (-3%) and a notable increase in summer (+10%) in the total number of cyclones, and that the individual ensemble members reveal small spread. This tendency is particularly pronounced under the high RCP8.5 emission scenario and more moderate under the RCP4.5 scenario. Additionally, an assessment of changes in the annual cycle suggests a slight decrease of the spring maximum and a pronounced increase in the summer maximum. The cyclone characteristics obtained from the ensemble members of EC-Earth indicate that summer cyclones will tend to be slower and less intense but will have a faster deepening phase. Part of the enhanced summer activity is in areas dominated by thermal lows. Trigo I.F., G. R. Bigg and T.D. Davies, 2002: Climatology of cyclogenesis mechanisms in the Mediterranean. Mon. Wea. Rev. 130, 549-569. Trigo, I. F., 2006: Climatology and Interannual Variability of Storm-Tracks in the Euro-Atlantic sector: a comparison between ERA-40 and NCEP/NCAR Reanalyses. Clim. Dynam., 26, 127-143.
Acknowledgements: This work was partially supported by FEDER (Fundo Europeu de Desenvolvimento Regional) funds through the COMPETE (Programa Operacional Factores de Competitividade) and by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project STORMEx FCOMP-01-0124-FEDER-019524 (PTDC/AAC-CLI/121339/2010).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015FNL....1450033L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015FNL....1450033L"><span>Intelligent Ensemble Forecasting System of Stock Market Fluctuations Based on Symetric and Asymetric Wavelet Functions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lahmiri, Salim; Boukadoum, Mounir</p> <p>2015-08-01</p> <p>We present a new ensemble system for stock market returns prediction where continuous wavelet transform (CWT) is used to analyze return series and backpropagation neural networks (BPNNs) for processing CWT-based coefficients, determining the optimal ensemble weights, and providing final forecasts. Particle swarm optimization (PSO) is used for finding optimal weights and biases for each BPNN. To capture symmetry/asymmetry in the underlying data, three wavelet functions with different shapes are adopted. The proposed ensemble system was tested on three Asian stock markets: The Hang Seng, KOSPI, and Taiwan stock market data. Three statistical metrics were used to evaluate the forecasting accuracy: mean of absolute errors (MAE), root mean of squared errors (RMSE), and mean of absolute deviations (MADs). Experimental results showed that our proposed ensemble system outperformed the individual CWT-ANN models, each with a different wavelet function. In addition, the proposed ensemble system outperformed the conventional autoregressive moving average process.
As a result, the proposed ensemble system is suitable to capture symmetry/asymmetry in financial data fluctuations for better prediction accuracy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25622192','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25622192"><span>Electrical coupling in ensembles of nonexcitable cells: modeling the spatial map of single cell potentials.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cervera, Javier; Manzanares, Jose Antonio; Mafe, Salvador</p> <p>2015-02-19</p> <p>We analyze the coupling of model nonexcitable (non-neural) cells assuming that the cell membrane potential is the basic individual property. We obtain this potential on the basis of the inward and outward rectifying voltage-gated channels characteristic of cell membranes. We concentrate on the electrical coupling of a cell ensemble rather than on the biochemical and mechanical characteristics of the individual cells, obtain the map of single cell potentials using simple assumptions, and suggest procedures to collectively modify this spatial map. The response of the cell ensemble to an external perturbation and the consequences of cell isolation, heterogeneity, and ensemble size are also analyzed. The results suggest that simple coupling mechanisms can be significant for the biophysical chemistry of model biomolecular ensembles. 
In particular, the spatiotemporal map of single cell potentials should be relevant for the uptake and distribution of charged nanoparticles over model cell ensembles and the collective properties of droplet networks incorporating protein ion channels inserted in lipid bilayers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4481969','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4481969"><span>Metal Oxide Gas Sensor Drift Compensation Using a Two-Dimensional Classifier Ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Liu, Hang; Chu, Renzhi; Tang, Zhenan</p> <p>2015-01-01</p> <p>Sensor drift is the most challenging problem in gas sensing at present. We propose a novel two-dimensional classifier ensemble strategy to solve the gas discrimination problem, regardless of the gas concentration, with high accuracy over extended periods of time. This strategy is appropriate for multi-class classifiers that consist of combinations of pairwise classifiers, such as support vector machines. We compare the performance of the strategy with those of competing methods in an experiment based on a public dataset that was compiled over a period of three years. The experimental results demonstrate that the two-dimensional ensemble outperforms the other methods considered. Furthermore, we propose a pre-aging process inspired by that applied to the sensors to improve the stability of the classifier ensemble. The experimental results demonstrate that the weight of each multi-class classifier model in the ensemble remains fairly static before and after the addition of new classifier models to the ensemble, when a pre-aging procedure is applied. 
PMID:25942640</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28419025','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28419025"><span>Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G</p> <p>2017-09-01</p> <p>To investigate whether the use of ensemble learning algorithms improves physical activity recognition accuracy compared to single classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. 
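The weighted-majority-vote fusion step described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code; the activity labels and weights are invented for the example.

```python
from collections import defaultdict

def weighted_majority_vote(predictions, weights):
    """Fuse class labels from several classifiers: each classifier's vote
    counts with a weight (e.g. derived from its validation accuracy)."""
    scores = defaultdict(float)
    for label, weight in zip(predictions, weights):
        scores[label] += weight
    return max(scores, key=scores.get)

# Three hypothetical activity classifiers vote on one accelerometer window:
fused = weighted_majority_vote(["walk", "run", "walk"], [0.6, 0.9, 0.5])
print(fused)  # walk (combined weight 1.1 beats 0.9)
```

The same skeleton accommodates other fusion rules by changing how `scores` is accumulated.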
Among the conventional ensemble methods, random forest models provided consistently high activity recognition accuracy; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018WtFor..33..369V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018WtFor..33..369V"><span>Skill of Global Raw and Postprocessed Ensemble Predictions of Rainfall over Northern Tropical Africa</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vogel, Peter; Knippertz, Peter; Fink, Andreas H.; Schlueter, Andreas; Gneiting, Tilmann</p> <p>2018-04-01</p> <p>Accumulated precipitation forecasts are of high socioeconomic importance for agriculturally dominated societies in northern tropical Africa. In this study, we analyze the performance of nine operational global ensemble prediction systems (EPSs) relative to climatology-based forecasts for 1- to 5-day accumulated precipitation based on the monsoon seasons 2007-2014 for three regions within northern tropical Africa. To assess the full potential of raw ensemble forecasts across spatial scales, we apply state-of-the-art statistical postprocessing methods in the form of Bayesian Model Averaging (BMA) and Ensemble Model Output Statistics (EMOS), and verify against station and spatially aggregated, satellite-based gridded observations. Raw ensemble forecasts are uncalibrated, unreliable, and underperform relative to climatology, independently of region, accumulation time, monsoon season, and ensemble. 
Differences between raw ensemble and climatological forecasts are large, and partly stem from poor predictions of low precipitation amounts. BMA and EMOS postprocessed forecasts are calibrated, reliable, and strongly improve on the raw ensembles but, somewhat disappointingly, typically do not outperform climatology. Most EPSs exhibit slight improvements over the period 2007-2014, but overall have little added value compared to climatology. We suspect that the parametrization of convection is a potential cause for the sobering lack of ensemble forecast skill in a region dominated by mesoscale convective systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2615214','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2615214"><span>Similarity Measures for Protein Ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper</p> <p>2009-01-01</p> <p>Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformations. However, instead of examining individual conformations it is in many cases more relevant to analyse ensembles of conformations that have been obtained either through experiments or from methods such as molecular dynamics simulations. Here we present three approaches that can be used to compare conformational ensembles in the same way as the root mean square deviation is used to compare individual pairs of structures. 
The methods are based on the estimation of the probability distributions underlying the ensembles and subsequent comparison of these distributions. We first validate the methods using a synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during structure determination of proteins, and find that an ensemble refinement method is able to recover the correct distribution of conformations better than standard single-molecule refinement. PMID:19145244</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1544146','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1544146"><span>Relation between native ensembles and experimental structures of proteins</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Best, Robert B.; Lindorff-Larsen, Kresten; DePristo, Mark A.; Vendruscolo, Michele</p> <p>2006-01-01</p> <p>Different experimental structures of the same protein or of proteins with high sequence similarity contain many small variations. Here we construct ensembles of “high-sequence similarity Protein Data Bank” (HSP) structures and consider the extent to which such ensembles represent the structural heterogeneity of the native state in solution. We find that different NMR measurements probing structure and dynamics of given proteins in solution, including order parameters, scalar couplings, and residual dipolar couplings, are remarkably well reproduced by their respective high-sequence similarity Protein Data Bank ensembles; moreover, we show that the effects of uncertainties in structure determination are insufficient to explain the results. 
These results highlight the importance of accounting for native-state protein dynamics in making comparisons with ensemble-averaged experimental data and suggest that even a modest number of structures of a protein determined under different conditions, or with small variations in sequence, capture a representative subset of the true native-state ensemble. PMID:16829580</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28292249','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28292249"><span>A Feature and Algorithm Selection Method for Improving the Prediction of Protein Structural Class.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ni, Qianwu; Chen, Lei</p> <p>2017-01-01</p> <p>Correct prediction of protein structural class is beneficial to the investigation of protein functions, regulations and interactions. In recent years, several computational methods have been proposed in this regard. However, based on various features, it is still a great challenge to select a proper classification algorithm and extract the essential features to participate in classification. In this study, a feature and algorithm selection method was presented for improving the accuracy of protein structural class prediction. The amino acid compositions and physicochemical features were adopted to represent features and thirty-eight machine learning algorithms collected in Weka were employed. All features were first analyzed by a feature selection method, minimum redundancy maximum relevance (mRMR), producing a feature list. Then, several feature sets were constructed by adding features in the list one by one. For each feature set, thirty-eight algorithms were executed on a dataset, in which proteins were represented by features in the set. 
The predicted classes yielded by these algorithms and the true class of each protein were collected to construct a dataset, which was analyzed by the mRMR method, yielding an algorithm list. Algorithms were then taken from this list one by one to build ensemble prediction models. Finally, we selected the ensemble prediction model with the best performance as the optimal one. Experimental results indicate that the constructed model is much superior to models using a single algorithm and to models that adopt only the feature selection procedure or only the algorithm selection procedure. Both the feature selection procedure and the algorithm selection procedure are helpful for building an ensemble prediction model that yields better performance. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AdSR...14..227L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AdSR...14..227L"><span>Wind power application research on the fusion of the determination and ensemble prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lan, Shi; Lina, Xu; Yuzhu, Hao</p> <p>2017-07-01</p> <p>The fused wind speed product for the wind farm is designed using ensemble prediction wind speed products from the European Centre for Medium-Range Weather Forecasts (ECMWF) and professional numerical wind power model products based on Mesoscale Model 5 (MM5) and the Beijing Rapid Update Cycle (BJ-RUC), which are suitable for short-term wind power forecasting and electric dispatch. A single-valued forecast is formed by calculating different ensemble statistics of the Bayesian probabilistic forecast that represents the uncertainty of the ECMWF ensemble prediction. 
An autoregressive integrated moving average (ARIMA) model is used to improve the time resolution of the single-valued forecast, and, based on Bayesian model averaging (BMA) and the deterministic numerical model prediction, the optimal wind speed forecasting curve and confidence interval are provided. The results show that the fused forecast clearly improves accuracy relative to the existing numerical forecasting products. Compared with the 0-24 h existing deterministic forecast in the validation period, the mean absolute error (MAE) is decreased by 24.3 % and the correlation coefficient (R) is increased by 12.5 %. In comparison with the ECMWF ensemble forecast, the MAE is reduced by 11.7 % and R is increased by 14.5 %. Additionally, the MAE did not increase with forecast lead time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18793021','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18793021"><span>Temporal correlation functions of concentration fluctuations: an anomalous case.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lubelski, Ariel; Klafter, Joseph</p> <p>2008-10-09</p> <p>We calculate, within the framework of the continuous time random walk (CTRW) model, multiparticle temporal correlation functions of concentration fluctuations (CCF) in systems that display anomalous subdiffusion. The subdiffusion stems from the nonstationary nature of the CTRW waiting times, which also lead to aging and ergodicity breaking. Due to aging, a system of diffusing particles tends to slow down as time progresses, and therefore, the temporal correlation functions strongly depend on the initial time of measurement. As a consequence, time averages of the CCF differ from ensemble averages, thereby displaying ergodicity breaking. 
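The ensemble-average/time-average distinction behind this ergodicity breaking can be made concrete with a toy CTRW simulation. This is a sketch, not the authors' code; the Pareto waiting-time law and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ctrw_path(t_max, alpha=0.7, n_grid=100):
    """One CTRW realization: heavy-tailed waiting times (mean diverges for
    alpha < 1, producing aging), unbiased unit jumps, sampled on a time grid."""
    t, x = 0.0, 0.0
    jump_times, positions = [0.0], [0.0]
    while t < t_max:
        t += 1.0 + rng.pareto(alpha)      # waiting time before the next jump
        x += rng.choice((-1.0, 1.0))      # unbiased unit jump
        jump_times.append(t)
        positions.append(x)
    grid = np.linspace(0.0, t_max, n_grid)
    return np.asarray(positions)[np.searchsorted(jump_times, grid, side="right") - 1]

paths = np.array([ctrw_path(2000.0) for _ in range(200)])

# Ensemble-averaged MSD: average x(t)^2 over realizations at fixed clock time t.
ea_msd = (paths ** 2).mean(axis=0)

# Time-averaged MSD of one realization at a fixed lag (sliding window):
def ta_msd(path, lag):
    return float(np.mean((path[lag:] - path[:-lag]) ** 2))

# For an ergodic process these would agree; for an aging CTRW the time averages
# scatter strongly from trajectory to trajectory and differ from ea_msd.
ta_values = [ta_msd(p, lag=10) for p in paths]
print(ea_msd[-1], np.mean(ta_values), np.std(ta_values))
```

Printing the spread of `ta_values` against `ea_msd` illustrates the disparity between the two kinds of average that the abstract describes.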
We provide a simple example that demonstrates the difference between these two averages, a difference that might be amenable to experimental tests. We focus on the case of ensemble averaging and assume that the preparation time of the system coincides with the starting time of the measurement. Our analytical calculations are supported by computer simulations based on the CTRW model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3156487','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3156487"><span>The Upper and Lower Bounds of the Prediction Accuracies of Ensemble Methods for Binary Classification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Wang, Xueyi; Davidson, Nicholas J.</p> <p>2011-01-01</p> <p>Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracies of ensemble methods for binary classification that have been missed or misinterpreted in the previous literature. First we show the upper and lower bounds of the prediction accuracies (i.e. the best and worst possible prediction accuracies) of ensemble methods. Next we show that an ensemble method can achieve > 0.5 prediction accuracy even when the individual classifiers each have < 0.5 prediction accuracy. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify these results and show that it is hard to achieve the upper- and lower-bound accuracies with random individual classifiers, so better algorithms need to be developed. 
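The counterintuitive claim above, that a majority-vote ensemble can exceed 0.5 accuracy while every member is below 0.5, is easy to verify with a constructed toy example of anti-correlated errors (the labels below are invented, not from the paper):

```python
import numpy as np

y_true = np.ones(5, dtype=int)        # true label is 1 for all five cases
# Each classifier is correct on only 2 of 5 cases (accuracy 0.4), but the
# errors are spread so that three cases each receive two correct votes.
preds = np.array([
    [1, 1, 0, 0, 0],                  # correct on cases 0 and 1
    [1, 0, 1, 0, 0],                  # correct on cases 0 and 2
    [0, 1, 1, 0, 0],                  # correct on cases 1 and 2
])
individual_acc = (preds == y_true).mean(axis=1)
majority = (preds.sum(axis=0) >= 2).astype(int)   # majority of three votes
ensemble_acc = float((majority == y_true).mean())
print(individual_acc, ensemble_acc)   # [0.4 0.4 0.4] 0.6
```

The converse construction (correlated errors dragging the ensemble below its members) gives the intuition for the lower bound.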
PMID:21853162</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_9 --> <div id="page_10" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="181"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H21D1482L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H21D1482L"><span>Enhancing Flood Prediction Reliability Using Bayesian Model Averaging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Z.; Merwade, V.</p> <p>2017-12-01</p> <p>Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. 
Compared to relying on the prediction from a single model simulation, using an ensemble of predictions that accounts for uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global-based BMA (BMA_G) prediction, which is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JChPh.146x4112D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JChPh.146x4112D"><span>Girsanov reweighting for path ensembles and Markov state models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Donati, L.; Hartmann, C.; Keller, B. G.</p> <p>2017-06-01</p> <p>The sensitivity of molecular dynamics to changes in the potential energy function plays an important role in understanding the dynamics and function of complex molecules. 
We present a method to obtain path ensemble averages of a perturbed dynamics from a set of paths generated by a reference dynamics. It is based on the concept of a path probability measure and on the Girsanov theorem, a result from stochastic analysis used to estimate a change of measure of a path ensemble. Since Markov state models (MSMs) of the molecular dynamics can be formulated as a combined phase-space and path ensemble average, the method can be extended to reweight MSMs by combining it with a reweighting of the Boltzmann distribution. We demonstrate how to efficiently implement the Girsanov reweighting in a molecular dynamics simulation program by calculating parts of the reweighting factor "on the fly" during the simulation, and we benchmark the method on test systems ranging from a two-dimensional diffusion process and an artificial many-body system to alanine dipeptide and valine dipeptide in implicit and explicit water. The method can be used to study the sensitivity of molecular dynamics to external perturbations as well as to reweight trajectories generated by enhanced sampling schemes to the original dynamics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28763673','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28763673"><span>Performance assessment of individual and ensemble data-mining techniques for gully erosion modeling.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pourghasemi, Hamid Reza; Yousefi, Saleh; Kornejady, Aiding; Cerdà, Artemi</p> <p>2017-12-31</p> <p>Gully erosion is identified as an important sediment source in a range of environments and plays a conclusive role in the redistribution of eroded soils on a slope. Hence, addressing the spatial occurrence pattern of this phenomenon is very important. 
Different ensemble models and their single counterparts, mostly data mining methods, have been used for gully erosion susceptibility mapping; however, their calibration and validation procedures need to be thoroughly addressed. The current study presents a series of individual and ensemble data mining methods including artificial neural network (ANN), support vector machine (SVM), maximum entropy (ME), ANN-SVM, ANN-ME, and SVM-ME to map gully erosion susceptibility in the Aghemam watershed, Iran. To this aim, a gully inventory map along with sixteen gully conditioning factors was used. Randomly partitioned 70%:30% sets were used to assess the goodness-of-fit and prediction power of the models. The robustness, defined as the stability of models' performance in response to changes in the dataset, was assessed through three training/test replicates. The preliminary statistical tests showed that the ANN has the highest concordance and spatial differentiation, with a chi-square value of 36,656 at the 95% confidence level, while the ME appeared to have the lowest concordance (1772). The ME model showed an impractical result, with 45% of the study area classed as highly susceptible to gullying; in contrast, ANN-SVM gave a practical result, focusing on only 34% of the study area. Through all three replicates, the ANN-SVM ensemble showed the highest goodness-of-fit and predictive power, with respective average values of 0.897 (area under the success rate curve) and 0.879 (area under the prediction rate curve), and correspondingly the highest robustness. This attests to the important role of ensemble modeling in building accurate and generalizable models and emphasizes the need to examine different model integrations. The results of this study can provide an outline for further biophysical designs for the gullies scattered across the study area. Copyright © 2017 Elsevier B.V. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017E%26ES...58a2019R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017E%26ES...58a2019R"><span>Model Averaging for Predicting the Exposure to Aflatoxin B1 Using DNA Methylation in White Blood Cells of Infants</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rahardiantoro, S.; Sartono, B.; Kurnia, A.</p> <p>2017-03-01</p> <p>In recent years, DNA methylation has attracted special attention as a way to reveal the patterns of many human diseases, and the resulting data sets are inescapably huge. Researchers are therefore interested in making predictions from these data, especially using regression analysis, a task for which the classical approach fails. Model averaging by Ando and Li [1] is an alternative approach to this problem. This research applied model averaging to obtain the best prediction from high-dimensional data. As a case study, model averaging was implemented on the data of Vargas et al. [3] on exposure to aflatoxin B1 (AFB1) and DNA methylation in white blood cells of infants in The Gambia. The best ensemble model was selected based on the minimum MAPE, MAE, and MSE of the predictions. 
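The selection criterion in the preceding abstract (minimizing MAPE, MAE, and MSE across candidate ensembles) can be sketched as follows; the candidate predictions and names below are invented for illustration.

```python
import numpy as np

def mae(y, yhat):
    return float(np.mean(np.abs(y - yhat)))        # mean absolute error

def mse(y, yhat):
    return float(np.mean((y - yhat) ** 2))         # mean squared error

def mape(y, yhat):
    return float(np.mean(np.abs((y - yhat) / y)) * 100.0)  # mean abs. % error

y = np.array([2.0, 4.0, 5.0, 8.0])                 # observed responses
candidates = {                                     # predictions from two
    "ensemble_a": np.array([2.1, 3.8, 5.3, 7.6]),  # hypothetical candidate
    "ensemble_b": np.array([2.5, 4.4, 4.2, 8.9]),  # ensemble models
}
# Rank candidates by (MAPE, MAE, MSE) and keep the minimizer.
best = min(candidates, key=lambda k: (mape(y, candidates[k]),
                                      mae(y, candidates[k]),
                                      mse(y, candidates[k])))
print(best)  # ensemble_a
```

Lexicographic ordering of the three metrics is one simple way to break ties; other aggregation rules are equally plausible.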
The best result is an ensemble model, obtained by model averaging, whose candidate models each contain 15 predictors.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28779019','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28779019"><span>Bidirectional Modulation of Intrinsic Excitability in Rat Prelimbic Cortex Neuronal Ensembles and Non-Ensembles after Operant Learning.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Whitaker, Leslie R; Warren, Brandon L; Venniro, Marco; Harte, Tyler C; McPherson, Kylie B; Beidel, Jennifer; Bossert, Jennifer M; Shaham, Yavin; Bonci, Antonello; Hope, Bruce T</p> <p>2017-09-06</p> <p>Learned associations between environmental stimuli and rewards drive goal-directed learning and motivated behavior. These memories are thought to be encoded by alterations within specific patterns of sparsely distributed neurons called neuronal ensembles that are activated selectively by reward-predictive stimuli. Here, we use the Fos promoter to identify strongly activated neuronal ensembles in rat prelimbic cortex (PLC) and assess altered intrinsic excitability after 10 d of operant food self-administration training (1 h/d). First, we used the Daun02 inactivation procedure in male FosLacZ-transgenic rats to ablate selectively Fos-expressing PLC neurons that were active during operant food self-administration. Selective ablation of these neurons decreased food seeking. We then used male FosGFP-transgenic rats to assess selective alterations of intrinsic excitability in Fos-expressing neuronal ensembles (FosGFP+) that were activated during food self-administration and compared these with alterations in less activated non-ensemble neurons (FosGFP−). 
Using whole-cell recordings of layer V pyramidal neurons in an ex vivo brain slice preparation, we found that operant self-administration increased excitability of FosGFP+ neurons and decreased excitability of FosGFP− neurons. Increased excitability of FosGFP+ neurons was driven by increased steady-state input resistance. Decreased excitability of FosGFP− neurons was driven by increased contribution of small-conductance calcium-activated potassium (SK) channels. Injections of the specific SK channel antagonist apamin into PLC increased Fos expression but had no effect on food seeking. Overall, operant learning increased intrinsic excitability of PLC Fos-expressing neuronal ensembles that play a role in food seeking but decreased intrinsic excitability of Fos− non-ensembles. SIGNIFICANCE STATEMENT Prefrontal cortex activity plays a critical role in operant learning, but the underlying cellular mechanisms are unknown. Using the chemogenetic Daun02 inactivation procedure, we found that a small number of strongly activated Fos-expressing neuronal ensembles in rat PLC play an important role in learned operant food seeking. Using GFP expression to identify Fos-expressing layer V pyramidal neurons in prelimbic cortex (PLC) of FosGFP-transgenic rats, we found that operant food self-administration led to increased intrinsic excitability in the behaviorally relevant Fos-expressing neuronal ensembles, but decreased intrinsic excitability in Fos− neurons using distinct cellular mechanisms. 
Copyright © 2017 the authors 0270-6474/17/378845-12$15.00/0.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28208482','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28208482"><span>Aging underdamped scaled Brownian motion: Ensemble- and time-averaged particle displacements, nonergodicity, and the failure of the overdamping approximation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Safdari, Hadiseh; Cherstvy, Andrey G; Chechkin, Aleksei V; Bodrova, Anna; Metzler, Ralf</p> <p>2017-01-01</p> <p>We investigate both analytically and by computer simulations the ensemble- and time-averaged, nonergodic, and aging properties of massive particles diffusing in a medium with a time dependent diffusivity. We call this stochastic diffusion process the (aging) underdamped scaled Brownian motion (UDSBM). We demonstrate how the mean squared displacement (MSD) and the time-averaged MSD of UDSBM are affected by the inertial term in the Langevin equation, both at short, intermediate, and even long diffusion times. In particular, we quantify the ballistic regime for the MSD and the time-averaged MSD as well as the spread of individual time-averaged MSD trajectories. One of the main effects we observe is that, both for the MSD and the time-averaged MSD, for superdiffusive UDSBM the ballistic regime is much shorter than for ordinary Brownian motion. In contrast, for subdiffusive UDSBM, the ballistic region extends to much longer diffusion times. Therefore, particular care needs to be taken under what conditions the overdamped limit indeed provides a correct description, even in the long time limit. We also analyze to what extent ergodicity in the Boltzmann-Khinchin sense in this nonstationary system is broken, both for subdiffusive and superdiffusive UDSBM. 
Finally, the limiting case of ultraslow UDSBM is considered, with a mixed logarithmic and power-law dependence of the ensemble- and time-averaged MSDs of the particles. In the limit of strong aging, remarkably, the ordinary UDSBM and the ultraslow UDSBM behave similarly in the short time ballistic limit. The approaches developed here open ways for considering other stochastic processes under physically important conditions when a finite particle mass and aging in the system cannot be neglected.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvE..95a2120S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvE..95a2120S"><span>Aging underdamped scaled Brownian motion: Ensemble- and time-averaged particle displacements, nonergodicity, and the failure of the overdamping approximation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Safdari, Hadiseh; Cherstvy, Andrey G.; Chechkin, Aleksei V.; Bodrova, Anna; Metzler, Ralf</p> <p>2017-01-01</p> <p>We investigate both analytically and by computer simulations the ensemble- and time-averaged, nonergodic, and aging properties of massive particles diffusing in a medium with a time dependent diffusivity. We call this stochastic diffusion process the (aging) underdamped scaled Brownian motion (UDSBM). We demonstrate how the mean squared displacement (MSD) and the time-averaged MSD of UDSBM are affected by the inertial term in the Langevin equation, both at short, intermediate, and even long diffusion times. In particular, we quantify the ballistic regime for the MSD and the time-averaged MSD as well as the spread of individual time-averaged MSD trajectories. 
One of the main effects we observe is that, both for the MSD and the time-averaged MSD, for superdiffusive UDSBM the ballistic regime is much shorter than for ordinary Brownian motion. In contrast, for subdiffusive UDSBM, the ballistic region extends to much longer diffusion times. Therefore, particular care needs to be taken under what conditions the overdamped limit indeed provides a correct description, even in the long time limit. We also analyze to what extent ergodicity in the Boltzmann-Khinchin sense in this nonstationary system is broken, both for subdiffusive and superdiffusive UDSBM. Finally, the limiting case of ultraslow UDSBM is considered, with a mixed logarithmic and power-law dependence of the ensemble- and time-averaged MSDs of the particles. In the limit of strong aging, remarkably, the ordinary UDSBM and the ultraslow UDSBM behave similarly in the short time ballistic limit. The approaches developed here open ways for considering other stochastic processes under physically important conditions when a finite particle mass and aging in the system cannot be neglected.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/21362155-schur-polynomials-biorthogonal-random-matrix-ensembles','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21362155-schur-polynomials-biorthogonal-random-matrix-ensembles"><span>Schur polynomials and biorthogonal random matrix ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Tierz, Miguel</p> <p></p> <p>The study of the average of Schur polynomials over a Stieltjes-Wigert ensemble has been carried out by Dolivet and Tierz [J. Math. Phys. 48, 023507 (2007); e-print arXiv:hep-th/0609167], where it was shown that it is equal to quantum dimensions. Using the same approach, we extend the result to the biorthogonal case. 
We also study, using the Littlewood-Richardson rule, some particular cases of the quantum dimension result. Finally, we show that the notion of Giambelli compatibility of Schur averages, introduced by Borodin et al. [Adv. Appl. Math. 37, 209 (2006); e-print arXiv:math-ph/0505021], also holds in the biorthogonal setting.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PhRvE..91d2107S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PhRvE..91d2107S"><span>Aging scaled Brownian motion</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Safdari, Hadiseh; Chechkin, Aleksei V.; Jafari, Gholamreza R.; Metzler, Ralf</p> <p>2015-04-01</p> <p>Scaled Brownian motion (SBM) is widely used to model anomalous diffusion of passive tracers in complex and biological systems. It is a highly nonstationary process governed by the Langevin equation for Brownian motion, however, with a power-law time dependence of the noise strength. Here we study the aging properties of SBM for both unconfined and confined motion. Specifically, we derive the ensemble and time averaged mean squared displacements and analyze their behavior in the regimes of weak, intermediate, and strong aging. A very rich behavior is revealed for confined aging SBM depending on different aging times and whether the process is sub- or superdiffusive. We demonstrate that the information on the aging factorizes with respect to the lag time and exhibits a functional form that is identical to the aging behavior of scale-free continuous time random walk processes. While SBM exhibits a disparity between ensemble and time averaged observables and is thus weakly nonergodic, strong aging is shown to effect a convergence of the ensemble and time averaged mean squared displacement. 
Finally, we derive the density of first passage times in the semi-infinite domain that features a crossover defined by the aging time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25974439','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25974439"><span>Aging scaled Brownian motion.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Safdari, Hadiseh; Chechkin, Aleksei V; Jafari, Gholamreza R; Metzler, Ralf</p> <p>2015-04-01</p> <p>Scaled Brownian motion (SBM) is widely used to model anomalous diffusion of passive tracers in complex and biological systems. It is a highly nonstationary process governed by the Langevin equation for Brownian motion, however, with a power-law time dependence of the noise strength. Here we study the aging properties of SBM for both unconfined and confined motion. Specifically, we derive the ensemble and time averaged mean squared displacements and analyze their behavior in the regimes of weak, intermediate, and strong aging. A very rich behavior is revealed for confined aging SBM depending on different aging times and whether the process is sub- or superdiffusive. We demonstrate that the information on the aging factorizes with respect to the lag time and exhibits a functional form that is identical to the aging behavior of scale-free continuous time random walk processes. While SBM exhibits a disparity between ensemble and time averaged observables and is thus weakly nonergodic, strong aging is shown to effect a convergence of the ensemble and time averaged mean squared displacement. 
Finally, we derive the density of first passage times in the semi-infinite domain that features a crossover defined by the aging time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H41H1542A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H41H1542A"><span>Climatic Models Ensemble-based Mid-21st Century Runoff Projections: A Bayesian Framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Achieng, K. O.; Zhu, J.</p> <p>2017-12-01</p> <p>There are a number of North American Regional Climate Change Assessment Program (NARCCAP) climatic models that have been used to project surface runoff in the mid-21st century. Statistical model selection techniques are often used to select the model that best fits data. However, model selection techniques often lead to different conclusions. In this study, ten models are averaged in a Bayesian paradigm to project runoff. Bayesian Model Averaging (BMA) is used to project runoff and to identify the effect of model uncertainty on future runoff projections. Baseflow separation - a recursive digital filter, also called the Eckhardt filter - is used to separate USGS streamflow (total runoff) into two components: baseflow and surface runoff. We use this surface runoff as the a priori runoff when conducting BMA of runoff simulated from the ten RCM models. The primary objective of this study is to evaluate how well RCM multi-model ensembles simulate surface runoff, in a Bayesian framework. Specifically, we investigate and discuss the following questions: How well does an ensemble of ten RCM models jointly simulate surface runoff when averaging over all the models using BMA, given the a priori surface runoff? 
What are the effects of model uncertainty on surface runoff simulation?</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2000PNAS...97..634N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2000PNAS...97..634N"><span>Landscape approaches for determining the ensemble of folding transition states: Success and failure hinge on the degree of frustration</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nymeyer, Hugh; Socci, Nicholas D.; Onuchic, José Nelson</p> <p>2000-01-01</p> <p>* Department of Physics, University of California at San Diego, La Jolla, CA 92093-0319; and § Center for Studies in Physics and Biology, The Rockefeller University, New York, NY 10021 Edited by R. Stephen Berry, University of Chicago, Chicago, IL, and approved November 5, 1999 (received for review July 2, 1999) We present a method for determining structural properties of the ensemble of folding transition states from protein simulations. This method relies on thermodynamic quantities (free energies as a function of global reaction coordinates, such as the percentage of native contacts) and not on "kinetic" measurements (rates, transmission coefficients, complete trajectories); consequently, it requires fewer computational resources compared with other approaches, making it more suited to large and complex models. We explain the theoretical framework that underlies this method and use it to clarify the connection between the experimentally determined Φ value, a quantity determined by the ratio of rate and stability changes due to point mutations, and the average structure of the transition state ensemble. 
To determine the accuracy of this thermodynamic approach, we apply it to minimalist protein models and compare these results with the ones obtained by using the standard experimental procedure for determining Φ values. We show that the accuracy of both methods depends sensitively on the amount of frustration. In particular, the results are similar when applied to models with minimal amounts of frustration, characteristic of rapid-folding, single-domain globular proteins.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1818469S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1818469S"><span>Ensemble hydro-meteorological forecasting for early warning of floods and scheduling of hydropower production</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Solvang Johansen, Stian; Steinsland, Ingelin; Engeland, Kolbjørn</p> <p>2016-04-01</p> <p>Running hydrological models with precipitation and temperature ensemble forcing to generate ensembles of streamflow is a commonly used method in operational hydrology. Evaluations of streamflow ensembles have however revealed that the ensembles are biased with respect to both mean and spread. Thus postprocessing of the ensembles is needed in order to improve the forecast skill. The aims of this study are (i) to evaluate how postprocessing of streamflow ensembles works for Norwegian catchments within different hydrological regimes and (ii) to demonstrate how postprocessed streamflow ensembles are used operationally by a hydropower producer. 
These aims were achieved by postprocessing forecasted daily discharge for 10 lead times for 20 catchments in Norway, using EPS forcing from ECMWF applied to the semi-distributed HBV model, with each catchment divided into 10 elevation zones. Statkraft Energi uses forecasts from these catchments for scheduling hydropower production. The catchments represent different hydrological regimes. Some catchments have stable winter conditions with winter low flow and a major flood event during spring or early summer caused by snow melting. Others have a more mixed snow-rain regime, often with a secondary flood season during autumn; in the coastal areas, the streamflow is dominated by rain, and the main flood season is autumn and winter. For postprocessing, a Bayesian model averaging (BMA) model similar to that of Kleiber et al. (2011) is used. The model creates a predictive PDF that is a weighted average of PDFs centered on the individual bias-corrected forecasts. The weights are here equal, since all ensemble members come from the same model and thus have the same probability. For modeling streamflow, the gamma distribution is chosen as the predictive PDF. The bias correction parameters and the PDF parameters are estimated using a 30-day sliding-window training period. Preliminary results show that the improvement varies between catchments depending on where they are situated and on the hydrological regime. There is an improvement in CRPS for all catchments compared to the raw EPS ensembles, up to lead times of 5-7 days. The postprocessing also improves the MAE of the median of the predictive PDF compared to the median of the raw EPS, although over a shorter horizon than for CRPS, often only up to lead times of 2-3 days. The streamflow ensembles are to some extent used operationally in Statkraft Energi (a hydropower company in Norway) for early warning, risk assessment and decision-making. 
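The BMA construction described above (a predictive PDF formed as an equal-weight average of gamma PDFs centered on bias-corrected ensemble members) can be sketched as follows. The discharge values and the coefficient of variation are hypothetical, and the mean/CV gamma parameterization is an assumption for illustration, not the study's fitted model:

```python
import math
import numpy as np

# Hypothetical bias-corrected ensemble forecasts of daily discharge (m^3/s).
members = [102.0, 98.5, 110.2, 95.0, 105.7]

def gamma_pdf(x, mean, cv):
    """Gamma density parameterized by its mean and coefficient of variation."""
    shape = 1.0 / cv**2
    scale = mean * cv**2
    return x**(shape - 1.0) * np.exp(-x / scale) / (math.gamma(shape) * scale**shape)

def bma_pdf(x, members, cv=0.1):
    # Equal weights: all members come from the same model.
    w = 1.0 / len(members)
    return sum(w * gamma_pdf(x, m, cv) for m in members)

x = np.linspace(50.0, 160.0, 1101)
pdf = bma_pdf(x, members)
mass = float(np.sum(pdf) * (x[1] - x[0]))   # ~1: the mixture is a valid density
```

In an operational setting the member means would be bias-corrected forecasts and the spread parameter would be estimated over the sliding training window rather than fixed.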
Presently all forecasts used operationally for short-term scheduling are deterministic, but ensembles are used visually for expert assessment of risk in difficult situations where, e.g., there is a chance of overflow in a reservoir. However, there are plans to incorporate ensembles in the daily scheduling of hydropower production.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/42684','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/42684"><span>Unlocking the climate riddle in forested ecosystems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Greg C. Liknes; Christopher W. Woodall; Brian F. Walters; Sara A. Goeking</p> <p>2012-01-01</p> <p>Climate information is often used as a predictor in ecological studies, where temporal averages are typically based on climate normals (30-year means) or seasonal averages. While ensemble projections of future climate forecast a higher global average annual temperature, they also predict increased climate variability. It remains to be seen whether forest ecosystems...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003JApMe..42..308D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003JApMe..42..308D"><span>Evaluation of an Ensemble Dispersion Calculation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Draxler, Roland R.</p> <p>2003-02-01</p> <p>A Lagrangian transport and dispersion model was modified to generate multiple simulations from a single meteorological dataset. 
Each member of the simulation was computed by assuming a ±1-gridpoint shift in the horizontal direction and a ±250-m shift in the vertical direction of the particle position, with respect to the meteorological data. The configuration resulted in 27 ensemble members. Each member was assumed to have an equal probability. The model was tested by creating an ensemble of daily average air concentrations for 3 months at 75 measurement locations over the eastern half of the United States during the Across North America Tracer Experiment (ANATEX). Two generic graphical displays were developed to summarize the ensemble prediction and the resulting concentration probabilities for a specific event: a probability-exceed plot and a concentration-probability plot. Although a cumulative distribution of the ensemble probabilities compared favorably with the measurement data, the resulting distribution was not uniform. This result was attributed to release height sensitivity. The trajectory ensemble approach accounts for about 41%-47% of the variance in the measurement data. 
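The member-generation rule in this record (every combination of a ±1-gridpoint horizontal shift and a ±250-m vertical shift) enumerates exactly 27 equally probable members. A minimal sketch, with hypothetical grid indices and height:

```python
from itertools import product

# Hypothetical particle position: (i, j) horizontal grid indices, z in metres.
i, j, z = 40, 25, 500.0

# 27 members: every combination of -1/0/+1 gridpoint shifts in the two
# horizontal directions and a -250/0/+250 m shift in the vertical,
# relative to the meteorological data.
members = [(i + di, j + dj, z + dz)
           for di, dj, dz in product((-1, 0, 1), (-1, 0, 1), (-250.0, 0.0, 250.0))]
n_members = len(members)   # 3 * 3 * 3 = 27, each assigned equal probability
```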
This residual uncertainty is caused by other model and data errors that are not included in the ensemble design.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29579536','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29579536"><span>Novel forecasting approaches using combination of machine learning and statistical models for flood susceptibility mapping.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah</p> <p>2018-07-01</p> <p>In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. The individual models included artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy, and the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the Area Under the Receiver Operating Characteristic (AUROC), which showed the highest value, belonged to boosted regression trees (0.975) and the lowest value was recorded for generalized linear model (0.642). 
On the other hand, the proposed EMmedian resulted in the highest accuracy (0.976) among all models. Despite the outstanding performance of some models, variability among the predictions of the individual models was considerable. Therefore, to reduce uncertainty and to create more generalizable, more stable, and less sensitive models, ensemble forecasting approaches, and in particular the EMmedian, are recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4233720','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4233720"><span>The interplay between cooperativity and diversity in model threshold ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Cervera, Javier; Manzanares, José A.; Mafe, Salvador</p> <p>2014-01-01</p> <p>The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. 
The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. PMID:25142516</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011SPIE.7963E..15H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011SPIE.7963E..15H"><span>Sampling-based ensemble segmentation against inter-operator variability</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Huo, Jing; Okada, Kazunori; Pope, Whitney; Brown, Matthew</p> <p>2011-03-01</p> <p>Inconsistency and a lack of reproducibility are commonly associated with semi-automated segmentation methods. In this study, we developed an ensemble approach to improve reproducibility and applied it to glioblastoma multiforme (GBM) brain tumor segmentation on T1-weighted contrast-enhanced MR volumes. The proposed approach combines sampling-based simulations and ensemble segmentation into a single framework; it generates a set of segmentations by perturbing user initialization and user-specified internal parameters, then fuses the set of segmentations into a single consensus result. Three combination algorithms were applied: majority voting, averaging and expectation-maximization (EM). The reproducibility of the proposed framework was evaluated by a controlled experiment on 16 tumor cases from a multicenter drug trial. 
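Of the three combination algorithms mentioned in this record, majority voting and averaging are straightforward to sketch; the tiny binary masks below are hypothetical stand-ins for segmentations produced by perturbed runs:

```python
import numpy as np

# Hypothetical binary masks from three perturbed segmentation runs.
segs = np.array([
    [[1, 1, 0, 0],
     [1, 1, 1, 0]],
    [[1, 0, 0, 0],
     [1, 1, 1, 1]],
    [[1, 1, 1, 0],
     [0, 1, 1, 0]],
])

# Majority voting: a voxel is foreground where more than half the members agree.
consensus = (segs.sum(axis=0) > segs.shape[0] / 2).astype(int)

# Averaging instead yields a soft probability map, which can be thresholded
# or used directly as a per-voxel confidence.
prob_map = segs.mean(axis=0)
```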
The ensemble framework had significantly better reproducibility than the individual base Otsu thresholding method (p<.001).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24089456','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24089456"><span>Nencki Genomics Database--Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal</p> <p>2013-01-01</p> <p>We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. 
The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3788330','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3788330"><span>Nencki Genomics Database—Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal</p> <p>2013-01-01</p> <p>We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. 
The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface. Database URL: http://www.nencki-genomics.org. PMID:24089456</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_10 --> <div id="page_11" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol 
class="result-class" start="201"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1435237-accuracy-microcanonical-lanczos-method-compute-real-frequency-dynamical-spectral-functions-quantum-models-finite-temperatures','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1435237-accuracy-microcanonical-lanczos-method-compute-real-frequency-dynamical-spectral-functions-quantum-models-finite-temperatures"><span>Accuracy of the microcanonical Lanczos method to compute real-frequency dynamical spectral functions of quantum models at finite temperatures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Okamoto, Satoshi; Alvarez, Gonzalo; Dagotto, Elbio</p> <p></p> <p>We examine the accuracy of the microcanonical Lanczos method (MCLM) developed by Long et al. [Phys. Rev. B 68, 235106 (2003)] to compute dynamical spectral functions of interacting quantum models at finite temperatures. The MCLM is based on the microcanonical ensemble, which becomes exact in the thermodynamic limit. To apply the microcanonical ensemble at a fixed temperature, one has to find energy eigenstates with the energy eigenvalue corresponding to the internal energy in the canonical ensemble. Here we propose to use thermal pure quantum state methods by Sugiura and Shimizu [Phys. Rev. Lett. 111, 010401 (2013)] to obtain the internal energy. After obtaining the energy eigenstates using the Lanczos diagonalization method, dynamical quantities are computed via a continued fraction expansion, a standard procedure for Lanczos-based numerical methods. 
Using one-dimensional antiferromagnetic Heisenberg chains with S = 1/2, we demonstrate that the proposed procedure is reasonably accurate, even for relatively small systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1435237-accuracy-microcanonical-lanczos-method-compute-real-frequency-dynamical-spectral-functions-quantum-models-finite-temperatures','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1435237-accuracy-microcanonical-lanczos-method-compute-real-frequency-dynamical-spectral-functions-quantum-models-finite-temperatures"><span>Accuracy of the microcanonical Lanczos method to compute real-frequency dynamical spectral functions of quantum models at finite temperatures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Okamoto, Satoshi; Alvarez, Gonzalo; Dagotto, Elbio; ...</p> <p>2018-04-20</p> <p>We examine the accuracy of the microcanonical Lanczos method (MCLM) developed by Long et al. [Phys. Rev. B 68, 235106 (2003)] to compute dynamical spectral functions of interacting quantum models at finite temperatures. The MCLM is based on the microcanonical ensemble, which becomes exact in the thermodynamic limit. To apply the microcanonical ensemble at a fixed temperature, one has to find energy eigenstates with the energy eigenvalue corresponding to the internal energy in the canonical ensemble. Here we propose to use thermal pure quantum state methods by Sugiura and Shimizu [Phys. Rev. Lett. 111, 010401 (2013)] to obtain the internal energy. After obtaining the energy eigenstates using the Lanczos diagonalization method, dynamical quantities are computed via a continued fraction expansion, a standard procedure for Lanczos-based numerical methods. 
Using one-dimensional antiferromagnetic Heisenberg chains with S = 1/2, we demonstrate that the proposed procedure is reasonably accurate, even for relatively small systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JGRD..123.3443T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JGRD..123.3443T"><span>A Simple Ensemble Simulation Technique for Assessment of Future Variations in Specific High-Impact Weather Events</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Taniguchi, Kenji</p> <p>2018-04-01</p> <p>To investigate future variations in high-impact weather events, numerous samples are required. For the detailed assessment in a specific region, a high spatial resolution is also required. A simple ensemble simulation technique is proposed in this paper. In the proposed technique, new ensemble members were generated from one basic state vector and two perturbation vectors, which were obtained by lagged average forecasting simulations. Sensitivity experiments with different numbers of ensemble members, different simulation lengths, and different perturbation magnitudes were performed. Experimental application to a global warming study was also implemented for a typhoon event. Ensemble-mean results and ensemble spreads of total precipitation and atmospheric conditions showed similar characteristics across the sensitivity experiments. The frequencies of the maximum total and hourly precipitation also showed similar distributions. These results indicate the robustness of the proposed technique. On the other hand, considerable ensemble spread was found in each ensemble experiment. In addition, the results of the application to a global warming study showed possible variations in the future. 
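The member-generation idea in the record above (new members formed from one basic state vector and two perturbation vectors) can be sketched as follows; the state length, the perturbation scale, and the symmetric coefficient grid are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical flattened model state (basic state) and two perturbation
# vectors, e.g. differences between lagged forecasts; length 6 is arbitrary.
basic = rng.normal(size=6)
p1 = rng.normal(scale=0.1, size=6)
p2 = rng.normal(scale=0.1, size=6)

# New members as linear combinations of the two perturbations; a symmetric
# coefficient grid keeps the ensemble centered on the basic state.
coeffs = [(a, b) for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)]
members = np.array([basic + a * p1 + b * p2 for a, b in coeffs])

ens_mean = members.mean(axis=0)   # recovers the basic state (symmetric grid)
```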
These results indicate that the proposed technique is useful for investigating various meteorological phenomena and the impacts of global warming. The results of the ensemble simulations also enable the stochastic evaluation of differences in high-impact weather events. In addition, the impacts of a spectral nudging technique were also examined. The tracks of a typhoon were quite different between cases with and without spectral nudging; however, the ranges of the tracks among ensemble members were comparable. It indicates that spectral nudging does not necessarily suppress ensemble spread.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29564429','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29564429"><span>Cell-cell bioelectrical interactions and local heterogeneities in genetic networks: a model for the stabilization of single-cell states and multicellular oscillations.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cervera, Javier; Manzanares, José A; Mafe, Salvador</p> <p>2018-04-04</p> <p>Genetic networks operate in the presence of local heterogeneities in single-cell transcription and translation rates. Bioelectrical networks and spatio-temporal maps of cell electric potentials can influence multicellular ensembles. Could cell-cell bioelectrical interactions mediated by intercellular gap junctions contribute to the stabilization of multicellular states against local genetic heterogeneities? We theoretically analyze this question on the basis of two well-established experimental facts: (i) the membrane potential is a reliable read-out of the single-cell electrical state and (ii) when the cells are coupled together, their individual cell potentials can be influenced by ensemble-averaged electrical potentials. 
We propose a minimal biophysical model for the coupling between genetic and bioelectrical networks that associates the local changes occurring in the transcription and translation rates of an ion channel protein with abnormally low (depolarized) cell potentials. We then analyze the conditions under which the depolarization of a small region (patch) in a multicellular ensemble can be reverted by its bioelectrical coupling with the (normally polarized) neighboring cells. We show also that the coupling between genetic and bioelectric networks of non-excitable cells, modulated by average electric potentials at the multicellular ensemble level, can produce oscillatory phenomena. The simulations show the importance of single-cell potentials characteristic of polarized and depolarized states, the relative sizes of the abnormally polarized patch and the rest of the normally polarized ensemble, and intercellular coupling.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20160007389','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20160007389"><span>"Intelligent Ensemble" Projections of Precipitation and Surface Radiation in Support of Agricultural Climate Change Adaptation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Taylor, Patrick C.; Baker, Noel C.</p> <p>2015-01-01</p> <p>Earth's climate is changing and will continue to change into the foreseeable future. Expected changes in the climatological distribution of precipitation, surface temperature, and surface solar radiation will significantly impact agriculture. Adaptation strategies are, therefore, required to reduce the agricultural impacts of climate change. Climate change projections of precipitation, surface temperature, and surface solar radiation distributions are necessary input for adaption planning studies. 
These projections are conventionally constructed from an ensemble of climate model simulations (e.g., the Coupled Model Intercomparison Project 5 (CMIP5)) as an equally weighted average, one model, one vote. Each climate model, however, represents the array of climate-relevant physical processes with varying degrees of fidelity, influencing the projection of individual climate variables differently. Presented here is a new approach, termed the "Intelligent Ensemble," that constructs climate variable projections by weighting each model according to its ability to represent key physical processes, e.g., the precipitation probability distribution. This approach provides added value over the equally weighted average method. Physical process metrics applied in the "Intelligent Ensemble" method are created using a combination of NASA and NOAA satellite and surface-based cloud, radiation, temperature, and precipitation data sets. The "Intelligent Ensemble" method is applied to the RCP4.5 and RCP8.5 anthropogenic climate forcing simulations within the CMIP5 archive to develop a set of climate change scenarios for precipitation, temperature, and surface solar radiation in each USDA Farm Resource Region for use in climate change adaptation studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AGUFMGC41D0850B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AGUFMGC41D0850B"><span>A short-term ensemble wind speed forecasting system for wind power applications</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Baidya Roy, S.; Traiteur, J. J.; Callicutt, D.; Smith, M.</p> <p>2011-12-01</p> <p>This study develops an adaptive, blended forecasting system to provide accurate wind speed forecasts 1 hour ahead of time for wind power applications.
The system consists of an ensemble of 21 forecasts with different configurations of the Weather Research and Forecasting Single Column Model (WRFSCM) and a persistence model. The ensemble is calibrated against observations for a 2 month period (June-July, 2008) at a potential wind farm site in Illinois using the Bayesian Model Averaging (BMA) technique. The forecasting system is evaluated against observations for August 2008 at the same site. The calibrated ensemble forecasts significantly outperform the forecasts from the uncalibrated ensemble while significantly reducing forecast uncertainty under all environmental stability conditions. The system also generates significantly better forecasts than persistence, autoregressive (AR) and autoregressive moving average (ARMA) models during the morning transition and the diurnal convective regimes. This forecasting system is computationally more efficient than traditional numerical weather prediction models and can generate a calibrated forecast, including model runs and calibration, in approximately 1 minute. Currently, hour-ahead wind speed forecasts are almost exclusively produced using statistical models. However, numerical models have several distinct advantages over statistical models including the potential to provide turbulence forecasts. Hence, there is an urgent need to explore the role of numerical models in short-term wind speed forecasting. 
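The Bayesian Model Averaging combination described above can be sketched in its simplest form. This is a generic illustration of the BMA predictive mean and its classic variance decomposition (between-member spread plus within-member variance), assuming the posterior weights and member variances have already been fitted on the training period:

```python
import numpy as np

def bma_mean_and_variance(forecasts, weights, member_var):
    """Predictive mean and variance of a BMA mixture: member k
    contributes forecast f_k with posterior weight w_k.  The variance
    splits into between-member spread plus within-member variance."""
    f = np.asarray(forecasts, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # weights form a probability
    s2 = np.asarray(member_var, dtype=float)
    mu = np.dot(w, f)
    var = np.dot(w, (f - mu) ** 2) + np.dot(w, s2)
    return float(mu), float(var)
```

For example, two equally weighted members at 4 and 6 m/s, each with unit variance, give a predictive mean of 5 m/s and a variance of 2 (1 from spread, 1 from member uncertainty).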
This work is a step in that direction and is likely to trigger a debate within the wind speed forecasting community.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011PhRvE..83e6216B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011PhRvE..83e6216B"><span>Fidelity decay in interacting two-level boson systems: Freezing and revivals</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.</p> <p>2011-05-01</p> <p>We study the fidelity decay in the k-body embedded ensembles of random matrices for bosons distributed in two single-particle states, considering the reference or unperturbed Hamiltonian as the one-body terms and the diagonal part of the k-body embedded ensemble of random matrices and the perturbation as the residual off-diagonal part of the interaction. We calculate the ensemble-averaged fidelity with respect to an initial random state within linear response theory to second order on the perturbation strength and demonstrate that it displays the freeze of the fidelity. During the freeze, the average fidelity exhibits periodic revivals at integer values of the Heisenberg time tH. By selecting specific k-body terms of the residual interaction, we find that the periodicity of the revivals during the freeze of fidelity is an integer fraction of tH, thus relating the period of the revivals with the range of the interaction k of the perturbing terms. 
Numerical calculations confirm the analytical results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009AGUFMPP41C1527T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009AGUFMPP41C1527T"><span>High northern latitude temperature extremes, 1400-1999</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tingley, M. P.; Huybers, P.; Hughen, K. A.</p> <p>2009-12-01</p> <p>There is often an interest in determining which interval features the most extreme value of a reconstructed climate field, such as the warmest year or decade in a temperature reconstruction. Previous approaches to this type of question have not fully accounted for the spatial and temporal covariance in the climate field when assessing the significance of extreme values. Here we present results from applying BARSAT, a new, Bayesian approach to reconstructing climate fields, to a 600 year multiproxy temperature data set that covers land areas between 45N and 85N. The end result of the analysis is an ensemble of spatially and temporally complete realizations of the temperature field, each of which is consistent with the observations and the estimated values of the parameters that define the assumed spatial and temporal covariance functions. In terms of the spatial average temperature, 1990-1999 was the warmest decade in the 1400-1999 interval in each of 2000 ensemble members, while 1995 was the warmest year in 98% of the ensemble members. A similar analysis at each node of a regular 5 degree grid gives insight into the spatial distribution of warm temperatures, and reveals that 1995 was anomalously warm in Eurasia, whereas 1998 featured extreme warmth in North America. 
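The ensemble-based extreme statistics quoted above (e.g., "warmest year in 98% of the ensemble members") reduce to counting, over equally plausible field realizations, how often a given year attains the extreme. A minimal sketch, with a hypothetical function name:

```python
import numpy as np

def fraction_with_extreme(ensemble, years, target_year, kind="max"):
    """Fraction of ensemble realizations in which target_year attains
    the extreme (warmest or coldest) value of the spatial-mean series.
    `ensemble` has shape (n_realizations, n_years)."""
    e = np.asarray(ensemble, dtype=float)
    years = np.asarray(years)
    idx = int(np.nonzero(years == target_year)[0][0])
    winner = e.argmax(axis=1) if kind == "max" else e.argmin(axis=1)
    return float(np.mean(winner == idx))
```

Because each realization respects the fitted spatial and temporal covariance, this fraction accounts for the field's correlation structure in a way that per-gridpoint significance tests do not.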
In 70% of the ensemble members, 1601 featured the coldest spatial average, indicating that the eruption of Huaynaputina in Peru in 1600 (with a volcanic explosivity index of 6) had a major cooling impact on the high northern latitudes. Repeating this analysis at each node reveals the varying impacts of major volcanic eruptions on the distribution of extreme cooling. Finally, we use the ensemble to investigate extremes in the time evolution of centennial temperature trends, and find that in more than half the ensemble members, the greatest rate of change in the spatial mean time series was a cooling centered at 1600. The largest rate of centennial scale warming, however, occurred in the 20th Century in more than 98% of the ensemble members.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MAP...tmp...55S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MAP...tmp...55S"><span>A comparison between EDA-EnVar and ETKF-EnVar data assimilation techniques using radar observations at convective scales through a case study of Hurricane Ike (2008)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shen, Feifei; Xu, Dongmei; Xue, Ming; Min, Jinzhong</p> <p>2017-07-01</p> <p>This study examines the impacts of assimilating radar radial velocity (Vr) data for the simulation of hurricane Ike (2008) with two different ensemble generation techniques in the framework of the hybrid ensemble-variational (EnVar) data assimilation system of Weather Research and Forecasting model. For the generation of ensemble perturbations we apply two techniques, the ensemble transform Kalman filter (ETKF) and the ensemble of data assimilation (EDA). 
For the ETKF-EnVar, the forecast ensemble perturbations are updated by the ETKF, while for the EDA-EnVar, the hybrid is employed to update each ensemble member with perturbed observations. The ensemble mean is analyzed by the hybrid method with flow-dependent ensemble covariance for both EnVar schemes. The sensitivity of analyses and forecasts to the two applied ensemble generation techniques is investigated in our current study. It is found that the EnVar system is rather stable with different ensemble update techniques in terms of its skill in improving the analyses and forecasts. The EDA-EnVar-based ensemble perturbations are likely to include slightly less organized spatial structures than those in ETKF-EnVar, and the perturbations of the latter are constructed more dynamically. Detailed diagnostics reveal that both of the EnVar schemes not only produce positive temperature increments around the hurricane center but also systematically adjust the hurricane location with the hurricane-specific error covariance. On average, the analysis and forecast from the ETKF-EnVar have slightly smaller errors than those from the EDA-EnVar in terms of track, intensity, and precipitation forecast.
Moreover, ETKF-EnVar yields better forecasts when verified against conventional observations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19940006497&hterms=Petit&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DPetit','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19940006497&hterms=Petit&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DPetit"><span>An ensemble pulsar time</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Petit, Gerard; Thomas, Claudine; Tavella, Patrizia</p> <p>1993-01-01</p> <p>Millisecond pulsars are galactic objects that exhibit a very stable spinning period. Several tens of these celestial clocks have now been discovered, which opens the possibility that an average time scale may be deduced through a long-term stability algorithm. Such an ensemble average makes it possible to reduce the level of the instabilities originating from the pulsars or from other sources of noise, which are unknown but independent. The basis for such an algorithm is presented and applied to real pulsar data. It is shown that pulsar time could shortly become more stable than the present atomic time, for averaging times of a few years. 
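The core idea of an ensemble pulsar time, averaging independent clocks so that their uncorrelated instabilities partially cancel, can be sketched with a simple inverse-variance weighting. Operational pulsar-time algorithms use more elaborate long-term stability weights; this is only the underlying principle, with an assumed function name:

```python
import numpy as np

def ensemble_time(residuals, variances):
    """Inverse-variance weighted average of timing residuals from
    independent clocks (here, pulsars).  Returns the combined residual
    series and its variance, 1 / sum(1/var_i), which is smaller than
    every input variance when the noise sources are independent."""
    r = np.asarray(residuals, dtype=float)     # (n_clocks, n_epochs)
    inv = 1.0 / np.asarray(variances, dtype=float)
    w = inv / inv.sum()
    return w @ r, 1.0 / inv.sum()
```

With N comparable clocks the combined variance falls roughly as 1/N, which is why adding more millisecond pulsars improves the ensemble time scale.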
Pulsar time can also be used as a flywheel to maintain the accuracy of atomic time in case of temporary failure of the primary standards, or to transfer the improved accuracy of future standards back to the present.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24163333','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24163333"><span>Hierarchical encoding makes individuals in a group seem more attractive.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Walker, Drew; Vul, Edward</p> <p>2014-01-01</p> <p>In the research reported here, we found evidence of the cheerleader effect-people seem more attractive in a group than in isolation. We propose that this effect arises via an interplay of three cognitive phenomena: (a) The visual system automatically computes ensemble representations of faces presented in a group, (b) individual members of the group are biased toward this ensemble average, and (c) average faces are attractive. Taken together, these phenomena suggest that individual faces will seem more attractive when presented in a group because they will appear more similar to the average group face, which is more attractive than group members' individual faces. We tested this hypothesis in five experiments in which subjects rated the attractiveness of faces presented either alone or in a group with the same gender. 
Our results were consistent with the cheerleader effect.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AGUFMGC51A0736C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AGUFMGC51A0736C"><span>Simulation of an ensemble of future climate time series with an hourly weather generator</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Caporali, E.; Fatichi, S.; Ivanov, V. Y.; Kim, J.</p> <p>2010-12-01</p> <p>There is evidence that climate change is occurring in many regions of the world. The necessity of climate change predictions at the local scale and fine temporal resolution is thus warranted for hydrological, ecological, geomorphological, and agricultural applications that can provide thematic insights into the corresponding impacts. Numerous downscaling techniques have been proposed to bridge the gap between the spatial scales adopted in General Circulation Models (GCM) and regional analyses. Nevertheless, the time and spatial resolutions obtained as well as the type of meteorological variables may not be sufficient for detailed studies of climate change effects at the local scales. In this context, this study presents a stochastic downscaling technique that makes use of an hourly weather generator to simulate time series of predicted future climate. Using a Bayesian approach, the downscaling procedure derives distributions of factors of change for several climate statistics from a multi-model ensemble of GCMs. Factors of change are sampled from their distributions using a Monte Carlo technique to entirely account for the probabilistic information obtained with the Bayesian multi-model ensemble. Factors of change are subsequently applied to the statistics derived from observations to re-evaluate the parameters of the weather generator. 
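The factors-of-change step of the downscaling procedure above can be sketched as Monte Carlo sampling of multiplicative factors applied to observed statistics. The normal form of the factor distributions and the function name are assumptions; in the actual procedure the distributions come from the Bayesian multi-model analysis:

```python
import numpy as np

def sample_future_statistics(obs_stats, foc_mean, foc_std, n_samples, seed=0):
    """Sample multiplicative factors of change (assumed normal here)
    and apply them to observed climate statistics; each sample yields
    one parameter set for re-fitting the weather generator."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_samples):
        focs = {k: rng.normal(foc_mean[k], foc_std[k]) for k in obs_stats}
        out.append({k: obs_stats[k] * focs[k] for k in obs_stats})
    return out
```

Each sampled parameter set drives one weather-generator run, so the resulting ensemble of hourly series carries the GCM-derived uncertainty through to the local scale.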
The weather generator can reproduce a wide set of climate variables and statistics over a range of temporal scales, from extremes, to the low-frequency inter-annual variability. The final result of such a procedure is the generation of an ensemble of hourly time series of meteorological variables that can be considered as representative of future climate, as inferred from GCMs. The generated ensemble of scenarios also accounts for the uncertainty derived from multiple GCMs used in downscaling. Applications of the procedure in reproducing present and future climates are presented for different locations world-wide: Tucson (AZ), Detroit (MI), and Firenze (Italy). The stochastic downscaling is carried out with eight GCMs from the CMIP3 multi-model dataset (IPCC 4AR, A1B scenario).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26263302','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26263302"><span>A Maximum-Likelihood Approach to Force-Field Calibration.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam</p> <p>2015-09-28</p> <p>A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. 
Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. 
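The Gaussian-weighted likelihood contribution described above can be sketched as follows. This is a schematic of the target function only (Boltzmann-weighted decoys, Gaussian kernels around each experimental conformation), with assumed units kT = 1 and a hypothetical function name, not the UNRES implementation:

```python
import numpy as np

def neg_log_likelihood(exp_confs, sim_confs, energies, kT=1.0, sigma=1.0):
    """Gaussian-weighted estimate of the Boltzmann density at each
    experimental conformation: every simulated conformation contributes
    with a Gaussian weight in its distance from the experimental point,
    so no native/non-native division of the decoy set is needed."""
    sims = np.asarray(sim_confs, dtype=float)
    E = np.asarray(energies, dtype=float)
    boltz = np.exp(-(E - E.min()) / kT)
    boltz /= boltz.sum()                   # Boltzmann probabilities
    total = 0.0
    for x in np.asarray(exp_confs, dtype=float):
        d2 = ((sims - x) ** 2).sum(axis=1)
        kern = np.exp(-d2 / (2.0 * sigma ** 2))
        total -= np.log(np.dot(boltz, kern) + 1e-300)
    return total
```

Minimizing this quantity over the force-field parameters (which enter through the energies) pulls Boltzmann weight toward the experimental ensemble, which is the calibration loop the abstract describes.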
Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the test set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES.
The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PhRvA..86e2324H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PhRvA..86e2324H"><span>Ensembles of physical states and random quantum circuits on graphs</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hamma, Alioscia; Santra, Siddhartha; Zanardi, Paolo</p> <p>2012-11-01</p> <p>In this paper we continue and extend the investigations of the ensembles of random physical states introduced in Hamma [Phys. Rev. Lett.PRLTAO0031-900710.1103/PhysRevLett.109.040502 109, 040502 (2012)]. These ensembles are constructed by finite-length random quantum circuits (RQC) acting on the (hyper)edges of an underlying (hyper)graph structure. The latter encodes for the locality structure associated with finite-time quantum evolutions generated by physical, i.e., local, Hamiltonians. Our goal is to analyze physical properties of typical states in these ensembles; in particular here we focus on proxies of quantum entanglement as purity and α-Renyi entropies. The problem is formulated in terms of matrix elements of superoperators which depend on the graph structure, choice of probability measure over the local unitaries, and circuit length. In the α=2 case these superoperators act on a restricted multiqubit space generated by permutation operators associated to the subsets of vertices of the graph. For permutationally invariant interactions the dynamics can be further restricted to an exponentially smaller subspace. 
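The purity proxy used above, Tr ρ_A² for a subsystem of a pure state prepared by a random quantum circuit, can be computed directly for small systems. This sketch applies Haar-random two-qubit gates on the edges of a chain graph; the helper names are assumptions, and it illustrates only the quantity studied, not the superoperator formalism of the paper:

```python
import numpy as np

def haar_unitary(d, rng):
    """Haar-random unitary via QR of a complex Ginibre matrix."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def apply_gate(psi, U, i, n):
    """Apply a two-qubit unitary U to adjacent qubits (i, i+1) of an
    n-qubit state vector psi."""
    t = psi.reshape(2 ** i, 4, 2 ** (n - i - 2))
    return np.einsum('ab,xby->xay', U, t).reshape(-1)

def half_chain_purity(psi, n):
    """Tr(rho_A^2) for the first n//2 qubits: 1 for product states,
    down to 2**-(n//2) at maximal entanglement."""
    m = psi.reshape(2 ** (n // 2), -1)
    rho = m @ m.conj().T
    return float(np.real(np.trace(rho @ rho)))
```

Running a short circuit from |0…0⟩ and averaging the purity over circuit realizations gives a numerical handle on the area-law versus volume-law behavior discussed below.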
We consider different families of RQCs and study their typical entanglement properties for finite time as well as their asymptotic behavior. We find that the area law holds on average and that the volume law is a typical property (that is, it holds on average, and the fluctuations around the average vanish for large systems) of physical states. The area law arises when the evolution time is O(1) with respect to the size L of the system, while the volume law arises, and is typical, when the evolution time scales like O(L).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3996711','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3996711"><span>The Dropout Learning Algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Baldi, Pierre; Sadowski, Peter</p> <p>2014-01-01</p> <p>Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case.
The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality of the normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations.
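Equation (2) above, that the normalized weighted geometric mean (NWGM) of logistic outputs equals the logistic of the averaged input, can be verified numerically in a few lines. This is a demonstration of the identity only, with assumed helper names; the weights must sum to 1:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nwgm(p, w):
    """Normalized weighted geometric mean of probabilities p under
    weights w (summing to 1): G(p) / (G(p) + G(1 - p)), with G the
    weighted geometric mean."""
    g = np.exp(np.dot(w, np.log(p)))
    g_c = np.exp(np.dot(w, np.log(1.0 - p)))
    return g / (g + g_c)
```

For logistic units this holds exactly, since the log-odds of the NWGM is the weighted average of the individual log-odds; it is what lets a single forward pass with scaled weights approximate the average over the exponentially many dropout subnetworks.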
PMID:24771879</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006JCAMD..20..263B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006JCAMD..20..263B"><span>RNA unrestrained molecular dynamics ensemble improves agreement with experimental NMR data compared to single static structure: a test case</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Beckman, Robert A.; Moreland, David; Louise-May, Shirley; Humblet, Christine</p> <p>2006-05-01</p> <p>Nuclear magnetic resonance (NMR) provides structural and dynamic information reflecting an average, often non-linear, of multiple solution-state conformations. Therefore, a single optimized structure derived from NMR refinement may be misleading if the NMR data actually result from averaging of distinct conformers. It is hypothesized that a conformational ensemble generated by a valid molecular dynamics (MD) simulation should be able to improve agreement with the NMR data set compared with the single optimized starting structure. Using a model system consisting of two sequence-related self-complementary ribonucleotide octamers for which NMR data was available, 0.3 ns particle mesh Ewald MD simulations were performed in the AMBER force field in the presence of explicit water and counterions. Agreement of the averaged properties of the molecular dynamics ensembles with NMR data such as homonuclear proton nuclear Overhauser effect (NOE)-based distance constraints, homonuclear proton and heteronuclear 1H-31P coupling constant ( J) data, and qualitative NMR information on hydrogen bond occupancy, was systematically assessed. Despite the short length of the simulation, the ensemble generated from it agreed with the NMR experimental constraints more completely than the single optimized NMR structure. 
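The kind of non-linear NOE averaging discussed above can be made concrete: the distance an NOE reports is the r⁻⁶-weighted ensemble average over MD frames, not the distance in the mean structure. A minimal sketch, with an assumed function name:

```python
import numpy as np

def noe_distance(distances):
    """NOE-style ensemble average of an interproton distance over MD
    frames: <r**-6>**(-1/6).  Frames with short distances dominate, so
    a conformational ensemble can satisfy a distance bound that the
    single average structure violates."""
    r = np.asarray(distances, dtype=float)
    return float(np.mean(r ** -6.0) ** (-1.0 / 6.0))
```

For example, an ensemble alternating between 2 Å and 6 Å reports an effective distance of about 2.2 Å, far below the 4 Å arithmetic mean, which is why an MD ensemble can agree with NOE constraints better than the optimized single structure.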
This suggests that short unrestrained MD simulations may be of utility in interpreting NMR results. As expected, a 0.5 ns simulation utilizing a distance-dependent dielectric did not improve agreement with the NMR data, consistent with its inferior exploration of conformational space as assessed by 2-D RMSD plots. Thus, the ability to rapidly improve agreement with NMR constraints may be a sensitive diagnostic of the MD methods themselves.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMIN21D0065T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMIN21D0065T"><span>The NASA Reanalysis Ensemble Service - Advanced Capabilities for Integrated Reanalysis Access and Intercomparison</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.</p> <p>2017-12-01</p> <p>NASA's efforts to advance climate analytics-as-a-service are making new capabilities available to the research community: (1) A full-featured Reanalysis Ensemble Service (RES) comprising monthly means data from multiple reanalysis data sets, accessible through an enhanced set of extraction, analytic, arithmetic, and intercomparison operations. The operations are made accessible through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib; (2) A cloud-based, high-performance Virtual Real-Time Analytics Testbed supporting a select set of climate variables. This near real-time capability enables advanced technologies like Spark and Hadoop-based MapReduce analytics over native NetCDF files; and (3) A WPS-compliant Web service interface to our climate data analytics service that will enable greater interoperability with next-generation systems such as ESGF.
The Reanalysis Ensemble Service includes the following: - New API that supports full temporal, spatial, and grid-based resolution services with sample queries - A Docker-ready RES application to deploy across platforms - Extended capabilities that enable single- and multiple-reanalysis area averages, vertical averages, re-gridding, standard deviations, and ensemble averages - Convenient, one-stop shopping for commonly used data products from multiple reanalyses including basic sub-setting and arithmetic operations (e.g., avg, sum, max, min, var, count, anomaly) - Full support for the MERRA-2 reanalysis dataset in addition to ECMWF ERA-Interim, NCEP CFSR, JMA JRA-55 and NOAA/ESRL 20CR… - A Jupyter notebook-based distribution mechanism designed for client use cases that combines CDSlib documentation with interactive scenarios and personalized project management - Supporting analytic services for NASA GMAO Forward Processing datasets - Basic uncertainty quantification services that combine heterogeneous ensemble products with comparative observational products (e.g., reanalysis, observational, visualization) - The ability to compute and visualize multiple reanalyses for ease of inter-comparison - Automated tools to retrieve and prepare data collections for analytic processing</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27940377','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27940377"><span>Using simulation to interpret experimental data in terms of protein conformational ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Allison, Jane R</p> <p>2017-04-01</p> <p>In their biological environment, proteins are dynamic molecules, necessitating an ensemble structural description.
Molecular dynamics simulations and solution-state experiments provide complementary information in the form of atomically detailed coordinates and averaged values or distributions of structural properties or related quantities. Recently, increases in the temporal and spatial scale of conformational sampling and comparison of the more diverse conformational ensembles thus generated have revealed the importance of sampling rare events. Excitingly, new methods based on maximum entropy and Bayesian inference are promising to provide a statistically sound mechanism for combining experimental data with molecular dynamics simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12929922','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12929922"><span>Training in cortical control of neuroprosthetic devices improves signal extraction from small neuronal ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Helms Tillery, S I; Taylor, D M; Schwartz, A B</p> <p>2003-01-01</p> <p>We have recently developed a closed-loop environment in which we can test the ability of primates to control the motion of a virtual device using ensembles of simultaneously recorded neurons [29]. Here we use a maximum likelihood method to assess the information about task performance contained in the neuronal ensemble. We trained two animals to control the motion of a computer cursor in three dimensions. Initially the animals controlled cursor motion using arm movements, but eventually they learned to drive the cursor directly from cortical activity.
Using a population vector (PV) based upon the relation between cortical activity and arm motion, the animals were able to control the cursor directly from the brain in a closed-loop environment, but with difficulty. We added a supervised learning method that modified the parameters of the PV according to task performance (adaptive PV), and found that animals were able to exert much finer control over the cursor motion from brain signals. Here we describe a maximum likelihood method (ML) to assess the information about the target contained in neuronal ensemble activity. Using this method, we compared the information about the target contained in the ensemble during arm control, during brain control early in the adaptive PV, and during brain control after the adaptive PV had settled and the animal could drive the cursor reliably and with fine gradations. During the arm-control task, the ML was able to determine the target of the movement in as few as 10% of the trials, and as many as 75% of the trials, with an average of 65%. This average dropped when the animals used a population vector to control motion of the cursor. On average, we could determine the target in around 35% of the trials. This low percentage was also reflected in poor control of the cursor, so that the animal was unable to reach the target in a large percentage of trials. Supervised adjustment of the population vector parameters produced new weighting coefficients and directional tuning parameters for many neurons. This produced a much better performance of the brain-controlled cursor motion. It was also reflected in the maximum likelihood measure of cell activity, producing the correct target based only on neuronal activity in over 80% of the trials on average.
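A maximum likelihood target decoder of the kind described can be sketched under a simple Poisson firing assumption (the actual decoder and tuning model used in the study may differ; the tuning table below is invented):

```python
import numpy as np

# Sketch of maximum-likelihood (ML) target decoding from ensemble firing.
# The tuning table (mean spike counts per neuron for each target) is a
# made-up stand-in; a real one would be fit from training trials.
tuning = np.array([
    [12.0,  3.0,  6.0],   # neuron 1: prefers target 0
    [ 4.0, 10.0,  5.0],   # neuron 2: prefers target 1
    [ 7.0,  6.0, 14.0],   # neuron 3: prefers target 2
    [ 9.0,  2.0,  4.0],   # neuron 4: prefers target 0
])

def decode_target(counts, tuning):
    """Return the target index maximizing the Poisson log-likelihood
    sum_i [ n_i * log(lambda_i(t)) - lambda_i(t) ] over candidate targets t."""
    counts = np.asarray(counts, dtype=float)[:, None]
    loglik = np.sum(counts * np.log(tuning) - tuning, axis=0)
    return int(np.argmax(loglik))

# Decoding the noiseless mean counts recovers each target exactly, because
# each per-neuron term n*log(lam) - lam is maximized at lam equal to n.
print([decode_target(tuning[:, t], tuning) for t in range(3)])  # → [0, 1, 2]
```

On noisy spike counts the decoder errs at a rate set by how well separated the tuning columns are, which is exactly what the percent-correct figures quoted above measure.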
The changes in maximum likelihood estimates of target location based on ensemble firing show that an animal's ability to regulate the motion of a cortically controlled device is not crucially dependent on the experimenter's ability to estimate intention from neuronal activity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012AGUFM.H42B..05D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012AGUFM.H42B..05D"><span>Verification of Ensemble Forecasts for the New York City Operations Support Tool</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Day, G.; Schaake, J. C.; Thiemann, M.; Draijer, S.; Wang, L.</p> <p>2012-12-01</p> <p>The New York City water supply system operated by the Department of Environmental Protection (DEP) serves nine million people. It covers 2,000 square miles of portions of the Catskill, Delaware, and Croton watersheds, and it includes nineteen reservoirs and three controlled lakes. DEP is developing an Operations Support Tool (OST) to support its water supply operations and planning activities. OST includes historical and real-time data, a model of the water supply system complete with operating rules, and lake water quality models developed to evaluate alternatives for managing turbidity in the New York City Catskill reservoirs. OST will enable DEP to manage turbidity in its unfiltered system while satisfying its primary objective of meeting the City's water supply needs, in addition to considering secondary objectives of maintaining ecological flows, supporting fishery and recreation releases, and mitigating downstream flood peaks. The current version of OST relies on statistical forecasts of flows in the system based on recent observed flows. 
To improve short-term decision making, plans are being made to transition to National Weather Service (NWS) ensemble forecasts based on hydrologic models that account for short-term weather forecast skill and longer-term climate information, as well as the hydrologic state of the watersheds and recent observed flows. To ensure that the ensemble forecasts are unbiased and that the ensemble spread reflects the actual uncertainty of the forecasts, a statistical model has been developed to post-process the NWS ensemble forecasts to account for hydrologic model error as well as any inherent bias and uncertainty in initial model states, meteorological data and forecasts. The post-processor is designed to produce adjusted ensemble forecasts that are consistent with the DEP historical flow sequences that were used to develop the system operating rules. A set of historical hindcasts that is representative of the real-time ensemble forecasts is needed to verify that the post-processed forecasts are unbiased, statistically reliable, and preserve the skill inherent in the "raw" NWS ensemble forecasts. A verification procedure and set of metrics will be presented that provide an objective assessment of ensemble forecasts. The procedure will be applied to both raw ensemble hindcasts and to post-processed ensemble hindcasts. The verification metrics will be used to validate proper functioning of the post-processor and to provide a benchmark for comparison of different types of forecasts. For example, current NWS ensemble forecasts are based on climatology, using each historical year to generate a forecast trace. The NWS Hydrologic Ensemble Forecast System (HEFS) under development will utilize output from both the National Oceanic and Atmospheric Administration (NOAA) Global Ensemble Forecast System (GEFS) and the Climate Forecast System (CFS). Incorporating short-term meteorological forecasts and longer-term climate forecast information should provide sharper, more accurate forecasts.
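One standard ingredient of such a verification procedure is the continuous ranked probability score (CRPS), which rewards ensembles that are both sharp and centred on the observation. A minimal sketch using the empirical ensemble estimator (the forecast members and observation are invented values):

```python
import numpy as np

# Empirical CRPS for an ensemble forecast of a scalar quantity:
#   CRPS = E|X - y| - 0.5 * E|X - X'|
# where X, X' are independent draws from the ensemble and y is the observation.
def crps_ensemble(members, obs):
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# A sharp, well-centred ensemble scores better (lower) than a biased one.
obs = 10.0
centred = np.array([9.0, 9.5, 10.0, 10.5, 11.0])
biased = centred + 5.0
print(round(crps_ensemble(centred, obs), 3), round(crps_ensemble(biased, obs), 3))  # → 0.2 4.6
```

Averaged over many forecast-observation pairs, scores like this give the objective benchmark the abstract calls for when comparing raw and post-processed hindcasts.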
Hindcasts from HEFS will enable New York City to generate verification results to validate the new forecasts and further fine-tune system operating rules. Project verification results will be presented for different watersheds across a range of seasons, lead times, and flow levels to assess the quality of the current ensemble forecasts.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_11 --> <div id="page_12" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="221"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EGUGA..15.7768S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EGUGA..15.7768S"><span>Coupled lagged ensemble weather- and river runoff prediction in complex Alpine terrain</span></a></p> <p><a target="_blank" rel="noopener noreferrer"
href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Smiatek, Gerhard; Kunstmann, Harald; Werhahn, Johannes</p> <p>2013-04-01</p> <p>It is still a challenge to predict the fast streamflow response to precipitation in Alpine terrain. Civil protection measures require flood prediction at 24-48 h lead time. This holds particularly true for the Ammer River region, which was affected by century floods in 1999, 2003 and 2005. Since 2005, a coupled NWP/hydrology model system has been operated to simulate and predict the Ammer River discharges. The Ammer River catchment is located in the Bavarian Ammergau Alps and alpine forelands, Germany. With elevations reaching 2185 m and annual mean precipitation between 1100 and 2000 mm, it represents a very demanding test ground for a river runoff prediction system. The one-way coupled system utilizes a lagged ensemble prediction system (EPS) taking into account a combination of recent and previous NWP forecasts. The major components of the system are the MM5 NWP model run at 3.5 km resolution and initialized twice a day, the hydrology model WaSiM-ETH run at 100 m resolution, and the Perl object environment (POE) implementing the networking and the system operation. Results obtained in the years 2005-2012 reveal that river runoff simulations already show high correlation with observed runoff (NSC in the range 0.53-0.95) in retrospective runs with monitored meteorology data, but suffer from errors in the quantitative precipitation forecast (QPF) from the employed numerical weather prediction model. We evaluate the NWP model accuracy, especially the precipitation intensity, frequency and location, and put a focus on the performance gain of bias adjustment procedures. We show how the enhanced QPF data help to reduce the uncertainty in the discharge prediction.
In addition to the HND (Hochwassernachrichtendienst, Bayern) observations, TERENO Long-term Observatory hydrometeorological observation data have been available since 2011. They are used to evaluate the NWP performance and to set up a bias correction procedure based on ensemble postprocessing applying Bayesian model averaging (BMA). We first briefly present the technical setup of the operational coupled lagged NWP/hydrology model system and then focus on the evaluation of the NWP model, the BMA-enhanced QPF, and its application within the Ammer simulation system in the period 2011-2012.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010APS..SES.CA001W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010APS..SES.CA001W"><span>Observing the conformation of individual SNARE proteins inside live cells</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Weninger, Keith</p> <p>2010-10-01</p> <p>Protein conformational dynamics are directly linked to function in many instances. Within living cells, protein dynamics are rarely synchronized, so observing ensemble-averaged behaviors can hide details of signaling pathways. Here we present an approach using single molecule fluorescence resonance energy transfer (FRET) to observe the conformation of individual SNARE proteins as they fold to enter the SNARE complex in living cells. Proteins were recombinantly expressed, labeled with small-molecule fluorescent dyes and microinjected for in vivo imaging and tracking using total internal reflection microscopy. Observing single molecules avoids the difficulties of averaging over unsynchronized ensembles.
Our approach is easily generalized to a wide variety of proteins in many cellular signaling pathways.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27991626','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27991626"><span>Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S</p> <p>2017-01-05</p> <p>The magnitude and complexity of the structural and functional data available on nanomaterials require data analytics, statistical analysis and information technology to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles.
This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data, and significantly reduce the workload to simulate experimentally relevant virtual samples.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/5859946-nonuniform-fluids-grand-canonical-ensemble','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/5859946-nonuniform-fluids-grand-canonical-ensemble"><span>Nonuniform fluids in the grand canonical ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Percus, J.K.</p> <p>1982-01-01</p> <p>Nonuniform simple classical fluids are considered quite generally. The grand canonical ensemble is particularly suitable, conceptually, in the leading approximation of local thermodynamics, which figuratively divides the system into approximately uniform spatial subsystems. The procedure is reviewed by which this approach is systematically corrected for slowly varying density profiles, and a model is suggested that carries the correction into the domain of local fluctuations. The latter is assessed for substrate bounded fluids, as well as for two-phase interfaces. The peculiarities of the grand ensemble in a two-phase region stem from the inherent very large number fluctuations. A primitive model shows how these are quenched in the canonical ensemble.
This is taken advantage of by applying the Kac-Siegert representation of the van der Waals decomposition with petit canonical corrections to the two-phase regime.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26737994','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26737994"><span>Analysis of microvascular perfusion with multi-dimensional complete ensemble empirical mode decomposition with adaptive noise algorithm: Processing of laser speckle contrast images recorded in healthy subjects, at rest and during acetylcholine stimulation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Humeau-Heurtier, Anne; Marche, Pauline; Dubois, Severine; Mahe, Guillaume</p> <p>2015-01-01</p> <p>Laser speckle contrast imaging (LSCI) is a full-field imaging modality to monitor microvascular blood flow. It is able to give images with high temporal and spatial resolutions. However, when the skin is studied, the interpretation of the bidimensional data may be difficult. This is why an averaging of the perfusion values in regions of interest is often performed and the result is followed in time, reducing the data to monodimensional time series. In order to avoid such a procedure (which leads to a loss of spatial resolution), we propose to extract patterns from LSCI data and to compare these patterns for two physiological states in healthy subjects: at rest and at the peak of acetylcholine-induced perfusion. For this purpose, the recent multi-dimensional complete ensemble empirical mode decomposition with adaptive noise (MCEEMDAN) algorithm is applied to LSCI data. The results show that the intrinsic mode functions and residue given by MCEEMDAN show different patterns for the two physiological states.
The images, as bidimensional data, can therefore be processed to reveal microvascular perfusion patterns hidden in the images themselves. This work is thus a feasibility study before analyzing data in patients with microvascular dysfunctions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010EL.....9030004K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010EL.....9030004K"><span>Ergodicity of financial indices</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kolesnikov, A. V.; Rühl, T.</p> <p>2010-05-01</p> <p>We introduce the concept of ensemble averaging for financial markets. We address the question of the equality of ensemble and time averaging and investigate whether these averagings are equivalent for a large set of equity indices and branches. We start with a model of Gaussian-distributed returns, equal-weighted stocks in each index and an absence of correlations within a single day, and show that even this oversimplified model already captures the run of the corresponding index reasonably well due to its self-averaging properties. We introduce the concept of the instant cross-sectional volatility and discuss its relation to the ordinary time-resolved counterpart. The role of the cross-sectional volatility for the description of the corresponding index, as well as the role of correlations between the single stocks and the role of non-Gaussianity of stock distributions, is briefly discussed.
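The ensemble-versus-time-averaging question can be checked directly on the oversimplified model described above: equal-weighted stocks with independent Gaussian daily returns. In this sketch all parameters are invented; for a stationary, ergodic process the two averaging routes converge to the same drift:

```python
import numpy as np

# Toy check of ensemble vs time averaging for Gaussian-distributed returns.
rng = np.random.default_rng(42)
n_stocks, n_days = 500, 2000
mu, sigma = 0.0005, 0.02   # illustrative daily drift and volatility

returns = rng.normal(mu, sigma, size=(n_stocks, n_days))

ensemble_avg = returns.mean(axis=0)   # average over stocks: one value per day
time_avg = returns.mean(axis=1)       # average over days: one value per stock

# Both families of averages scatter around the same drift mu, and their
# grand means coincide exactly (both equal the overall sample mean).
print(float(ensemble_avg.mean()), float(time_avg.mean()))
```

Introducing cross-stock correlations or fat-tailed returns, as the abstract discusses, is precisely what can break this equivalence.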
Our model reveals quickly and efficiently some anomalies or bubbles in a particular financial market and gives an estimate of how large these effects can be and how quickly they disappear.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28522849','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28522849"><span>CarcinoPred-EL: Novel models for predicting the carcinogenicity of chemicals using molecular fingerprints and ensemble learning methods.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Li; Ai, Haixin; Chen, Wen; Yin, Zimo; Hu, Huan; Zhu, Junfeng; Zhao, Jian; Zhao, Qi; Liu, Hongsheng</p> <p>2017-05-18</p> <p>Carcinogenicity refers to a highly toxic end point of certain chemicals, and has become an important issue in the drug development process. In this study, three novel ensemble classification models, namely Ensemble SVM, Ensemble RF, and Ensemble XGBoost, were developed to predict carcinogenicity of chemicals using seven types of molecular fingerprints and three machine learning methods based on a dataset containing 1003 diverse compounds with rat carcinogenicity. Among these three models, Ensemble XGBoost is found to be the best, giving an average accuracy of 70.1 ± 2.9%, sensitivity of 67.0 ± 5.0%, and specificity of 73.1 ± 4.4% in five-fold cross-validation and an accuracy of 70.0%, sensitivity of 65.2%, and specificity of 76.5% in external validation. In comparison with some recent methods, the ensemble models outperform some machine learning-based approaches and yield equal accuracy and higher specificity but lower sensitivity than rule-based expert systems. It is also found that the ensemble models could be further improved if more data were available. 
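The ensemble classification idea behind models such as Ensemble XGBoost can be reduced to its simplest form, majority voting over base learners. The hard-coded 0/1 predictions below are invented stand-ins for trained models scoring five hypothetical compounds (1 = predicted carcinogen):

```python
import numpy as np

# Minimal majority-vote ensemble: several base classifiers vote and the
# majority label wins for each sample.
def majority_vote(predictions):
    """predictions: (n_models, n_samples) array of 0/1 labels."""
    predictions = np.asarray(predictions)
    votes = predictions.sum(axis=0)
    # A sample is labelled 1 when more than half the models vote 1.
    return (votes * 2 > predictions.shape[0]).astype(int)

# Three toy 'models' classifying five compounds.
preds = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0],
])
print(majority_vote(preds).tolist())  # → [1, 0, 1, 1, 0]
```

Production ensembles like those in the abstract weight or stack probabilistic outputs rather than hard labels, but the variance-reduction principle is the same.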
As an application, the ensemble models are employed to discover potential carcinogens in the DrugBank database. The results indicate that the proposed models are helpful in predicting the carcinogenicity of chemicals. A web server called CarcinoPred-EL has been built for these models ( http://ccsipb.lnu.edu.cn/toxicity/CarcinoPred-EL/ ).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25142516','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25142516"><span>The interplay between cooperativity and diversity in model threshold ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cervera, Javier; Manzanares, José A; Mafe, Salvador</p> <p>2014-10-06</p> <p>The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. © 2014 The Author(s) Published by the Royal Society. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011SPIE.7962E..2PH','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011SPIE.7962E..2PH"><span>Confidence-based ensemble for GBM brain tumor segmentation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Huo, Jing; van Rikxoort, Eva M.; Okada, Kazunori; Kim, Hyun J.; Pope, Whitney; Goldin, Jonathan; Brown, Matthew</p> <p>2011-03-01</p> <p>It is a challenging task to automatically segment glioblastoma multiforme (GBM) brain tumors on T1w post-contrast isotropic MR images. A semi-automated system using fuzzy connectedness has recently been developed for computing the tumor volume, reducing the cost of manual annotation. In this study, we propose an ensemble method that combines multiple segmentation results into a final one. The method is evaluated on a dataset of 20 cases from a multi-center pharmaceutical drug trial and compared to the fuzzy connectedness method. Three individual methods were used in the framework: fuzzy connectedness, GrowCut, and voxel classification. The combination method is a confidence map averaging (CMA) method. The CMA method shows an improved ROC curve compared to the fuzzy connectedness method (p < 0.001).
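Confidence map averaging can be sketched in a few lines: each method contributes a per-voxel confidence in [0, 1], the maps are averaged, and the mean map is thresholded. The tiny 2x2 maps below are invented stand-ins for the fuzzy connectedness, GrowCut, and voxel classification outputs:

```python
import numpy as np

# Sketch of a confidence map averaging (CMA) ensemble for segmentation.
def cma_ensemble(conf_maps, threshold=0.5):
    """Average per-voxel confidence maps and threshold the mean map."""
    mean_map = np.mean(np.asarray(conf_maps, dtype=float), axis=0)
    return mean_map, (mean_map >= threshold)

# Three toy confidence maps over a 2x2 image patch.
maps = [
    np.array([[0.9, 0.2], [0.7, 0.1]]),
    np.array([[0.8, 0.4], [0.3, 0.0]]),
    np.array([[0.7, 0.3], [0.8, 0.2]]),
]
mean_map, mask = cma_ensemble(maps)
print(mask.astype(int).tolist())  # → [[1, 0], [1, 0]]
```

Averaging in confidence space rather than voting on binary masks lets a method that is unsure (e.g. 0.3 here) pull the ensemble decision without vetoing it, which is one plausible reason the combined result is more robust than any single method.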
The CMA ensemble result is more robust compared to the three individual methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JHyd..555..371A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JHyd..555..371A"><span>On the incidence of meteorological and hydrological processors: Effect of resolution, sharpness and reliability of hydrological ensemble forecasts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Abaza, Mabrouk; Anctil, François; Fortin, Vincent; Perreault, Luc</p> <p>2017-12-01</p> <p>Meteorological and hydrological ensemble prediction systems are imperfect. Their outputs could often be improved through the use of a statistical processor, opening up the question of the necessity of using both processors (meteorological and hydrological), only one of them, or none. This experiment compares the predictive distributions from four hydrological ensemble prediction systems (H-EPS) utilising the Ensemble Kalman filter (EnKF) probabilistic sequential data assimilation scheme. They differ in the inclusion or not of the Distribution Based Scaling (DBS) method for post-processing meteorological forecasts and the ensemble Bayesian Model Averaging (ensemble BMA) method for hydrological forecast post-processing. The experiment is implemented on three large watersheds and relies on the combination of two meteorological reforecast products: the 4-member Canadian reforecasts from the Canadian Centre for Meteorological and Environmental Prediction (CCMEP) and the 10-member American reforecasts from the National Oceanic and Atmospheric Administration (NOAA), leading to 14 members at each time step. Results show that all four tested H-EPS lead to resolution and sharpness values that are quite similar, with an advantage to DBS + EnKF. 
The ensemble BMA is unable to compensate for any bias left in the precipitation ensemble forecasts. On the other hand, it succeeds in calibrating ensemble members that are otherwise under-dispersed. If reliability is preferred over resolution and sharpness, DBS + EnKF + ensemble BMA performs best, making use of both processors in the H-EPS system. Conversely, for enhanced resolution and sharpness, DBS is the preferred method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29454111','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29454111"><span>Combining Rosetta with molecular dynamics (MD): A benchmark of the MD-based ensemble protein design.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ludwiczak, Jan; Jarmula, Adam; Dunin-Horkawicz, Stanislaw</p> <p>2018-07-01</p> <p>Computational protein design is a set of procedures for computing amino acid sequences that will fold into a specified structure. Rosetta Design, a commonly used software for protein design, allows for the effective identification of sequences compatible with a given backbone structure, while molecular dynamics (MD) simulations can thoroughly sample near-native conformations. We benchmarked a procedure in which Rosetta design is started on MD-derived structural ensembles and showed that such a combined approach generates 20-30% more diverse sequences than currently available methods with only a slight increase in computation time. Importantly, the increase in diversity is achieved without a loss in the quality of the designed sequences assessed by their resemblance to natural sequences. We demonstrate that the MD-based procedure is also applicable to de novo design tasks started from backbone structures without any sequence information. 
In addition, we implemented a protocol that can be used to assess the stability of designed models and to select the best candidates for experimental validation. In sum our results demonstrate that the MD ensemble-based flexible backbone design can be a viable method for protein design, especially for tasks that require a large pool of diverse sequences. Copyright © 2018 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AGUFM.A23F..03A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AGUFM.A23F..03A"><span>Ensemble Downscaling of Winter Seasonal Forecasts: The MRED Project</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Arritt, R. W.; Mred Team</p> <p>2010-12-01</p> <p>The Multi-Regional climate model Ensemble Downscaling (MRED) project is a multi-institutional project that is producing large ensembles of downscaled winter seasonal forecasts from coupled atmosphere-ocean seasonal prediction models. Eight regional climate models each are downscaling 15-member ensembles from the National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) and the new NASA seasonal forecast system based on the GEOS5 atmospheric model coupled with the MOM4 ocean model. This produces 240-member ensembles, i.e., 8 regional models x 15 global ensemble members x 2 global models, for each winter season (December-April) of 1982-2003. Results to date show that combined global-regional downscaled forecasts have greatest skill for seasonal precipitation anomalies during strong El Niño events such as 1982-83 and 1997-98. 
Ensemble means of area-averaged seasonal precipitation for the regional models generally track the corresponding results for the global model, though there is considerable inter-model variability amongst the regional models. For seasons and regions where area mean precipitation is accurately simulated, the regional models bring added value by extracting greater spatial detail from the global forecasts, mainly due to better resolution of terrain in the regional models. Our results also emphasize that an ensemble approach is essential to realizing the added value from the combined global-regional modeling system.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://cfpub.epa.gov/si/si_public_record_report.cfm?direntryid=334179','PESTICIDES'); return false;" href="https://cfpub.epa.gov/si/si_public_record_report.cfm?direntryid=334179"><span>Insights into the deterministic skill of air quality ensembles ...</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.epa.gov/pesticides/search.htm">EPA Pesticide Factsheets</a></p> <p></p> <p></p> <p>Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the approach behind the other three methods relies on adding optimum weights to members or constraining the ensemble to those members that meet certain conditions in the time or frequency domain. 
The two different datasets were created for the first and second phases of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground-level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate O3 better than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill compared to each stati</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70035825','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70035825"><span>Assessing the impact of land use change on hydrology by ensemble modelling (LUCHEM) II: Ensemble combinations and predictions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.</p> <p>2009-01-01</p> <p>This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. 
The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of the models is shown to contribute useful information to the ensembles it is part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better-performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows), generally yield little improvement over the weighted mean ensemble. However, a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in multi-model ensembles. The reasons behind these observations may relate to the effects of the weighting schemes, non-stationarity of the climate series and possible cross-correlations between models. Crown Copyright © 
2008.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010JCAMD..24..675Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010JCAMD..24..675Y"><span>Dynamic clustering threshold reduces conformer ensemble size while maintaining a biologically relevant ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yongye, Austin B.; Bender, Andreas; Martínez-Mayorga, Karina</p> <p>2010-08-01</p> <p>Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds. Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value. This algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers, namely: OMEGA, NMRCLUST, RMS filtering and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes from the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining biological relevance of the ensemble. 
It was observed that NMRCLUST (containing on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying the bioactive conformations with excellent and good matches (0.5 < RMSD < 1.0 Å). Furthermore, we propose thresholds for OMEGA root-mean-square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 for structures with low (1-4), medium (5-9) and high (10-15) numbers of rotatable bonds, respectively. The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and alleviate the complexity of downstream data processing in virtual screening experiments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1312046-near-optimal-protocols-complex-nonequilibrium-transformations','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1312046-near-optimal-protocols-complex-nonequilibrium-transformations"><span>Near-optimal protocols in complex nonequilibrium transformations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Gingrich, Todd R.; Rotskoff, Grant M.; Crooks, Gavin E.; ...</p> <p>2016-08-29</p> <p>The development of sophisticated experimental means to control nanoscale systems has motivated efforts to design driving protocols that minimize the energy dissipated to the environment. Computational models are a crucial tool in this practical challenge. In this paper, we describe a general method for sampling an ensemble of finite-time, nonequilibrium protocols biased toward a low average dissipation. In addition, we show that this scheme can be carried out very efficiently in several limiting cases. 
As an application, we sample the ensemble of low-dissipation protocols that invert the magnetization of a 2D Ising model and explore how the diversity of the protocols varies in response to constraints on the average dissipation. In this example, we find that there is a large set of protocols with average dissipation close to the optimal value, which we argue is a general phenomenon.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3580869','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3580869"><span>Automated Delineation of Lung Tumors from CT Images Using a Single Click Ensemble Segmentation Approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Gu, Yuhua; Kumar, Virendra; Hall, Lawrence O; Goldgof, Dmitry B; Li, Ching-Yen; Korn, René; Bendtsen, Claus; Velazquez, Emmanuel Rios; Dekker, Andre; Aerts, Hugo; Lambin, Philippe; Li, Xiuli; Tian, Jie; Gatenby, Robert A; Gillies, Robert J</p> <p>2012-01-01</p> <p>A single click ensemble segmentation (SCES) approach based on an existing “Click&Grow” algorithm is presented. The SCES approach requires only one operator-selected seed point, as compared with the multiple operator inputs that are typically needed. This facilitates processing large numbers of cases. The approach was evaluated on a set of 129 CT lung tumor images using a similarity index (SI). The average SI is above 93% using 20 different start seeds, showing stability. The average SI for 2 different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm and the skeleton graph cut algorithm, obtaining an average SI of 78.29%, 77.72%, 63.77% and 63.76%, respectively. 
We can conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate and automated. PMID:23459617</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25913899','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25913899"><span>Real time detection of farm-level swine mycobacteriosis outbreak using time series modeling of the number of condemned intestines in abattoirs.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Adachi, Yasumoto; Makita, Kohei</p> <p>2015-09-01</p> <p>Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer so that corrective actions can be taken when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis would therefore be a useful threshold to detect an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at 2 abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent 2 years with the actual data for 2011 and 2012. For the modeling, periodicities were first checked using the Fast Fourier Transform, and the ensemble average profiles for weekly periodicities were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of the minimum Akaike information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. 
During 2011 and 2012, the number of wholly or partially condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from three producers with the highest rate of condemnation due to mycobacteriosis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5325197','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5325197"><span>Clustering cancer gene expression data by projective clustering ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yu, Xianxue; Yu, Guoxian</p> <p>2017-01-01</p> <p>Gene expression data analysis has paramount implications for gene treatments, cancer diagnosis and other domains. Clustering is an important and promising tool to analyze gene expression data. Gene expression data is often characterized by a large number of genes but limited samples, so various projective clustering techniques and ensemble techniques have been suggested to combat these challenges. However, it is rather challenging to synergize these two kinds of techniques to avoid the curse-of-dimensionality problem and to boost the performance of gene expression data clustering. In this paper, we employ a projective clustering ensemble (PCE) to integrate the advantages of projective clustering and ensemble clustering, and to avoid the dilemma of combining multiple projective clusterings. Our experimental results on publicly available cancer gene expression data show that PCE can improve the quality of clustering gene expression data by at least 4.5% (on average) over other related techniques, including dimensionality-reduction-based single clustering and ensemble approaches. 
The empirical study demonstrates that, to further boost the performance of clustering cancer gene expression data, it is necessary and promising to synergize projective clustering with ensemble clustering. PCE can serve as an effective alternative technique for clustering gene expression data. PMID:28234920</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoJI.212..345O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoJI.212..345O"><span>Numerical modelling of multiphase multicomponent reactive transport in the Earth's interior</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Oliveira, Beñat; Afonso, Juan Carlos; Zlotnik, Sergio; Diez, Pedro</p> <p>2018-01-01</p> <p>We present a conceptual and numerical approach to model processes in the Earth's interior that involve multiple phases that simultaneously interact thermally, mechanically and chemically. The approach is truly multiphase in the sense that each dynamic phase is explicitly modelled with an individual set of mass, momentum, energy and chemical mass balance equations coupled via interfacial interaction terms. It is also truly multicomponent in the sense that the compositions of the system and its constituent phases are expressed by a full set of fundamental chemical components (e.g. SiO2, Al2O3, MgO, etc.) rather than proxies. These chemical components evolve, react with and partition into different phases according to an internally consistent thermodynamic model. We combine concepts from Ensemble Averaging and Classical Irreversible Thermodynamics to obtain sets of macroscopic balance equations that describe the evolution of systems governed by multiphase multicomponent reactive transport (MPMCRT). 
Equilibrium mineral assemblages, their compositions and physical properties, and closure relations for the balance equations are obtained via a `dynamic' Gibbs free-energy minimization procedure (i.e. minimizations are performed on-the-fly as needed by the simulation). Surface tension and surface energy contributions to the dynamics and energetics of the system are taken into account. We show how complex rheologies, that is, visco-elasto-plastic, and/or different interfacial models can be incorporated into our MPMCRT ensemble-averaged formulation. The resulting model provides a reliable platform to study the dynamics and nonlinear feedbacks of MPMCRT systems of different nature and scales, as well as to make realistic comparisons with both geophysical and geochemical data sets. Several numerical examples are presented to illustrate the benefits and limitations of the model.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_12 --> <div id="page_13" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="241"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29758620','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29758620"><span>Accuracy of the microcanonical Lanczos method to compute real-frequency dynamical spectral functions of quantum models at finite temperatures.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Okamoto, Satoshi; Alvarez, Gonzalo; Dagotto, Elbio; Tohyama, Takami</p> <p>2018-04-01</p> <p>We examine the accuracy of the microcanonical Lanczos method (MCLM) developed by Long et al. [Phys. Rev. B 68, 235106 (2003), 10.1103/PhysRevB.68.235106] to compute dynamical spectral functions of interacting quantum models at finite temperatures. The MCLM is based on the microcanonical ensemble, which becomes exact in the thermodynamic limit. To apply the microcanonical ensemble at a fixed temperature, one has to find energy eigenstates with the energy eigenvalue corresponding to the internal energy in the canonical ensemble. Here, we propose to use thermal pure quantum state methods by Sugiura and Shimizu [Phys. Rev. Lett. 111, 010401 (2013), 10.1103/PhysRevLett.111.010401] to obtain the internal energy. After obtaining the energy eigenstates using the Lanczos diagonalization method, dynamical quantities are computed via a continued fraction expansion, a standard procedure for Lanczos-based numerical methods. 
Using one-dimensional antiferromagnetic Heisenberg chains with S=1/2, we demonstrate that the proposed procedure is reasonably accurate, even for relatively small systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvE..97d3308O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvE..97d3308O"><span>Accuracy of the microcanonical Lanczos method to compute real-frequency dynamical spectral functions of quantum models at finite temperatures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Okamoto, Satoshi; Alvarez, Gonzalo; Dagotto, Elbio; Tohyama, Takami</p> <p>2018-04-01</p> <p>We examine the accuracy of the microcanonical Lanczos method (MCLM) developed by Long et al. [Phys. Rev. B 68, 235106 (2003), 10.1103/PhysRevB.68.235106] to compute dynamical spectral functions of interacting quantum models at finite temperatures. The MCLM is based on the microcanonical ensemble, which becomes exact in the thermodynamic limit. To apply the microcanonical ensemble at a fixed temperature, one has to find energy eigenstates with the energy eigenvalue corresponding to the internal energy in the canonical ensemble. Here, we propose to use thermal pure quantum state methods by Sugiura and Shimizu [Phys. Rev. Lett. 111, 010401 (2013), 10.1103/PhysRevLett.111.010401] to obtain the internal energy. After obtaining the energy eigenstates using the Lanczos diagonalization method, dynamical quantities are computed via a continued fraction expansion, a standard procedure for Lanczos-based numerical methods. 
Using one-dimensional antiferromagnetic Heisenberg chains with S = 1/2, we demonstrate that the proposed procedure is reasonably accurate, even for relatively small systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EGUGA..15.5905M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EGUGA..15.5905M"><span>Probabilistic Storm Surge Forecast For Venice</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mel, Riccardo; Lionello, Piero</p> <p>2013-04-01</p> <p>This study describes an ensemble storm surge prediction procedure for the city of Venice, which is potentially very useful for its management, maintenance and for operating the movable barriers that are presently being built. The Ensemble Prediction System (EPS) is meant to complement the existing SL forecast system by providing a probabilistic forecast and information on uncertainty of SL prediction. The procedure is applied to storm surge events in the period 2009-2010, producing for each of them an ensemble of 50 simulations. It is shown that EPS slightly increases the accuracy of SL prediction with respect to the deterministic forecast (DF), and is more reliable. Though the results are biased low and the forecast uncertainty is underestimated, the probability distribution of maximum sea level produced by the EPS is acceptably realistic. The error of the EPS mean is shown to be correlated with the EPS spread. SL peaks correspond to maxima of uncertainty, and uncertainty increases linearly with the forecast range. 
The quasi-linear dynamics of the storm surges produces a modulation of the uncertainty after the SL peak with a period corresponding to that of the main Adriatic seiche.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ThApC.tmp..394S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ThApC.tmp..394S"><span>Simulation of tropical cyclone activity over the western North Pacific based on CMIP5 models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shen, Haibo; Zhou, Weican; Zhao, Haikun</p> <p>2017-09-01</p> <p>Based on the Coupled Model Inter-comparison Project 5 (CMIP5) models, the tropical cyclone (TC) activity in the summers of 1965-2005 over the western North Pacific (WNP) is simulated by a TC dynamical downscaling system. In consideration of the diversity among climate models, Bayesian model averaging (BMA) and equal-weighted model averaging (EMA) methods are applied to produce the ensemble large-scale environmental factors from the CMIP5 model outputs. The environmental factors generated by the BMA and EMA methods are compared, as well as the corresponding TC simulations by the downscaling system. Results indicate that the BMA method shows a significant advantage over EMA. In addition, the impact of model selection on the BMA method is examined. For each factor, the ten best-performing models are selected from the 30 CMIP5 models and BMA is conducted on them. As a consequence, the ensemble environmental factors and simulated TC activity are similar to the results from the 30-model BMA, which verifies that the BMA method assigns each model in the ensemble a weight consistent with its predictive skill. Thus, poorly performing models do not appreciably degrade the effectiveness of BMA, and the ensemble outcomes are improved. 
Finally, based upon the BMA method and downscaling system, we analyze the sensitivity of TC activity to three important environmental factors, i.e., sea surface temperature (SST), large-scale steering flow, and vertical wind shear. Among the three factors, SST and large-scale steering flow greatly affect TC tracks, while the average intensity distribution is sensitive to all three environmental factors. Moreover, SST and vertical wind shear jointly play a critical role in the inter-annual variability of TC lifetime maximum intensity and the frequency of intense TCs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EGUGA..1514107S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EGUGA..1514107S"><span>Synchronization Experiments With A Global Coupled Model of Intermediate Complexity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin</p> <p>2013-04-01</p> <p>In the super-modeling approach, an ensemble of imperfect models is connected through nudging terms that nudge the solution of each model to the solution of all other models in the ensemble. The goal is to obtain a synchronized state through a proper choice of connection strengths that closely tracks the trajectory of the true system. For the super-modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water and momentum; the ocean is integrated using weighted-average surface fluxes. 
In particular we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble mean solution. The results are interpreted using a low order atmosphere-ocean toy model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoRL..45.4273A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoRL..45.4273A"><span>Machine Learning Predictions of a Multiresolution Climate Model Ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Anderson, Gemma J.; Lucas, Donald D.</p> <p>2018-05-01</p> <p>Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. 
We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5024108','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5024108"><span>Controllable quantum dynamics of inhomogeneous nitrogen-vacancy center ensembles coupled to superconducting resonators</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Song, Wan-lu; Yang, Wan-li; Yin, Zhang-qi; Chen, Chang-yong; Feng, Mang</p> <p>2016-01-01</p> <p>We explore controllable quantum dynamics of a hybrid system, which consists of an array of mutually coupled superconducting resonators (SRs) with each containing a nitrogen-vacancy center spin ensemble (NVE) in the presence of inhomogeneous broadening. We focus on a three-site model, which compared with the two-site case, shows more complicated and richer dynamical behavior, and displays a series of damped oscillations under various experimental situations, reflecting the intricate balance and competition between the NVE-SR collective coupling and the adjacent-site photon hopping. Particularly, we find that the inhomogeneous broadening of the spin ensemble can suppress the population transfer between the SR and the local NVE. In this context, although the inhomogeneous broadening of the spin ensemble diminishes entanglement among the NVEs, optimal entanglement, characterized by averaging the lower bound of concurrence, could be achieved through accurately adjusting the tunable parameters. 
PMID:27627994</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24110485','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24110485"><span>An ensemble rank learning approach for gene prioritization.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lee, Po-Feng; Soo, Von-Wun</p> <p>2013-01-01</p> <p>Several different computational approaches have been developed to solve the gene prioritization problem. We use ensemble boosting techniques to combine various computational approaches for gene prioritization in order to improve the overall performance. In particular, we add a heuristic weighting function to the Rankboost algorithm according to: 1) the absolute ranks generated by the adopted methods for a certain gene, and 2) the ranking relationship between all gene-pairs from each prioritization result. We select 13 known prostate cancer genes in the OMIM database as the training set and protein-coding gene data in the HGNC database as the test set. We adopt the leave-one-out strategy for the ensemble rank boosting learning. 
The experimental results show that our ensemble learning approach outperforms the four gene-prioritization methods in the ToppGene suite when ranking the 13 known genes, in terms of mean average precision, ROC and AUC measures.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28036236','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28036236"><span>Ensemble Perception of Dynamic Emotional Groups.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Elias, Elric; Dyer, Michael; Sweeny, Timothy D</p> <p>2017-02-01</p> <p>Crowds of emotional faces are ubiquitous, so much so that the visual system utilizes a specialized mechanism known as ensemble coding to see them. In addition to being proximally close, members of emotional crowds, such as a laughing audience or an angry mob, often behave together. The manner in which crowd members behave, in sync or out of sync, may be critical for understanding their collective affect. Are ensemble mechanisms sensitive to these dynamic properties of groups? Here, observers estimated the average emotion of a crowd of dynamic faces. The members of some crowds changed their expressions synchronously, whereas individuals in other crowds acted asynchronously. Observers perceived the emotion of a synchronous group more precisely than the emotion of an asynchronous crowd or even a single dynamic face. 
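The rank-fusion step in the gene-prioritization record above can be illustrated with a simple weighted, Borda-style aggregator. This is only a sketch of the general idea of combining rankers, not the boosted Rankboost variant the authors describe; the gene names and weights below are invented.

```python
# Toy rank aggregation: combine rankings from several prioritization
# methods into one consensus ranking (a Borda-style sketch; NOT the
# Rankboost variant with heuristic weighting described above).

def aggregate_ranks(rankings, weights=None):
    """rankings: list of lists, each an ordering of genes (best first).
    Returns genes sorted by weighted average rank (lower = better)."""
    if weights is None:
        weights = [1.0] * len(rankings)
    total_w = sum(weights)
    scores = {}
    for ranking, w in zip(rankings, weights):
        for pos, gene in enumerate(ranking):
            scores[gene] = scores.get(gene, 0.0) + w * pos
    return sorted(scores, key=lambda g: scores[g] / total_w)

# Three hypothetical rankers that mostly agree "G1" is the top candidate.
r1 = ["G1", "G2", "G3", "G4"]
r2 = ["G2", "G1", "G3", "G4"]
r3 = ["G1", "G3", "G2", "G4"]
consensus = aggregate_ranks([r1, r2, r3])
```

A weighting function like the one in the record could be folded in by making `weights` depend on each method's past precision.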
These results demonstrate that ensemble representation is particularly sensitive to coordinated behavior, and they suggest that shared behavior is critical for understanding emotion in groups.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25866658','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25866658"><span>Advanced ensemble modelling of flexible macromolecules using X-ray solution scattering.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Tria, Giancarlo; Mertens, Haydyn D T; Kachala, Michael; Svergun, Dmitri I</p> <p>2015-03-01</p> <p>Dynamic ensembles of macromolecules mediate essential processes in biology. Understanding the mechanisms driving the function and molecular interactions of 'unstructured' and flexible molecules requires alternative approaches to those traditionally employed in structural biology. Small-angle X-ray scattering (SAXS) is an established method for structural characterization of biological macromolecules in solution, and is directly applicable to the study of flexible systems such as intrinsically disordered proteins and multi-domain proteins with unstructured regions. The Ensemble Optimization Method (EOM) [Bernadó et al. (2007). J. Am. Chem. Soc. 129, 5656-5664] was the first approach to introduce the concept of ensemble fitting of the SAXS data from flexible systems. In this approach, a large pool of macromolecules covering the available conformational space is generated and a sub-ensemble of conformers coexisting in solution is selected, guided by the fit to the experimental SAXS data. This paper presents a series of new developments and advancements to the method, including significantly enhanced functionality and quantitative metrics for the characterization of the results. 
Building on the original concept of ensemble optimization, the algorithms for pool generation have been redesigned to allow for the construction of partially or completely symmetric oligomeric models, and the selection procedure was improved to refine the size of the ensemble. Quantitative measures of the flexibility of the system studied, based on the characteristic integral parameters of the selected ensemble, are introduced. These improvements are implemented in the new EOM version 2.0, and the capabilities as well as inherent limitations of the ensemble approach in SAXS, and of EOM 2.0 in particular, are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29240972','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29240972"><span>Evidence for Dynamic Chemical Kinetics at Individual Molecular Ruthenium Catalysts.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Easter, Quinn T; Blum, Suzanne A</p> <p>2018-02-05</p> <p>Catalytic cycles are typically depicted as possessing time-invariant steps with fixed rates. Yet the true behavior of individual catalysts with respect to time is unknown, hidden by the ensemble averaging inherent to bulk measurements. Evidence is presented for variable chemical kinetics at individual catalysts, with a focus on ring-opening metathesis polymerization catalyzed by the second-generation Grubbs' ruthenium catalyst. Fluorescence microscopy is used to probe the chemical kinetics of the reaction because the technique possesses sufficient sensitivity for the detection of single chemical reactions. Insertion reactions in submicron regions likely occur at groups of many (not single) catalysts, yet not so many that their unique kinetic behavior is ensemble averaged. © 2018 Wiley-VCH Verlag GmbH & Co. 
KGaA, Weinheim.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1393517','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1393517"><span>A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr</p> <p></p> <p>We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. 
This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1393517-performance-analysis-ensemble-averaging-high-fidelity-turbulence-simulations-strong-scaling-limit','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1393517-performance-analysis-ensemble-averaging-high-fidelity-turbulence-simulations-strong-scaling-limit"><span>A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr; ...</p> <p>2017-06-07</p> <p>We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. 
This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19760010865','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19760010865"><span>A strictly Markovian expansion for plasma turbulence theory</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jones, F. C.</p> <p>1976-01-01</p> <p>The collision operator that appears in the equation of motion for a particle distribution function that was averaged over an ensemble of random Hamiltonians is non-Markovian. It is non-Markovian in that it involves a propagated integral over the past history of the ensemble averaged distribution function. All formal expansions of this nonlinear collision operator to date preserve this non-Markovian character term by term, yielding an integro-differential equation that must be converted to a diffusion equation by an additional approximation. An expansion is derived for the collision operator that is strictly Markovian to any finite order and yields a diffusion equation as the lowest nontrivial order. 
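The ensemble-versus-time-averaging idea in the two turbulence records above can be illustrated in a toy setting: estimate a stationary mean either from one long run (ergodic time averaging) or from many short, independently perturbed realizations run in parallel (ensemble averaging). Everything below is an illustrative sketch; the AR(1) surrogate process and all parameter values are assumptions, not anything from Nek5000.

```python
import random

# Toy comparison of ergodic time averaging vs. ensemble averaging.
# An AR(1) process stands in for a turbulent signal; its stationary
# mean is 0, so both estimators should converge to 0.

def ar1_series(n, x0, phi=0.9, sigma=1.0, rng=None):
    """Generate n steps of x_{k+1} = phi * x_k + N(0, sigma)."""
    rng = rng or random
    xs, x = [], x0
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        xs.append(x)
    return xs

rng = random.Random(0)
burn = 100  # discard the initial transient before averaging

# One long run: ergodic time average.
long_run = ar1_series(32000, x0=5.0, rng=rng)
time_avg = sum(long_run[burn:]) / len(long_run[burn:])

# 64 short runs with perturbed initial conditions: ensemble average.
members = [ar1_series(500, x0=5.0 + rng.gauss(0, 1), rng=rng)
           for _ in range(64)]
ens_avg = sum(sum(m[burn:]) / len(m[burn:]) for m in members) / len(members)
```

Since the short runs are independent, they can in principle be advanced in parallel, which is the source of the time-to-solution reduction the records describe.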
The validity of this expansion is seen to be the same as that of the standard quasilinear expansion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015GMDD....8.9925P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015GMDD....8.9925P"><span>Large ensemble modeling of last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.</p> <p>2015-11-01</p> <p>A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. 
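The "simple averaging weighted by the aggregate score" used in the ice-sheet calibration record above can be sketched as follows. The exponential weighting rule, the misfit scale, and all numbers are assumptions for illustration, not values from the study.

```python
import math

# Score-weighted ensemble averaging sketch: each run's weight decays
# with its model-data misfit, and the calibrated estimate is the
# weighted mean of the quantity of interest.

def weighted_ensemble_mean(values, misfits, scale=1.0):
    """values: quantity of interest per run (e.g. equivalent sea-level rise).
    misfits: aggregate model-data misfit per run (lower = better fit)."""
    weights = [math.exp(-m / scale) for m in misfits]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

# Hypothetical four-member ensemble: well-fitting runs dominate.
slr = [3.0, 3.5, 5.0, 9.0]      # equivalent sea-level rise (invented)
misfit = [0.5, 0.6, 2.0, 6.0]   # aggregate misfit scores (invented)
estimate = weighted_ensemble_mean(slr, misfit)
```

The poorly fitting 9.0 m run contributes almost nothing, so the weighted mean stays close to the well-fitting members rather than the unweighted mean.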
In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well-defined parametric uncertainty bounds.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhRvA..93e2302S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhRvA..93e2302S"><span>Implementing the Deutsch-Jozsa algorithm with macroscopic ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Semenenko, Henry; Byrnes, Tim</p> <p>2016-05-01</p> <p>Quantum computing implementations under consideration today typically deal with systems with microscopic degrees of freedom such as photons, ions, cold atoms, and superconducting circuits. The quantum information is typically stored in low-dimensional Hilbert spaces such as qubits, as quantum effects are strongest in such systems. It has, however, been demonstrated that quantum effects can be observed in mesoscopic and macroscopic systems, such as nanomechanical systems and gas ensembles. While few-qubit quantum information demonstrations have been performed with such macroscopic systems, a quantum algorithm showing exponential speedup over classical algorithms has yet to be demonstrated. Here, we show that the Deutsch-Jozsa algorithm can be implemented with macroscopic ensembles. The encoding that we use avoids the detrimental effects of decoherence that normally plague macroscopic implementations. We discuss two mapping procedures which can be chosen depending upon the constraints of the oracle and the experiment. Both methods have an exponential speedup over the classical case, and only require control of the ensembles at the level of the total spin of the ensembles. 
It is shown that both approaches reproduce the qubit Deutsch-Jozsa algorithm, and are robust under decoherence.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25316152','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25316152"><span>Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A</p> <p>2015-01-15</p> <p>Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. 
We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4262745','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4262745"><span>Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Gruber, Susan; Logan, Roger W.; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A.</p> <p>2014-01-01</p> <p>Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross-validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. 
Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. PMID:25316152</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25012476','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25012476"><span>Impact of ensemble learning in the assessment of skeletal maturity.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cunha, Pedro; Moura, Daniel C; Guevara López, Miguel Angel; Guerra, Conceição; Pinto, Daniela; Ramos, Isabel</p> <p>2014-09-01</p> <p>The assessment of the bone age, or skeletal maturity, is an important task in pediatrics that measures the degree of maturation of children's bones. Nowadays, there is no standard clinical procedure for assessing bone age and the most widely used approaches are the Greulich and Pyle and the Tanner and Whitehouse methods. Computer methods have been proposed to automatize the process; however, there is a lack of exploration about how to combine the features of the different parts of the hand, and how to take advantage of ensemble techniques for this purpose. This paper presents a study where the use of ensemble techniques for improving bone age assessment is evaluated. A new computer method was developed that extracts descriptors for each joint of each finger, which are then combined using different ensemble schemes for obtaining a final bone age value. 
Three popular ensemble schemes are explored in this study: bagging, stacking and voting. Best results were achieved by bagging with a rule-based regression (M5P), scoring a mean absolute error of 10.16 months. Results show that ensemble techniques improve the prediction performance of most of the evaluated regression algorithms, always achieving the best or comparable-to-best results. Therefore, the success of the ensemble methods allows us to conclude that their use may improve computer-based bone age assessment, offering a scalable option for utilizing multiple regions of interest and combining their output.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3524795','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3524795"><span>Modelling dynamics in protein crystal structures by ensemble refinement</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Burnley, B Tom; Afonine, Pavel V; Adams, Paul D; Gros, Piet</p> <p>2012-01-01</p> <p>Single-structure models derived from X-ray data do not adequately account for the inherent, functionally important dynamics of protein molecules. We generated ensembles of structures by time-averaged refinement, where local molecular vibrations were sampled by molecular-dynamics (MD) simulation whilst global disorder was partitioned into an underlying overall translation–libration–screw (TLS) model. Modeling of 20 protein datasets at 1.1–3.1 Å resolution reduced cross-validated Rfree values by 0.3–4.9%, indicating that ensemble models fit the X-ray data better than single structures. 
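The bagging scheme that performed best in the bone-age record above can be sketched generically: fit a base learner on bootstrap resamples of the training data and average the predictions. Here a tiny least-squares line fit stands in for the M5P regressor, and the data are synthetic; this is a hedged illustration of bagging, not the authors' implementation.

```python
import random

# Bagging sketch: bootstrap-resample the training set, fit a simple
# least-squares line on each resample, and average the predictions.

def fit_line(pts):
    """Closed-form least-squares fit; returns (slope, intercept)."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    slope = sxy / sxx
    return slope, my - slope * mx

def bagged_predict(data, x, n_models=25, rng=None):
    rng = rng or random
    preds = []
    for _ in range(n_models):
        boot = [rng.choice(data) for _ in data]  # resample with replacement
        a, b = fit_line(boot)
        preds.append(a * x + b)
    return sum(preds) / len(preds)  # aggregate by averaging

rng = random.Random(1)
# Synthetic training data: y = 2x + 1 plus small noise.
data = [(x, 2 * x + 1 + rng.gauss(0, 0.2)) for x in range(10)]
pred = bagged_predict(data, 5.0, rng=rng)
```

In the record's setting, one such bagged regressor per hand joint would then be combined into the final bone-age estimate.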
The ensembles revealed that, while most proteins display a well-ordered core, some proteins exhibit a ‘molten core’ likely supporting functionally important dynamics in ligand binding, enzyme activity and protomer assembly. Order–disorder changes in HIV protease indicate a mechanism of entropy compensation for ordering the catalytic residues upon ligand binding by disordering specific core residues. Thus, ensemble refinement extracts dynamical details from the X-ray data that allow a more comprehensive understanding of structure–dynamics–function relationships. DOI: http://dx.doi.org/10.7554/eLife.00311.001 PMID:23251785</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_13 --> <div id="page_14" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="261"> <li> <p><a target="_blank" rel="noopener noreferrer" 
onclick="trackOutboundLink('https://www.osti.gov/biblio/1334906-selecting-classification-ensemble-detecting-process-drift-evolving-data-stream','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1334906-selecting-classification-ensemble-detecting-process-drift-evolving-data-stream"><span>Selecting a Classification Ensemble and Detecting Process Drift in an Evolving Data Stream</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Heredia-Langner, Alejandro; Rodriguez, Luke R.; Lin, Andy</p> <p>2015-09-30</p> <p>We characterize the commercial behavior of a group of companies in a common line of business using a small ensemble of classifiers on a stream of records containing commercial activity information. This approach is able to effectively find a subset of classifiers that can be used to predict company labels with reasonable accuracy. Performance of the ensemble, its error rate under stable conditions, can be characterized using an exponentially weighted moving average (EWMA) statistic. The behavior of the EWMA statistic can be used to monitor a record stream from the commercial network and determine when significant changes have occurred. Results indicate that larger classification ensembles may not necessarily be optimal, pointing to the need to search the combinatorial classifier space in a systematic way. Results also show that current and past performance of an ensemble can be used to detect when statistically significant changes in the activity of the network have occurred. 
The dataset used in this work contains tens of thousands of high-level commercial activity records with continuous and categorical variables and hundreds of labels, making classification challenging.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4948663','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4948663"><span>A benchmark for reaction coordinates in the transition path ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2016-01-01</p> <p>The molecular mechanism of a reaction is embedded in its transition path ensemble, the complete collection of reactive trajectories. Utilizing the information in the transition path ensemble alone, we developed a novel metric, which we termed the emergent potential energy, for distinguishing reaction coordinates from the bath modes. The emergent potential energy can be understood as the average energy cost for making a displacement of a coordinate in the transition path ensemble. Whereas displacing a bath mode incurs essentially no cost, it costs significantly to move the reaction coordinate. Based on some general assumptions of the behaviors of reaction and bath coordinates in the transition path ensemble, we proved theoretically with statistical mechanics that the emergent potential energy could serve as a benchmark of reaction coordinates and demonstrated its effectiveness by applying it to a prototypical system of biomolecular dynamics. Using the emergent potential energy as guidance, we developed a committor-free and intuition-independent method for identifying reaction coordinates in complex systems. We expect this method to be applicable to a wide range of reaction processes in complex biomolecular systems. 
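The EWMA-based drift monitoring described in the classifier-ensemble record above can be sketched as a standard control chart on the stream of misclassification flags. The smoothing constant, control limit, and the deterministic error stream below are all illustrative assumptions, not the study's values.

```python
import math

# EWMA control-chart sketch for drift detection: smooth the stream of
# per-record 0/1 errors and raise an alarm when the statistic exceeds
# its upper control limit.

def ewma_alarm(errors, p0, lam=0.1, nsigma=3.0):
    """errors: iterable of 0/1 misclassification flags.
    p0: error rate of the ensemble under stable conditions.
    Returns the index of the first alarm, or None."""
    sigma = math.sqrt(p0 * (1 - p0) * lam / (2 - lam))
    upper = p0 + nsigma * sigma
    s = p0
    for i, e in enumerate(errors):
        s = lam * e + (1 - lam) * s
        if s > upper:
            return i
    return None

# Deterministic stream: a 10% error rate for 200 records, then the
# process drifts and every other record is misclassified.
stable = ([0] * 9 + [1]) * 20
drifted = [1, 0] * 50
alarm = ewma_alarm(stable + drifted, p0=0.1)
```

Under the stable segment the statistic stays well inside the limit; a few records after the drift begins, it crosses and the alarm fires.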
PMID:27059559</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JChPh.147u4110M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JChPh.147u4110M"><span>On the non-stationary generalized Langevin equation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Meyer, Hugues; Voigtmann, Thomas; Schilling, Tanja</p> <p>2017-12-01</p> <p>In molecular dynamics simulations and single molecule experiments, observables are usually measured along dynamic trajectories and then averaged over an ensemble ("bundle") of trajectories. Under stationary conditions, the time-evolution of such averages is described by the generalized Langevin equation. By contrast, if the dynamics is not stationary, it is not a priori clear which form the equation of motion for an averaged observable has. We employ the formalism of time-dependent projection operator techniques to derive the equation of motion for a non-equilibrium trajectory-averaged observable as well as for its non-stationary auto-correlation function. The equation is similar in structure to the generalized Langevin equation but exhibits a time-dependent memory kernel as well as a fluctuating force that implicitly depends on the initial conditions of the process. We also derive a relation between this memory kernel and the autocorrelation function of the fluctuating force that has a structure similar to a fluctuation-dissipation relation. In addition, we show how the choice of the projection operator allows us to relate the Taylor expansion of the memory kernel to data that are accessible in MD simulations and experiments, thus allowing us to construct the equation of motion. 
As a numerical example, the procedure is applied to Brownian motion initialized in non-equilibrium conditions and is shown to be consistent with direct measurements from simulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2001APS..MARW32012B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2001APS..MARW32012B"><span>Variety of Behavior of Equity Returns in Financial Markets</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bonanno, Giovanni; Lillo, Fabrizio; Mantegna, Rosario N.</p> <p>2001-03-01</p> <p>The price dynamics of a set of equities traded in an efficient market is quite complex. It consists of largely non-redundant time series that have (i) long-range correlated volatility and (ii) cross-correlation between each pair of equities. We perform a study of the statistical properties of an ensemble of equity returns that helps to elucidate the nature and role of time and ensemble correlation. Specifically, we investigate a statistical ensemble of daily returns of n equities traded in United States financial markets. For each trading day of our database, we study the ensemble return distribution. We find that a typical ensemble return distribution exists on most trading days [1], with the exception of crash and rally days and the days following these extreme events [2]. We analyze each ensemble return distribution by extracting its first two central moments. We call the second moment of the ensemble return distribution the variety of the market. We choose this term because high variety implies a varied behavior of the equities returns in the considered day. We observe that the mean return and the variety fluctuate in time and are stochastic processes themselves. The variety is a long-range correlated stochastic process. 
Customary time-averaged statistical properties of time series of stock returns are also considered. In general, time-averaged and portfolio-averaged returns have different statistical properties [1]. We infer from these differences information about the relative strength of correlation between equities and between different trading days. We also compare our empirical results with those predicted by the single-index model and we conclude that this simple model is unable to explain the statistical properties of the second moment of the ensemble return distribution. Correlations between pairs of equities are continuously present in the dynamics of a stock portfolio. Hence, it is relevant to investigate pair correlation in an efficient and original way. We propose to investigate these correlations at a daily and intra-daily time horizon with a method based on concepts of random frustrated systems. Specifically, a hierarchical organization of the investigated equities is obtained by determining a metric distance between stocks and by investigating the properties of the subdominant ultrametric associated with it [3]. The high-frequency cross-correlations existing between pairs of equities are investigated in a set of 100 stocks traded in US equity markets. The decrease of the cross-correlation between the equity returns observed for diminishing time horizons progressively changes the nature of the hierarchical structure associated with each time horizon [4]. The nature of the correlation present between pairs of time series of equity returns collected in a portfolio has a strong influence on the variety of the market. We finally discuss the relation between pair correlation and the variety of an ensemble return distribution. References [1] Fabrizio Lillo and Rosario N. Mantegna, Variety and volatility in financial markets, Phys. Rev. E 62, 6126-6134 (2000). [2] Fabrizio Lillo and Rosario N. 
Mantegna, Symmetry alteration of ensemble return distribution in crash and rally days of financial market, Eur. Phys. J. B 15, 603-606 (2000). [3] Rosario N. Mantegna, Hierarchical structure in financial markets, Eur. Phys. J. B 11, 193-197 (1999). [4] Giovanni Bonanno, Fabrizio Lillo, and Rosario N. Mantegna, High-frequency cross-correlation in a set of stocks, Quantitative Finance (in press).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29342958','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29342958"><span>An Enhanced Method to Estimate Heart Rate from Seismocardiography via Ensemble Averaging of Body Movements at Six Degrees of Freedom.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lee, Hyunwoo; Lee, Hana; Whang, Mincheol</p> <p>2018-01-15</p> <p>Continuous cardiac monitoring has been developed to evaluate cardiac activity outside of clinical environments due to the advancement of novel instruments. Seismocardiography (SCG) is one of the vital components that could develop such a monitoring system. Although SCG offers lower accuracy than traditional methods such as electrocardiography (ECG), it has been steadily proposed as a complementary cardiac indicator. Thus, it is necessary to develop an enhanced method by combining the significant cardiac indicators. In this study, the six-axis signals of an accelerometer and a gyroscope were measured and integrated by the L2 normalization and multi-dimensional kineticardiography (MKCG) approaches, respectively. The accelerometer and gyroscope waveforms were standardized and combined via ensemble averaging, and the heart rate was calculated from the dominant frequency. Thirty participants (15 females) were asked to stand or sit in relaxed and aroused conditions. Their SCG was measured during the task. 
As a result, the proposed method showed higher accuracy than traditional SCG methods in all measurement conditions. The three main contributions are as follows: (1) the ensemble averaging enhanced heart rate estimation with the benefits of the six-axis signals; (2) the proposed method was compared with the previous SCG method that employs fewer axes; and (3) the method was tested in various measurement conditions for a more practical application.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015APS..DFDR31008S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015APS..DFDR31008S"><span>Measurements of wind-waves under transient wind conditions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shemer, Lev; Zavadsky, Andrey</p> <p>2015-11-01</p> <p>Wind forcing in nature is always unsteady, resulting in a complicated evolution pattern that involves numerous time and space scales. In the present work, wind waves in a laboratory wind-wave flume are studied under unsteady forcing. The variation of the surface elevation is measured by capacitance wave gauges, while the components of the instantaneous surface slope in across-wind and along-wind directions are determined by a regular or scanning laser slope gauge. The locations of the wave gauge and of the laser slope gauge are separated by a few centimeters in the across-wind direction. Instantaneous wind velocity was recorded simultaneously using a Pitot tube. Measurements are performed at a number of fetches and for different patterns of wind velocity variation. For each case, at least 100 independent realizations were recorded for a given wind velocity variation pattern. The accumulated data sets allow calculating ensemble-averaged values of the measured parameters. 
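The ensemble-averaging step used in experiments like this one is generic: repeat the run under the same forcing pattern, then average the measured signal across realizations at each instant, so the phase-locked response survives while uncorrelated fluctuations cancel roughly as 1/sqrt(N). A hedged sketch with synthetic data (the sinusoidal "response" and the noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
response = np.sin(2 * np.pi * 5 * t)  # repeatable part of the signal

# 100 independent realizations: same forcing pattern, independent noise.
realizations = response + rng.normal(0.0, 0.5, size=(100, t.size))

# Ensemble average at each instant keeps the phase-locked response
# while suppressing the random part roughly as 1/sqrt(N).
ensemble_mean = realizations.mean(axis=0)

single_err = np.abs(realizations[0] - response).mean()
ensemble_err = np.abs(ensemble_mean - response).mean()
```

The same pattern applies whether the "realization" is a wave-gauge record or any other repeated measurement under identical forcing.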
Significant differences between the evolution patterns of the surface elevation and of the slope components were found. Wavelet analysis was applied to determine the dominant wave frequency of the surface elevation and of the slope variation at each instant. Corresponding ensemble-averaged values acquired by different sensors were computed and compared. Analysis of the measured ensemble-averaged quantities at different fetches makes it possible to identify different stages in the wind-wave evolution and to estimate the appropriate time and length scales.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23049168','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23049168"><span>A translating stage system for µ-PIV measurements surrounding the tip of a migrating semi-infinite bubble.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Smith, B J; Yamaguchi, E; Gaver, D P</p> <p>2010-01-01</p> <p>We have designed, fabricated and evaluated a novel translating stage system (TSS) that augments a conventional micro particle image velocimetry (µ-PIV) system. The TSS has been used to enhance the ability to measure flow fields surrounding the tip of a migrating semi-infinite bubble in a glass capillary tube under both steady and pulsatile reopening conditions. With conventional µ-PIV systems, observations near the bubble tip are challenging because the forward progress of the bubble rapidly sweeps the air-liquid interface across the microscopic field of view. The translating stage mechanically cancels the mean bubble tip velocity, keeping the interface within the microscope field of view and providing a tenfold increase in data collection efficiency compared to fixed-stage techniques. 
This dramatic improvement allows nearly continuous observation of the flow field over long propagation distances. A large (136-frame) ensemble-averaged velocity field recorded with the TSS near the tip of a steadily migrating bubble is shown to compare well with fixed-stage results under identical flow conditions. Use of the TSS allows the ensemble-averaged measurement of pulsatile bubble propagation flow fields, which would be practically impossible using conventional fixed-stage techniques. We demonstrate our ability to analyze these time-dependent two-phase flows using the ensemble-averaged flow field at four points in the oscillatory cycle.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/6912541-interactions-between-moist-heating-dynamics-atmospheric-predictability','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/6912541-interactions-between-moist-heating-dynamics-atmospheric-predictability"><span>Interactions between moist heating and dynamics in atmospheric predictability</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Straus, D.M.; Huntley, M.A.</p> <p>1994-02-01</p> <p>The predictability properties of a fixed heating version of a GCM in which the moist heating is specified beforehand are studied in a series of identical twin experiments. Comparison is made to an identical set of experiments using the control GCM, a five-level R30 version of the COLA GCM. The experiments each contain six ensembles, with a single ensemble consisting of six 30-day integrations starting from slightly perturbed Northern Hemisphere wintertime initial conditions. The moist heating from each integration within a single control ensemble was averaged over the ensemble. 
This averaged heating (a function of three spatial dimensions and time) was used as the prespecified heating in each member of the corresponding fixed heating ensemble. The errors grow less rapidly in the fixed heating case. The most rapidly growing scales at small times (global wavenumber 6) have doubling times of 3.2 days compared to 2.4 days for the control experiments. The predictability times for the most energetic scales (global wavenumbers 9-12) are about two weeks for the fixed heating experiments, compared to 9 days for the control. The ratio of error energy in the fixed heating to the control case falls below 0.5 by day 8, and then gradually increases as the error growth slows in the control case. The growth of errors is described in terms of budgets of error kinetic energy (EKE) and error available potential energy (EAPE) developed in terms of global wavenumber n. The diabatic generation of EAPE (G_APE) is positive in the control case and is dominated by midlatitude heating errors after day 2. The fixed heating G_APE is negative at all times due to longwave radiative cooling. 36 refs., 9 figs., 1 tab.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24483403','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24483403"><span>Transient aging in fractional Brownian and Langevin-equation motion.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kursawe, Jochen; Schulz, Johannes; Metzler, Ralf</p> <p>2013-12-01</p> <p>Stochastic processes driven by stationary fractional Gaussian noise, that is, fractional Brownian motion and fractional Langevin-equation motion, are usually considered to be ergodic in the sense that, after an algebraic relaxation, time and ensemble averages of physical observables coincide. 
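For ordinary (nonfractional) Brownian motion the two kinds of average do coincide, which is the ergodic baseline against which the transient nonergodicity discussed in this abstract is measured. A quick numerical check of that baseline (a sketch with unit-variance Gaussian steps, not the authors' computation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_traj, n_steps, lag = 500, 10000, 10

# Ensemble of Brownian trajectories built from unit-variance steps,
# so MSD at a lag of n steps is n.
x = np.cumsum(rng.normal(size=(n_traj, n_steps)), axis=1)

# Ensemble-averaged MSD at the given lag: average x(lag)^2 over trajectories.
ens_msd = np.mean(x[:, lag - 1] ** 2)

# Time-averaged MSD along one single trajectory (sliding window).
disp = x[0, lag:] - x[0, :-lag]
time_msd = np.mean(disp ** 2)
```

Both estimates should be close to the lag itself (here 10); for confined fractional motion the analogous two estimates would differ transiently, which is the nonergodicity the paper quantifies.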
Recently it was demonstrated that fractional Brownian motion and fractional Langevin-equation motion under external confinement are transiently nonergodic (time and ensemble averages behave differently) from the moment when the particle starts to sense the confinement. Here we show that these processes also exhibit transient aging, that is, physical observables such as the time-averaged mean-squared displacement depend on the time lag between the initiation of the system at time t=0 and the start of the measurement at the aging time t_a. In particular, it turns out that for fractional Langevin-equation motion the aging dependence on t_a is different between the cases of free and confined motion. We obtain explicit analytical expressions for the aged moments of the particle position as well as the time-averaged mean-squared displacement and present a numerical analysis of this transient aging phenomenon.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/5848410','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/5848410"><span>Stresses and elastic constants of crystalline sodium, from molecular dynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Schiferl, S.K.</p> <p>1985-02-01</p> <p>The stresses and the elastic constants of bcc sodium are calculated by molecular dynamics (MD) for temperatures to T = 340K. The total adiabatic potential of a system of sodium atoms is represented by a pseudopotential model. The resulting expression has two terms: a large, strictly volume-dependent potential, plus a sum over ion pairs of a small, volume-dependent two-body potential. The stresses and the elastic constants are given as strain derivatives of the Helmholtz free energy. 
The resulting expressions involve canonical ensemble averages (and fluctuation averages) of the position and volume derivatives of the potential. An ensemble correction relates the results to MD equilibrium averages. Evaluation of the potential and its derivatives requires the calculation of integrals with infinite upper limits of integration, and integrand singularities. Methods for calculating these integrals and estimating the effects of integration errors are developed. A method is given for choosing initial conditions that relax quickly to a desired equilibrium state. Statistical methods developed earlier for MD data are extended to evaluate uncertainties in fluctuation averages, and to test for symmetry. 45 refs., 10 figs., 4 tabs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27715078','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27715078"><span>Noise-Resilient Quantum Computing with a Nitrogen-Vacancy Center and Nuclear Spins.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Casanova, J; Wang, Z-Y; Plenio, M B</p> <p>2016-09-23</p> <p>Selective control of qubits in a quantum register for the purposes of quantum information processing represents a critical challenge for dense spin ensembles in solid-state systems. Here we present a protocol that achieves a complete set of selective electron-nuclear gates and single nuclear rotations in such an ensemble in diamond facilitated by a nearby nitrogen-vacancy (NV) center. The protocol suppresses internuclear interactions as well as unwanted coupling between the NV center and other spins of the ensemble to achieve quantum gate fidelities well exceeding 99%. 
Notably, our method can be applied to weakly coupled, distant spins, representing a scalable procedure that exploits the exceptional properties of nuclear spins in diamond as robust quantum memories.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MolPh.116..351S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MolPh.116..351S"><span>Microcanonical-ensemble computer simulation of the high-temperature expansion coefficients of the Helmholtz free energy of a square-well fluid</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sastre, Francisco; Moreno-Hilario, Elizabeth; Sotelo-Serna, Maria Guadalupe; Gil-Villegas, Alejandro</p> <p>2018-02-01</p> <p>The microcanonical-ensemble computer simulation method (MCE) is used to evaluate the perturbation terms A_i of the Helmholtz free energy of a square-well (SW) fluid. The MCE method offers a very efficient and accurate procedure for the determination of perturbation terms of discrete-potential systems such as the SW fluid and surpasses the standard NVT canonical ensemble Monte Carlo method, allowing the calculation of the first six expansion terms. Results are presented for the case of an SW potential with attractive ranges 1.1 ≤ λ ≤ 1.8. 
Using a semi-empirical representation of the MCE values for A_i, we also discuss the accuracy in the determination of the phase diagram of this system.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3721968','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3721968"><span>Multiscale Macromolecular Simulation: Role of Evolving Ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Singharoy, A.; Joshi, H.; Ortoleva, P.J.</p> <p>2013-01-01</p> <p>Multiscale analysis provides an algorithm for the efficient simulation of macromolecular assemblies. This algorithm involves the coevolution of a quasiequilibrium probability density of atomic configurations and the Langevin dynamics of spatial coarse-grained variables denoted order parameters (OPs) characterizing nanoscale system features. In practice, implementation of the probability density involves the generation of constant OP ensembles of atomic configurations. Such ensembles are used to construct thermal forces and diffusion factors that mediate the stochastic OP dynamics. Generation of all-atom ensembles at every Langevin timestep is computationally expensive. Here, multiscale computation for macromolecular systems is made more efficient by a method that self-consistently folds in ensembles of all-atom configurations constructed in an earlier step (the history) of the Langevin evolution. This procedure accounts for the temporal evolution of these ensembles, accurately providing thermal forces and diffusions. It is shown that the efficiency and accuracy of the OP-based simulations are increased via the integration of this historical information. Accuracy improves with the square root of the number of historical timesteps included in the calculation. 
As a result, CPU usage can be decreased by a factor of 3-8 without loss of accuracy. The algorithm is implemented into our existing force-field based multiscale simulation platform and demonstrated via the structural dynamics of viral capsomers. PMID:22978601</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1223012','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1223012"><span>Cell population modelling of yeast glycolytic oscillations.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Henson, Michael A; Müller, Dirk; Reuss, Matthias</p> <p>2002-01-01</p> <p>We investigated a cell-population modelling technique in which the population is constructed from an ensemble of individual cell models. The average value or the number distribution of any intracellular property captured by the individual cell model can be calculated by simulation of a sufficient number of individual cells. The proposed method is applied to a simple model of yeast glycolytic oscillations where synchronization of the cell population is mediated by the action of an excreted metabolite. We show that smooth one-dimensional distributions can be obtained with ensembles comprising 1000 individual cells. Random variations in the state and/or structure of individual cells are shown to produce complex dynamic behaviours which cannot be adequately captured by small ensembles. 
PMID:12206713</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29325871','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29325871"><span>Use of ultraviolet-fluorescence-based simulation in evaluation of personal protective equipment worn for first assessment and care of a patient with suspected high-consequence infectious disease.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hall, S; Poller, B; Bailey, C; Gregory, S; Clark, R; Roberts, P; Tunbridge, A; Poran, V; Evans, C; Crook, B</p> <p>2018-06-01</p> <p>Variations currently exist across the UK in the choice of personal protective equipment (PPE) used by healthcare workers when caring for patients with suspected high-consequence infectious diseases (HCIDs). To test the protection afforded to healthcare workers by current PPE ensembles during assessment of a suspected HCID case, and to provide an evidence base to justify proposal of a unified PPE ensemble for healthcare workers across the UK. One 'basic level' (enhanced precautions) PPE ensemble and five 'suspected case' PPE ensembles were evaluated in volunteer trials using 'Violet'; an ultraviolet-fluorescence-based simulation exercise to visualize exposure/contamination events. Contamination was photographed and mapped. There were 147 post-simulation and 31 post-doffing contamination events, from a maximum of 980, when evaluating the basic level of PPE. Therefore, this PPE ensemble did not afford adequate protection, primarily due to direct contamination of exposed areas of the skin. For the five suspected case ensembles, 1584 post-simulation contamination events were recorded, from a maximum of 5110. Twelve post-doffing contamination events were also observed (face, two events; neck, one event; forearm, one event; lower legs, eight events). 
All suspected case PPE ensembles had either post-doffing contamination events or other significant disadvantages to their use. This identified the need to design a unified PPE ensemble and doffing procedure, incorporating the most protective PPE considered for each body area. This work has been presented to, and reviewed by, key stakeholders to decide on a proposed unified ensemble, subject to further evaluation. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28708399','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28708399"><span>Development and Validation of a Computational Model Ensemble for the Early Detection of BCRP/ABCG2 Substrates during the Drug Design Stage.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gantner, Melisa E; Peroni, Roxana N; Morales, Juan F; Villalba, María L; Ruiz, María E; Talevi, Alan</p> <p>2017-08-28</p> <p>Breast Cancer Resistance Protein (BCRP) is an ATP-dependent efflux transporter linked to the multidrug resistance phenomenon in many diseases such as epilepsy and cancer and a potential source of drug interactions. For these reasons, the early identification of substrates and nonsubstrates of this transporter during the drug discovery stage is of great interest. We have developed a computational nonlinear model ensemble based on conformation-independent molecular descriptors using a combined strategy of genetic algorithms, J48 decision tree classifiers, and data fusion. The best model ensemble consists of averaging the ranking of the 12 decision trees that showed the best performance on the training set, which also demonstrated a good performance for the test set. It was experimentally validated using the ex vivo everted rat intestinal sac model. 
Five anticonvulsant drugs classified as nonsubstrates for BCRP by the model ensemble were experimentally evaluated, and none of them proved to be a BCRP substrate under the experimental conditions used, thus confirming the predictive ability of the model ensemble. The model ensemble reported here is a potentially valuable tool to be used as an in silico ADME filter in computer-aided drug discovery campaigns intended to overcome BCRP-mediated multidrug resistance issues and to prevent drug-drug interactions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3992658','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3992658"><span>From a structural average to the conformational ensemble of a DNA bulge</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Shi, Xuesong; Beauchamp, Kyle A.; Harbury, Pehr B.; Herschlag, Daniel</p> <p>2014-01-01</p> <p>Direct experimental measurements of conformational ensembles are critical for understanding macromolecular function, but traditional biophysical methods do not directly report the solution ensemble of a macromolecule. Small-angle X-ray scattering interferometry has the potential to overcome this limitation by providing the instantaneous distance distribution between pairs of gold-nanocrystal probes conjugated to a macromolecule in solution. Our X-ray interferometry experiments reveal an increasing bend angle of DNA duplexes with bulges of one, three, and five adenosine residues, consistent with previous FRET measurements, and further reveal an increasingly broad conformational ensemble with increasing bulge length. 
The distance distributions for the AAA bulge duplex (3A-DNA) with six different Au-Au pairs provide strong evidence against a simple elastic model in which fluctuations occur about a single conformational state. Instead, the measured distance distributions suggest a 3A-DNA ensemble with multiple conformational states predominantly across a region of conformational space with bend angles between 24 and 85 degrees and characteristic bend directions and helical twists and displacements. Additional X-ray interferometry experiments revealed perturbations to the ensemble from changes in ionic conditions and the bulge sequence, effects that can be understood in terms of electrostatic and stacking contributions to the ensemble and that demonstrate the sensitivity of X-ray interferometry. Combining X-ray interferometry ensemble data with molecular dynamics simulations gave atomic-level models of representative conformational states and of the molecular interactions that may shape the ensemble, and fluorescence measurements with 2-aminopurine-substituted 3A-DNA provided initial tests of these atomistic models. More generally, X-ray interferometry will provide powerful benchmarks for testing and developing computational methods. PMID:24706812</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24667482','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24667482"><span>NIMEFI: gene regulatory network inference using multiple ensemble feature importance algorithms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ruyssinck, Joeri; Huynh-Thu, Vân Anh; Geurts, Pierre; Dhaene, Tom; Demeester, Piet; Saeys, Yvan</p> <p>2014-01-01</p> <p>One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. 
Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems for each gene in the network in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene and a high feature importance is considered putative evidence of a regulatory link between the two genes. The contribution of this work is twofold. First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach which allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As a second contribution, we explore the effect of using rankwise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that this approach outperforms all individual methods in general, although on a specific network a single method can perform better. 
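The regression decomposition described above can be sketched in a few lines. In this toy version, plain least squares over bootstrap subsamples stands in for the tree-based importance measure that GENIE3 actually uses, and the expression data and "network" are entirely synthetic:

```python
import numpy as np

def infer_network(expr, n_boot=20, seed=0):
    """Toy GENIE3-style decomposition: for each target gene, regress its
    expression on all other genes; averaged absolute coefficients over
    subsamples serve as putative regulatory-link scores.
    (Plain least squares stands in for the tree-based importance here.)
    """
    rng = np.random.default_rng(seed)
    n_samples, n_genes = expr.shape
    scores = np.zeros((n_genes, n_genes))  # scores[i, j]: gene i -> gene j
    for target in range(n_genes):
        predictors = [g for g in range(n_genes) if g != target]
        imp = np.zeros(len(predictors))
        for _ in range(n_boot):            # "ensemble" via subsampling
            idx = rng.choice(n_samples, size=n_samples // 2, replace=False)
            X = expr[np.ix_(idx, predictors)]
            y = expr[idx, target]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            imp += np.abs(coef)
        scores[predictors, target] = imp / n_boot
    return scores

# Synthetic data: gene 2 is driven by gene 0; the rest is noise.
rng = np.random.default_rng(1)
expr = rng.normal(size=(200, 4))
expr[:, 2] = 0.9 * expr[:, 0] + 0.1 * rng.normal(size=200)
S = infer_network(expr)
```

The rankwise averaging that NIMEFI adds would combine several such score matrices, produced by different regression methods, into one consensus ranking.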
An implementation of NIMEFI has been made publicly available.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20110013169&hterms=Coding&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DCoding','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20110013169&hterms=Coding&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DCoding"><span>The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush</p> <p>2008-01-01</p> <p>We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. 
We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19780063736&hterms=self+expansion+theory&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dself%2Bexpansion%2Btheory','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19780063736&hterms=self+expansion+theory&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dself%2Bexpansion%2Btheory"><span>A strictly Markovian expansion for plasma turbulence theory</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jones, F. C.</p> <p>1978-01-01</p> <p>The collision operator that appears in the equation of motion for a particle distribution function that has been averaged over an ensemble of random Hamiltonians is non-Markovian. It is non-Markovian in that it involves a propagated integral over the past history of the ensemble averaged distribution function. All formal expansions of this nonlinear collision operator to date preserve this non-Markovian character term by term yielding an integro-differential equation that must be converted to a diffusion equation by an additional approximation. In this note we derive an expansion of the collision operator that is strictly Markovian to any finite order and yields a diffusion equation as the lowest non-trivial order. 
The validity of this expansion is seen to be the same as that of the standard quasi-linear expansion.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_14 --> <div id="page_15" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="281"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20040082200&hterms=TOM&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DTOM','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20040082200&hterms=TOM&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DTOM"><span>Comparison of TOMS, SBW & SBUV/2 Version 8 Total Column Ozone Data with Data from Groundstations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Labow, G. J.; McPeters, R. D.; Bhartia, P. 
K.</p> <p>2004-01-01</p> <p>The Nimbus-7 and Earth Probe Total Ozone Mapping Spectrometer (TOMS) data as well as SBUV and SBUV/2 data have been reprocessed with a new retrieval algorithm (Version 8) and an updated calibration procedure. An overview will be presented systematically comparing ozone values to an ensemble of Brewer and Dobson spectrophotometers. The comparisons were made as a function of latitude, solar zenith angle, reflectivity and total ozone. Results show that the accuracy of the TOMS retrieval has been improved when aerosols are present in the atmosphere, when snow/ice and sea glint are present, and when ozone in the northern hemisphere is extremely low. TOMS overpass data are derived from the single TOMS best match measurement, almost always located within one degree of the ground station and usually made within an hour of local noon. The Version 8 Earth Probe TOMS ozone values have decreased by an average of about 1% due to a much better understanding of the calibration of the instrument. N-7 SBUV as well as the series of NOAA SBUV/2 column ozone values have also been processed with the Version 8 algorithm and have been compared to values from an ensemble of groundstations. Results show that the SBUV column ozone values agree well with the groundstations and the datasets are useful for trend studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoRL..45.4429Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoRL..45.4429Y"><span>Medium-Range Forecast Skill for Extraordinary Arctic Cyclones in Summer of 2008-2016</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yamagami, Akio; Matsueda, Mio; Tanaka, Hiroshi L.</p> <p>2018-05-01</p> <p>Arctic cyclones (ACs) are a severe atmospheric phenomenon that affects the Arctic environment. 
This study assesses the forecast skill of five leading operational medium-range ensemble forecasts for 10 extraordinary ACs that occurred in summer during 2008-2016. Average existence probability of the predicted ACs was >0.9 at lead times of ≤3.5 days. Average central position error of the predicted ACs was less than half of the mean radius of the 10 ACs (469.1 km) at lead times of 2.5-4.5 days. Average central pressure error of the predicted ACs was 5.5-10.7 hPa at such lead times. Therefore, the operational ensemble prediction systems generally predict the position of ACs within 469.1 km 2.5-4.5 days before they mature. The forecast skill for the extraordinary ACs is lower than that for midlatitude cyclones in the Northern Hemisphere but similar to that in the Southern Hemisphere.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvE..97a2502K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvE..97a2502K"><span>Shear-stress fluctuations and relaxation in polymer glasses</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kriuchevskyi, I.; Wittmer, J. P.; Meyer, H.; Benzerara, O.; Baschnagel, J.</p> <p>2018-01-01</p> <p>We investigate by means of molecular dynamics simulation a coarse-grained polymer glass model focusing on (quasistatic and dynamical) shear-stress fluctuations as a function of temperature T and sampling time Δ t . The linear response is characterized using (ensemble-averaged) expectation values of the contributions (time averaged for each shear plane) to the stress-fluctuation relation μsf for the shear modulus and the shear-stress relaxation modulus G (t ) . Using 100 independent configurations, we pay attention to the respective standard deviations. 
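The two-stage averaging described above (time average within each independent configuration, then statistics across the ensemble of configurations) can be sketched as follows; the data are a synthetic stand-in, not the simulation output, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative stand-in: 100 independent configurations, each with a
# time series of an instantaneous observable (arbitrary units).
n_config, n_steps = 100, 500
stress = rng.normal(loc=1.0, scale=0.3, size=(n_config, n_steps))

# Time-average within each configuration first ...
per_config = stress.mean(axis=1)
# ... then take the ensemble average over configurations, with the
# standard deviation across configurations as the run-to-run scatter.
ensemble_mean = per_config.mean()
ensemble_std = per_config.std(ddof=1)
```

The nonmonotonic peak in the scatter reported above would show up as `ensemble_std` varying with temperature when this is repeated across state points.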
While the ensemble-averaged modulus μsf(T ) decreases continuously with increasing T for all Δ t sampled, its standard deviation δ μsf(T ) is nonmonotonic with a striking peak at the glass transition. The question of whether the shear modulus is continuous or has a jump singularity at the glass transition is thus ill posed. Confirming the effective time-translational invariance of our systems, the Δ t dependence of μsf and related quantities can be understood using a weighted integral over G (t ) .</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100023328','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100023328"><span>Performance of Trajectory Models with Wind Uncertainty</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lee, Alan G.; Weygandt, Stephen S.; Schwartz, Barry; Murphy, James R.</p> <p>2009-01-01</p> <p>Typical aircraft trajectory predictors use wind forecasts but do not account for the forecast uncertainty. A method for generating estimates of wind prediction uncertainty is described and its effect on aircraft trajectory prediction uncertainty is investigated. The procedure for estimating the wind prediction uncertainty relies on a time-lagged ensemble of weather model forecasts from the hourly updated Rapid Update Cycle (RUC) weather prediction system. Forecast uncertainty is estimated using measures of the spread amongst various RUC time-lagged ensemble forecasts. This proof of concept study illustrates the estimated uncertainty and the actual wind errors, and documents the validity of the assumed ensemble-forecast accuracy relationship. Aircraft trajectory predictions are made using RUC winds with provision for the estimated uncertainty. 
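A minimal sketch of the spread-based uncertainty estimate just described, using invented wind values in place of actual RUC output:

```python
import numpy as np

# Hypothetical stand-in for wind forecasts valid at the same time but
# issued at successively earlier cycles (a time-lagged ensemble).
forecasts_u = np.array([12.1, 11.4, 13.0, 12.6, 10.9])  # m/s, one per lag

# The spread among the lagged members serves as the uncertainty estimate.
mean_u = forecasts_u.mean()
spread_u = forecasts_u.std(ddof=1)

# A trajectory predictor could then carry the wind as mean +/- spread,
# e.g. bounding the along-track drift over a one-hour segment.
drift_low = (mean_u - spread_u) * 3600.0   # metres
drift_high = (mean_u + spread_u) * 3600.0
```

Here the spread simply scales a position bound; the study's actual translation of wind spread into trajectory uncertainty is more involved.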
Results for a set of simulated flights indicate this simple approach effectively translates the wind uncertainty estimate into an aircraft trajectory uncertainty. A key strength of the method is the ability to relate uncertainty to specific weather phenomena (contained in the various ensemble members) allowing identification of regional variations in uncertainty.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018FrES..tmp...23N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018FrES..tmp...23N"><span>Ensembles vs. information theory: supporting science under uncertainty</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nearing, Grey S.; Gupta, Hoshin V.</p> <p>2018-05-01</p> <p>Multi-model ensembles are one of the most common ways to deal with epistemic uncertainty in hydrology. This is a problem because there is no known way to sample models such that the resulting ensemble admits a measure that has any systematic (i.e., asymptotic, bounded, or consistent) relationship with uncertainty. Multi-model ensembles are effectively sensitivity analyses and cannot - even partially - quantify uncertainty. One consequence of this is that multi-model approaches cannot support a consistent scientific method - in particular, multi-model approaches yield unbounded errors in inference. In contrast, information theory supports a coherent hypothesis test that is robust to (i.e., bounded under) arbitrary epistemic uncertainty. This paper may be understood as advocating a procedure for hypothesis testing that does not require quantifying uncertainty, but is coherent and reliable (i.e., bounded) in the presence of arbitrary (unknown and unknowable) uncertainty. 
We conclude by offering some suggestions about how this proposed philosophy of science suggests new ways to conceptualize and construct simulation models of complex, dynamical systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5373382','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5373382"><span>Correlated variability modifies working memory fidelity in primate prefrontal neuronal ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Leavitt, Matthew L.; Pieper, Florian; Sachs, Adam J.; Martinez-Trujillo, Julio C.</p> <p>2017-01-01</p> <p>Neurons in the primate lateral prefrontal cortex (LPFC) encode working memory (WM) representations via sustained firing, a phenomenon hypothesized to arise from recurrent dynamics within ensembles of interconnected neurons. Here, we tested this hypothesis by using microelectrode arrays to examine spike count correlations (rsc) in LPFC neuronal ensembles during a spatial WM task. We found a pattern of pairwise rsc during WM maintenance indicative of stronger coupling between similarly tuned neurons and increased inhibition between dissimilarly tuned neurons. We then used a linear decoder to quantify the effects of the high-dimensional rsc structure on information coding in the neuronal ensembles. We found that the rsc structure could facilitate or impair coding, depending on the size of the ensemble and tuning properties of its constituent neurons. A simple optimization procedure demonstrated that near-maximum decoding performance could be achieved using a relatively small number of neurons. 
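The pairwise spike-count correlation (rsc) analysis above can be illustrated with synthetic data; the counts and the shared latent signal below are invented for the example, not taken from the recordings.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical spike counts: trials x neurons, with a shared latent
# signal so that some pairs are positively correlated.
n_trials, n_neurons = 200, 4
latent = rng.normal(size=(n_trials, 1))
counts = rng.poisson(lam=5.0, size=(n_trials, n_neurons)) + (latent > 0) * 2

# Pairwise spike-count correlation (r_sc) matrix across trials.
rsc = np.corrcoef(counts, rowvar=False)
# The off-diagonal entries are the pairwise correlations.
upper = rsc[np.triu_indices(n_neurons, k=1)]
```

The shared latent term induces positive rsc across all pairs; structured (tuning-dependent) correlations as in the study would require tuning-dependent coupling instead.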
These WM-optimized subensembles were more signal correlation (rsignal)-diverse and anatomically dispersed than predicted by the statistics of the full recorded population of neurons, and they often contained neurons that were poorly WM-selective, yet enhanced coding fidelity by shaping the ensemble’s rsc structure. We observed a pattern of rsc between LPFC neurons indicative of recurrent dynamics as a mechanism for WM-related activity, and found that the rsc structure can increase the fidelity of WM representations. Thus, WM coding in LPFC neuronal ensembles arises from a complex synergy between single neuron coding properties and multidimensional, ensemble-level phenomena. PMID:28275096</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018HESS...22.2007D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018HESS...22.2007D"><span>Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 2: numerical application</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dib, Alain; Kavvas, M. Levent</p> <p>2018-03-01</p> <p>The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. 
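A Monte Carlo baseline of the kind the FPE results are compared against can be sketched as follows, assuming a wide channel so Manning's equation applies with hydraulic radius approximated by depth; all parameter values here are illustrative, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(2)
# Monte Carlo stand-in for the uncertain roughness: sample Manning's n
# and propagate each sample through Manning's equation.
n_samples = 10_000
manning_n = rng.uniform(0.025, 0.035, size=n_samples)
depth, slope = 2.0, 1e-3                                    # m, dimensionless
velocity = depth ** (2.0 / 3.0) * slope ** 0.5 / manning_n  # m/s

# Ensemble statistics that the FPE approach recovers from a single solve.
v_mean, v_var = velocity.mean(), velocity.var(ddof=1)
```

The contrast in cost is the point: the MC loop needs many model evaluations, while the FPE methodology obtains the evolving probability distribution in one simulation.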
The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EGUGA..1513090D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EGUGA..1513090D"><span>Interactive vs. Non-Interactive Ensembles for Weather Prediction and Climate Projection</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Duane, Gregory</p> <p>2013-04-01</p> <p>If the members of an ensemble of different models are allowed to interact with one another in run time, predictive skill can be improved as compared to that of any individual model or any average of individual model outputs. Inter-model connections in such an interactive ensemble can be trained, using historical data, so that the resulting "supermodel" synchronizes with reality when used in weather-prediction mode, where the individual models perform data assimilation from each other (with trainable inter-model "observation error") as well as from real observations. 
In climate-projection mode, parameters of the individual models are changed, as might occur from an increase in GHG levels, and one obtains relevant statistical properties of the new supermodel attractor. In simple cases, it has been shown that training of the inter-model connections with the old parameter values gives a supermodel that is still predictive when the parameter values are changed. Here we inquire as to the circumstances under which supermodel performance can be expected to exceed that of the customary weighted average of model outputs. We consider a supermodel formed from quasigeostrophic channel models with different forcing coefficients, and introduce an effective training scheme for the inter-model connections. We show that the blocked-zonal index cycle is reproduced better by the supermodel than by any non-interactive ensemble in the extreme case where the forcing coefficients of the different models are very large or very small. With realistic differences in forcing coefficients, as would be representative of actual differences among IPCC-class models, the usual linearity assumption is justified and a weighted average of model outputs is adequate. It is therefore hypothesized that supermodeling is likely to be useful in situations where there are qualitative model differences, as arising from sub-gridscale parameterizations, that affect overall model behavior. Otherwise the usual ex post facto averaging will probably suffice. Previous results from an ENSO-prediction supermodel [Kirtman et al.] 
are re-examined in light of the hypothesis about the importance of qualitative inter-model differences.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2901495','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2901495"><span>Dynamic clustering threshold reduces conformer ensemble size while maintaining a biologically relevant ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yongye, Austin B.; Bender, Andreas</p> <p>2010-01-01</p> <p>Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds. Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value. This algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers, namely: OMEGA, NMRCLUST, RMS filtering and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes from the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining biological relevance of the ensemble. 
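Plain RMS filtering of a conformer set, one of the four approaches compared above, can be sketched as a greedy pass; the coordinates and threshold below are toy values, and no superposition (alignment) step is included.

```python
import numpy as np

def rmsd(a, b):
    """Plain coordinate RMSD (no superposition, for illustration only)."""
    return np.sqrt(((a - b) ** 2).sum(axis=1).mean())

def rms_filter(conformers, threshold):
    """Greedy filter: keep a conformer only if it differs from every
    already-kept conformer by more than `threshold` RMSD."""
    kept = []
    for conf in conformers:
        if all(rmsd(conf, k) > threshold for k in kept):
            kept.append(conf)
    return kept

# Three two-atom "conformers": the second is a near-duplicate of the first.
c1 = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
c2 = c1 + 0.05                       # within threshold of c1, so dropped
c3 = np.array([[0.0, 0.0, 0.0], [0.0, 1.5, 0.0]])
kept = rms_filter([c1, c2, c3], threshold=0.5)  # keeps c1 and c3
```

The rotor-dependent thresholds proposed in the abstract (0.8, 1.0, 1.4) would be passed as `threshold` depending on the number of rotatable bonds.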
It was observed that NMRCLUST (containing on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying the bioactive conformations with excellent and good matches (0.5 < RMSD < 1.0 Å). Furthermore, we propose thresholds for OMEGA root-mean square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 for structures with low (1–4), medium (5–9) and high (10–15) numbers of rotatable bonds, respectively. The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and alleviate the complexity of downstream data processing in virtual screening experiments. Electronic supplementary material The online version of this article (doi:10.1007/s10822-010-9365-1) contains supplementary material, which is available to authorized users. PMID:20499135</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3575305','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3575305"><span>A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2012-01-01</p> <p>Background Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. 
Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? Results The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Conclusion Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway. 
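The two aggregation rules described above can be sketched as follows; `prob_cut` and `votes_needed` are illustrative parameter names, not taken from the paper.

```python
import numpy as np

def average_probability(probs):
    """Aggregate member probabilities of acute rejection by averaging."""
    return float(np.mean(probs))

def vote_threshold(probs, prob_cut=0.5, votes_needed=2):
    """Call 'rejection' (1) when at least `votes_needed` members
    individually exceed `prob_cut`; otherwise 0."""
    return int(np.sum(np.asarray(probs) > prob_cut) >= votes_needed)

member_probs = [0.9, 0.4, 0.7]          # hypothetical classifier outputs
avg = average_probability(member_probs)  # ≈ 0.667
vote = vote_threshold(member_probs)      # 1 (two members above 0.5)
```

A downstream decision rule would then threshold `avg`, or use `vote` directly; the specificity/sensitivity trade-off noted in the abstract comes from how aggressively the vote rule fires.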
PMID:23216969</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23216969','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23216969"><span>A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Günther, Oliver P; Chen, Virginia; Freue, Gabriela Cohen; Balshaw, Robert F; Tebbutt, Scott J; Hollander, Zsuzsanna; Takhar, Mandeep; McMaster, W Robert; McManus, Bruce M; Keown, Paul A; Ng, Raymond T</p> <p>2012-12-08</p> <p>Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. 
Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H41A1417L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H41A1417L"><span>Multi-model analysis in hydrological prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lanthier, M.; Arsenault, R.; Brissette, F.</p> <p>2017-12-01</p> <p>Hydrologic modelling, by nature, is a simplification of the real-world hydrologic system. 
The ensemble hydrological predictions thus obtained therefore do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These will also be combined using multi-model averaging techniques, which generally generate a more accurate hydrograph than the best of the individual models in simulation mode. This new predictive combined hydrograph is added to the ensemble, thus creating a large ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed on different periods: 2 weeks, 1 month, 3 months and 6 months using a PIT Histogram of the percentiles of the real observation volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT Histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been largely corrected on short-term predictions. 
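The PIT values underlying such a histogram (the rank of each observation within its forecast ensemble) can be computed as below; the forecast and observation numbers are toy values.

```python
import numpy as np

def pit_values(ensembles, observations):
    """Probability integral transform: the fraction of ensemble members
    at or below each observation. A flat histogram of these values
    indicates a well-dispersed ensemble."""
    ens = np.asarray(ensembles)          # shape (n_forecasts, n_members)
    obs = np.asarray(observations)       # shape (n_forecasts,)
    return (ens <= obs[:, None]).mean(axis=1)

# Toy example: three forecast dates, five ensemble members each.
ens = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
                [10.0, 11.0, 12.0, 13.0, 14.0],
                [0.1, 0.2, 0.3, 0.4, 0.5]])
obs = np.array([3.5, 10.5, 0.05])
pit = pit_values(ens, obs)               # array([0.6, 0.2, 0.0])
```

Under-dispersion shows up as PIT values piling up at 0 and 1 (observations falling outside the ensemble), which is what the added multi-model member is meant to reduce.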
For the longer term, the addition of the multi-model member has been beneficial to the quality of the predictions, although it is too early to determine whether the gain is related to the addition of a member or whether the multi-model member has added value in itself.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1916810O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1916810O"><span>Total probabilities of ensemble runoff forecasts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian</p> <p>2017-04-01</p> <p>Ensemble forecasting has a long history in meteorological modelling, as an indication of the uncertainty of the forecasts. However, it is necessary to calibrate and post-process the ensembles as they often exhibit both bias and dispersion errors. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters varying in space and time, while giving a spatially and temporally consistent output. However, their method is computationally complex for our large number of stations, which makes it unsuitable for our purpose. Our post-processing method of the ensembles is developed in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu), where we are making forecasts for whole Europe, and based on observations from around 700 catchments. 
As the target is flood forecasting, we are also more interested in improving the forecast skill for high-flows rather than in a good prediction of the entire flow regime. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different meteorological forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to estimate the total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, but we are adding a spatial penalty in the calibration process to force a spatial correlation of the parameters. The penalty takes distance, stream-connectivity and size of the catchment areas into account. This can in some cases have a slight negative impact on the calibration error, but avoids large differences between parameters of nearby locations, whether stream connected or not. The spatial calibration also makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecasts skills in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H. 
and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ChA%26A..41..430Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ChA%26A..41..430Y"><span>Ensemble Pulsar Time Scale</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yin, Dong-shan; Gao, Yu-ping; Zhao, Shu-hong</p> <p>2017-07-01</p> <p>Millisecond pulsars can generate another type of time scale that is totally independent of the atomic time scale, because the physical mechanisms of the pulsar time scale and the atomic time scale are quite different from each other. Usually the pulsar timing observations are not evenly sampled, and the intervals between two data points range from several hours to more than half a month. Furthermore, these data sets are sparse. All this makes it difficult to generate an ensemble pulsar time scale. Hence, a new algorithm to calculate the ensemble pulsar time scale is proposed. Firstly, a cubic spline interpolation is used to densify the data set, and make the intervals between data points uniform. Then, the Vondrak filter is employed to smooth the data set, and get rid of the high-frequency noise, and finally the weighted average method is adopted to generate the ensemble pulsar time scale. 
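The three steps of the proposed algorithm can be sketched with simple stand-ins: linear interpolation in place of the cubic spline, and a moving average in place of the Vondrak filter. The timing residuals below are synthetic, and the weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
grid = np.linspace(0.0, 9.0, 181)        # uniform epochs, years

pulsars = []
for _ in range(5):
    t = np.sort(rng.uniform(0.0, 9.0, size=60))       # uneven sampling
    resid = 0.1 * np.sin(0.7 * t) + rng.normal(0, 0.05, t.size)
    dense = np.interp(grid, t, resid)                 # step 1: densify
    kernel = np.ones(7) / 7.0
    smooth = np.convolve(dense, kernel, mode="same")  # step 2: smooth
    pulsars.append(smooth)

# Step 3: weighted average across pulsars, e.g. by timing quality.
weights = np.array([0.3, 0.25, 0.2, 0.15, 0.1])
ensemble = np.average(np.array(pulsars), axis=0, weights=weights)
```

In practice the interpolation and filtering choices matter (a cubic spline preserves curvature; the Vondrak filter has a tunable smoothing parameter), which is precisely what the paper's algorithm specifies.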
The newly released NANOGrav (North American Nanohertz Observatory for Gravitational Waves) 9-year data set is used to generate the ensemble pulsar time scale. This data set includes the 9-year observational data of 37 millisecond pulsars observed by the 100-meter Green Bank telescope and the 305-meter Arecibo telescope. It is found that the algorithm used in this paper can effectively reduce the influence of the noise in the pulsar timing residuals and improve the long-term stability of the ensemble pulsar time scale. Results indicate that the long-term (> 1 yr) stability of the ensemble pulsar time scale is better than 3.4 × 10^-15.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12366212','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12366212"><span>Fracture of disordered solids in compression as a critical phenomenon. I. Statistical mechanics formalism.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Toussaint, Renaud; Pride, Steven R</p> <p>2002-09-01</p> <p>This is the first of a series of three articles that treats fracture localization as a critical phenomenon. This first article establishes a statistical mechanics based on ensemble averages when fluctuations through time play no role in defining the ensemble. Ensembles are obtained by dividing a huge rock sample into many mesoscopic volumes. Because rocks are a disordered collection of grains in cohesive contact, we expect that once shear strain is applied and cracks begin to arrive in the system, the mesoscopic volumes will have a wide distribution of different crack states. These mesoscopic volumes are the members of our ensembles. 
We determine the probability of observing a mesoscopic volume to be in a given crack state by maximizing Shannon's measure of the emergent-crack disorder subject to constraints coming from the energy balance of brittle fracture. The laws of thermodynamics, the partition function, and the quantification of temperature are obtained for such cracking systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014PhRvE..89b2111C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014PhRvE..89b2111C"><span>Finite-size effects on current correlation functions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Shunda; Zhang, Yong; Wang, Jiao; Zhao, Hong</p> <p>2014-02-01</p> <p>We study why the calculation of current correlation functions (CCFs) still suffers from finite-size effects even when the periodic boundary condition is taken. Two important one-dimensional, momentum-conserving systems are investigated as examples. Intriguingly, it is found that the state of a system recurs in the sense of microcanonical ensemble average, and such recurrence may result in oscillations in CCFs. Meanwhile, we find that the sound mode collisions induce an extra time decay in a current so that its correlation function decays faster (slower) in a smaller (larger) system. 
Based on these two unveiled mechanisms, a procedure for correctly evaluating the decay rate of a CCF is proposed, with which our analysis suggests that the global energy CCF decays as ~t^(-2/3) in the diatomic hard-core gas model and in a manner close to ~t^(-1/2) in the Fermi-Pasta-Ulam-β model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28268573','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28268573"><span>Deep neural ensemble for retinal vessel segmentation in fundus images towards achieving label-free angiography.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lahiri, A; Roy, Abhijit Guha; Sheet, Debdoot; Biswas, Prabir Kumar</p> <p>2016-08-01</p> <p>Automated segmentation of retinal blood vessels in label-free fundus images plays a pivotal role in computer-aided diagnosis of ophthalmic pathologies, viz., diabetic retinopathy, hypertensive disorders and cardiovascular diseases. The challenge remains active in medical image analysis research due to the varied distribution of blood vessels, which vary in the dimensions of their physical appearance against a noisy background. In this paper we formulate the segmentation challenge as a classification task. Specifically, we employ unsupervised hierarchical feature learning using an ensemble of two levels of sparsely trained denoising stacked autoencoders. First-level training with bootstrap samples ensures decoupling, and the second-level ensemble, formed from different network architectures, ensures architectural revision. We show that ensemble training of autoencoders fosters diversity in learning a dictionary of visual kernels for vessel segmentation. A SoftMax classifier is used for fine-tuning each member autoencoder, and multiple strategies are explored for two-level fusion of ensemble members. 
On the DRIVE dataset, we achieve a maximum average accuracy of 95.33% with a low standard deviation of 0.003 and a Kappa agreement coefficient of 0.708. Comparison with other major algorithms substantiates the high efficacy of our model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26389618','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26389618"><span>Sensory processing patterns predict the integration of information held in visual working memory.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lowe, Matthew X; Stevenson, Ryan A; Wilson, Kristin E; Ouslis, Natasha E; Barense, Morgan D; Cant, Jonathan S; Ferber, Susanne</p> <p>2016-02-01</p> <p>Given the limited resources of visual working memory, multiple items may be remembered as an averaged group or ensemble. As a result, local information may be ill-defined, but these ensemble representations provide accurate diagnostics of the natural world by combining gist information with item-level information held in visual working memory. Some neurodevelopmental disorders are characterized by sensory processing profiles that predispose individuals to avoid or seek out sensory stimulation, fundamentally altering their perceptual experience. Here, we report that such processing styles affect the computation of ensemble statistics in the general population. We identified stable adult sensory processing patterns to demonstrate that individuals with low sensory thresholds who show a greater proclivity to engage in active response strategies to prevent sensory overstimulation are less likely to integrate mean size information across a set of similar items and are therefore more likely to be biased away from the mean size representation of an ensemble display. 
We therefore propose that the study of ensemble processing should extend beyond the statistics of the display and should also consider the statistics of the observer. (PsycINFO Database Record (c) 2016 APA, all rights reserved).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhyA..487..215S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhyA..487..215S"><span>Generalized ensemble theory with non-extensive statistics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shen, Ke-Ming; Zhang, Ben-Wei; Wang, En-Ke</p> <p>2017-12-01</p> <p>The non-extensive canonical ensemble theory is reconsidered with the method of Lagrange multipliers by maximizing the Tsallis entropy, with the constraint that the normalized term of the Tsallis q-average of physical quantities, the sum ∑ p_j^q, is independent of the probability p_i for Tsallis parameter q. The self-referential problem in the deduced probability and thermal quantities in non-extensive statistics is thus avoided, and thermodynamical relationships are obtained in a consistent and natural way. We also extend the study to the non-extensive grand canonical ensemble theory and obtain the q-deformed Bose-Einstein distribution as well as the q-deformed Fermi-Dirac distribution. 
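For reference, the q-deformed quantum distributions mentioned here are usually written in terms of the q-exponential; this is the commonly quoted textbook form, and the precise normalization may differ from the one derived by the authors:

```latex
e_q(x) \equiv \left[ 1 + (1-q)\,x \right]^{\frac{1}{1-q}}, \qquad
\bar{n}_q(\varepsilon) = \frac{1}{e_q\!\left(\beta(\varepsilon-\mu)\right) \mp 1},
```

with the minus sign giving the q-deformed Bose-Einstein and the plus sign the q-deformed Fermi-Dirac distribution; both reduce to the standard quantum statistics in the limit q → 1, where e_q(x) → e^x.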
The theory is further applied to the generalized Planck law to demonstrate the distinct behaviors of the various generalized q-distribution functions discussed in the literature.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=supercomputer&pg=2&id=EJ410944','ERIC'); return false;" href="https://eric.ed.gov/?q=supercomputer&pg=2&id=EJ410944"><span>Analytical Applications of Monte Carlo Techniques.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Guell, Oscar A.; Holcombe, James A.</p> <p>1990-01-01</p> <p>Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. 
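The Monte Carlo integration idea surveyed in this record can be illustrated with a minimal, self-contained estimate of π; this is a standard textbook example, not one taken from the article itself.

```python
import random

def mc_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that falls inside the quarter circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:   # point lies inside the quarter circle
            hits += 1
    return 4.0 * hits / n_samples
```

With 10^5 samples the statistical error of the estimate is of order 0.005, shrinking as 1/√N.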
(CW)</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_15 --> <div id="page_16" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="301"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19740018956','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19740018956"><span>On the error probability of general tree and trellis codes with applications to sequential decoding</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Johannesson, R.</p> <p>1973-01-01</p> <p>An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. 
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19820015075','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19820015075"><span>Program for narrow-band analysis of aircraft flyover noise using ensemble averaging techniques</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Gridley, D.</p> <p>1982-01-01</p> <p>A package of computer programs was developed for analyzing acoustic data from an aircraft flyover. The package assumes the aircraft is flying at constant altitude and constant velocity in a fixed attitude over a linear array of ground microphones. Aircraft position is provided by radar and an option exists for including the effects of the aircraft's rigid-body attitude relative to the flight path. Time synchronization between radar and acoustic recording stations permits ensemble averaging techniques to be applied to the acoustic data thereby increasing the statistical accuracy of the acoustic results. Measured layered meteorological data obtained during the flyovers are used to compute propagation effects through the atmosphere. 
Final results are narrow-band spectra and directivities corrected for the flight environment to an equivalent static condition at a specified radius.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JHyd..539..237R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JHyd..539..237R"><span>Potentialities of ensemble strategies for flood forecasting over the Milano urban area</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ravazzani, Giovanni; Amengual, Arnau; Ceppi, Alessandro; Homar, Víctor; Romero, Romu; Lombardi, Gabriele; Mancini, Marco</p> <p>2016-08-01</p> <p>Analysis of ensemble forecasting strategies, which can provide a tangible backing for flood early warning procedures and mitigation measures over the Mediterranean region, is one of the fundamental motivations of the international HyMeX programme. Here, we examine two severe hydrometeorological episodes that affected the Milano urban area and for which the complex flood protection system of the city did not completely succeed. Indeed, flood damage has increased exponentially during the last 60 years due to industrial and urban development. Thus, improving the Milano flood control system requires a synergy between structural and non-structural approaches. First, we examine how land-use changes due to urban development have altered the hydrological response to intense rainfall. Second, we test a flood forecasting system which comprises the Flash-flood Event-based Spatially distributed rainfall-runoff Transformation, including Water Balance (FEST-WB) and the Weather Research and Forecasting (WRF) models. 
Deep moist convection and extreme precipitation are difficult to forecast accurately due to uncertainties arising from the numerical weather prediction (NWP) physical parameterizations and high sensitivity to misrepresentation of the atmospheric state; however, two hydrological ensemble prediction systems (HEPS) have been designed to explicitly cope with uncertainties in the initial and lateral boundary conditions (IC/LBCs) and physical parameterizations of the NWP model. No substantial differences in skill have been found between the two ensemble strategies when considering an enhanced diversity of IC/LBCs for the perturbed initial conditions ensemble. Furthermore, no additional benefits have been found by considering more frequent LBCs in a mixed physics ensemble, as ensemble spread seems to be reduced. These findings could help to design the most appropriate ensemble strategies ahead of these hydrometeorological extremes, given the computational cost of running such advanced HEPSs for operational purposes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010APExp...3i2801K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010APExp...3i2801K"><span>Optical Rabi Oscillations in a Quantum Dot Ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kujiraoka, Mamiko; Ishi-Hayase, Junko; Akahane, Kouichi; Yamamoto, Naokatsu; Ema, Kazuhiro; Sasaki, Masahide</p> <p>2010-09-01</p> <p>We have investigated Rabi oscillations of exciton polarization in a self-assembled InAs quantum dot ensemble. The four-wave mixing signals measured as a function of the average of the pulse area showed large in-plane anisotropy and nonharmonic oscillations. 
The experimental results can be well reproduced by a two-level model calculation including three types of inhomogeneities without any fitting parameter. The large anisotropy can be well explained by the anisotropic dipole moments. We also find that the nonharmonic behaviors partly originate from the polarization interference.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24853864','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24853864"><span>A random matrix approach to credit risk.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas</p> <p>2014-01-01</p> <p>We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. 
Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA478634','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA478634"><span>ensembleBMA: An R Package for Probabilistic Forecasting using Ensembles and Bayesian Model Averaging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2007-08-15</p> <p>library is used to allow addition of the legend and map outline to the plot. 
> bluescale <- function(n) hsv(4/6, s = seq(from = 1/8, to = 1, length = n...v = 1)
> plotBMAforecast(probFreeze290104, lon = srftGridData$lon, lat = srftGridData$lat, type = "image", col = bluescale(100))
> title("Probability of...probPrecip130103)  # used to determine zlim in plots
[1] 0.02832709 0.99534860
> plotBMAforecast(probPrecip130103[,Ŕ"], lon = prcpGridData$lon, lat</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22261648-stochastic-dynamics-small-ensembles-non-processive-molecular-motors-parallel-cluster-model','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22261648-stochastic-dynamics-small-ensembles-non-processive-molecular-motors-parallel-cluster-model"><span>Stochastic dynamics of small ensembles of non-processive molecular motors: The parallel cluster model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Erdmann, Thorsten; Albert, Philipp J.; Schwarz, Ulrich S.</p> <p>2013-11-07</p> <p>Non-processive molecular motors have to work together in ensembles in order to generate appreciable levels of force or movement. In skeletal muscle, for example, hundreds of myosin II molecules cooperate in thick filaments. In non-muscle cells, by contrast, small groups with few tens of non-muscle myosin II motors contribute to essential cellular processes such as transport, shape changes, or mechanosensing. Here we introduce a detailed and analytically tractable model for this important situation. 
Using a three-state crossbridge model for the myosin II motor cycle and exploiting the assumptions of fast power stroke kinetics and equal load sharing between motors in equivalent states, we reduce the stochastic reaction network to a one-step master equation for the binding and unbinding dynamics (parallel cluster model) and derive the rules for ensemble movement. We find that for constant external load, ensemble dynamics is strongly shaped by the catch bond character of myosin II, which leads to an increase of the fraction of bound motors under load and thus to firm attachment even for small ensembles. This adaptation to load results in a concave force-velocity relation described by a Hill relation. For external load provided by a linear spring, myosin II ensembles dynamically adjust themselves towards an isometric state with constant average position and load. The dynamics of the ensembles is now determined mainly by the distribution of motors over the different kinds of bound states. For increasing stiffness of the external spring, there is a sharp transition beyond which myosin II can no longer perform the power stroke. Slow unbinding from the pre-power-stroke state protects the ensembles against detachment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMNG33A0195L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMNG33A0195L"><span>Multi-objective optimization for generating a weighted multi-model ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lee, H.</p> <p>2017-12-01</p> <p>Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. 
When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, it faces a major challenge when multiple metrics are under consideration. When considering multiple evaluation metrics, a simple average of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies multi-objective optimization, a mathematical procedure that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. 
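The conventional single-metric baseline described above, with each model's weight inversely proportional to its error, can be sketched in a few lines; `weighted_ensemble` and the toy numbers are illustrative, and the record's multi-objective optimization would replace the single error score.

```python
import numpy as np

def weighted_ensemble(simulations, errors):
    """Combine model simulations with weights inversely proportional to
    each model's error against observations (single-metric baseline).

    simulations: (n_models, n_points) array of model output
    errors:      (n_models,) positive error scores (e.g. RMSE per model)
    """
    w = 1.0 / np.asarray(errors, dtype=float)  # smaller error -> larger weight
    w /= w.sum()                               # normalize weights to sum to 1
    return w @ np.asarray(simulations, dtype=float)
```

With equal errors this reduces to the arithmetic ensemble mean, the reference against which the record compares its optimally weighted ensemble.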
Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5839517','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5839517"><span>Avoided climate impacts of urban and rural heat and cold waves over the U.S. using large climate model ensembles for RCP8.5 and RCP4.5</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Anderson, G.B.; Jones, B.; McGinnis, S.A.; Sanderson, B.</p> <p>2015-01-01</p> <p>Previous studies examining future changes in heat/cold waves using climate model ensembles have been limited to grid cell-average quantities. Here, we make use of an urban parameterization in the Community Earth System Model (CESM) that represents the urban heat island effect, which can exacerbate extreme heat but may ameliorate extreme cold in urban relative to rural areas. Heat/cold wave characteristics are derived for U.S. regions from a bias-corrected CESM 30-member ensemble for climate outcomes driven by the RCP8.5 forcing scenario and a 15-member ensemble driven by RCP4.5. Significant differences are found between urban and grid cell-average heat/cold wave characteristics. Most notably, urban heat waves for 1981–2005 are more intense than grid cell-average by 2.1°C (southeast) to 4.6°C (southwest), while cold waves are less intense. We assess the avoided climate impacts of urban heat/cold waves in 2061–2080 when following the lower forcing scenario. Urban heat wave days per year increase from 6 in 1981–2005 to up to 92 (southeast) in RCP8.5. Following RCP4.5 reduces heat wave days by about 50%. 
Large avoided impacts are demonstrated for individual communities; e.g., the longest heat wave for Houston in RCP4.5 is 38 days, while in RCP8.5 there is one heat wave per year longer than a month, with some lasting the entire summer. Heat waves also start later in the season in RCP4.5 (earliest are in early May) than RCP8.5 (mid-April), compared to 1981–2005 (late May). In some communities, cold wave events decrease from 2 per year for 1981–2005 to one-in-five year events in RCP4.5 and one-in-ten year events in RCP8.5. PMID:29520121</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PhyA..419..221H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PhyA..419..221H"><span>Variable diffusion in stock market fluctuations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.</p> <p>2015-02-01</p> <p>We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. The five most actively traded stocks each contain two time intervals during the day in which the variance of increments can be fitted by power-law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly, but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with a linear variable diffusion coefficient as a lowest-order approximation to the real dynamics of financial markets and to test the effects of time averaging techniques typically used for financial time series analysis. 
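The ensemble-versus-time-averaging distinction the authors exploit can be demonstrated on a toy process with nonstationary increments (a random walk, whose ensemble variance grows linearly in time); all parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_steps = 2000, 500

# Brownian-like process: x(t) is a cumulative sum of i.i.d. increments,
# so the ensemble variance of x(t) grows linearly with t (nonstationary).
increments = rng.normal(0.0, 1.0, size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

# Ensemble average: variance across realizations at two fixed times.
var_early = paths[:, 49].var()     # t = 50  -> variance ~ 50
var_late = paths[:, 499].var()     # t = 500 -> variance ~ 500

# Time average of x^2 along a single path varies wildly from path to
# path and cannot recover the t-dependent ensemble variance.
time_avgs = (paths[:5] ** 2).mean(axis=1)
```

Only the ensemble average reveals the linear growth of the variance (the ratio of the two variances above is close to 10), mirroring the paper's point that time averages fail for processes with nonstationary increments.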
We find that our model replicates major stylized facts associated with empirical financial time series. We also find that ensemble averaging techniques can be used to identify the underlying dynamics correctly, whereas time averages fail in this task. Our work indicates that ensemble-average approaches will yield new insight into the dynamics of financial markets, and our proposed model offers a basis for modeling those dynamics on microscopic time scales.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017NatSR...744900B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017NatSR...744900B"><span>A novel procedure for the identification of chaos in complex biological systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bazeia, D.; Pereira, M. B. P. N.; Brito, A. V.; Oliveira, B. F. De; Ramos, J. G. G. S.</p> <p>2017-03-01</p> <p>We demonstrate the presence of chaos in stochastic simulations that are widely used to study biodiversity in nature. The investigation deals with a set of three distinct species that evolve according to the standard rules of mobility, reproduction and predation, with predation following the cyclic rules of the popular rock, paper and scissors game. The study uncovers the possibility of distinguishing between time evolutions that start from slightly different initial states, guided by the Hamming distance, which heuristically unveils the chaotic behavior. The finding opens up a quantitative approach that relates the correlation length to the average density of maxima of a typical species, and an ensemble of stochastic simulations is implemented to support the procedure. 
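Counting the average density of maxima, the diagnostic this record relates to the correlation length, takes only a few lines; the strict-interior-maximum convention below is one simple choice, not necessarily the authors'.

```python
import numpy as np

def density_of_maxima(series):
    """Fraction of interior points of a time series that are strict
    local maxima (greater than both neighbors)."""
    s = np.asarray(series, dtype=float)
    interior = s[1:-1]
    is_max = (interior > s[:-2]) & (interior > s[2:])
    return is_max.sum() / len(interior)
```

A rapidly fluctuating (short-correlation-length) signal has a high density of maxima, while a smooth, long-correlated signal has a low one, which is the relation the procedure exploits.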
The main result of the work shows how a single and simple experimental realization that counts the density of maxima associated with the chaotic evolution of the species serves to infer its correlation length. We use the result to investigate other distinct complex systems: one described by a set of differential equations that can model a diversity of natural and artificial chaotic systems, and another focusing on the ocean water level.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009JCoPh.228..976G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009JCoPh.228..976G"><span>Simulation of unsteady flows by the DSMC macroscopic chemistry method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Goldsworthy, Mark; Macrossan, Michael; Abdel-jawad, Madhat</p> <p>2009-03-01</p> <p>In the Direct Simulation Monte-Carlo (DSMC) method, a combination of statistical and deterministic procedures applied to a finite number of 'simulator' particles is used to model rarefied gas-kinetic processes. In the macroscopic chemistry method (MCM) for DSMC, chemical reactions are decoupled from the specific particle pairs selected for collisions. Information from all of the particles within a cell, not just those selected for collisions, is used to determine a reaction rate coefficient for that cell. Unlike collision-based methods, MCM can be used with any viscosity or non-reacting collision models and any non-reacting energy exchange models. It can be used to implement any reaction rate formulations, whether these be from experimental or theoretical studies. MCM has been previously validated for steady flow DSMC simulations. Here we show how MCM can be used to model chemical kinetics in DSMC simulations of unsteady flow. 
Results are compared with a collision-based chemistry procedure for two binary reactions in a 1-D unsteady shock-expansion tube simulation. Close agreement is demonstrated between the two methods for instantaneous, ensemble-averaged profiles of temperature, density and species mole fractions, as well as for the accumulated number of net reactions per cell.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1813618O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1813618O"><span>Total probabilities of ensemble runoff forecasts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian</p> <p>2016-04-01</p> <p>Ensemble forecasting has long been used in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters that differ in space and time but still give a spatially and temporally consistent output. However, their method is computationally complex for our large number of stations, and cannot directly be regionalized in the way we would like, so we suggest a different path below. 
The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows than the forecast skill at lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we now post-process all model outputs to find a total probability, the post-processed mean and the uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, while assuring that they have some spatial correlation by adding a spatial penalty in the calibration process. This can in some cases have a slight negative impact on the calibration error, but makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecast skill in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. 
Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFM.H23A1226D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFM.H23A1226D"><span>Calibration and parameterization of a semi-distributed hydrological model to support sub-daily ensemble flood forecasting; a watershed in southeast Brazil</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>de Almeida Bressiani, D.; Srinivasan, R.; Mendiondo, E. M.</p> <p>2013-12-01</p> <p>The use of distributed or semi-distributed models to represent the processes and dynamics of a watershed has increased in the last few years. These models are important tools to predict and forecast the hydrological responses of watersheds, and they can support disaster risk management and planning. However, they usually have many parameters which, due to the spatial and temporal variability of the processes, are not known, especially in developing countries; therefore a robust and sensible calibration is very important. This study conducted a sub-daily calibration and parameterization of the Soil & Water Assessment Tool (SWAT) for a 12,600 km2 watershed in southeast Brazil, and uses ensemble forecasts to evaluate whether the model can be used as a tool for flood forecasting. 
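The EMOS post-processing cited in the runoff-forecasting abstract above fits a Gaussian predictive distribution whose mean and variance are affine in the ensemble mean and variance, with parameters chosen to minimize the average CRPS (Gneiting et al., 2005). A rough self-contained sketch using the closed-form CRPS of a normal distribution and a crude grid search (the parameter grids and toy data are illustrative assumptions):

```python
import math

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at observation y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

def emos_fit(ens_mean, ens_var, obs):
    """Grid search for (a, b, c, d) in N(a + b*mean, c + d*var),
    minimizing mean CRPS over the training set. Crude but self-contained;
    real EMOS implementations use numerical optimizers."""
    best, best_score = None, float("inf")
    ab_grid = [k / 4.0 for k in range(-4, 9)]   # -1.0 .. 2.0
    for a in ab_grid:
        for b in ab_grid:
            for c in (0.1, 0.5, 1.0, 2.0):
                for d in (0.0, 0.5, 1.0):
                    score = sum(
                        crps_normal(a + b * m, math.sqrt(c + d * v), y)
                        for m, v, y in zip(ens_mean, ens_var, obs)
                    ) / len(obs)
                    if score < best_score:
                        best, best_score = (a, b, c, d), score
    return best, best_score

params, score = emos_fit([1.0, 2.0, 0.5], [0.4, 0.3, 0.5], [1.1, 1.9, 0.6])
```

The spatial-penalty idea in the abstract would add a term to this objective penalizing parameter differences between nearby stations; that extension is omitted here.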
The Piracicaba Watershed, in São Paulo State, is mainly rural, but has a population of about 4 million in highly relevant urban areas, and three cities on the list of critical cities of the National Center for Natural Disasters Monitoring and Alerts. For calibration, the watershed was divided into areas with similar hydrological characteristics, and for each of these areas one gauge station was chosen for calibration; this procedure was performed to evaluate the effectiveness of calibrating in fewer places, since areas with the same group of groundwater, soil, land use and slope characteristics should have similar parameters, making calibration a less time-consuming task. The sensitivity analysis and calibration were performed in the software SWAT-CUP with the optimization algorithm Sequential Uncertainty Fitting Version 2 (SUFI-2), which uses a Latin hypercube sampling scheme in an iterative process. The calibration and validation were evaluated with the Nash-Sutcliffe efficiency coefficient (NSE), determination coefficient (r2), root mean square error (RMSE), and percent bias (PBIAS), with monthly average values of NSE around 0.70, r2 of 0.9, normalized RMSE of 0.01, and PBIAS of 10. Past events were analysed to evaluate the possibility of using the SWAT model developed for the Piracicaba watershed as a tool for ensemble flood forecasting. For the ensemble evaluation, members from the numerical model Eta were used. Eta is an atmospheric model used for research and operational purposes, with 5 km resolution, updated twice a day (00 and 12 UTC) for a ten-day horizon, with precipitation and weather estimates for each hour. 
The parameterized SWAT model performed well overall for ensemble flood forecasting.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28802329','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28802329"><span>Skin lesion computational diagnosis of dermoscopic images: Ensemble models based on input feature manipulation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S</p> <p>2017-10-01</p> <p>The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, improving classifiers for the diagnosis of such skin lesions remains a challenge. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. 
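The majority-voting combination step described above is simple to sketch: each base classifier, trained on a different feature subset, emits one label, and the most frequent label wins. The threshold "models" below are trivial stand-ins, not the optimum-path forest classifiers of the paper, and all names are illustrative assumptions.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine one label per base classifier by majority voting.
    Ties fall to the first-seen label (a simplifying assumption)."""
    return Counter(predictions).most_common(1)[0][0]

# Each base model sees a different feature subset (shape, colour, texture);
# diversity among the subsets is what makes the ensemble useful.
def shape_model(x):   return "melanoma" if x["asymmetry"] > 0.5 else "benign"
def colour_model(x):  return "melanoma" if x["colour_var"] > 0.6 else "benign"
def texture_model(x): return "melanoma" if x["contrast"] > 0.7 else "benign"

lesion = {"asymmetry": 0.8, "colour_var": 0.4, "contrast": 0.9}
votes = [m(lesion) for m in (shape_model, colour_model, texture_model)]
label = majority_vote(votes)   # two of the three subsets vote "melanoma"
```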
The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results. Copyright © 2017 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EGUGA..1615427P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EGUGA..1615427P"><span>HEPEX - achievements and challenges!</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pappenberger, Florian; Ramos, Maria-Helena; Thielen, Jutta; Wood, Andy; Wang, Qj; Duan, Qingyun; Collischonn, Walter; Verkade, Jan; Voisin, Nathalie; Wetterhall, Fredrik; Vuillaume, Jean-Francois Emmanuel; Lucatero Villasenor, Diana; Cloke, Hannah L.; Schaake, John; van Andel, Schalk-Jan</p> <p>2014-05-01</p> <p>HEPEX is an international initiative bringing together hydrologists, meteorologists, researchers and end-users to develop advanced probabilistic hydrological forecast techniques for improved flood, drought and water management. HEPEX was launched in 2004 as an independent, cooperative international scientific activity. During the first meeting, the overarching goal was defined as: "to develop and test procedures to produce reliable hydrological ensemble forecasts, and to demonstrate their utility in decision making related to the water, environmental and emergency management sectors." The applications of hydrological ensemble predictions span across large spatio-temporal scales, ranging from short-term and localized predictions to global climate change and regional modeling. 
Within the HEPEX community, information is shared through its blog (www.hepex.org), meetings, testbeds and intercomparison experiments, as well as project reports. Key questions of HEPEX are: * What adaptations are required for meteorological ensemble systems to be coupled with hydrological ensemble systems? * How should the existing hydrological ensemble prediction systems be modified to account for all sources of uncertainty within a forecast? * What is the best way for the user community to take advantage of ensemble forecasts and to make better decisions based on them? This year HEPEX celebrates its 10th anniversary, and this poster will present a review of the main operational and research achievements and challenges prepared by HEPEX contributors on data assimilation, post-processing of hydrologic predictions, forecast verification, communication and use of probabilistic forecasts in decision-making. Additionally, we will present the most recent activities implemented by HEPEX and illustrate how everyone can join the community and participate in the development of new approaches in hydrologic ensemble prediction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ThApC.132.1057Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ThApC.132.1057Y"><span>Multi-criterion model ensemble of CMIP5 surface air temperature over China</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming</p> <p>2018-05-01</p> <p>Global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes, and therefore supporting the preparation of national climate adaptation plans. 
However, different GCMs are not always in agreement with each other over various regions. The reason is that GCMs' configurations, module characteristics, and dynamic forcings vary from one to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), correlation coefficient (CC, or R) and uncertainty are commonly used statistics for evaluating the performances of GCMs. However, simultaneously satisfying all of these statistics cannot be guaranteed by many model ensemble techniques. In this paper, we propose a multi-model ensemble framework, using a state-of-the-art evolutionary multi-objective optimization algorithm (termed MOSPD), to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for different model ensemble solutions. A case study of optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data cover the period from 1900 to 2100, and the projections of SAT are analyzed with regard to three different statistical indices (i.e., RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to different statistics. The comparison results over the historical period (1900-2005) show that the optimized solutions are superior to those obtained by simple model averaging, as well as to any single GCM output. The improvements in these statistics vary for different climatic regions over China. 
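The Pareto-front analysis described above reduces to a dominance filter over candidate ensemble solutions scored on several objectives (here all minimized, e.g. RMSE, 1-CC, uncertainty width). A minimal sketch with made-up candidate names and scores (illustrative assumptions, not the MOSPD algorithm itself):

```python
def dominates(u, v):
    """u dominates v if it is no worse in every objective and strictly
    better in at least one; all objectives are minimized."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(solutions):
    """Return the non-dominated subset of candidate ensemble solutions."""
    return [s for s in solutions
            if not any(dominates(t["obj"], s["obj"]) for t in solutions)]

candidates = [
    {"name": "equal weights",   "obj": (1.2, 0.10, 2.0)},
    {"name": "skill weights",   "obj": (1.0, 0.08, 2.5)},
    {"name": "single best GCM", "obj": (1.5, 0.12, 2.2)},  # dominated by "equal weights"
]
front = pareto_front(candidates)
```

The two surviving solutions trade RMSE/CC against uncertainty width; presenting that trade-off, rather than a single winner, is the point of the multi-criterion framework.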
Future projection (2006-2100) with the proposed ensemble method identifies that the largest (smallest) temperature changes will happen in the South Central China (the Inner Mongolia), the North Eastern China (the South Central China), and the North Western China (the South Central China), under RCP 2.6, RCP 4.5, and RCP 8.5 scenarios, respectively.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3965471','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3965471"><span>NIMEFI: Gene Regulatory Network Inference using Multiple Ensemble Feature Importance Algorithms</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ruyssinck, Joeri; Huynh-Thu, Vân Anh; Geurts, Pierre; Dhaene, Tom; Demeester, Piet; Saeys, Yvan</p> <p>2014-01-01</p> <p>One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems for each gene in the network in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene and a high feature importance is considered as putative evidence of a regulatory link existing between both genes. The contribution of this work is twofold. 
First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach which allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As a second contribution, we explore the effect of using rankwise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that it outperforms all individual methods in general, although on a specific network a single method can perform better. An implementation of NIMEFI has been made publicly available. PMID:24667482</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012JHyd..468..268L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012JHyd..468..268L"><span>Analyzing the uncertainty of suspended sediment load prediction using sequential data assimilation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Leisenring, Marc; Moradkhani, Hamid</p> <p>2012-10-01</p> <p>A first step in understanding the impacts of sediment and controlling the sources of sediment is to quantify the mass loading. Since mass loading is the product of flow and concentration, the quantification of loads first requires the quantification of runoff volume. 
Using the National Weather Service's SNOW-17 and the Sacramento Soil Moisture Accounting (SAC-SMA) models, this study employed particle-filter-based Bayesian data assimilation methods to predict seasonal snow water equivalent (SWE) and runoff within a small watershed in the Lake Tahoe Basin located in California, USA. A procedure was developed to scale the variance multipliers (a.k.a. hyperparameters) for model parameters and predictions based on the accuracy of the mean predictions relative to the ensemble spread. In addition, an online bias correction algorithm based on the lagged average bias was implemented to detect and correct for systematic bias in model forecasts prior to updating with the particle filter. Both of these methods significantly improved the performance of the particle filter without requiring excessively wide prediction bounds. The flow ensemble was linked to a non-linear regression model that was used to predict suspended sediment concentrations (SSCs) based on runoff rate and time of year. Runoff volumes and SSCs were then combined to produce an ensemble of suspended sediment load estimates. Annual suspended sediment loads for the 5 years of simulation were finally computed along with 95% prediction intervals that account for uncertainty in both the SSC regression model and flow rate estimates. Understanding the uncertainty associated with annual suspended sediment load predictions is critical for making sound watershed management decisions aimed at maintaining the exceptional clarity of Lake Tahoe. 
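The forecast-correct-update cycle described above (propagate an ensemble, subtract a lagged-average bias, then reweight and resample with the particle filter) can be sketched as follows. The scalar dynamics, noise levels and data are toy assumptions standing in for SNOW-17/SAC-SMA, not the study's implementation.

```python
import math
import random
import statistics

rng = random.Random(7)

def propagate(x):
    """Toy state transition for one particle (stand-in for the hydrologic
    model's forecast step)."""
    return 0.9 * x + rng.gauss(0.0, 0.3)

def weight(x, y, obs_sigma=0.5):
    """Gaussian observation likelihood of observation y given state x."""
    return math.exp(-0.5 * ((y - x) / obs_sigma) ** 2)

particles = [rng.gauss(0.0, 1.0) for _ in range(500)]
bias_history = []                       # recent forecast-minus-observation errors
observations = [1.0, 1.1, 0.9, 1.2, 1.0]

for y in observations:
    particles = [propagate(x) for x in particles]
    forecast_mean = statistics.fmean(particles)
    # Lagged-average bias correction: shift the forecast by the mean of the
    # last few forecast errors before the particle-filter update.
    bias = statistics.fmean(bias_history[-3:]) if bias_history else 0.0
    bias_history.append(forecast_mean - y)
    particles = [x - bias for x in particles]
    # Particle-filter update: weight by likelihood, then resample.
    w = [weight(x, y) for x in particles]
    particles = rng.choices(particles, weights=w, k=len(particles))

analysis_mean = statistics.fmean(particles)
```

Detecting the bias before the update keeps the filter from spending ensemble spread on a systematic offset, which is the role the lagged-average correction plays in the paper.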
The computational methods developed and applied in this research could assist with similar studies where it is important to quantify the predictive uncertainty of pollutant load estimates.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_16 --> <div id="page_17" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="321"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/936447','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/936447"><span>Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Ajami, N K; Duan, Q; Gao, X</p> <p>2005-04-11</p> <p>This paper 
examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporate bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26844300','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26844300"><span>Metainference: A Bayesian inference method for heterogeneous systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele</p> <p>2016-01-01</p> <p>Modeling a complex system is almost invariably a challenging task. 
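The simplest two combinations from the DMIP comparison above, SMA and a weighted average, can be sketched as follows. The inverse-MSE weighting shown is one plausible choice for WAM-style skill weights, not necessarily the study's exact regression scheme, and all data are illustrative.

```python
def simple_average(member_series):
    """SMA: unweighted mean across member forecasts at each time step."""
    return [sum(vals) / len(vals) for vals in zip(*member_series)]

def weighted_average(member_series, train_series, train_obs):
    """WAM-style combination with weights inversely proportional to each
    member's training-period mean squared error (illustrative weighting)."""
    inv = []
    for member in train_series:
        mse = sum((f - o) ** 2 for f, o in zip(member, train_obs)) / len(train_obs)
        inv.append(1.0 / (mse + 1e-12))   # small floor avoids division by zero
    weights = [w / sum(inv) for w in inv]
    return [sum(w * f for w, f in zip(weights, vals))
            for vals in zip(*member_series)]

# Member A was perfect in training, so it dominates the weighted combination.
train_obs = [1.0, 2.0, 3.0]
train_series = [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]
combined = weighted_average([[10.0, 20.0], [30.0, 40.0]], train_series, train_obs)
sma = simple_average([[10.0, 20.0], [30.0, 40.0]])
```

Bias-correcting each member against the training observations before combining is the extra step that the study found separates the more sophisticated schemes from plain averaging.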
The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1918455D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1918455D"><span>Synchronized Trajectories in a Climate "Supermodel"</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Duane, Gregory; Schevenhoven, Francine; Selten, Frank</p> <p>2017-04-01</p> <p>Differences in climate projections among state-of-the-art models can be resolved by connecting the models in run-time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. 
Since it is clearly established that averaging model outputs typically results in improvement as compared to any individual model output, averaged re-initializations at typical analysis time intervals also seem appropriate. The resulting "supermodel" is more like a single model than it is like an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest, like probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28544272','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28544272"><span>Ensemble variant interpretation methods to predict enzyme activity and assign pathogenicity in the CAGI4 NAGLU (Human N-acetyl-glucosaminidase) and UBE2I (Human SUMO-ligase) challenges.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yin, Yizhou; Kundu, Kunal; Pal, Lipika R; Moult, John</p> <p>2017-09-01</p> <p>CAGI (Critical Assessment of Genome Interpretation) conducts community experiments to determine the state of the art in relating genotype to phenotype. 
Here, we report results obtained using newly developed ensemble methods to address two CAGI4 challenges: enzyme activity for population missense variants found in NAGLU (Human N-acetyl-glucosaminidase) and random missense mutations in Human UBE2I (Human SUMO E2 ligase), assayed in a high-throughput competitive yeast complementation procedure. The ensemble methods are effective, ranked second for SUMO-ligase and third for NAGLU according to the CAGI independent assessors. However, in common with other methods used in CAGI, there are large discrepancies between predicted and experimental activities for a subset of variants. Analysis of the structural context provides some insight into these. Post-challenge analysis shows that the ensemble methods are also effective at assigning pathogenicity for the NAGLU variants. In the clinic, providing an estimate of the reliability of pathogenic assignments is key. We have also used the NAGLU dataset to show that ensemble methods have considerable potential for this task, and are already reliable enough for use with a subset of mutations. 
© 2017 Wiley Periodicals, Inc.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19930062803&hterms=Chimera&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3DChimera','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19930062803&hterms=Chimera&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3DChimera"><span>Effects of bleed-hole geometry and plenum pressure on three-dimensional shock-wave/boundary-layer/bleed interactions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chyu, Wei J.; Rimlinger, Mark J.; Shih, Tom I.-P.</p> <p>1993-01-01</p> <p>A numerical study was performed to investigate 3D shock-wave/boundary-layer interactions on a flat plate with bleed through one or more circular holes that vent into a plenum. This study was focused on how bleed-hole geometry and pressure ratio across bleed holes affect the bleed rate and the physics of the flow in the vicinity of the holes. The aspects of the bleed-hole geometry investigated include angle of bleed hole and the number of bleed holes. The plenum/freestream pressure ratios investigated range from 0.3 to 1.7. This study is based on the ensemble-averaged, 'full compressible' Navier-Stokes (N-S) equations closed by the Baldwin-Lomax algebraic turbulence model. 
Solutions to the ensemble-averaged N-S equations were obtained by an implicit finite-volume method using the partially-split, two-factored algorithm of Steger on an overlapping Chimera grid.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/989792','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/989792"><span>Optimized nested Markov chain Monte Carlo sampling: theory</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D</p> <p>2009-01-01</p> <p>Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system, we maximize the average acceptance probability of composite moves, significantly lengthening the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. 
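The nested scheme described above can be sketched in one dimension: an inner Metropolis chain samples a cheap reference potential, and the whole segment is then accepted or rejected against the full potential with a modified criterion that cancels the reference-energy contribution already sampled. The toy potentials below and the canonical (rather than isothermal-isobaric) setting are illustrative assumptions.

```python
import math
import random

rng = random.Random(3)
BETA = 1.0

def e_ref(x):  return 0.5 * x * x                     # cheap reference potential
def e_full(x): return 0.5 * x * x + 0.1 * x ** 4      # "full" potential (toy)

def inner_chain(x, nsteps=20):
    """Ordinary Metropolis walk on the reference potential only."""
    for _ in range(nsteps):
        y = x + rng.uniform(-0.5, 0.5)
        if rng.random() < math.exp(min(0.0, -BETA * (e_ref(y) - e_ref(x)))):
            x = y
    return x

def nested_step(x):
    """Composite move: run the inner chain, then apply the modified
    Metropolis criterion exp(-beta * [dE_full - dE_ref]) to the endpoints,
    which removes the reference contribution already sampled."""
    y = inner_chain(x)
    delta = (e_full(y) - e_full(x)) - (e_ref(y) - e_ref(x))
    if rng.random() < math.exp(min(0.0, -BETA * delta)):
        return y, True
    return x, False

x, accepts, samples = 0.0, 0, []
for _ in range(2000):
    x, ok = nested_step(x)
    accepts += ok
    samples.append(x)
acceptance = accepts / 2000
```

Because the full energy is evaluated only at the endpoints of each 20-step inner segment, decorrelation per expensive energy evaluation is much higher than for a plain full-potential chain, which is the efficiency argument of the paper.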
Implications for ab initio or density functional theory (DFT) treatment are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29363314','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29363314"><span>Life under the Microscope: Single-Molecule Fluorescence Highlights the RNA World.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ray, Sujay; Widom, Julia R; Walter, Nils G</p> <p>2018-04-25</p> <p>The emergence of single-molecule (SM) fluorescence techniques has opened up a vast new toolbox for exploring the molecular basis of life. The ability to monitor individual biomolecules in real time enables complex, dynamic folding pathways to be interrogated without the averaging effect of ensemble measurements. In parallel, modern biology has been revolutionized by our emerging understanding of the many functions of RNA. In this comprehensive review, we survey SM fluorescence approaches and discuss how the application of these tools to RNA and RNA-containing macromolecular complexes in vitro has yielded significant insights into the underlying biology. Topics covered include the three-dimensional folding landscapes of a plethora of isolated RNA molecules, their assembly and interactions in RNA-protein complexes, and the relation of these properties to their biological functions.
In all of these examples, the use of SM fluorescence methods has revealed critical information beyond the reach of ensemble averages.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22479573-almost-sure-convergence-quantum-spin-glasses','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22479573-almost-sure-convergence-quantum-spin-glasses"><span>Almost sure convergence in quantum spin glasses</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Buzinski, David, E-mail: dab197@case.edu; Meckes, Elizabeth, E-mail: elizabeth.meckes@case.edu</p> <p>2015-12-15</p> <p>Recently, Keating, Linden, and Wells [Markov Processes Relat. Fields 21(3), 537-555 (2015)] showed that the density of states measure of a nearest-neighbor quantum spin glass model is approximately Gaussian when the number of particles is large. The density of states measure is the ensemble average of the empirical spectral measure of a random matrix; in this paper, we use concentration of measure and entropy techniques together with the result of Keating, Linden, and Wells to show that in fact the empirical spectral measure of such a random matrix is almost surely approximately Gaussian itself with no ensemble averaging. We alsomore » extend this result to a spherical quantum spin glass model and to the more general coupling geometries investigated by Erdős and Schröder [Math. Phys., Anal. Geom. 
17(3-4), 441–464 (2014)].</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JSMTE..11.3401T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JSMTE..11.3401T"><span>Typical performance of approximation algorithms for NP-hard problems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Takabe, Satoshi; Hukushima, Koji</p> <p>2016-11-01</p> <p>Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds.
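Of the three algorithms examined, the leaf-removal algorithm is the simplest to state for vertex cover: repeatedly take a degree-1 vertex and put its unique neighbor into the cover. A minimal sketch (the function name and edge-list representation are our own):

```python
from collections import defaultdict

def leaf_removal_cover(edges):
    """Leaf-removal sketch for minimum vertex cover on an edge list.

    While some vertex u has exactly one neighbor v, put v in the cover
    and delete both; what survives is the leafless residual core.
    Returns (cover, residual_core_edges).
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cover = set()
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if len(adj[u]) == 1:           # u is a leaf
                (v,) = adj[u]              # its unique neighbor
                cover.add(v)               # v covers the edge (u, v)
                for w in list(adj[v]):     # remove v and all its edges
                    adj[w].discard(v)
                del adj[v]
                adj.pop(u, None)
                changed = True
                break
    core = [(u, v) for u in adj for v in adj[u] if u < v]
    return cover, core
```

When the residual core comes back empty (as it typically does below the threshold average degree discussed above), leaf removal alone has produced the cover; on a graph with an extensive leafless core, such as a triangle, it makes no progress and another method must take over.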
In addition, we provide some conditions for classification of the graph ensembles and explicitly demonstrate some examples of the differences in thresholds.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012AdSR....8..115K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012AdSR....8..115K"><span>On the skill of various ensemble spread estimators for probabilistic short range wind forecasting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kann, A.</p> <p>2012-05-01</p> <p>A variety of applications, ranging from civil protection associated with severe weather to economic interests, are heavily dependent on meteorological information. For example, precise planning of the energy supply with a high share of renewables requires detailed meteorological information at high temporal and spatial resolution. With respect to wind power, detailed analyses and forecasts of wind speed are of crucial interest for energy management. Although the applicability and the current skill of state-of-the-art probabilistic short range forecasts have increased during the last years, ensemble systems still show systematic deficiencies which limit their practical use. This paper presents methods to improve the ensemble skill of 10-m wind speed forecasts by combining deterministic information from a nowcasting system on very high horizontal resolution with uncertainty estimates from a limited area ensemble system.
It is shown for a one month validation period that a statistical post-processing procedure (a modified non-homogeneous Gaussian regression) adds further skill to the probabilistic forecasts, especially beyond the nowcasting range after +6 h.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25516108','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25516108"><span>Differences in single and aggregated nanoparticle plasmon spectroscopy.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Singh, Pushkar; Deckert-Gaudig, Tanja; Schneidewind, Henrik; Kirsch, Konstantin; van Schrojenstein Lantman, Evelien M; Weckhuysen, Bert M; Deckert, Volker</p> <p>2015-02-07</p> <p>Vibrational spectroscopy usually provides structural information averaged over many molecules. We report a larger peak position variation and reproducibly smaller FWHM of TERS spectra compared to SERS spectra indicating that the number of molecules excited in a TERS experiment is extremely low. Thus, orientational averaging effects are suppressed and micro ensembles are investigated. 
This is shown for a thiophenol molecule adsorbed on Au nanoplates and nanoparticles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017NHESS..17.1795P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017NHESS..17.1795P"><span>Revisiting the synoptic-scale predictability of severe European winter storms using ECMWF ensemble reforecasts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pantillon, Florian; Knippertz, Peter; Corsmeier, Ulrich</p> <p>2017-10-01</p> <p>New insights into the synoptic-scale predictability of 25 severe European winter storms of the 1995-2015 period are obtained using the homogeneous ensemble reforecast dataset from the European Centre for Medium-Range Weather Forecasts. The predictability of the storms is assessed with different metrics including (a) the track and intensity to investigate the storms' dynamics and (b) the Storm Severity Index to estimate the impact of the associated wind gusts. The storms are well predicted by the whole ensemble up to 2-4 days ahead. At longer lead times, the number of members predicting the observed storms decreases and the ensemble average is not clearly defined for the track and intensity. The Extreme Forecast Index and Shift of Tails are therefore computed from the deviation of the ensemble from the model climate. Based on these indices, the model has some skill in forecasting the area covered by extreme wind gusts up to 10 days, which indicates a clear potential for early warnings. However, large variability is found between the individual storms. The poor predictability of outliers appears related to their physical characteristics such as explosive intensification or small size. 
Longer datasets with more cases would be needed to further substantiate these points.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.A33B2345G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.A33B2345G"><span>Single Aerosol Particle Studies Using Optical Trapping Raman And Cavity Ringdown Spectroscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gong, Z.; Wang, C.; Pan, Y. L.; Videen, G.</p> <p>2017-12-01</p> <p>Due to the physical and chemical complexity of aerosol particles and the interdisciplinary nature of aerosol science that involves physics, chemistry, and biology, our knowledge of aerosol particles is rather incomplete; our current understanding of aerosol particles is limited by averaged (over size, composition, shape, and orientation) and/or ensemble (over time, size, and multi-particles) measurements. Physically, single aerosol particles are the fundamental units of any large aerosol ensembles. Chemically, single aerosol particles carry individual chemical components (properties and constituents) in particle ensemble processes. Therefore, the study of single aerosol particles can bridge the gap between aerosol ensembles and bulk/surface properties and provide a hierarchical progression from a simple benchmark single-component system to a mixed-phase multicomponent system. A single aerosol particle can be an effective reactor to study heterogeneous surface chemistry in multiple phases. Latest technological advances provide exciting new opportunities to study single aerosol particles and to further develop single aerosol particle instrumentation. 
We present updates on our recent studies of single aerosol particles optically trapped in air using the optical-trapping Raman and cavity ringdown spectroscopy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2000Natur.405..567L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2000Natur.405..567L"><span>Cortical ensemble activity increasingly predicts behaviour outcomes during learning of a motor task</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Laubach, Mark; Wessberg, Johan; Nicolelis, Miguel A. L.</p> <p>2000-06-01</p> <p>When an animal learns to make movements in response to different stimuli, changes in activity in the motor cortex seem to accompany and underlie this learning. The precise nature of modifications in cortical motor areas during the initial stages of motor learning, however, is largely unknown. Here we address this issue by chronically recording from neuronal ensembles located in the rat motor cortex, throughout the period required for rats to learn a reaction-time task. Motor learning was demonstrated by a decrease in the variance of the rats' reaction times and an increase in the time the animals were able to wait for a trigger stimulus. These behavioural changes were correlated with a significant increase in our ability to predict the correct or incorrect outcome of single trials based on three measures of neuronal ensemble activity: average firing rate, temporal patterns of firing, and correlated firing. This increase in prediction indicates that an association between sensory cues and movement emerged in the motor cortex as the task was learned. 
Such modifications in cortical ensemble activity may be critical for the initial learning of motor tasks.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20000093260','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20000093260"><span>Decimated Input Ensembles for Improved Generalization</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Tumer, Kagan; Oza, Nikunj C.; Norvig, Peter (Technical Monitor)</p> <p>1999-01-01</p> <p>Recently, many researchers have demonstrated that using classifier ensembles (e.g., averaging the outputs of multiple classifiers before reaching a classification decision) leads to improved performance for many difficult generalization problems. However, in many domains there are serious impediments to such "turnkey" classification accuracy improvements. Most notable among these is the deleterious effect of highly correlated classifiers on the ensemble performance. One particular solution to this problem is generating "new" training sets by sampling the original one. However, with a finite number of patterns, this causes a reduction in the training patterns each classifier sees, often resulting in considerably worsened generalization performance (particularly for high dimensional data domains) for each individual classifier. Generally, this drop in individual classifier accuracy more than offsets any potential gains due to combining, unless diversity among classifiers is actively promoted.
In this work, we introduce a method that: (1) reduces the correlation among the classifiers; (2) reduces the dimensionality of the data, thus lessening the impact of the 'curse of dimensionality'; and (3) improves the classification performance of the ensemble.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4103595','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4103595"><span>CABS-flex predictions of protein flexibility compared with NMR ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian</p> <p>2014-01-01</p> <p>Motivation: Identification of flexible regions of protein structures is important for understanding of their biological functions. Recently, we have developed a fast approach for predicting protein structure fluctuations from a single protein model: the CABS-flex. CABS-flex was shown to be an efficient alternative to conventional all-atom molecular dynamics (MD). In this work, we evaluate CABS-flex and MD predictions by comparison with protein structural variations within NMR ensembles. Results: Based on a benchmark set of 140 proteins, we show that the relative fluctuations of protein residues obtained from CABS-flex are well correlated to those of NMR ensembles. On average, this correlation is stronger than that between MD and NMR ensembles. In conclusion, CABS-flex is useful and complementary to MD in predicting protein regions that undergo conformational changes as well as the extent of such changes. Availability and implementation: The CABS-flex is freely available to all users at http://biocomp.chem.uw.edu.pl/CABSflex. 
Contact: sekmi@chem.uw.edu.pl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24735558</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24735558','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24735558"><span>CABS-flex predictions of protein flexibility compared with NMR ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian</p> <p>2014-08-01</p> <p>Identification of flexible regions of protein structures is important for understanding of their biological functions. Recently, we have developed a fast approach for predicting protein structure fluctuations from a single protein model: the CABS-flex. CABS-flex was shown to be an efficient alternative to conventional all-atom molecular dynamics (MD). In this work, we evaluate CABS-flex and MD predictions by comparison with protein structural variations within NMR ensembles. Based on a benchmark set of 140 proteins, we show that the relative fluctuations of protein residues obtained from CABS-flex are well correlated to those of NMR ensembles. On average, this correlation is stronger than that between MD and NMR ensembles. In conclusion, CABS-flex is useful and complementary to MD in predicting protein regions that undergo conformational changes as well as the extent of such changes. The CABS-flex is freely available to all users at http://biocomp.chem.uw.edu.pl/CABSflex. sekmi@chem.uw.edu.pl Supplementary data are available at Bioinformatics online. © The Author 2014. 
Published by Oxford University Press.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29788510','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29788510"><span>Predicting drug-induced liver injury using ensemble learning methods and molecular fingerprints.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ai, Haixin; Chen, Wen; Zhang, Li; Huang, Liangchao; Yin, Zimo; Hu, Huan; Zhao, Qi; Zhao, Jian; Liu, Hongsheng</p> <p>2018-05-21</p> <p>Drug-induced liver injury (DILI) is a major safety concern in the drug-development process, and various methods have been proposed to predict the hepatotoxicity of compounds during the early stages of drug trials. In this study, we developed an ensemble model using three machine learning algorithms and 12 molecular fingerprints from a dataset containing 1,241 diverse compounds. The ensemble model achieved an average accuracy of 71.1±2.6%, sensitivity of 79.9±3.6%, specificity of 60.3±4.8%, and area under the receiver operating characteristic curve (AUC) of 0.764±0.026 in five-fold cross-validation and an accuracy of 84.3%, sensitivity of 86.9%, specificity of 75.4%, and AUC of 0.904 in an external validation dataset of 286 compounds collected from the Liver Toxicity Knowledge Base (LTKB). Compared with previous methods, the ensemble model achieved relatively high accuracy and sensitivity. We also identified several substructures related to DILI. 
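As a minimal illustration of how such an ensemble combines its base learners, here is a generic majority-vote combiner for binary predictions. This is a simplification we supply for illustration; the paper's model also varies the molecular fingerprints and base algorithms, which is not shown:

```python
def majority_vote(predictions):
    """Combine binary predictions by majority vote.

    predictions: list of per-model prediction lists (rows = models,
    columns = samples, values 0 or 1). Returns one combined prediction
    per sample; a strict majority of 1-votes yields 1.
    """
    n_models = len(predictions)
    n_samples = len(predictions[0])
    combined = []
    for j in range(n_samples):
        votes = sum(predictions[i][j] for i in range(n_models))
        combined.append(1 if votes * 2 > n_models else 0)
    return combined
```

With three base models, a compound is flagged as hepatotoxic only when at least two of the three classifiers agree, which is one common way such ensembles trade individual-model noise for stability.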
In addition, we provide a web server offering access to our models (http://ccsipb.lnu.edu.cn/toxicity/HepatoPred-EL/).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1421971-ensemble-averaged-structurefunction-relationship-nanocrystals-effective-superparamagnetic-fe-clusters-catalytically-active-pt-skin','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1421971-ensemble-averaged-structurefunction-relationship-nanocrystals-effective-superparamagnetic-fe-clusters-catalytically-active-pt-skin"><span>Ensemble averaged structure–function relationship for nanocrystals: effective superparamagnetic Fe clusters with catalytically active Pt skin</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Petkov, Valeri; Prasai, Binay; Shastri, Sarvjit</p> <p>2017-09-12</p> <p>Practical applications require the production and usage of metallic nanocrystals (NCs) in large ensembles. Moreover, due to their cluster-bulk solid duality, metallic NCs exhibit a large degree of structural diversity. This poses the question as to what atomic-scale basis is to be used when the structure–function relationship for metallic NCs is to be quantified precisely. In this paper, we address the question by studying bi-functional Fe core-Pt skin type NCs optimized for practical applications. In particular, the cluster-like Fe core and skin-like Pt surface of the NCs exhibit superparamagnetic properties and a superb catalytic activity for the oxygen reduction reaction, respectively. We determine the atomic-scale structure of the NCs by non-traditional resonant high-energy X-ray diffraction coupled to atomic pair distribution function analysis. Using the experimental structure data we explain the observed magnetic and catalytic behavior of the NCs in a quantitative manner.
Lastly, we demonstrate that NC ensemble-averaged 3D positions of atoms obtained by advanced X-ray scattering techniques provide a sound basis for both establishing and quantifying the structure–function relationship for the increasingly complex metallic NCs explored for practical applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS1008a2019A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS1008a2019A"><span>Ensemble averaging and stacking of ARIMA and GSTAR model for rainfall forecasting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Anggraeni, D.; Kurnia, I. F.; Hadi, A. F.</p> <p>2018-04-01</p> <p>Unpredictable rainfall changes can affect human activities, such as agriculture, aviation, and shipping, which depend on weather forecasts. Therefore, we need forecasting tools with high accuracy for predicting future rainfall. This research focuses on local forecasting of the rainfall at Jember from 2005 until 2016, from 77 rainfall stations. Rainfall at a station is related not only to its own previous occurrences but also to those at other stations; this is called the spatial effect. The aim of this research is to apply the GSTAR model to determine whether there are spatial-effect correlations between stations. The GSTAR model is an extension of the space-time model that combines time-related effects, the effects of other locations (stations) on a time series, and the location itself. The GSTAR model will also be compared to the ARIMA model, which completely ignores the independent variables. The forecast values of the ARIMA and GSTAR models are then combined using the ensemble forecasting technique.
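The two combination strategies named in the title can be sketched as follows. The stacking variant here uses simple least-squares weights with no intercept, an assumption for illustration rather than the authors' exact formulation:

```python
def average_combine(f1, f2):
    """Equal-weight ensemble average of two forecast series."""
    return [(a + b) / 2 for a, b in zip(f1, f2)]

def stack_combine(f1, f2, y):
    """Stacking sketch: least-squares weights (no intercept) fitted
    against an observed series y, via the 2x2 normal equations."""
    a11 = sum(a * a for a in f1)
    a12 = sum(a * b for a, b in zip(f1, f2))
    a22 = sum(b * b for b in f2)
    b1 = sum(a * t for a, t in zip(f1, y))
    b2 = sum(b * t for b, t in zip(f2, y))
    det = a11 * a22 - a12 * a12
    w1 = (b1 * a22 - b2 * a12) / det
    w2 = (a11 * b2 - a12 * b1) / det
    return [w1 * a + w2 * b for a, b in zip(f1, f2)]

def rmse(pred, y):
    """Root Mean Square Error between forecasts and observations."""
    return (sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)) ** 0.5
```

On the series the weights are fitted to, the stacked combination can never have a larger RMSE than the plain average, since equal weights are one of the candidates the least-squares fit considers; on held-out data either combiner may win, which is why the paper compares them empirically.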
The averaging and stacking methods of ensemble forecasting used here provide the best model, with higher accuracy and a smaller RMSE (Root Mean Square Error) value. Finally, with the best model we can offer better local rainfall forecasting for Jember in the future.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012EJASP2012...14Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012EJASP2012...14Y"><span>A framework of multitemplate ensemble for fingerprint verification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yin, Yilong; Ning, Yanbin;
Ren, Chunxiao; Liu, Li</p> <p>2012-12-01</p> <p>How to improve the performance of an automatic fingerprint verification system (AFVS) is a long-standing challenge in the biometric verification field. Recently, it has become popular to improve AFVS performance using ensemble learning approaches that fuse related fingerprint information. In this article, we propose a novel framework for fingerprint verification based on the multitemplate ensemble method. This framework consists of three stages. In the first, enrollment stage, we adopt an effective template selection method to select the fingerprints that best represent a finger; a polyhedron is then created from the matching results of the multiple template fingerprints, and a virtual centroid of the polyhedron is computed. In the second, verification stage, we measure the distance between the centroid of the polyhedron and a query image. In the final stage, a fusion rule is used to choose a proper distance from a distance set. Experimental results on the FVC2004 database demonstrate the improved effectiveness of the new framework for fingerprint verification.
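One plausible reading of the enrollment-stage construction is to embed each template as its vector of match scores against all templates and take the per-coordinate mean as the virtual centroid; the sketch below follows that reading and is not the authors' exact geometry:

```python
def template_centroid(pairwise_scores):
    """Virtual-centroid sketch: template i is embedded as its row of
    match scores against all templates; the centroid is the mean row."""
    n = len(pairwise_scores)
    return [sum(row[j] for row in pairwise_scores) / n for j in range(n)]

def query_distance(centroid, query_scores):
    """Euclidean distance from a query's template-score vector
    to the virtual centroid of the enrolled templates."""
    return sum((c - q) ** 2 for c, q in zip(centroid, query_scores)) ** 0.5
```

A genuine query, which matches all enrolled templates well, lands near the centroid; an impostor's score vector lies far from it, so thresholding this distance performs the verification.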
With a minutiae-based matching method, the average EER of four databases in FVC2004 drops from 10.85 to 0.88, and with a ridge-based matching method, the average EER of these four databases also decreases from 14.58 to 2.51.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28497136','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28497136"><span>Local chemical potential, local hardness, and dual descriptors in temperature dependent chemical reactivity theory.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Franco-Pérez, Marco; Ayers, Paul W; Gázquez, José L; Vela, Alberto</p> <p>2017-05-31</p> <p>In this work we establish a new temperature dependent procedure within the grand canonical ensemble, to avoid the Dirac delta function exhibited by some of the second order chemical reactivity descriptors based on density functional theory, at a temperature of 0 K. Through the definition of a local chemical potential designed to integrate to the global temperature dependent electronic chemical potential, the local chemical hardness is expressed in terms of the derivative of this local chemical potential with respect to the average number of electrons. For the three-ground-states ensemble model, this local hardness contains a term that is equal to the one intuitively proposed by Meneses, Tiznado, Contreras and Fuentealba, which integrates to the global hardness given by the difference in the first ionization potential, I, and the electron affinity, A, at any temperature. However, in the present approach one finds an additional temperature-dependent term that introduces changes at the local level and integrates to zero. 
Additionally, τ-hard and τ-soft dual descriptors are derived, given by the dual descriptor multiplied by the global hardness and the global softness, respectively. Since all these reactivity indices are given by expressions composed of terms that correspond to products of the global properties multiplied by the electrophilic or nucleophilic Fukui functions, they may be useful for studying and comparing equivalent sites in different chemical environments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20030068046','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20030068046"><span>Resonance Effects in the NASA Transonic Flutter Cascade Facility</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lepicovsky, J.; Capece, V. R.; Ford, C. T.</p> <p>2003-01-01</p> <p>Investigations of unsteady pressure loadings on the blades of fans operating near the stall flutter boundary are carried out under simulated conditions in the NASA Transonic Flutter Cascade facility (TFC). It has been observed that for inlet Mach numbers of about 0.8, the cascade flowfield exhibits intense low-frequency pressure oscillations. The origins of these oscillations were not clear. It was speculated that this behavior either was caused by instabilities in the blade's separated flow zone or was a tunnel resonance phenomenon. It has now been determined that the strong low-frequency oscillations, observed in the TFC facility, are not a cascade phenomenon contributing to blade flutter, but that they are solely caused by the tunnel resonance characteristics. Most likely, the self-induced oscillations originate in the system of exit duct resonators.
Moreover, the self-induced oscillations can be significantly suppressed for a narrow range of inlet Mach numbers by tuning one of the resonators. A considerable amount of flutter simulation data has been acquired in this facility to date, and therefore it is of interest to know how much this tunnel self-induced flow oscillation influences the experimental data at high subsonic Mach numbers, since this facility is being used to simulate flutter in transonic fans. In short, can this body of experimental data still be used reliably to verify computer codes for blade flutter and blade life predictions? To answer this question, a study on resonance effects in the NASA TFC facility was carried out. The results, based on spectral and ensemble averaging analysis of the cascade data, showed that the interaction between self-induced oscillations and forced blade motion oscillations is very weak and can generally be neglected. The forced motion data acquired with the mistuned tunnel, when strong self-induced oscillations were present, can be used as reliable forced-motion pressure data, provided that they are extracted from raw data sets by an ensemble averaging procedure.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27034973','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27034973"><span>An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ranganayaki, V; Deepa, S N</p> <p>2016-01-01</p> <p>Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications.
The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems, which this paper aims to avoid. The number of hidden neurons is selected employing 102 criteria; these evolved criteria are verified against the computed error values. The proposed criteria for fixing hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model minimizes the error and enhances the accuracy. 
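The combination step at the heart of such an ensemble forecaster is a plain average of the member predictions. A minimal numpy sketch with synthetic data and stand-in "members" in place of the MLP/Madaline/BPN/PNN networks (the bias and noise characteristics are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
true_speed = rng.uniform(3.0, 12.0, size=500)   # synthetic wind speeds, m/s

# Stand-ins for four trained networks: truth plus member-specific bias and
# noise (hypothetical error characteristics, weakly correlated by design).
forecasts = np.stack([
    true_speed + rng.normal( 0.4, 1.0, 500),
    true_speed + rng.normal(-0.3, 1.2, 500),
    true_speed + rng.normal( 0.2, 0.9, 500),
    true_speed + rng.normal(-0.1, 1.1, 500),
])

ensemble = forecasts.mean(axis=0)               # the averaging step

mae_members = np.abs(forecasts - true_speed).mean(axis=1)
mae_ensemble = np.abs(ensemble - true_speed).mean()
# When member errors are weakly correlated, the average typically beats
# even the best individual member.
```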
The computed results demonstrate the effectiveness of the proposed ensemble neural network (ENN) model, with respect to the considered error factors, in comparison with the earlier models available in the literature.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28716511','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28716511"><span>An ensemble predictive modeling framework for breast cancer classification.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nagarajan, Radhakrishnan; Upreti, Meenakshi</p> <p>2017-12-01</p> <p>Molecular changes often precede clinical presentation of diseases and can be useful surrogates with potential to assist in informed clinical decision making. Recent studies have demonstrated the usefulness of modeling approaches such as classification that can predict clinical outcomes from molecular expression profiles. While useful, a majority of these approaches implicitly use all molecular markers as features in the classification process, often resulting in a sparse, high-dimensional projection of the samples with dimensionality comparable to the sample size. In this study, a variant of the recently proposed ensemble classification approach is used for predicting good- and poor-prognosis breast cancer samples from their molecular expression profiles. In contrast to traditional single and ensemble classifiers, the proposed approach uses multiple base classifiers with varying feature sets obtained from a two-dimensional projection of the samples, in conjunction with a majority-voting strategy for predicting the class labels. In contrast to our earlier implementation, base classifiers in the ensembles are chosen for maximal sensitivity and minimal redundancy by retaining only those with low average cosine distance. 
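The majority-voting strategy used by the ensemble classifier just described reduces to a few lines; here is a generic sketch with hypothetical base-classifier outputs (stand-ins, not the study's models):

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: list of per-classifier label lists, one label per sample.
    Returns the most common label for each sample across classifiers."""
    n_samples = len(predictions[0])
    voted = []
    for i in range(n_samples):
        labels = [p[i] for p in predictions]
        voted.append(Counter(labels).most_common(1)[0][0])
    return voted

# Three hypothetical base classifiers on five samples ('good'/'poor' prognosis).
clf_outputs = [
    ['good', 'poor', 'good', 'poor', 'good'],
    ['good', 'good', 'good', 'poor', 'poor'],
    ['poor', 'poor', 'good', 'good', 'good'],
]
print(majority_vote(clf_outputs))  # ['good', 'poor', 'good', 'poor', 'good']
```

With an odd number of base classifiers over two classes there are no ties; otherwise a tie-breaking rule must be chosen.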
The resulting ensemble sets are subsequently modeled as undirected graphs. Performance of four different classification algorithms is shown to be better within the proposed ensemble framework than when they are used as traditional single-classifier systems. Significance of a subset of genes with high-degree centrality in the network abstractions across the poor-prognosis samples is also discussed. Copyright © 2017 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013WRR....49.6744H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013WRR....49.6744H"><span>Simultaneous calibration of ensemble river flow predictions over an entire range of lead times</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hemri, S.; Fundel, F.; Zappa, M.</p> <p>2013-10-01</p> <p>Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work Bayesian model averaging (BMA) is applied to statistically postprocess ensemble runoff raw forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. 
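The univariate BMA predictive distribution described above is a weighted mixture of normal densities centred on the member forecasts. A schematic sketch with invented weights, forecasts, and spread (the real method also estimates these by maximum likelihood and works on Box-Cox-transformed runoff):

```python
import numpy as np

def bma_pdf(y, forecasts, weights, sigma):
    """BMA predictive density: a mixture of normals, one component per
    ensemble member, centred on each member's (bias-corrected) forecast."""
    y = np.asarray(y, dtype=float)
    dens = np.zeros_like(y)
    for f, w in zip(forecasts, weights):
        dens += w * np.exp(-0.5 * ((y - f) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return dens

# Hypothetical runoff member forecasts (m^3/s) and BMA weights (sum to 1).
forecasts = [12.0, 14.5, 13.2]
weights = [0.5, 0.2, 0.3]
sigma = 1.5

grid = np.linspace(5.0, 22.0, 1000)
pdf = bma_pdf(grid, forecasts, weights, sigma)
mass = pdf.sum() * (grid[1] - grid[0])   # numeric check: integrates to ~1
```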
Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well-calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is increased reliability when the forecast system changes due to model availability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMGC21E0980P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMGC21E0980P"><span>An 'Observational Large Ensemble' to compare observed and modeled temperature trend uncertainty due to internal variability.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Poppick, A. N.; McKinnon, K. A.; Dunn-Sigouin, E.; Deser, C.</p> <p>2017-12-01</p> <p>Initial condition climate model ensembles suggest that regional temperature trends can be highly variable on decadal timescales due to characteristics of internal climate variability. Accounting for trend uncertainty due to internal variability is therefore necessary to contextualize recent observed temperature changes. However, while the variability of trends in a climate model ensemble can be evaluated directly (as the spread across ensemble members), internal variability simulated by a climate model may be inconsistent with observations. 
Observation-based methods for assessing the role of internal variability on trend uncertainty are therefore required. Here, we use a statistical resampling approach to assess trend uncertainty due to internal variability in historical 50-year (1966-2015) winter near-surface air temperature trends over North America. We compare this estimate of trend uncertainty to simulated trend variability in the NCAR CESM1 Large Ensemble (LENS), finding that uncertainty in wintertime temperature trends over North America due to internal variability is overestimated by CESM1, by about 32% on average. Our observation-based resampling approach is combined with the forced signal from LENS to produce an 'Observational Large Ensemble' (OLENS). The members of OLENS indicate a range of spatially coherent fields of temperature trends resulting from different sequences of internal variability consistent with observations. The smaller trend variability in OLENS suggests that uncertainty in the historical climate change signal in observations due to internal variability is less than suggested by LENS.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4791511','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4791511"><span>An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ranganayaki, V.; Deepa, S. N.</p> <p>2016-01-01</p> <p>Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. 
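A toy version of the statistical resampling idea in the OLENS record above: fit a trend, block-resample the residuals to preserve short-range autocorrelation, and read the spread of refitted trends as the uncertainty attributable to internal variability. All numbers here are synthetic, and the actual OLENS construction is substantially more elaborate:

```python
import numpy as np

rng = np.random.default_rng(2)

years = np.arange(50)                      # a 50-year record
truth_trend = 0.03                         # deg C / yr, hypothetical forced signal
temps = truth_trend * years + rng.normal(0.0, 0.5, years.size)

# Detrend to obtain residuals (the "internal variability" sample).
coeffs = np.polyfit(years, temps, 1)
residuals = temps - np.polyval(coeffs, years)

def block_resample(res, block=5):
    """Resample residuals in blocks to retain short-range autocorrelation."""
    n_blocks = res.size // block
    starts = rng.integers(0, res.size - block + 1, size=n_blocks)
    return np.concatenate([res[s:s + block] for s in starts])[:res.size]

# Spread of trends across synthetic realizations = trend uncertainty.
slopes = []
for _ in range(1000):
    synthetic = truth_trend * years + block_resample(residuals)
    slopes.append(np.polyfit(years, synthetic, 1)[0])
spread = np.std(slopes)
```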
The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems, which this paper aims to avoid. The number of hidden neurons is selected employing 102 criteria; these evolved criteria are verified against the computed error values. The proposed criteria for fixing hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model minimizes the error and enhances the accuracy. The computed results demonstrate the effectiveness of the proposed ensemble neural network (ENN) model, with respect to the considered error factors, in comparison with the earlier models available in the literature. 
PMID:27034973</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21197855','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21197855"><span>Torso undergarments: their merit for clothed and armored individuals in hot-dry conditions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Van den Heuvel, Anne M J; Kerry, Pete; Van der Velde, Jeroen H P M; Patterson, Mark J; Taylor, Nigel A S</p> <p>2010-12-01</p> <p>The aim of this study was to evaluate how the textile composition of torso undergarment fabrics may impact upon thermal strain, moisture transfer, and the thermal and clothing comfort of fully clothed, armored individuals working in a hot-dry environment (41.2 degrees C and 29.8% relative humidity). Five undergarment configurations were assessed using eight men who walked for 120 min (4 km x h(-1)), then alternated running (2 min at 10 km x h(-1)) and walking (2 min at 4 km x h(-1)) for 20 min. Trials differed only in the torso undergarments worn: no t-shirt (Ensemble A); 100% cotton t-shirt (Ensemble B); 100% woolen t-shirt (Ensemble C); synthetic t-shirt (Ensemble D: nylon, polyethylene, elastane); hybrid shirt (Ensemble E). Thermal and cardiovascular strain progressively increased throughout each trial, with the average terminal core temperature being 38.5 degrees C and heart rate peaking at 170 bpm across all trials. However, no significant between-trial separations were evident for core or mean skin temperatures, or for heart rate, sweat production, evaporation, the within-ensemble water vapor pressures, or for thermal or clothing discomfort. Thus, under these conditions, neither the t-shirt textile compositions, nor the presence or absence of an undergarment, offered any significant thermal, central cardiac, or comfort advantages. 
Furthermore, there was no evidence that any of these fabrics created a significantly drier microclimate next to the skin.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFMNG23B..03D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFMNG23B..03D"><span>Interactive vs. Non-Interactive Multi-Model Ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Duane, G. S.</p> <p>2013-12-01</p> <p>If the members of an ensemble of different models are allowed to interact with one another in run time, predictive skill can be improved as compared to that of any individual model or any average of individual model outputs. Inter-model connections in such an interactive ensemble can be trained, using historical data, so that the resulting 'supermodel' synchronizes with reality when used in weather-prediction mode, where the individual models perform data assimilation from each other (with trainable inter-model 'observation error') as well as from real observations. In climate-projection mode, parameters of the individual models are changed, as might occur from an increase in GHG levels, and one obtains relevant statistical properties of the new supermodel attractor. In simple cases, it has been shown that training of the inter-model connections with the old parameter values gives a supermodel that is still predictive when the parameter values are changed. Here we inquire as to the circumstances under which supermodel performance can be expected to exceed that of the customary weighted average of model outputs. We consider a supermodel formed from quasigeostrophic (QG) channel models with different forcing coefficients, and introduce an effective training scheme for the inter-model connections. 
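The run-time coupling that distinguishes an interactive ensemble can be illustrated with two imperfect Lorenz-63 "models" nudged toward each other. This is only a toy stand-in for the QG supermodel and its trained inter-model connections; the parameter biases and coupling strength are invented:

```python
import numpy as np

def lorenz_step(state, sigma, rho, beta, dt):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

def run_supermodel(steps=2000, k=1.0, dt=0.005):
    """Two 'imperfect models' with biased rho values, each nudged toward the
    other (a crude stand-in for trained inter-model connections). The
    supermodel state is the ensemble mean of the two."""
    a = np.array([1.0, 1.0, 1.0])
    b = np.array([1.1, 0.9, 1.0])
    traj = []
    for _ in range(steps):
        a_new = lorenz_step(a, 10.0, 30.0, 8.0 / 3.0, dt) + dt * k * (b - a)
        b_new = lorenz_step(b, 10.0, 26.0, 8.0 / 3.0, dt) + dt * k * (a - b)
        a, b = a_new, b_new
        traj.append(0.5 * (a + b))
    return np.array(traj)

traj = run_supermodel()
```

In a real supermodel the nudging coefficients are trained against historical data rather than fixed by hand; the point of the sketch is only that the coupling acts during integration, not after it.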
We show that the blocked-zonal index cycle is reproduced better by the supermodel than by any non-interactive ensemble in the extreme case where the forcing coefficients of the different models are very large or very small. With realistic differences in forcing coefficients, as would be representative of actual differences among IPCC-class models, the usual linearity assumption is justified and a weighted average of model outputs is adequate. It is therefore hypothesized that supermodeling is likely to be useful in situations where there are qualitative model differences, as arising from sub-gridscale parameterizations, that affect overall model behavior. Otherwise the usual ex post facto averaging will probably suffice. The advantage of supermodeling is seen in statistics such as anticorrelation between blocking activity in the Atlantic and Pacific sectors, in the case of the QG channel model, rather than in overall blocking frequency. Likewise in climate models, the advantage of supermodeling is typically manifest in higher-order statistics rather than in quantities such as mean temperature.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19890010174','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19890010174"><span>Laser transit anemometer software development program</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Abbiss, John B.</p> <p>1989-01-01</p> <p>Algorithms were developed for the extraction of two components of mean velocity, standard deviation, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. 
The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values are also given.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5754089','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5754089"><span>A Kolmogorov-Smirnov test for the molecular clock based on Bayesian ensembles of phylogenies</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Antoneli, Fernando; Passos, Fernando M.; Lopes, Luciano R.</p> <p>2018-01-01</p> <p>Divergence date estimates are central to understanding evolutionary processes and depend, in the case of molecular phylogenies, on tests of molecular clocks. Here we propose two non-parametric tests of strict and relaxed molecular clocks built upon a framework that uses the empirical cumulative distribution (ECD) of branch lengths obtained from an ensemble of Bayesian trees and the well-known non-parametric (one-sample and two-sample) Kolmogorov-Smirnov (KS) goodness-of-fit tests. In the strict clock case, the method consists in using the one-sample Kolmogorov-Smirnov (KS) test to directly test if the phylogeny is clock-like, in other words, if it follows a Poisson law. 
The ECD is computed from the discretized branch lengths, and the parameter λ of the expected Poisson distribution is calculated as the average branch length over the ensemble of trees. To compensate for auto-correlation in the ensemble of trees and for pseudo-replication, we take advantage of thinning and effective sample size, two features provided by Bayesian inference MCMC samplers. Finally, it is observed that tree topologies with very long or very short branches lead to Poisson mixtures, and in this case we propose the use of the two-sample KS test with samples from two continuous branch length distributions, one obtained from an ensemble of clock-constrained trees and the other from an ensemble of unconstrained trees. Moreover, in this second form the test can also be applied to test for relaxed clock models. The use of a statistically equivalent ensemble of phylogenies to obtain the branch lengths ECD, instead of one consensus tree, yields considerable reduction of the effects of small sample size and provides a gain of power. PMID:29300759</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29121946','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29121946"><span>Improving precision of glomerular filtration rate estimating model by ensemble learning.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Xun; Li, Ningshan; Lv, Linsheng; Fu, Yongmei; Cheng, Cailian; Wang, Caixia; Ye, Yuqiu; Li, Shaomin; Lou, Tanqi</p> <p>2017-11-09</p> <p>Accurate assessment of kidney function is clinically important, but estimates of glomerular filtration rate (GFR) by regression are imprecise. We hypothesized that ensemble learning could improve precision. A total of 1419 participants were enrolled, with 1002 in the development dataset and 417 in the external validation dataset. 
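The one-sample variant of the KS test above boils down to the maximal distance between the ECD of the discretized branch lengths and a Poisson CDF with λ set to the mean branch length. A bare-bones sketch on synthetic integer counts (not a phylogenetic ensemble, and without the thinning/ESS corrections):

```python
import math

def poisson_cdf(k, lam):
    """Cumulative probability P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k + 1))

def ks_statistic_poisson(counts, lam):
    """One-sample KS distance between the ECD of integer data and Poisson(lam)."""
    n = len(counts)
    d = 0.0
    for x in sorted(set(counts)):
        ecdf = sum(c <= x for c in counts) / n
        d = max(d, abs(ecdf - poisson_cdf(x, lam)))
    return d

# Synthetic "discretized branch lengths": one Poisson-like sample versus an
# overdispersed mixture, mimicking a clock violation.
clocklike = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2, 4, 3, 2, 2, 3, 1, 2, 3, 4, 2]
d_clock = ks_statistic_poisson(clocklike, sum(clocklike) / len(clocklike))

mixed = [0, 0, 1, 0, 6, 7, 8, 0, 1, 7, 0, 8, 6, 0, 1, 7, 0, 8, 1, 6]
d_mixed = ks_statistic_poisson(mixed, sum(mixed) / len(mixed))
# d_mixed should clearly exceed d_clock, flagging the Poisson-mixture case.
```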
GFR was independently estimated from age, sex and serum creatinine using an artificial neural network (ANN), support vector machine (SVM), regression, and ensemble learning. GFR was measured by 99mTc-DTPA renal dynamic imaging calibrated with dual plasma sample 99mTc-DTPA GFR. Mean measured GFRs were 70.0 ml/min/1.73 m² in the development and 53.4 ml/min/1.73 m² in the external validation cohorts. In the external validation cohort, precision was better in the ensemble model of the ANN, SVM and regression equation (IQR = 13.5 ml/min/1.73 m²) than in the new regression model (IQR = 14.0 ml/min/1.73 m², P < 0.001). The precision of ensemble learning was the best of the three models, but the models had similar bias and accuracy. The median difference ranged from 2.3 to 3.7 ml/min/1.73 m², 30% accuracy ranged from 73.1 to 76.0%, and P was > 0.05 for all comparisons of the new regression equation and the other new models. An ensemble learning model including three variables, the average ANN, SVM, and regression equation values, was more precise than the new regression model. A more complex ensemble learning strategy may further improve GFR estimates.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EPJWC.13703014N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EPJWC.13703014N"><span>Domain wall network as QCD vacuum: confinement, chiral symmetry, hadronization</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nedelko, Sergei N.; Voronin, Vladimir V.</p> <p>2017-03-01</p> <p>An approach to QCD vacuum as a medium describable in terms of statistical ensemble of almost everywhere homogeneous Abelian (anti-)self-dual gluon fields is reviewed. 
These fields play the role of the confining medium for color-charged fields, as well as underlie the mechanism of realization of the chiral SU_L(N_f) × SU_R(N_f) and U_A(1) symmetries. The hadronization formalism based on this ensemble leads to a manifestly defined quantum effective meson action. Strong, electromagnetic and weak interactions of mesons are represented in the action in terms of nonlocal n-point interaction vertices given by the quark-gluon loops averaged over the background ensemble. Systematic results for the mass spectrum and decay constants of radially excited light, heavy-light mesons and heavy quarkonia are presented. The relationship of this approach to the results of functional renormalization group and Dyson-Schwinger equations, and to the picture of harmonic confinement, is briefly outlined.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15114356','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15114356"><span>Large-scale recording of neuronal ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Buzsáki, György</p> <p>2004-05-01</p> <p>How does the brain orchestrate perceptions, thoughts and actions from the spiking activity of its neurons? Early single-neuron recording research treated spike pattern variability as noise that needed to be averaged out to reveal the brain's representation of invariant input. Another view is that variability of spikes is centrally coordinated and that this brain-generated ensemble pattern in cortical structures is itself a potential source of cognition. Large-scale recordings from neuronal ensembles now offer the opportunity to test these competing theoretical frameworks. Currently, wire and micro-machined silicon electrode arrays can record from large numbers of neurons and monitor local neural circuits at work. 
Achieving the full potential of massively parallel neuronal recordings, however, will require further development of the neuron-electrode interface, automated and efficient spike-sorting algorithms for effective isolation and identification of single neurons, and new mathematical insights for the analysis of network properties.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20080030792','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20080030792"><span>Effects of a Rotating Aerodynamic Probe on the Flow Field of a Compressor Rotor</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lepicovsky, Jan</p> <p>2008-01-01</p> <p>An investigation of distortions of the rotor exit flow field caused by an aerodynamic probe mounted in the rotor is described in this paper. A rotor total pressure Kiel probe, mounted on the rotor hub and extending up to the mid-span radius of a rotor blade channel, generates a wake that forms additional flow blockage. Three types of high-response aerodynamic probes were used to investigate the distorted flow field behind the rotor. These probes were: a split-fiber thermo-anemometric probe to measure velocity and flow direction, a total pressure probe, and a disk probe for in-flow static pressure measurement. The signals acquired from these high-response probes were reduced using an ensemble averaging method based on a once per rotor revolution signal. The rotor ensemble averages were combined to construct contour plots for each rotor channel of the rotor tested. In order to quantify the rotor probe effects, the contour plots for each individual rotor blade passage were averaged into a single value. The distribution of these average values along the rotor circumference is a measure of changes in the rotor exit flow field due to the presence of a probe in the rotor. 
These distributions were generated for axial flow velocity and for static pressure.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012CoPhC.183.1783U','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012CoPhC.183.1783U"><span>Novel algorithm and MATLAB-based program for automated power law analysis of single particle, time-dependent mean-square displacement</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Umansky, Moti; Weihs, Daphne</p> <p>2012-08-01</p> <p>In many physical and biophysical studies, single-particle tracking is utilized to reveal interactions, diffusion coefficients, active modes of driving motion, dynamic local structure, micromechanics, and microrheology. The basic analysis applied to those data is to determine the time-dependent mean-square displacement (MSD) of particle trajectories and perform time- and ensemble-averaging of similar motions. The motion of particles typically exhibits time-dependent power-law scaling, and only trajectories with qualitatively and quantitatively comparable MSD should be ensemble-averaged. Ensemble averaging trajectories that arise from different mechanisms, e.g., actively driven and diffusive, is incorrect and can result in inaccurate correlations between structure, mechanics, and activity. We have developed an algorithm to automatically and accurately determine power-law scaling of experimentally measured single-particle MSD. Trajectories can then be categorized and grouped according to user-defined cutoffs of time, amplitudes, scaling exponent values, or combinations. Power-law fits are then provided for each trajectory alongside categorized groups of trajectories, histograms of power laws, and the ensemble-averaged MSD of each group. The codes are designed to be easily incorporated into existing user codes. 
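The central quantity in the record above, the time-averaged MSD and its log-log power-law exponent, can be computed in a few lines. A generic Python sketch on a synthetic random walk (the distributed program itself is MATLAB; this is only an illustration of the underlying analysis):

```python
import numpy as np

def time_averaged_msd(traj, max_lag):
    """traj: (N, 2) array of x,y positions. Returns the MSD at lags 1..max_lag,
    averaged over all start times (the time average for a single particle)."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]
        msd[lag - 1] = (disp ** 2).sum(axis=1).mean()
    return msd

# Synthetic 2-D random walk: for pure diffusion, MSD ~ t^alpha with alpha ≈ 1.
rng = np.random.default_rng(3)
traj = np.cumsum(rng.standard_normal((10000, 2)), axis=0)

lags = np.arange(1, 51)
msd = time_averaged_msd(traj, 50)
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]   # power-law exponent
```

Superdiffusive (actively driven) trajectories would give alpha > 1 and subdiffusive ones alpha < 1, which is exactly the property used to segment trajectories into groups before ensemble averaging.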
We expect that this algorithm and program will be invaluable to anyone performing single-particle tracking, be it in physical or biophysical systems. Catalogue identifier: AEMD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 25 892 No. of bytes in distributed program, including test data, etc.: 5 572 780 Distribution format: tar.gz Programming language: MATLAB (MathWorks Inc.) version 7.11 (2010b) or higher; the program should also be backwards compatible. The Symbolic Math Toolbox (5.5) is required. The Curve Fitting Toolbox (3.0) is recommended. Computer: Tested on Windows only, yet should work on any computer running MATLAB. In Windows 7 it should be run as administrator; otherwise the program may not be able to save outputs and temporary outputs to all locations. Operating system: Any supporting MATLAB (MathWorks Inc.) v7.11 / 2010b or higher. Supplementary material: Sample output files (approx. 30 MBytes) are available. Classification: 12 External routines: Several MATLAB subfunctions (m-files), freely available on the web, were used as part of, and included in, this code: count, NaN suite, parseArgs, roundsd, subaxis, wcov, wmean, and the executable pdfTK.exe. Nature of problem: In many physical and biophysical areas employing single-particle tracking, having the time-dependent power laws governing the time-averaged mean-square displacement (MSD) of a single particle is crucial. Those power laws determine the mode of motion and hint at the underlying mechanisms driving motion. Accurate determination of the power laws that describe each trajectory will allow categorization into groups for further analysis of single trajectories or ensemble analysis, e.g. 
ensemble and time-averaged MSD. Solution method: The algorithm in the provided program automatically analyzes and fits time-dependent power laws to single particle trajectories, then groups particles according to user-defined cutoffs. It accepts time-dependent trajectories of several particles; each trajectory is run through the program, its time-averaged MSD is calculated, and power laws are determined in regions where the MSD is linear on a log-log scale. Our algorithm searches for high-curvature points in experimental data, here the time-dependent MSD. Those serve as anchor points for determining the ranges of the power-law fits. Power-law scaling is then accurately determined, and error estimations of the parameters and quality of fit are provided. After all single-trajectory time-averaged MSDs are fit, we obtain cutoffs from the user to categorize and segment the power laws into groups; cutoffs are either the exponents of the power laws, the times at which the fits appear, or both. The trajectories are sorted according to the cutoffs, and the time- and ensemble-averaged MSD of each group is provided, with histograms of the distributions of the exponents in each group. The program then allows the user to generate new trajectory files with trajectories segmented according to the determined groups, for any further required analysis. Additional comments: A README file giving the names and a brief description of all the files that make up the package, with clear instructions on the installation and execution of the program, is included in the distribution package. Running time: On an i5 Windows 7 machine with 4 GB RAM the automated parts of the run (excluding data loading and user input) take less than 45 minutes to analyze and save all stages for an 844-trajectory file, including optional PDF save. 
Trajectory length did not affect run time (tested up to 3600 frames/trajectory), which was on average 3.2±0.4 seconds per trajectory.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFMIN13A1649T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFMIN13A1649T"><span>Extending Climate Analytics as a Service to the Earth System Grid Federation Progress Report on the Reanalysis Ensemble Service</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.</p> <p>2016-12-01</p> <p>We are extending climate analytics-as-a-service, including: (1) A high-performance Virtual Real-Time Analytics Testbed supporting six major reanalysis data sets using advanced technologies like the Cloudera Impala-based SQL and Hadoop-based MapReduce analytics over native NetCDF files. (2) A Reanalysis Ensemble Service (RES) that offers a basic set of commonly used operations over the reanalysis collections that are accessible through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib. (3) An Open Geospatial Consortium (OGC) WPS-compliant Web service interface to CDSLib to accommodate ESGF's Web service endpoints. 
This presentation will report on the overall progress of this effort, with special attention to recent enhancements that have been made to the Reanalysis Ensemble Service, including the following: - A CDSlib Python library that supports full temporal, spatial, and grid-based resolution services - A new reanalysis collections reference model to enable operator design and implementation - An enhanced library of sample queries to demonstrate and develop use case scenarios - Extended operators that enable single- and multiple-reanalysis area averages, vertical averages, re-gridding, and trend, climatology, and anomaly computations - Full support for the MERRA-2 reanalysis and the initial integration of two additional reanalyses - A prototype Jupyter notebook-based distribution mechanism that combines CDSlib documentation with interactive use case scenarios and personalized project management - Prototyped uncertainty quantification services that combine ensemble products with comparative observational products - Convenient, one-stop shopping for commonly used data products from multiple reanalyses, including basic subsetting and arithmetic operations over the data and extractions of trends, climatologies, and anomalies - The ability to compute and visualize multiple reanalysis intercomparisons</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015CG.....84...37J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015CG.....84...37J"><span>Ensemble of surrogates-based optimization for identifying an optimal surfactant-enhanced aquifer remediation strategy at heterogeneous DNAPL-contaminated sites</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin</p> <p>2015-11-01</p> <p>The purpose of this study was to identify an 
optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization, employing an ensemble surrogate (ES) model together with a genetic algorithm (GA), is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model over 20 test samples was 0.8%, indicating high approximation accuracy and more accurate predictions than those of the stand-alone surrogate models. Then, a nonlinear optimization model was formulated for the minimum cost, with the developed ES model embedded into it as a constraint. A GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. 
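Two ingredients of this workflow, combining stand-alone surrogates into an ES prediction and scoring it by the average relative error against the simulation model, can be sketched as below. The inverse-error weighting is a common choice but an assumption here (the abstract does not state the paper's exact combination rule), and the function names are hypothetical.

```python
import numpy as np

def ensemble_surrogate(predictions, train_errors):
    """Combine stand-alone surrogate predictions into an ensemble
    prediction using inverse-error weights (one common convention;
    not necessarily the weighting used in the paper)."""
    w = 1.0 / np.asarray(train_errors, float)
    w /= w.sum()
    # weighted sum over surrogates, one value per test sample
    return w @ np.asarray(predictions, float)

def average_relative_error(y_surrogate, y_simulation):
    """Average relative error between surrogate and simulation outputs,
    the accuracy metric quoted for the test samples."""
    y_s = np.asarray(y_surrogate, float)
    y_m = np.asarray(y_simulation, float)
    return np.mean(np.abs(y_s - y_m) / np.abs(y_m))
```

With `predictions` as a (surrogates x samples) array, equal training errors reduce the ensemble to a plain average of the member surrogates.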
This research is expected to enrich and develop the theoretical and technical basis for optimizing remediation strategies for DNAPL-contaminated aquifers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.H52B..02L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.H52B..02L"><span>Ensemble of Surrogates-based Optimization for Identifying an Optimal Surfactant-enhanced Aquifer Remediation Strategy at Heterogeneous DNAPL-contaminated Sites</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.</p> <p>2015-12-01</p> <p>The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization, employing an ensemble surrogate (ES) model together with a genetic algorithm (GA), is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with the four stand-alone surrogate models. 
The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model over 20 test samples was 0.8%, indicating high approximation accuracy and more accurate predictions than those of the stand-alone surrogate models. Then, a nonlinear optimization model was formulated for the minimum cost, with the developed ES model embedded into it as a constraint. A GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical basis for optimizing remediation strategies for DNAPL-contaminated aquifers.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22364774-dynamic-stability-solar-system-statistically-inconclusive-results-from-ensemble-integrations','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22364774-dynamic-stability-solar-system-statistically-inconclusive-results-from-ensemble-integrations"><span>DYNAMIC STABILITY OF THE SOLAR SYSTEM: STATISTICALLY INCONCLUSIVE RESULTS FROM ENSEMBLE INTEGRATIONS</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Zeebe, Richard E., E-mail: zeebe@soest.hawaii.edu</p> <p></p> <p>Due to the chaotic nature of the solar system, the question of its long-term stability can only be answered in a statistical sense, for instance, based on numerical ensemble integrations of nearby orbits. Destabilization of the inner planets, leading to close encounters and/or collisions, can be initiated through a large increase in Mercury's eccentricity, with a currently assumed likelihood of ∼1%. However, little is known at present about the robustness of this number. Here I report ensemble integrations of the full equations of motion of the eight planets and Pluto over 5 Gyr, including contributions from general relativity. The results show that different numerical algorithms lead to statistically different results for the evolution of Mercury's eccentricity (e_M). For instance, starting at present initial conditions (e_M ≃ 0.21), Mercury's maximum eccentricity achieved over 5 Gyr is, on average, significantly higher in symplectic ensemble integrations using heliocentric rather than Jacobi coordinates and stricter error control. 
In contrast, starting at a possible future configuration (e_M ≃ 0.53), Mercury's maximum eccentricity achieved over the subsequent 500 Myr is, on average, significantly lower using heliocentric rather than Jacobi coordinates. For example, the probability for e_M to increase beyond 0.53 over 500 Myr is >90% (Jacobi) versus only 40%-55% (heliocentric). This poses a dilemma because the physical evolution of the real system—and its probabilistic behavior—cannot depend on the coordinate system or the numerical algorithm chosen to describe it. Some tests of the numerical algorithms suggest that symplectic integrators using heliocentric coordinates underestimate the odds for destabilization of Mercury's orbit at high initial e_M.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JHyd..556..634M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JHyd..556..634M"><span>Comprehensive evaluation of Ensemble Multi-Satellite Precipitation Dataset using the Dynamic Bayesian Model Averaging scheme over the Tibetan plateau</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ma, Yingzhao; Yang, Yuan; Han, Zhongying; Tang, Guoqiang; Maguire, Lane; Chu, Zhigang; Hong, Yang</p> <p>2018-01-01</p> <p>The objective of this study is to comprehensively evaluate the new Ensemble Multi-Satellite Precipitation Dataset using the Dynamic Bayesian Model Averaging scheme (EMSPD-DBMA) at daily and 0.25° scales from 2001 to 2015 over the Tibetan Plateau (TP). Error analysis against gauge observations revealed that EMSPD-DBMA captured the spatiotemporal pattern of daily precipitation with an acceptable Correlation Coefficient (CC) of 0.53 and a Relative Bias (RB) of -8.28%. 
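The verification statistics quoted here (CC, RB, RMSE) follow standard definitions and can be computed as in this sketch; `verification_metrics` is an illustrative helper, not part of any released EMSPD-DBMA code, and sign and normalization conventions may differ slightly from the paper's.

```python
import numpy as np

def verification_metrics(sat, gauge):
    """Correlation Coefficient, Relative Bias (%), and RMSE of
    satellite precipitation estimates against gauge observations."""
    sat = np.asarray(sat, float)
    gauge = np.asarray(gauge, float)
    cc = np.corrcoef(sat, gauge)[0, 1]
    rb = 100.0 * (sat.sum() - gauge.sum()) / gauge.sum()
    rmse = np.sqrt(np.mean((sat - gauge) ** 2))
    return cc, rb, rmse
```

A positive RB indicates systematic overestimation relative to the gauges; the quoted -8.28% corresponds to a mild underestimate.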
Moreover, EMSPD-DBMA outperformed IMERG and GSMaP-MVK in almost all metrics in the summers of 2014 and 2015, with the lowest RB and Root Mean Square Error (RMSE) values of -2.88% and 8.01 mm/d, respectively. It also better reproduced the Probability Density Function (PDF) of daily rainfall amount and estimated moderate and heavy rainfall better than both IMERG and GSMaP-MVK. Further, hydrological evaluation with the Coupled Routing and Excess STorage (CREST) model in the Upper Yangtze River region indicated that the EMSPD-DBMA-forced simulation showed satisfactory hydrological performance in terms of streamflow prediction, with Nash-Sutcliffe Efficiency (NSE) values of 0.82 and 0.58, compared with the gauge-forced simulation (0.88 and 0.60) for the calibration and validation periods, respectively. EMSPD-DBMA also reproduced peak flows better than the new Multi-Source Weighted-Ensemble Precipitation Version 2 (MSWEP V2) product, indicating a promising prospect for the hydrological utility of ensemble satellite precipitation data. 
This study is among the first comprehensive evaluations of blended multi-satellite precipitation data across the TP and should help improve the DBMA algorithm in regions with complex terrain.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29080301','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29080301"><span>Assessing uncertainties in crop and pasture ensemble model simulations of productivity and N2O emissions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ehrhardt, Fiona; Soussana, Jean-François; Bellocchi, Gianni; Grace, Peter; McAuliffe, Russel; Recous, Sylvie; Sándor, Renáta; Smith, Pete; Snow, Val; de Antoni Migliorati, Massimiliano; Basso, Bruno; Bhatia, Arti; Brilli, Lorenzo; Doltra, Jordi; Dorich, Christopher D; Doro, Luca; Fitton, Nuala; Giacomini, Sandro J; Grant, Brian; Harrison, Matthew T; Jones, Stephanie K; Kirschbaum, Miko U F; Klumpp, Katja; Laville, Patricia; Léonard, Joël; Liebig, Mark; Lieffering, Mark; Martin, Raphaël; Massad, Raia S; Meier, Elizabeth; Merbold, Lutz; Moore, Andrew D; Myrgiotis, Vasileios; Newton, Paul; Pattey, Elizabeth; Rolinski, Susanne; Sharp, Joanna; Smith, Ward N; Wu, Lianhai; Zhang, Qing</p> <p>2018-02-01</p> <p>Simulation models are extensively used to predict agricultural productivity and greenhouse gas emissions. However, the uncertainties of (reduced) model ensemble simulations have not been assessed systematically for variables affecting food security and climate change mitigation, within multi-species agricultural contexts. We report an international model comparison and benchmarking exercise, showing the potential of multi-model ensembles to predict productivity and nitrous oxide (N2O) emissions for wheat, maize, rice and temperate grasslands. 
Using a multi-stage modelling protocol, from blind simulations (stage 1) to partial (stages 2-4) and full calibration (stage 5), 24 process-based biogeochemical models were assessed individually or as an ensemble against long-term experimental data from four temperate grassland and five arable crop rotation sites spanning four continents. Comparisons were performed by reference to the experimental uncertainties of observed yields and N2O emissions. Results showed that across sites and crop/grassland types, 23%-40% of the uncalibrated individual models were within two standard deviations (SD) of observed yields, while 42% (rice) to 96% (grasslands) of the models were within 1 SD of observed N2O emissions. At stage 1, ensembles formed from the three models with the lowest prediction errors predicted both yields and N2O emissions within experimental uncertainties for 44% and 33% of the crop and grassland growth cycles, respectively. Partial model calibration (stages 2-4) markedly reduced prediction errors of the full model ensemble E-median for crop grain yields (from 36% at stage 1 down to 4% on average) and grassland productivity (from 44% to 27%), and to a lesser and more variable extent for N2O emissions. Yield-scaled N2O emissions (N2O emissions divided by crop yields) were ranked accurately by three-model ensembles across crop species and field sites. The potential of using process-based model ensembles to jointly predict productivity and N2O emissions at field scale is discussed. 
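The two ensemble statistics used throughout this benchmarking, the ensemble median (E-median) and the fraction of model predictions falling within k standard deviations of the observations, can be sketched as follows; the helper names are hypothetical and the protocol's actual implementation is not given in the abstract.

```python
import numpy as np

def e_median(model_outputs):
    """Ensemble median (E-median) across models, one value per
    observation; rows are models, columns are observations."""
    return np.median(np.asarray(model_outputs, float), axis=0)

def fraction_within_sd(model_outputs, observed, sd, k=2):
    """Fraction of individual model predictions within k standard
    deviations of the observed values (the within-2-SD criterion
    used for yields, within-1-SD for N2O emissions)."""
    m = np.asarray(model_outputs, float)
    ok = np.abs(m - np.asarray(observed, float)) <= k * np.asarray(sd, float)
    return float(ok.mean())
```
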
© 2017 John Wiley & Sons Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H43B1623L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H43B1623L"><span>Assessment of Surface Air Temperature over China Using Multi-criterion Model Ensemble Framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, J.; Zhu, Q.; Su, L.; He, X.; Zhang, X.</p> <p>2017-12-01</p> <p>General Circulation Models (GCMs) are designed to simulate the present climate and project future trends. It has been noticed that the performances of GCMs are not always in agreement with each other over different regions. Model ensemble techniques have been developed to post-process the GCMs' outputs and improve their prediction reliability. To evaluate the performances of GCMs, root-mean-square error, correlation coefficient, and uncertainty are commonly used statistical measures. However, simultaneously achieving satisfactory values of all these statistics cannot be guaranteed with many model ensemble techniques. Meanwhile, uncertainties and future scenarios are critical for Water-Energy management and operation. In this study, a new multi-model ensemble framework was proposed. It uses a state-of-the-art evolutionary multi-objective optimization algorithm, termed Multi-Objective Complex Evolution Global Optimization with Principal Component Analysis and Crowding Distance (MOSPD), to derive optimal GCM ensembles and demonstrate the trade-offs among various solutions. Such trade-off information was further analyzed with a robust Pareto front with respect to different statistical measures. 
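The trade-off analysis rests on Pareto non-domination among the competing statistical measures. A minimal, generic sketch of extracting a Pareto front (all objectives to be minimized) is shown below; this illustrates the concept only and is not the MOSPD algorithm itself.

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated solutions when every objective
    is minimized.  A point is dominated if some other point is no worse
    in all objectives and strictly better in at least one."""
    pts = np.asarray(points, float)
    front = []
    for i, p in enumerate(pts):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(pts) if j != i)
        if not dominated:
            front.append(i)
    return front
```

For an ensemble-weighting problem, each point would hold, e.g., (RMSE, 1 - CC, uncertainty) for one candidate set of model weights; the front then exposes the trade-offs among them.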
A case study was conducted to optimize the surface air temperature (SAT) ensemble solutions over seven geographical regions of China for the historical period (1900-2005) and future projection (2006-2100). The results showed that the ensemble solutions derived with the MOSPD algorithm are superior to the simple model average and to any single model output during the historical simulation period. For the future projection, the proposed ensemble framework indicated that the largest SAT change would occur in South Central China under the RCP 2.6 scenario, North Eastern China under RCP 4.5, and North Western China under RCP 8.5, while the smallest SAT change would occur in Inner Mongolia under RCP 2.6, South Central China under RCP 4.5, and South Central China under RCP 8.5.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFMNG23C..02K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFMNG23C..02K"><span>4D Hybrid Ensemble-Variational Data Assimilation for the NCEP GFS: Outer Loops and Variable Transforms</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kleist, D. T.; Ide, K.; Mahajan, R.; Thomas, C.</p> <p>2014-12-01</p> <p>The use of hybrid error covariance models has become quite popular for numerical weather prediction (NWP). One such method for incorporating localized covariances from an ensemble within the variational framework utilizes an augmented control variable (EnVar), and has been implemented in the operational NCEP data assimilation system (GSI). By taking the existing 3D EnVar algorithm in GSI and allowing for four-dimensional ensemble perturbations, coupled with the 4DVAR infrastructure already in place, a 4D EnVar capability has been developed. 
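The augmented-control-variable idea behind EnVar can be illustrated with a minimal, localization-free sketch in which the analysis increment is expanded in ensemble perturbations. This is a textbook simplification under stated assumptions (perturbations pre-normalized by sqrt(K-1), a single outer loop, explicit matrix inverses), not the GSI implementation.

```python
import numpy as np

def envar_increment(Xp, H, R, d):
    """Minimal EnVar sketch.  The increment is dx = Xp @ w with the
    K weights minimizing
        J(w) = 0.5 * w.T w + 0.5 * (H Xp w - d).T R^{-1} (H Xp w - d),
    where d = y - H(xb) is the innovation and Xp holds the ensemble
    perturbations (columns), assumed normalized by sqrt(K-1) so that
    the background covariance is Pb = Xp @ Xp.T.  Setting grad J = 0
    gives a K x K linear system."""
    HX = H @ Xp                          # perturbations in observation space
    Rinv = np.linalg.inv(R)              # fine for the tiny obs sets shown here
    A = np.eye(Xp.shape[1]) + HX.T @ Rinv @ HX
    w = np.linalg.solve(A, HX.T @ Rinv @ d)
    return Xp @ w
```

Extending this to 4D EnVar amounts to stacking perturbations and innovations valid at several times in the assimilation window, with no tangent-linear or adjoint model required.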
The 4D EnVar algorithm has a few attractive qualities relative to 4DVAR, including that it requires no tangent-linear or adjoint models and has reduced computational cost. Preliminary results using real observations have been encouraging, showing forecast improvements nearly as large as were found in moving from 3DVAR to hybrid 3D EnVar. 4D EnVar is the method of choice for the next-generation assimilation system for use with the operational NCEP global model, the Global Forecast System (GFS). The use of an outer loop has long been the method of choice for 4DVAR data assimilation to help address nonlinearity. An outer loop involves re-running the (deterministic) background forecast from the updated initial condition at the beginning of the assimilation window and then proceeding with another inner-loop minimization. Within 4D EnVar, a similar procedure can be adopted since the solver evaluates a 4D analysis increment throughout the window, consistent with the valid times of the 4D ensemble perturbations. In this procedure, the ensemble perturbations are kept fixed and centered about the updated background state. This is analogous to the quasi-outer-loop idea developed for the EnKF. Here, we present results for both toy-model and real NWP systems demonstrating the impact of incorporating outer loops to address nonlinearity within the 4D EnVar context. The appropriate amplitudes for observation and background error covariances in subsequent outer loops will be explored. Lastly, variable transformations on the ensemble perturbations will be utilized to help address issues of non-Gaussianity. 
This may be particularly important for variables that clearly have non-Gaussian error characteristics such as water vapor and cloud condensate.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5296775','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5296775"><span>Role of Dorsomedial Striatum Neuronal Ensembles in Incubation of Methamphetamine Craving after Voluntary Abstinence</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Venniro, Marco; Zhang, Michelle; Bossert, Jennifer M.; Warren, Brandon L.; Hope, Bruce T.</p> <p>2017-01-01</p> <p>We recently developed a rat model of incubation of methamphetamine craving after choice-based voluntary abstinence. Here, we studied the role of dorsolateral striatum (DLS) and dorsomedial striatum (DMS) in this incubation. We trained rats to self-administer palatable food pellets (6 d, 6 h/d) and methamphetamine (12 d, 6 h/d). We then assessed relapse to methamphetamine seeking under extinction conditions after 1 and 21 abstinence days. Between tests, the rats underwent voluntary abstinence (using a discrete choice procedure between methamphetamine and food; 20 trials/d) for 19 d. We used in situ hybridization to measure the colabeling of the activity marker Fos with Drd1 and Drd2 in DMS and DLS after the tests. Based on the in situ hybridization colabeling results, we tested the causal role of DMS D1 and D2 family receptors, and DMS neuronal ensembles in “incubated” methamphetamine seeking, using selective dopamine receptor antagonists (SCH39166 or raclopride) and the Daun02 chemogenetic inactivation procedure, respectively. Methamphetamine seeking was higher after 21 d of voluntary abstinence than after 1 d (incubation of methamphetamine craving). 
The incubated response was associated with increased Fos expression in DMS but not in DLS; Fos was colabeled with both Drd1 and Drd2. DMS injections of SCH39166 or raclopride selectively decreased methamphetamine seeking after 21 abstinence days. In Fos-lacZ transgenic rats, selective inactivation of relapse test-activated Fos neurons in DMS on abstinence day 18 decreased incubated methamphetamine seeking on day 21. Results demonstrate a role of DMS dopamine D1 and D2 receptors in the incubation of methamphetamine craving after voluntary abstinence and that DMS neuronal ensembles mediate this incubation. SIGNIFICANCE STATEMENT In human addicts, abstinence is often self-imposed and relapse can be triggered by exposure to drug-associated cues that induce drug craving. We recently developed a rat model of incubation of methamphetamine craving after choice-based voluntary abstinence. Here, we used classical pharmacology, in situ hybridization, immunohistochemistry, and the Daun02 inactivation procedure to demonstrate a critical role of dorsomedial striatum neuronal ensembles in this new form of incubation of drug craving. PMID:28123032</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2662860','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2662860"><span>Improving consensus structure by eliminating averaging artifacts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>KC, Dukka B</p> <p>2009-01-01</p> <p>Background Common structural biology methods (e.g., NMR and molecular dynamics) often produce ensembles of molecular structures. 
Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein than the averaged structure did. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which could also benefit from our approach. 
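The refinement idea, Metropolis Monte Carlo moves accepted under a harmonic pseudo-energy that pulls a starting structure toward the averaged coordinates, can be sketched as follows. This sketch keeps only the harmonic restraint; the published method additionally scores physical geometry (bond lengths, angles) so that artifacts are not reintroduced, and all names and parameter values here are illustrative.

```python
import numpy as np

def mc_refine(start, target, steps=5000, step_size=0.05, k=1.0,
              temperature=0.1, rng=None):
    """Drive `start` (N x 3 coordinates) toward the averaged `target`
    coordinates by Metropolis Monte Carlo under the harmonic pseudo
    energy E = k * sum(|x_i - target_i|**2)."""
    rng = rng or np.random.default_rng(0)
    x = np.array(start, float)
    tgt = np.asarray(target, float)
    energy = k * np.sum((x - tgt) ** 2)
    for _ in range(steps):
        i = rng.integers(len(x))                       # move one site at a time
        trial = x.copy()
        trial[i] += rng.normal(0.0, step_size, size=x.shape[1])
        e_trial = k * np.sum((trial - tgt) ** 2)
        # Metropolis criterion: always accept downhill, sometimes uphill
        if e_trial <= energy or rng.random() < np.exp(-(e_trial - energy) / temperature):
            x, energy = trial, e_trial
    return x
```

In the full method, extra energy terms penalize deviations from ideal local geometry, so the refined structure tracks the average only where that average is physically realizable.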
PMID:19267905</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003ITNS...50.2265W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003ITNS...50.2265W"><span>Evaluating average and atypical response in radiation effects simulations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Weller, R. A.; Sternberg, A. L.; Massengill, L. W.; Schrimpf, R. D.; Fleetwood, D. M.</p> <p>2003-12-01</p> <p>We examine the limits of performing single-event simulations using pre-averaged radiation events. Geant4 simulations show the necessity, for future devices, to supplement current methods with ensemble averaging of device-level responses to physically realistic radiation events. Initial Monte Carlo simulations have generated a significant number of extremal events in local energy deposition. These simulations strongly suggest that proton strikes of sufficient energy, even those that initiate purely electronic interactions, can initiate device response capable in principle of producing single event upset or microdose damage in highly scaled devices.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010NHESS..10.2371V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010NHESS..10.2371V"><span>Multiphysics superensemble forecast applied to Mediterranean heavy precipitation situations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vich, M.; Romero, R.</p> <p>2010-11-01</p> <p>The high-impact precipitation events that regularly affect the western Mediterranean coastal regions are still difficult to predict with the current prediction systems. 
Bearing this in mind, this paper focuses on the superensemble technique applied to the precipitation field. Encouraged by the skill shown by a previous multiphysics ensemble prediction system applied to western Mediterranean precipitation events, the superensemble is fed with this ensemble. The training phase of the superensemble contributes to the actual forecast with weights obtained by comparing the past performance of the ensemble members and the corresponding observed states. The non-hydrostatic MM5 mesoscale model is used to run the multiphysics ensemble. Simulations are performed with a 22.5 km resolution domain (Domain 1 in <a href=" http://mm5forecasts.uib.es" target ="_blank"> http://mm5forecasts.uib.es</a>) nested in the ECMWF forecast fields. The period between September and December 2001 is used to train the superensemble, and a collection of 19 MEDEX cyclones is used to test it. The verification procedure involves testing the superensemble performance and comparing it with those of the poor-man's and bias-corrected ensemble means and the multiphysics EPS control member. The results emphasize the need for a well-behaved training phase to obtain good results with the superensemble technique. A strategy for obtaining such an improved training phase is also outlined.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012amld.book..563R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012amld.book..563R"><span>Ensemble Methods</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Re, Matteo; Valentini, Giorgio</p> <p>2012-03-01</p> <p>Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. 
The idea of combining the opinions of different "experts" to obtain an overall “ensemble” decision is rooted in our culture at least since the classical age of ancient Greece, and it was formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is represented by the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of “votes” (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, most notably the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers of this area [14,62,85,149,173]. Several theories have been proposed to explain the characteristics and the successful application of ensembles to different application domains. 
For instance, Allwein, Schapire, and Singer interpreted the improved generalization capabilities of ensembles of learning machines in the framework of large margin classifiers [4,177], Kleinberg in the context of stochastic discrimination theory [112], and Breiman and Friedman in the light of the bias-variance analysis borrowed from classical statistics [21,70]. Empirical studies showed that both in classification and regression problems, ensembles improve on single learning machines, and moreover large experimental studies compared the effectiveness of different ensemble methods on benchmark data sets [10,11,49,188]. The interest in this research area is also motivated by the availability of very fast computers and networks of workstations at a relatively low cost that allow the implementation and the experimentation of complex ensemble methods using off-the-shelf computer platforms. However, as explained in Section 26.2, there are deeper reasons to use ensembles of learning machines, motivated by the intrinsic characteristics of the ensemble methods. The main aim of this chapter is to introduce ensemble methods and to provide an overview and a bibliography of the main areas of research, without pretending to be exhaustive or to explain the detailed characteristics of each ensemble method. The paper is organized as follows. In the next section, the main theoretical and practical reasons for combining multiple learners are introduced. Section 26.3 depicts the main taxonomies of ensemble methods proposed in the literature. In Sections 26.4 and 26.5, we present an overview of the main supervised ensemble methods reported in the literature, adopting a simple taxonomy, originally proposed in Ref. [201]. Applications of ensemble methods are only marginally considered, but a specific section on some relevant applications of ensemble methods in astronomy and astrophysics has been added (Section 26.6). 
The conclusion (Section 26.7) ends this paper and lists some issues not covered in this work.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23617269','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23617269"><span>Ensemble-based prediction of RNA secondary structures.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Aghaeepour, Nima; Hoos, Holger H</p> <p>2013-04-24</p> <p>Accurate structure prediction methods play an important role in the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. 
Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between false negative and false positive base pair predictions. Finally, AveRNA can make use of arbitrary sets of secondary structure prediction procedures and can therefore be used to leverage improvements in prediction accuracy offered by algorithms and energy models developed in the future. Our data, MATLAB software and a web-based version of AveRNA are publicly available at http://www.cs.ubc.ca/labs/beta/Software/AveRNA.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25330243','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25330243"><span>Multimodel ensembles of wheat growth: many models are better than one.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W; Rötter, Reimund P; Boote, Kenneth J; Ruane, Alex C; Thorburn, Peter J; Cammarano, Davide; Hatfield, Jerry L; Rosenzweig, Cynthia; Aggarwal, Pramod K; Angulo, Carlos; Basso, Bruno; Bertuzzi, Patrick; Biernath, Christian; Brisson, Nadine; Challinor, Andrew J; Doltra, Jordi; Gayler, Sebastian; Goldberg, Richie; Grant, Robert F; Heng, Lee; Hooker, Josh; Hunt, Leslie A; Ingwersen, Joachim; Izaurralde, 
Roberto C; Kersebaum, Kurt Christian; Müller, Christoph; Kumar, Soora Naresh; Nendel, Claas; O'Leary, Garry; Olesen, Jørgen E; Osborne, Tom M; Palosuo, Taru; Priesack, Eckart; Ripoche, Dominique; Semenov, Mikhail A; Shcherbak, Iurii; Steduto, Pasquale; Stöckle, Claudio O; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Travasso, Maria; Waha, Katharina; White, Jeffrey W; Wolf, Joost</p> <p>2015-02-01</p> <p>Crop models are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. 
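The e-mean and e-median estimators described above are straightforward to compute. The sketch below uses invented yield values for a single site, purely for illustration of how an ensemble estimator can beat its best member:

```python
from statistics import mean, median

# Hypothetical grain yields (t/ha) simulated by five models at one site,
# with an assumed observed value of 6.0 -- illustrative numbers only.
observed = 6.0
simulations = [5.1, 7.4, 6.3, 4.8, 6.9]

e_mean = mean(simulations)      # 6.1
e_median = median(simulations)  # 6.3

def rel_error(sim, obs):
    """Relative error in percent."""
    return abs(sim - obs) / obs * 100.0

best_single = min(rel_error(s, observed) for s in simulations)
print(f"e-mean error:      {rel_error(e_mean, observed):.1f}%")    # 1.7%
print(f"e-median error:    {rel_error(e_median, observed):.1f}%")  # 5.0%
print(f"best single model: {best_single:.1f}%")                    # 5.0%
```

With these made-up numbers the individual model errors partly cancel, so e-mean outperforms even the best single model, which mirrors the mechanism behind the study's conclusion.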
We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models. © 2014 John Wiley & Sons Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011JChPh.135t4101W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011JChPh.135t4101W"><span>Force-momentum-based self-guided Langevin dynamics: A rapid sampling method that approaches the canonical ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wu, Xiongwu; Brooks, Bernard R.</p> <p>2011-11-01</p> <p>The self-guided Langevin dynamics (SGLD) is a method to accelerate conformational searching. This method is unique in the way that it selectively enhances and suppresses molecular motions based on their frequency to accelerate conformational searching without modifying energy surfaces or raising temperatures. It has been applied to studies of many long time scale events, such as protein folding. Recent progress in the understanding of the conformational distribution in SGLD simulations makes SGLD also an accurate method for quantitative studies. The SGLD partition function provides a way to convert the SGLD conformational distribution to the canonical ensemble distribution and to calculate ensemble average properties through reweighting. Based on the SGLD partition function, this work presents a force-momentum-based self-guided Langevin dynamics (SGLDfp) simulation method to directly sample the canonical ensemble. This method includes interaction forces in its guiding force to compensate the perturbation caused by the momentum-based guiding force so that it can approximately sample the canonical ensemble. 
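The conversion of averages from a biased sampling distribution to the canonical ensemble can be illustrated generically via importance reweighting. This is a minimal sketch, not the actual SGLD partition-function weights, which involve the low- and high-frequency decomposition described in the paper:

```python
import math

def reweighted_average(values, energies_sim, energies_target, kT=1.0):
    """Estimate a target-ensemble average from configurations sampled
    under a different (biased) energy function, using importance weights
    w_i proportional to exp(-(E_target_i - E_sim_i) / kT)."""
    weights = [math.exp(-(et - es) / kT)
               for es, et in zip(energies_sim, energies_target)]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

# If the two energy functions agree, the weights are uniform and the
# reweighted average reduces to the plain sample mean:
print(reweighted_average([1.0, 2.0, 3.0], [0, 0, 0], [0, 0, 0]))  # 2.0
```

In practice the exponentials must be stabilized (e.g., by subtracting the maximum energy difference before exponentiating) to avoid overflow for large energy gaps; that detail is omitted here for clarity.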
Using several example systems, we demonstrate that SGLDfp simulations can approximately maintain the canonical ensemble distribution and significantly accelerate conformational searching. With optimal parameters, SGLDfp and SGLD simulations can cross energy barriers of more than 15 kT and 20 kT, respectively, at rates similar to those at which LD simulations cross energy barriers of 10 kT. The SGLDfp method is size extensive and works well for large systems. For studies where preserving accessible conformational space is critical, such as free energy calculations and protein folding studies, SGLDfp is an efficient approach to search and sample the conformational space.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20150000778','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20150000778"><span>Multimodel Ensembles of Wheat Growth: More Models are Better than One</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W.; Rotter, Reimund P.; Boote, Kenneth J.; Ruane, Alex C.; Thorburn, Peter J.; Cammarano, Davide; et al.</p> <p>2015-01-01</p> <p>Crop models are increasingly used to quantify the impact of global changes due to climate or crop management. 
Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. 
We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20130014808','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20130014808"><span>The Impact of Model and Rainfall Forcing Errors on Characterizing Soil Moisture Uncertainty in Land Surface Modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Maggioni, V.; Anagnostou, E. N.; Reichle, R. H.</p> <p>2013-01-01</p> <p>The contribution of rainfall forcing errors relative to model (structural and parameter) uncertainty in the prediction of soil moisture is investigated by integrating the NASA Catchment Land Surface Model (CLSM), forced with hydro-meteorological data, in the Oklahoma region. Rainfall-forcing uncertainty is introduced using a stochastic error model that generates ensemble rainfall fields from satellite rainfall products. The ensemble satellite rain fields are propagated through CLSM to produce soil moisture ensembles. Errors in CLSM are modeled with two different approaches: either by perturbing model parameters (representing model parameter uncertainty) or by adding randomly generated noise (representing model structure and parameter uncertainty) to the model prognostic variables. Our findings highlight that the method currently used in the NASA GEOS-5 Land Data Assimilation System to perturb CLSM variables poorly describes the uncertainty in the predicted soil moisture, even when combined with rainfall model perturbations. On the other hand, by adding model parameter perturbations to rainfall forcing perturbations, a better characterization of uncertainty in soil moisture simulations is observed. 
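The rank-histogram diagnostic used in this kind of verification can be computed by counting, for each observation, how many ensemble members fall below it: a flat histogram indicates a reliable ensemble, while a U-shape indicates too little spread. A minimal sketch with invented soil moisture numbers, ignoring ties and observation error:

```python
def rank_histogram(observations, ensembles):
    """For each (observation, ensemble) pair, count the members below the
    observation; accumulate the counts of each possible rank 0..n."""
    n = len(ensembles[0])
    counts = [0] * (n + 1)
    for obs, members in zip(observations, ensembles):
        rank = sum(1 for m in members if m < obs)
        counts[rank] += 1
    return counts

# Four hypothetical soil moisture observations, each with a 3-member ensemble:
obs = [0.21, 0.35, 0.12, 0.30]
ens = [
    [0.18, 0.25, 0.22],  # obs falls above 1 member -> rank 1
    [0.30, 0.32, 0.33],  # obs falls above all 3    -> rank 3
    [0.10, 0.11, 0.15],  # rank 2
    [0.28, 0.31, 0.29],  # rank 2
]
print(rank_histogram(obs, ens))  # [0, 1, 2, 1]
```

Observations that repeatedly land outside the ensemble envelope (ranks 0 or n) produce the U-shape mentioned in the abstract.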
Specifically, an analysis of the rank histograms shows that the most consistent ensemble of soil moisture is obtained by combining rainfall and model parameter perturbations. When rainfall forcing and model prognostic perturbations are added, the rank histogram shows a U-shape at the domain average scale, which corresponds to a lack of variability in the forecast ensemble. The more accurate estimation of the soil moisture prediction uncertainty obtained by combining rainfall and parameter perturbations is encouraging for the application of this approach in ensemble data assimilation systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20160001114&hterms=wheat&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dwheat','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20160001114&hterms=wheat&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dwheat"><span>Multimodel Ensembles of Wheat Growth: Many Models are Better than One</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W.; Rotter, Reimund P.; Boote, Kenneth J.; Ruane, Alexander C.; Thorburn, Peter J.; Cammarano, Davide; et al.</p> <p>2015-01-01</p> <p>Crop models are increasingly used to quantify the 
impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19870004228','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19870004228"><span>Turbine Vane External Heat Transfer. Volume 2. 
Numerical Solutions of the Navier-Stokes Equations for Two- and Three-dimensional Turbine Cascades with Heat Transfer</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Yang, R. J.; Weinberg, B. C.; Shamroth, S. J.; McDonald, H.</p> <p>1985-01-01</p> <p>The application of the time-dependent ensemble-averaged Navier-Stokes equations to transonic turbine cascade flow fields was examined. In particular, efforts focused on an assessment of the procedure in conjunction with a suitable turbulence model to calculate steady turbine flow fields using an O-type coordinate system. Three cascade configurations were considered. Comparisons were made between the predicted and measured surface pressures and heat transfer distributions wherever available. In general, the pressure predictions were in good agreement with the data. Heat transfer calculations also showed good agreement when an empirical transition model was used. However, further work in the development of laminar-turbulent transitional models is indicated. The calculations showed most of the known features associated with turbine cascade flow fields. 
These results indicate the ability of the Navier-Stokes analysis to predict, in reasonable amounts of computation time, the surface pressure distribution, heat transfer rates, and viscous flow development for turbine cascades operating at realistic conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006JChPh.125r4106H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006JChPh.125r4106H"><span>Improved diffusion Monte Carlo propagators for bosonic systems using Itô calculus</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Håkansson, P.; Mella, M.; Bressanini, Dario; Morosi, Gabriele; Patrone, Marta</p> <p>2006-11-01</p> <p>The construction of importance sampled diffusion Monte Carlo (DMC) schemes accurate to second order in the time step is discussed. A central aspect in obtaining efficient second order schemes is the numerical solution of the stochastic differential equation (SDE) associated with the Fokker-Planck equation responsible for the importance sampling procedure. In this work, stochastic predictor-corrector schemes solving the SDE and consistent with Itô calculus are used in DMC simulations of helium clusters. These schemes are numerically compared with alternative algorithms obtained by splitting the Fokker-Planck operator, an approach that we analyze using the analytical tools provided by Itô calculus. The numerical results show that predictor-corrector methods are indeed accurate to second order in the time step and that they present a smaller time step bias and a better efficiency than second order split-operator derived schemes when computing ensemble averages for bosonic systems. 
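A minimal stochastic predictor-corrector (Heun-type) step for an SDE of the form dx = a(x) dt + sigma dW can be written as follows. This is an illustrative scheme for additive noise, reusing the same Wiener increment in predictor and corrector, not the specific propagators developed in the paper:

```python
import math
import random

def heun_sde_step(x, drift, dt, sigma, rng):
    """One predictor-corrector (Heun-type) step for dx = a(x) dt + sigma dW.
    The same Wiener increment dW is reused in the predictor and corrector,
    which makes the treatment of the drift second order in dt."""
    dW = rng.gauss(0.0, math.sqrt(dt))
    x_pred = x + drift(x) * dt + sigma * dW                # Euler predictor
    return x + 0.5 * (drift(x) + drift(x_pred)) * dt + sigma * dW

# With sigma = 0 the step reduces to the deterministic Heun method:
rng = random.Random(0)
x1 = heun_sde_step(1.0, lambda x: -x, 0.1, 0.0, rng)
print(round(x1, 6))  # 0.905, close to exp(-0.1) = 0.904837...
```

The second-order behavior is visible already in this deterministic limit: a plain Euler step would give 0.9, a noticeably larger one-step error against the exact exponential decay.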
The possible extension of the predictor-corrector methods to higher orders is also discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4082491','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4082491"><span>Crystal cryocooling distorts conformational heterogeneity in a model Michaelis complex of DHFR</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Keedy, Daniel A.; van den Bedem, Henry; Sivak, David A.; Petsko, Gregory A.; Ringe, Dagmar; Wilson, Mark A.; Fraser, James S.</p> <p>2014-01-01</p> <p>Most macromolecular X-ray structures are determined from cryocooled crystals, but it is unclear whether cryocooling distorts functionally relevant flexibility. Here we compare independently acquired pairs of high-resolution datasets of a model Michaelis complex of dihydrofolate reductase (DHFR), collected by separate groups at both room and cryogenic temperatures. These datasets allow us to isolate the differences between experimental procedures and between temperatures. Our analyses of multiconformer models and time-averaged ensembles suggest that cryocooling suppresses and otherwise modifies sidechain and mainchain conformational heterogeneity, quenching dynamic contact networks. Despite some idiosyncratic differences, most changes from room temperature to cryogenic temperature are conserved, and likely reflect temperature-dependent solvent remodeling. Both cryogenic datasets point to additional conformations not evident in the corresponding room-temperature datasets, suggesting that cryocooling does not merely trap pre-existing conformational heterogeneity. 
Our results demonstrate that crystal cryocooling consistently distorts the energy landscape of DHFR, a paragon for understanding functional protein dynamics. PMID:24882744</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/AD1014360','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/AD1014360"><span>A Community Terrain-Following Ocean Modeling System (ROMS)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2015-09-30</p> <p>funded NOPP project titled: Toward the Development of a Coupled COAMPS-ROMS Ensemble Kalman filter and adjoint with a focus on the Indian Ocean and the...surface temperature and surface salinity daily averages for 31-Jan-2014. Similarly, Figure 3 shows the sea surface height averaged solution for 31-Jan... temperature (upper panel; Celsius) and surface salinity (lower panel) for 31-Jan-2014. 
The refined solution for the Hudson Canyon grid is overlaid on</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015NatSR...515610L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015NatSR...515610L"><span>Heralded high-efficiency quantum repeater with atomic ensembles assisted by faithful single-photon transmission</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Tao; Deng, Fu-Guo</p> <p>2015-10-01</p> <p>The quantum repeater is one of the important building blocks of a long-distance quantum communication network. 
Previous quantum repeaters based on atomic ensembles and linear optical elements achieve a maximal success probability of only 1/2 during the entanglement creation and entanglement swapping procedures, and polarization noise during entanglement distribution degrades the entangled channel. Here we introduce a general interface between a polarized photon and an atomic ensemble trapped in a single-sided optical cavity, and use it to propose a high-efficiency quantum repeater protocol in which robust entanglement distribution is accomplished through stable spatial-temporal entanglement; as a result of cavity quantum electrodynamics, it can in principle create deterministic entanglement between neighboring atomic ensembles in a heralded way. Moreover, the simplified parity-check gate completes the entanglement swapping with unity efficiency, rather than 1/2 as with linear optics. We detail the performance of our protocol with current experimental parameters and show its robustness to the imperfections involved in the reflection process, i.e., detuning and coupling variation. 
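The practical impact of deterministic swapping can be seen with a toy success-probability calculation. This illustrative model counts only the n - 1 swap operations needed to fuse n elementary links and ignores entanglement creation and memory decoherence:

```python
def swap_success(n_segments, p_swap):
    """Probability that all entanglement swaps succeed when fusing
    n_segments elementary links into one end-to-end pair: a nested
    repeater performs n_segments - 1 swaps, each succeeding with p_swap."""
    return p_swap ** (n_segments - 1)

print(swap_success(8, 0.5))  # 0.0078125 (linear optics: 7 swaps at 1/2 each)
print(swap_success(8, 1.0))  # 1.0       (deterministic parity-check gate)
```

Even in this simplified picture, the heralded deterministic gate turns an exponentially small all-swaps success probability into certainty, which is the efficiency advantage the abstract describes.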
These good features make it a useful building block in long distance quantum communication.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JAMES..10..989L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JAMES..10..989L"><span>Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei</p> <p>2018-04-01</p> <p>Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. 
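A single ensemble Kalman update of a scalar parameter, the core operation behind such estimation schemes, can be sketched as follows. This is a textbook-style simplification with one scalar observation; the CM2.1 experiments use a far richer coupled assimilation system:

```python
from statistics import mean

def enkf_parameter_update(params, predicted_obs, observation, obs_var):
    """One ensemble Kalman update of a scalar parameter: the gain is the
    sample covariance between the parameter ensemble and the ensemble of
    model-predicted observations, scaled by the total observation variance."""
    p_bar = mean(params)
    y_bar = mean(predicted_obs)
    cov_py = mean((p - p_bar) * (y - y_bar)
                  for p, y in zip(params, predicted_obs))
    var_y = mean((y - y_bar) ** 2 for y in predicted_obs)
    gain = cov_py / (var_y + obs_var)
    return [p + gain * (observation - y) for p, y in zip(params, predicted_obs)]

# Toy identity model (predicted observation equals the parameter): with a
# near-perfect observation of 2.5, the whole ensemble is pulled onto it.
updated = enkf_parameter_update([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], 2.5, 1e-12)
print([round(p, 3) for p in updated])  # [2.5, 2.5, 2.5]
```

With a larger `obs_var` the gain shrinks and the update becomes more conservative, which is how observation uncertainty tempers the parameter correction.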
This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26502993','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26502993"><span>Heralded high-efficiency quantum repeater with atomic ensembles assisted by faithful single-photon transmission.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Tao; Deng, Fu-Guo</p> <p>2015-10-27</p> <p>Quantum repeater is one of the important building blocks for long distance quantum communication network. The previous quantum repeaters based on atomic ensembles and linear optical elements can only be performed with a maximal success probability of 1/2 during the entanglement creation and entanglement swapping procedures. Meanwhile, the polarization noise during the entanglement distribution process is harmful to the entangled channel created. Here we introduce a general interface between a polarized photon and an atomic ensemble trapped in a single-sided optical cavity, and with which we propose a high-efficiency quantum repeater protocol in which the robust entanglement distribution is accomplished by the stable spatial-temporal entanglement and it can in principle create the deterministic entanglement between neighboring atomic ensembles in a heralded way as a result of cavity quantum electrodynamics. Meanwhile, the simplified parity-check gate makes the entanglement swapping be completed with unity efficiency, other than 1/2 with linear optics. 
We detail the performance of our protocol with current experimental parameters and show its robustness to the imperfections, i.e., detuning and coupling variation, involved in the reflection process. These good features make it a useful building block in long-distance quantum communication. PMID:26502993</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29911678','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29911678"><span>Ensemble stacking mitigates biases in inference of synaptic connectivity.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N</p> <p>2018-01-01</p> <p>A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble.
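The stacking idea (a weighted linear combination of several inference scores, fit against ground-truth labels) can be sketched as below. The three score columns, the train/test split, and the AUC helper are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-connection scores from three inference methods (columns)
# for 200 candidate synapses, plus ground-truth labels (True = real synapse).
truth = rng.random(200) < 0.3
scores = np.column_stack([
    truth + 0.8 * rng.standard_normal(200),  # e.g. a signed mutual-information score
    truth + 1.0 * rng.standard_normal(200),  # e.g. a corrected frequency-based score
    truth + 1.2 * rng.standard_normal(200),  # e.g. a cross-correlation peak score
])

# Stacking: fit linear weights on one half, evaluate on the held-out half.
train, test = np.arange(100), np.arange(100, 200)
X = np.column_stack([scores[train], np.ones(100)])
w, *_ = np.linalg.lstsq(X, truth[train].astype(float), rcond=None)
ensemble = np.column_stack([scores[test], np.ones(100)]) @ w

def auc(y, s):
    """Probability that a random true connection outranks a random false one."""
    pos, neg = s[y], s[~y]
    return (pos[:, None] > neg[None, :]).mean()

print(auc(truth[test], ensemble),
      max(auc(truth[test], scores[test, j]) for j in range(3)))
```

With noisy but complementary scorers, the fitted combination typically ranks true connections at least as well as the best single method, which is the effect the abstract reports.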
Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1407038-statistical-hadronization-microcanonical-ensemble','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1407038-statistical-hadronization-microcanonical-ensemble"><span>Statistical hadronization and microcanonical ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Becattini, F.; Ferroni, L.</p> <p>2004-01-01</p> <p>We present a Monte Carlo calculation of the microcanonical ensemble of the ideal hadron-resonance gas including all known states up to a mass of 1.8 GeV, taking into account quantum statistics. The computing method is a development of a previous one based on a Metropolis Monte Carlo algorithm, with the grand-canonical limit of the multi-species multiplicity distribution as proposal matrix. The microcanonical average multiplicities of the various hadron species are found to converge to the canonical ones for moderately low values of the total energy.
This algorithm opens the way for event generators based on the statistical hadronization model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1997PhRvE..55.3727D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1997PhRvE..55.3727D"><span>Statistical mechanics of Fermi-Pasta-Ulam chains with the canonical ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Demirel, Melik C.; Sayar, Mehmet; Atılgan, Ali R.</p> <p>1997-03-01</p> <p>Low-energy vibrations of a Fermi-Pasta-Ulam-β (FPU-β) chain with 16 repeat units are analyzed with the aid of numerical experiments and the statistical mechanics equations of the canonical ensemble. Constant-temperature numerical integrations are performed by employing the cubic coupling scheme of Kusnezov et al. [Ann. Phys. 204, 155 (1990)]. Very good agreement is obtained between numerical results and theoretical predictions for the probability distributions of the generalized coordinates and momenta both of the chain and of the thermal bath.
It is also shown that the average energy of the chain scales linearly with the bath temperature.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012WRR....48.5520R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012WRR....48.5520R"><span>Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry</p> <p>2012-05-01</p> <p>Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described by a rather standard Gaussian or gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member.
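The BMA definition quoted above (a predictive pdf that is a weighted average of member pdfs) reduces to a few lines of code for Gaussian conditional pdfs; the forecasts, weights, and spreads below are invented numbers for illustration, not values from the paper:

```python
import numpy as np

# Invented example: three bias-corrected member forecasts of discharge,
# BMA weights (posterior model probabilities), and Gaussian spreads.
forecasts = np.array([12.0, 15.0, 13.5])
weights   = np.array([0.5, 0.2, 0.3])   # sum to one over the models
sigmas    = np.array([1.0, 2.0, 1.5])

def bma_pdf(y):
    """BMA predictive density: a weighted mixture of the member pdfs."""
    comp = np.exp(-0.5 * ((y - forecasts) / sigmas) ** 2) \
           / (sigmas * np.sqrt(2.0 * np.pi))
    return float(weights @ comp)

# Sanity check: the mixture integrates to ~1 over a wide grid.
grid = np.linspace(0.0, 30.0, 3001)
mass = sum(bma_pdf(y) for y in grid) * (grid[1] - grid[0])
print(round(mass, 3))
```

The particle-filtering extension in the paper replaces the fixed Gaussians here with an evolving, data-driven conditional pdf per member; the mixture structure stays the same.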
The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013GeoRL..40.3342H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013GeoRL..40.3342H"><span>Are atmospheric surface layer flows ergodic?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Higgins, Chad W.; Katul, Gabriel G.; Froidevaux, Martin; Simeonov, Valentin; Parlange, Marc B.</p> <p>2013-06-01</p> <p>The transposition of atmospheric turbulence statistics from the time domain, as conventionally sampled in field experiments, to the ensemble domain is explained by the so-called ergodic hypothesis. In micrometeorology, this hypothesis assumes that the time average of a measured flow variable represents an ensemble of independent realizations from similar meteorological states and boundary conditions. That is, the averaging duration must be sufficiently long to include a large number of independent realizations of the sampled flow variable so as to represent the ensemble.
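The time-average/ensemble-average equivalence invoked by the ergodic hypothesis can be illustrated with a toy stationary process; the AR(1) series below stands in for a measured turbulence record, and all numbers are illustrative rather than taken from the lidar study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stationary process: an ensemble of AR(1) records. If ergodicity holds,
# the time average of one sufficiently long record matches the average
# across many independent realizations.
n_members, n_steps, phi = 200, 5000, 0.9
x = np.zeros((n_members, n_steps))
for t in range(1, n_steps):
    x[:, t] = phi * x[:, t - 1] + rng.standard_normal(n_members)

burn = 1000                        # discard spin-up so statistics are stationary
ensemble_mean = x[:, -1].mean()    # average across realizations at one instant
time_mean = x[0, burn:].mean()     # average over time for a single realization

print(ensemble_mean, time_mean)    # both should be near the true mean of 0
```

Because successive samples are correlated (phi = 0.9), the effective number of independent realizations in the time average is much smaller than the number of samples, which is exactly the "sufficiently long averaging duration" caveat in the abstract.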
While the validity of the ergodic hypothesis for turbulence has been confirmed in laboratory experiments and in numerical simulations for idealized conditions, evidence for its validity in the atmospheric surface layer (ASL), especially for nonideal conditions, continues to defy experimental efforts. There is some urgency to make progress on this problem given the proliferation of tall-tower scalar concentration networks that are aimed at constraining climate models yet are impacted by nonideal conditions at the land surface. Recent advancements in water vapor concentration lidar measurements that simultaneously sample spatial and temporal series in the ASL are used to investigate the validity of the ergodic hypothesis for the first time. It is shown that ergodicity is valid in a strict sense above uniform surfaces away from abrupt surface transitions. Surprisingly, ergodicity may be used to infer the ensemble concentration statistics of a composite grass-lake system using only water vapor concentration measurements collected above the sharp transition delineating the lake from the grass surface.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ACP....18.5147D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ACP....18.5147D"><span>The influence of internal variability on Earth's energy balance framework and implications for estimating climate sensitivity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dessler, Andrew E.; Mauritsen, Thorsten; Stevens, Bjorn</p> <p>2018-04-01</p> <p>Our climate is constrained by the balance between solar energy absorbed by the Earth and terrestrial energy radiated to space. This energy balance has been widely used to infer equilibrium climate sensitivity (ECS) from observations of 20th-century warming.
Such estimates yield lower values than other methods, and these have been influential in pushing down the consensus ECS range in recent assessments. Here we test the method using a 100-member ensemble of the Max Planck Institute Earth System Model (MPI-ESM1.1) simulations of the period 1850-2005 with known forcing. We calculate ECS in each ensemble member using energy balance, yielding values ranging from 2.1 to 3.9 K. The spread in the ensemble is related to the central assumption in the energy budget framework: that global average surface temperature anomalies are indicative of anomalies in outgoing energy (either of terrestrial origin or reflected solar energy). We find that this assumption is not well supported over the historical temperature record in the model ensemble or more recent satellite observations. We find that framing energy balance in terms of 500 hPa tropical temperature better describes the planet's energy balance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23144222','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23144222"><span>Quantum teleportation between remote atomic-ensemble quantum memories.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei</p> <p>2012-12-11</p> <p>Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a "quantum channel," quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895-1899]. 
Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ∼10⁸ rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as a teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26736882','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26736882"><span>A comparative study of breast cancer diagnosis based on neural network ensemble via improved training algorithms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Azami, Hamed; Escudero, Javier</p> <p>2015-08-01</p> <p>Breast cancer is one of the most common types of cancer in women all over the world. Early diagnosis of this kind of cancer can significantly increase the chances of long-term survival. Since diagnosis of breast cancer is a complex problem, neural network (NN) approaches have been used as a promising solution.
Considering the low speed of the back-propagation (BP) algorithm to train a feed-forward NN, we consider a number of improved NN training algorithms for the Wisconsin breast cancer dataset: BP with momentum, BP with adaptive learning rate, BP with adaptive learning rate and momentum, the Polak-Ribière conjugate gradient algorithm (CGA), Fletcher-Reeves CGA, Powell-Beale CGA, scaled CGA, resilient BP (RBP), one-step secant and quasi-Newton methods. An NN ensemble, which is a learning paradigm to combine a number of NN outputs, is used to improve the accuracy of the classification task. Results demonstrate that NN ensemble-based classification methods have better performance than single-NN-based algorithms. The highest overall average accuracy is 97.68%, obtained by an NN ensemble trained by RBP with the 50%-50% training-test evaluation method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20070022841','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20070022841"><span>Unsteady Velocity Measurements in the NASA Research Low Speed Axial Compressor: Smooth Wall Configuration</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lepicovsky, Jan</p> <p>2007-01-01</p> <p>The report is a collection of experimental unsteady data acquired in the first stage of the NASA Low Speed Axial Compressor in configuration with smooth (solid) wall treatment over the first rotor. The aim of the report is to present a reliable experimental database that can be used for analysis of the compressor flow behavior, and hopefully help with further improvements of compressor CFD codes. All data analysis is strictly restricted to verification of the reliability of the experimental data reported. The report is divided into six main sections.
The first two sections cover the low speed axial compressor, the basic instrumentation, and the in-house-developed methodology of unsteady velocity measurements using a thermo-anemometric split-fiber probe. The next two sections contain experimental data presented as averaged radial distributions for three compressor operating conditions, including the distribution of the total temperature rise over the first rotor, and ensemble averages of unsteady flow data based on the rotor blade passage period. Ensemble averages based on the rotor revolution period, and spectral analysis of unsteady flow parameters, are presented in the last two sections. The report is completed with two appendices where the performance and dynamic response of thermo-anemometric probes are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhA...51q5303L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhA...51q5303L"><span>Random SU(2) invariant tensors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Youning; Han, Muxin; Ruan, Dong; Zeng, Bei</p> <p>2018-04-01</p> <p>SU(2) invariant tensors are states in the (local) SU(2) tensor product representation but invariant under the global group action. They are of importance in the study of loop quantum gravity. A random tensor is an ensemble of tensor states. An average over the ensemble is carried out when computing any physical quantities. The random tensor exhibits a phenomenon known as ‘concentration of measure’, which states that for any bipartition the average value of entanglement entropy of its reduced density matrix is asymptotically the maximal possible as the local dimensions go to infinity.
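The ‘concentration of measure’ statement can be checked numerically for plain Haar-random states (without the SU(2) invariance constraint the paper studies): the entanglement entropy of a single random state is already very close to the maximal ln d_A. The dimensions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# A Haar-random pure state on a dA x dB bipartite system (the plain,
# unconstrained setting; no SU(2) invariance imposed here).
dA, dB = 8, 64
psi = rng.standard_normal(dA * dB) + 1j * rng.standard_normal(dA * dB)
psi /= np.linalg.norm(psi)

# Entanglement entropy of the reduced state on A, from the Schmidt
# coefficients (singular values of the dA x dB coefficient matrix).
s = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False)
p = s ** 2
entropy = float(-np.sum(p * np.log(p)))

# Page's estimate S ~ ln(dA) - dA/(2*dB) is already close to the maximum.
print(entropy, np.log(dA))
```

As dA and dB grow, the fluctuation of this entropy around its average shrinks rapidly, which is the concentration phenomenon the abstract refers to.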
We show that this phenomenon is also true when the average is over the SU(2) invariant subspace instead of the entire space for rank-n tensors in general. It is shown in our earlier work Li et al (2017 New J. Phys. 19 063029) that the subleading correction of the entanglement entropy has a mild logarithmic divergence when n  =  4. In this paper, we show that for n  >  4 the subleading correction is not divergent but a finite number. In some special situation, the number could be even smaller than 1/2, which is the subleading correction of random state over the entire Hilbert space of tensors.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4737209','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4737209"><span>Metainference: A Bayesian inference method for heterogeneous systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele</p> <p>2016-01-01</p> <p>Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. 
To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2000PhDT.......196S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2000PhDT.......196S"><span>Statistical characterization of planar two-dimensional Rayleigh-Taylor mixing layers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sendersky, Dmitry</p> <p>2000-10-01</p> <p>The statistical evolution of a planar, randomly perturbed fluid interface subject to Rayleigh-Taylor instability is explored through numerical simulation in two space dimensions. The data set, generated by the front-tracking code FronTier, is highly resolved and covers a large ensemble of initial perturbations, allowing a more refined analysis of closure issues pertinent to the stochastic modeling of chaotic fluid mixing. We closely approach a two-fold convergence of the mean two-phase flow: convergence of the numerical solution under computational mesh refinement, and statistical convergence under increasing ensemble size. Quantities that appear in the two-phase averaged Euler equations are computed directly and analyzed for numerical and statistical convergence. 
Bulk averages show a high degree of convergence, while interfacial averages are convergent only in the outer portions of the mixing zone, where there is a coherent array of bubble and spike tips. Comparison with the familiar bubble/spike penetration law h = αAgt² is complicated by the lack of scale invariance, the inability to carry the simulations to late time, the increasing Mach numbers of the bubble/spike tips, and sensitivity to the method of data analysis. Finally, we use the simulation data to analyze some constitutive properties of the mixing process.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EPJWC.14002023S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EPJWC.14002023S"><span>Wave propagation of spectral energy content in a granular chain</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shrivastava, Rohit Kumar; Luding, Stefan</p> <p>2017-06-01</p> <p>A mechanical wave is propagation of vibration with transfer of energy and momentum. Understanding the spectral energy characteristics of a wave propagating through disordered granular media can assist in understanding the overall properties of wave propagation through inhomogeneous materials like soil. The study of these properties is aimed at modeling wave propagation for oil, mineral or gas exploration (seismic prospecting) or non-destructive testing of the internal structure of solids. The focus is on the total energy content of a pulse propagating through an idealized one-dimensional discrete particle system like a mass-disordered granular chain, which allows understanding the energy attenuation due to disorder since it isolates the longitudinal P-wave from shear or rotational modes. It is observed that stronger disorder leads to faster attenuation of the signal.
An ordered granular chain exhibits ballistic propagation of energy, whereas a disordered granular chain exhibits more diffusive-like propagation, which eventually becomes localized at long times. For obtaining mean-field macroscopic/continuum properties, ensemble averaging has been used; however, such an ensemble-averaged spectral energy response does not resolve multiple scattering, leading to loss of information and indicating the need for a different framework for micro-macro averaging.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/ED062362.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/ED062362.pdf"><span>Music: Chorus, Junior.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Owen, Joan; And Others</p> <p></p> <p>A music course of instruction in junior chorus, to develop students' performance skills individually and in ensemble, is described. A prerequisite for pupils is the ability to read music.
Outlined are: the course description; enrollment guidelines; study objectives; course content; procedures; resources for pupils and teachers; and the assessment.…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015EGUGA..1711727H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015EGUGA..1711727H"><span>Trends in the predictive performance of raw ensemble weather forecasts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hemri, Stephan; Scheuerer, Michael; Pappenberger, Florian; Bogner, Konrad; Haiden, Thomas</p> <p>2015-04-01</p> <p>Over the last two decades the paradigm in weather forecasting has shifted from being deterministic to probabilistic. Accordingly, numerical weather prediction (NWP) models have been run increasingly as ensemble forecasting systems. The goal of such ensemble forecasts is to approximate the forecast probability distribution by a finite sample of scenarios. Global ensemble forecast systems, like the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble, are prone to probabilistic biases, and are therefore not reliable. They particularly tend to be underdispersive for surface weather parameters. Hence, statistical post-processing is required in order to obtain reliable and sharp forecasts. In this study we apply statistical post-processing to ensemble forecasts of near-surface temperature, 24-hour precipitation totals, and near-surface wind speed from the global ECMWF model. Our main objective is to evaluate the evolution of the difference in skill between the raw ensemble and the post-processed forecasts. The ECMWF ensemble is under continuous development, and hence its forecast skill improves over time. Parts of these improvements may be due to a reduction of probabilistic bias. 
Thus, we first hypothesize that the gain by post-processing decreases over time. Based on ECMWF forecasts from January 2002 to March 2014 and corresponding observations from globally distributed stations we generate post-processed forecasts by ensemble model output statistics (EMOS) for each station and variable. Parameter estimates are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over rolling training periods that consist of the n days preceding the initialization dates. Given the higher average skill in terms of CRPS of the post-processed forecasts for all three variables, we analyze the evolution of the difference in skill between raw ensemble and EMOS forecasts. The fact that the gap in skill remains almost constant over time, especially for near-surface wind speed, suggests that improvements to the atmospheric model have an effect quite different from what calibration by statistical post-processing is doing. That is, they are increasing potential skill. Thus this study indicates that (a) further model development is important even if one is just interested in point forecasts, and (b) statistical post-processing is important because it will keep adding skill in the foreseeable future.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015EGUGA..1713741V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015EGUGA..1713741V"><span>Using ensembles in water management: forecasting dry and wet episodes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>van het Schip-Haverkamp, Tessa; van den Berg, Wim; van de Beek, Remco</p> <p>2015-04-01</p> <p>Extreme weather situations as droughts and extensive precipitation are becoming more frequent, which makes it more important to obtain accurate weather forecasts for the short and long term. 
Ensembles can provide a solution in terms of scenario forecasts. MeteoGroup uses ensembles in a new forecasting technique which presents a number of weather scenarios for a dynamical water management project, called Water-Rijk, in which water storage and water retention play a large role. The Water-Rijk is part of Park Lingezegen, which is located between Arnhem and Nijmegen in the Netherlands. In collaboration with the University of Wageningen, Alterra and Eijkelkamp, a forecasting system is being developed for this area that can provide water boards with a number of weather and hydrology scenarios in order to assist in the decision whether or not water retention or water storage is necessary in the near future. In order to make a forecast for drought and extensive precipitation, the difference 'precipitation - evaporation' is used as a measurement of drought in the weather forecasts. In case of an upcoming drought this difference will take larger negative values. In case of a wet episode, this difference will be positive. The Makkink potential evaporation is used, which gives the most accurate potential evaporation values during the summer, when evaporation plays an important role in the availability of surface water. Scenarios are determined by reducing the large number of forecasts in the ensemble to a number of averaged members, each with its own likelihood of occurrence. For the Water-Rijk project, five scenario forecasts are calculated: extreme dry, dry, normal, wet and extreme wet. These scenarios are constructed for two forecasting periods, each using its own ensemble technique: up to 48 hours ahead and up to 15 days ahead. The 48-hour forecast uses an ensemble constructed from forecasts of multiple high-resolution regional models: UKMO's Euro4 model, the ECMWF model, WRF and Hirlam. Using multiple model runs and additional post-processing, an ensemble can be created from non-ensemble models.
The 15-day forecast uses the ECMWF Ensemble Prediction System forecast, from which scenarios can be deduced directly. A combination of the ensembles from the two forecasting periods is used in order to have the highest possible resolution of the forecast for the first 48 hours, followed by the lower-resolution long-term forecast.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhyA..492..941J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhyA..492..941J"><span>Symmetry associated with symmetry break: Revisiting ants and humans escaping from multiple-exit rooms</span></a></p> <p><a target="_blank" rel="noopener noreferrer"
    NASA Astrophysics Data System (ADS)

    Ji, Q.; Xin, C.; Tang, S. X.; Huang, J. P.

    2018-02-01

    Crowd panic has caused massive injuries and deaths throughout the world, so understanding it is particularly important. It is now common knowledge that crowd panic induces a "symmetry break" in which some exits are jammed while others are underutilized. Here we show, by experiment, simulation, and theory, that a class of symmetry patterns appears for ants and humans escaping from multiple-exit rooms even while the symmetry break exists: the ratio between the ensemble-averaged numbers of ants or humans escaping through different exits equals the ratio between the widths of the exits. The mechanism lies in the heterogeneous preferences of agents with limited information for achieving the Nash equilibrium. This work offers new insights into improving public safety, because large public areas are always equipped with multiple exits, and it also provides an ensemble-averaging method for seeking symmetry associated with symmetry breaking.

402. Using the fast Fourier transform in binding free energy calculations

    PubMed

    Nguyen, Trung Hai; Zhou, Huan-Xiang; Minh, David D. L.

    2018-04-30

    According to implicit ligand theory, the standard binding free energy is an exponential average of the binding potential of mean force (BPMF), itself an exponential average of the interaction energy between the unbound ligand ensemble and a rigid receptor.
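The "exponential average" named above is a Boltzmann average, B = -kT ln⟨exp(-Ψ/kT)⟩ over sampled interaction energies Ψ. A minimal numerically stable sketch; the energy samples and kT value are illustrative, not from the paper.

```python
# Boltzmann (exponential) average of interaction energies, stabilized with
# a log-sum-exp shift so large negative energies do not overflow exp().
import math

def exponential_average(energies, kT=0.593):  # kT in kcal/mol near 298 K
    """Return -kT * ln(mean(exp(-E/kT)))."""
    scaled = [-e / kT for e in energies]
    m = max(scaled)
    log_mean = m + math.log(sum(math.exp(s - m) for s in scaled) / len(scaled))
    return -kT * log_mean

# Hypothetical receptor-ligand interaction energies (kcal/mol); the most
# favorable (lowest) samples dominate the average.
samples = [-8.1, -7.6, -2.0, 1.5, 3.2]
print(round(exponential_average(samples), 3))
```

The result always lies between min(E) and min(E) + kT ln(n), which is why rare low-energy configurations control the estimate.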
    Here, we use the fast Fourier transform (FFT) to efficiently evaluate BPMFs by calculating interaction energies when rigid ligand configurations from the unbound ensemble are discretely translated across rigid receptor conformations. Results for standard binding free energies between T4 lysozyme and 141 small organic molecules are in good agreement with previous alchemical calculations based on (1) a flexible complex (R ≈ 0.9 for 24 systems) and (2) a flexible ligand with multiple rigid receptor configurations (R ≈ 0.8 for 141 systems). While the FFT is routinely used for molecular docking, to our knowledge this is the first time the algorithm has been used for rigorous binding free energy calculations.

403. A two-dimensional numerical study of the flow inside the combustion chambers of a motored rotary engine

    NASA Technical Reports Server (NTRS)

    Shih, T. I. P.; Yang, S. L.; Schock, H. J.

    1986-01-01

    A numerical study was performed to investigate the unsteady, multidimensional flow inside the combustion chambers of an idealized, two-dimensional rotary engine under motored conditions. The study was based on the time-dependent, two-dimensional, density-weighted, ensemble-averaged conservation equations of mass, species, momentum, and total energy, valid for two-component ideal-gas mixtures. The ensemble-averaged conservation equations were closed by a K-epsilon model of turbulence, modified to account for some of the effects of compressibility, streamline curvature, low Reynolds number, and preferential stress dissipation.
    Numerical solutions to the conservation equations were obtained by the highly efficient implicit-factored method of Beam and Warming. The grid system needed to obtain solutions was generated by an algebraic grid-generation technique based on transfinite interpolation. Results of the numerical study are presented in graphical form, illustrating the flow patterns during intake, compression, gaseous fuel injection, expansion, and exhaust.

404. Regional patterns of future runoff changes from Earth system models constrained by observation

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Zhou, Feng; Piao, Shilong; Huang, Mengtian; Chen, Anping; Ciais, Philippe; Li, Yue; Lian, Xu; Peng, Shushi; Zeng, Zhenzhong

    2017-06-01

    In the recent Intergovernmental Panel on Climate Change assessment, multimodel ensembles (arithmetic model averaging, AMA) were constructed with equal weights given to Earth system models, without considering each model's skill at reproducing current conditions. Here we use Bayesian model averaging (BMA) to construct a weighted model ensemble for runoff projections, giving higher weights to models with better performance in estimating historical decadal mean runoff. Using the BMA method, we find that by the end of this century the increase of global runoff (9.8 ± 1.5%) under Representative Concentration Pathway 8.5 is significantly lower than the AMA estimate (12.2 ± 1.3%). BMA gives a less severe runoff increase than AMA at northern high latitudes and a more severe decrease in Amazonia; the runoff decrease in Amazonia is stronger than the intermodel difference.
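The contrast between equal-weight (AMA-style) and skill-weighted (BMA-style) ensemble means can be sketched as below. The projections, error scores, and softmax weighting are invented for illustration; they are a stand-in for, not a reproduction of, the paper's actual BMA likelihoods.

```python
# Equal-weight vs performance-weighted ensemble means. Weights are a
# softmax of a skill score (negative historical error); all numbers are
# hypothetical.
import math

def weighted_ensemble(projections, errors, tau=1.0):
    """Return (equal_weight_mean, skill_weighted_mean).
    errors: historical |model - observation| per model (lower = better)."""
    raw = [math.exp(-e / tau) for e in errors]
    z = sum(raw)
    weights = [r / z for r in raw]
    ama = sum(projections) / len(projections)
    weighted = sum(w * p for w, p in zip(weights, projections))
    return ama, weighted

# Hypothetical % runoff changes and historical errors for four models;
# here the better-performing models happen to project a smaller increase.
proj = [14.0, 12.0, 10.0, 6.0]
err = [3.0, 2.5, 0.5, 0.8]
print(weighted_ensemble(proj, err))
```

Because the low-error models carry most of the weight, the weighted mean drops below the arithmetic mean, the same qualitative effect the paper reports for global runoff.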
    The intermodel spread in runoff changes is caused not only by precipitation differences among models but also by evapotranspiration differences at the high northern latitudes.

405. Infinitely dilute partial molar properties of proteins from computer simulation

    PubMed

    Ploetz, Elizabeth A.; Smith, Paul E.

    2014-11-13

    A detailed understanding of temperature and pressure effects on an infinitely dilute protein's conformational equilibrium requires knowledge of the corresponding infinitely dilute partial molar properties. Established molecular dynamics methodologies generally have not provided a way to calculate these properties without either a loss of thermodynamic rigor, the introduction of nonunique parameters, or a loss of information about which solute conformations specifically contributed to the output values. Here we implement a simple method that is thermodynamically rigorous and possesses none of the above disadvantages, and we report on its feasibility and computational demands. We calculate infinitely dilute partial molar properties for two proteins and attempt to distinguish the thermodynamic differences between a native and a denatured conformation of a designed miniprotein. We conclude that simple ensemble-average properties can be calculated with very reasonable amounts of computational power.
    In contrast, properties corresponding to fluctuating quantities are computationally demanding to calculate precisely, although they can be obtained more easily by following the temperature and/or pressure dependence of the corresponding ensemble averages.

406. A frequency domain analysis of respiratory variations in the seismocardiogram signal

    PubMed

    Pandia, Keya; Inan, Omer T.; Kovacs, Gregory T. A.

    2013-01-01

    The seismocardiogram (SCG) signal, traditionally measured using a chest-mounted accelerometer, contains low-frequency (0-100 Hz) cardiac vibrations that can be used to derive diagnostically relevant information about cardiovascular and cardiopulmonary health. This work investigates the effects of respiration on the frequency-domain characteristics of SCG signals measured from 18 healthy subjects. The 0-100 Hz SCG bandwidth of interest was subdivided into 5 Hz and 10 Hz frequency bins, and the spectral energy in corresponding bins was compared across SCG beats measured during three respiratory conditions: inspiration, expiration, and apnea. Statistically significant differences were observed between the power in ensemble-averaged inspiratory and expiratory SCG beats, and between ensemble-averaged inspiratory and apneic beats, across the 18 subjects for multiple frequency bins in the 10-40 Hz range.
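The binning of spectral energy into fixed-width frequency bins, as described above, can be sketched with a plain DFT. The synthetic two-tone "beat," sampling rate, and 10 Hz bin width are illustrative assumptions, not the paper's data.

```python
# Sum one-sided DFT power into fixed-width frequency bins over 0-100 Hz.
# Pure-stdlib O(n^2) DFT, adequate for a short illustrative signal.
import math

def band_powers(signal, fs, bin_hz=10.0, fmax=100.0):
    """Return power summed into [0,bin_hz), [bin_hz,2*bin_hz), ... bins."""
    n = len(signal)
    powers = [0.0] * int(fmax / bin_hz)
    for k in range(n // 2):                # one-sided spectrum
        freq = k * fs / n
        if freq >= fmax:
            break
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        powers[int(freq // bin_hz)] += (re * re + im * im) / n ** 2
    return powers

fs, n = 200.0, 200                          # 1 s of data at 200 Hz
beat = [math.sin(2 * math.pi * 15 * t / fs) +
        0.5 * math.sin(2 * math.pi * 35 * t / fs) for t in range(n)]
p = band_powers(beat, fs)
print([round(x, 3) for x in p[:5]])         # energy lands in the 10-20 and 30-40 Hz bins
```

Comparing such bin vectors between respiratory conditions (e.g. inspiratory vs expiratory ensemble-averaged beats) is the kind of test the paper applies per bin.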
    Accordingly, the spectral-analysis methods described in this paper could provide complementary and improved classification of respiratory modulations in the SCG signal over and above time-domain SCG analysis methods.
408. Effects of quantum coherence and interference in atoms near nanoparticles

    NASA Astrophysics Data System (ADS)

    Dhayal, Suman; Rostovtsev, Yuri V.

    2016-04-01

    Optical properties of ensembles of realistic quantum emitters coupled to plasmonic systems are studied using models that take the full atomic geometry into account. In particular, coherent effects such as the formation of "dark states," optical pumping, coherent Raman scattering, and stimulated Raman adiabatic passage (STIRAP) are revisited in the presence of metallic nanoparticles. It is shown that dark states are still formed but have a more complicated structure, and that optical pumping and STIRAP cannot be employed in the vicinity of plasmonic nanostructures. There is also a large difference between the behavior of the local atomic polarization and the atomic polarization averaged over an ensemble of atoms homogeneously distributed near the nanoparticles: the average polarization closely follows the polarization induced by the external field, while the local polarization can differ from it substantially.
    This is important for the excitation of single molecules; for example, different components of scattering from single molecules can be used for their efficient detection.

409. Neural signatures of attention: insights from decoding population activity patterns

    PubMed

    Sapountzis, Panagiotis; Gregoriou, Georgia G.

    2018-01-01

    Understanding brain function and the computations that individual neurons and neuronal ensembles carry out during cognitive functions is one of the biggest challenges in neuroscientific research. To this end, invasive electrophysiological studies have provided important insights by recording the activity of single neurons in behaving animals. To average out noise, responses are typically averaged across repetitions and across neurons that are usually recorded on different days. However, the brain makes decisions on short time scales, based on limited exposure to sensory stimulation, by interpreting responses of populations of neurons on a moment-to-moment basis. Recent studies have employed machine-learning algorithms in attention and other cognitive tasks to decode the information content of distributed activity patterns across neuronal ensembles on a single-trial basis. Here, we review results from studies that have used pattern-classification decoding approaches to explore the population representation of cognitive functions. These studies have offered significant insights into population coding mechanisms.
    Moreover, we discuss how such advances can aid the development of cognitive brain-computer interfaces.

410. Unimodular lattice triangulations as small-world and scale-free random graphs

    NASA Astrophysics Data System (ADS)

    Krüger, B.; Schmidt, E. M.; Mecke, K.

    2015-02-01

    Real-world networks, such as social-relation or world-wide-web graphs, exhibit both small-world and scale-free behaviour. We interpret lattice triangulations as planar graphs by identifying triangulation vertices with graph nodes and one-dimensional simplices with edges. Since these triangulations are ergodic with respect to a certain Pachner flip, applying different Monte Carlo simulations enables us to calculate average properties of random triangulations, as well as canonical ensemble averages, using an energy functional that is approximately the variance of the degree distribution. All considered triangulations have clustering coefficients comparable with real-world graphs; for the canonical ensemble there are inverse temperatures with small shortest path length independent of system size. Tuning the inverse temperature to a quasi-critical value leads to an indication of scale-free behaviour for degrees k ≥ 5.
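The canonical ensemble average used above, with an energy roughly equal to the variance of the degree distribution, can be illustrated on a toy state space small enough to enumerate exactly. The "degree sequences" and inverse temperatures below are invented; real triangulation ensembles are far too large to enumerate and require Monte Carlo sampling, as the paper does.

```python
# Canonical ensemble average <A> = sum_s A(s) exp(-beta E(s)) / Z over an
# enumerated toy state space; E is the variance of a degree sequence.
import math
from statistics import pvariance

def canonical_average(states, energy, observable, beta):
    weights = [math.exp(-beta * energy(s)) for s in states]
    z = sum(weights)
    return sum(w * observable(s) for w, s in zip(weights, states)) / z

# Toy "graphs" represented only by their degree sequences (hypothetical).
states = [(4, 4, 4, 4), (3, 4, 4, 5), (2, 4, 5, 5), (2, 3, 5, 6)]
energy = lambda s: pvariance(s)        # variance of the degrees
max_degree = lambda s: max(s)          # an example observable

for beta in (0.0, 2.0):
    print(beta, round(canonical_average(states, energy, max_degree, beta), 3))
```

At beta = 0 all states weigh equally; at large beta the regular (zero-variance) state dominates, pulling the average observable toward its value.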
    Using triangulations as a random-graph model can improve the understanding of real-world networks, especially when the actual distance between the embedded nodes becomes important.

411. Integral transforms of the quantum mechanical path integral: Hit function and path-averaged potential

    PubMed

    Edwards, James P.; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel

    2018-04-01

    We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories, measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).

412. Entropy-variation with resistance in a quantized RLC circuit derived by the generalized Hellmann-Feynman theorem

    NASA Astrophysics Data System (ADS)

    Fan, Hong-Yi; Xu, Xue-Xiang; Hu, Li-Yun

    2010-06-01

    By virtue of the generalized Hellmann-Feynman theorem for the ensemble average, we obtain the internal energy and the average energy consumed by the resistance R in a quantized
resistance-inductance-capacitance (RLC) electric circuit. We also calculate the variation of entropy with R and derive the relation between the two; the figures show that the entropy indeed increases with increasing R.

414. The Elephant in the Room

    ERIC Educational Resources Information Center

    Williams, David A.

    2011-01-01

    Practically all teenagers find pleasure in music, yet the majority are not involved in traditional school music ensembles.
    College requirements, the quest for high grade point averages, scheduling conflicts, uncooperative counselors, block schedules, students with too many competing interests, or the need to work may limit participation in music…

415. Estimates of peak flood discharge for 21 sites in the Front Range in Colorado in response to extreme rainfall in September 2013

    USGS Publications Warehouse

    Moody, John A.

    2016-03-21

    Extreme rainfall in September 2013 caused destructive floods in part of the Front Range in Boulder County, Colorado. Erosion from these floods cut roads and isolated mountain communities for several weeks, and large volumes of eroded sediment were deposited downstream, causing further damage to property and infrastructure. Estimates of peak discharge for these floods and the associated rainfall characteristics will aid land and emergency managers in the future. Several methods (an ensemble) were used to estimate peak discharge at 21 measurement sites; the ensemble average and standard deviation provided a final estimate of peak discharge and its uncertainty. Because of the substantial erosion and deposition of sediment, an additional estimate of peak discharge was made based on the flow resistance caused by sediment-transport effects. Although the synoptic-scale rainfall was extreme for these mountains (annual exceedance probability greater than 1,000 years, about 450 millimeters in 7 days), the resulting peak discharges were not.
    Ensemble-average peak discharges per unit drainage area (unit peak discharge, Qu) for the floods were 1-2 orders of magnitude less than those for the maximum worldwide floods with similar drainage areas, and had a wide range of values (0.21-16.2 cubic meters per second per square kilometer [m3 s-1 km-2]). One possible explanation for these differences is that the band of high-accumulation, high-intensity rainfall was narrow (about 50 kilometers wide) and oriented nearly perpendicular to the predominant drainage pattern of the mountains, so entire drainage areas were not subjected to the same range of extreme rainfall. A linear relation (coefficient of determination R2 = 0.69) between Qu and the rainfall intensity ITc (computed for a time interval equal to the time of concentration for the drainage area upstream from each site) had the form Qu = 0.26(ITc - 8.6), where the coefficient 0.26 can be considered an area-averaged peak runoff coefficient for the September 2013 rainstorms in Boulder County, and 8.6 millimeters per hour is the rainfall intensity corresponding to a soil-moisture threshold that controls the soil infiltration rate.
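The fitted relation Qu = 0.26(ITc - 8.6) transcribes directly into code. The coefficients come from the report above; clamping to zero below the 8.6 mm/h threshold and the example basin area are our added assumptions.

```python
# Unit peak discharge from rainfall intensity over the time of
# concentration, per the fitted relation Qu = 0.26 * (ITc - 8.6).

RUNOFF_COEFF = 0.26          # area-averaged peak runoff coefficient
INTENSITY_THRESHOLD = 8.6    # mm/h; soil-moisture/infiltration threshold

def unit_peak_discharge(intensity_mm_per_h):
    """Unit peak discharge Qu in m^3 s^-1 km^-2 for a given ITc (mm/h);
    clamped to zero below the threshold (an assumption of this sketch)."""
    return max(0.0, RUNOFF_COEFF * (intensity_mm_per_h - INTENSITY_THRESHOLD))

def peak_discharge(intensity_mm_per_h, drainage_area_km2):
    """Scale Qu by drainage area to get a peak discharge in m^3/s."""
    return unit_peak_discharge(intensity_mm_per_h) * drainage_area_km2

print(unit_peak_discharge(30.0))     # mid-range storm intensity
print(peak_discharge(30.0, 12.5))    # hypothetical 12.5 km^2 basin
```
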
    Peak discharge estimates based on sediment-transport effects were generally less than the ensemble average. They indicate that sediment transport may be a mechanism that limits velocities in these types of mountain streams, such that the Froude number fluctuates about 1, suggesting that this type of floodflow can be approximated as critical flow.

416. Ensemble hydrological forecast efficiency evolution over various issue dates and lead-time: case study for the Cheboksary reservoir (Volga River)

    NASA Astrophysics Data System (ADS)

    Gelfan, Alexander; Moreido, Vsevolod

    2017-04-01

    Ensemble hydrological forecasting describes the uncertainty caused by variability of meteorological conditions in the river basin over the forecast lead time. In snowmelt-dependent river basins, another significant source of uncertainty is the variability of the basin's initial conditions (snow water equivalent, soil moisture content, etc.) prior to the forecast issue date. Accurate long-term hydrological forecasts are most crucial for large water-management systems such as the Cheboksary reservoir (catchment area 374,000 sq. km) on the Middle Volga in Russia. Accurate forecasts of inflow volume, maximum discharge, and other flow characteristics are of great value for this basin, especially before the beginning of the spring freshet season, which lasts here from April to June. The semi-distributed hydrological model ECOMAG was used to develop a long-term ensemble forecast of daily water inflow into the Cheboksary reservoir.
    To describe the variability of meteorological conditions and construct an ensemble of possible weather scenarios for the forecast lead time, two approaches were applied: the first uses the 50 weather scenarios observed in previous years (similar to the ensemble streamflow prediction (ESP) procedure); the second uses 1,000 synthetic scenarios simulated by a stochastic weather generator. We investigated the evolution of forecast-uncertainty reduction, expressed as forecast efficiency, over consecutive forecast issue dates and lead times, analysing the Nash-Sutcliffe efficiency of inflow hindcasts for 1982-2016 issued from 1 March onwards at 15-day intervals for lead times of 1 to 6 months. This yields a forecast-efficiency matrix of issue date versus lead time that characterizes the predictability of the basin; the matrix was constructed separately for the observed and synthetic weather ensembles.

417. Characterizing rare-event property distributions via replicate molecular dynamics simulations of proteins

    PubMed

    Krishnan, Ranjani; Walton, Emily B.; Van Vliet, Krystyn J.

    2009-11-01

    As computational resources increase, molecular dynamics simulations of biomolecules are becoming an increasingly informative complement to experimental studies. In particular, it has now become feasible to use multiple initial molecular configurations to generate an ensemble of replicate production-run simulations that allows for more complete characterization of rare events such as ligand-receptor unbinding.
    However, there are currently no explicit guidelines for selecting an ensemble of initial configurations for replicate simulations. Here, we use clustering analysis and steered molecular dynamics simulations to demonstrate that the configurational changes accessible in molecular dynamics simulations of biomolecules do not necessarily correlate with observed rare-event properties, which informs the selection of a representative set of initial configurations. We also employ statistical analysis to identify the minimum number of replicate simulations required to sufficiently sample a given biomolecular property distribution. Together, these results suggest a general procedure for generating an ensemble of replicate simulations that will maximize accurate characterization of rare-event property distributions in biomolecules.

418. Exact Results for the Nonergodicity of d-Dimensional Generalized Lévy Walks

    NASA Astrophysics Data System (ADS)

    Albers, Tony; Radons, Günter

    2018-03-01

    We provide analytical results for the ensemble-averaged and time-averaged squared displacement, and the randomness of the latter, in the full two-dimensional parameter space of the d-dimensional generalized Lévy walk introduced by Shlesinger et al. [Phys. Rev. Lett. 58, 1100 (1987), 10.1103/PhysRevLett.58.1100].
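The two quantities contrasted above can be sketched on ordinary 1-D random walks, a stand-in process for which the ensemble average and the time average agree; it is precisely this agreement (ergodicity) that breaks down for the generalized Lévy walk.

```python
# Ensemble-averaged vs time-averaged squared displacement for an ensemble
# of simple +/-1 random walks (illustrative stand-in process).
import random

def ea_msd(trajs, t):
    """Ensemble average of (x(t) - x(0))^2 over trajectories."""
    return sum((tr[t] - tr[0]) ** 2 for tr in trajs) / len(trajs)

def ta_msd(traj, lag):
    """Time-averaged squared displacement at a given lag along one path."""
    n = len(traj)
    return sum((traj[i + lag] - traj[i]) ** 2 for i in range(n - lag)) / (n - lag)

random.seed(1)
steps = 1000
trajs = []
for _ in range(200):
    x, path = 0, [0]
    for _ in range(steps):
        x += random.choice((-1, 1))
        path.append(x)
    trajs.append(path)

lag = 50
print(ea_msd(trajs, lag))                              # ~ lag for simple walks
print(sum(ta_msd(tr, lag) for tr in trajs) / len(trajs))  # ~ the same here
```

For a Lévy walk in the nonergodic regions of its parameter plane these two numbers scale differently with lag, which is the paper's central point.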
    In certain regions of the parameter plane we obtain surprising results, such as the divergence of the mean-squared displacements, the divergence of the ergodicity-breaking parameter despite a finite mean-squared displacement, and subdiffusion that appears superdiffusive when one considers only time averages.

419. Dynamical downscaling of regional climate over eastern China using RSM with multiple physics scheme ensembles

    NASA Astrophysics Data System (ADS)

    Peishu, Zong; Jianping, Tang; Shuyu, Wang; Lingyun, Xie; Jianwei, Yu; Yunqian, Zhu; Xiaorui, Niu; Chao, Li

    2017-08-01

    The parameterization of physical processes is one of the critical elements in properly simulating the regional climate over eastern China, and detailed analyses of the effect of physical parameterization schemes on regional climate simulation are essential for providing more reliable regional climate-change information. In this paper, we evaluate 25-year (1983-2007) summer-monsoon climate characteristics of precipitation and surface air temperature simulated by the regional spectral model (RSM) with different physical schemes; ensemble results using the reliability ensemble averaging (REA) method are also assessed. The results show that the RSM can reproduce the spatial patterns, variations, and temporal tendency of surface air temperature and precipitation over eastern China, and that it tends to predict climatology better over the Yangtze River basin and South China. The impact of different physical schemes on the RSM simulations is also investigated.
    Generally, the CLD3 cloud-water prediction scheme tends to produce larger precipitation because it overestimates low-level moisture, and the systematic biases of the KF2 cumulus scheme are larger than those of the RAS scheme. The scale-selective bias correction (SSBC) method improves the simulated temporal and spatial characteristics of surface air temperature and precipitation as well as the simulated circulation. The REA ensemble results show significant improvement in the simulated temperature and precipitation distributions, with much higher correlation coefficients and lower root-mean-square errors; the REA result for selected experiments is better than that for non-selected experiments, indicating the necessity of choosing good ensemble samples for ensemble construction.

420. Rainfall estimation with TFR model using Ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Asyiqotur Rohmah, Nabila; Apriliani, Erna

    2018-03-01

    Rainfall fluctuations can affect the wider environment and are correlated with economic activity and public health. The increase in global average temperature is driven by increasing CO2 in the atmosphere, which causes climate change, while forests act as carbon sinks that help maintain the carbon cycle and mitigate climate change. Climate change expressed as deviations in rainfall intensity can affect the economy of a region, or even of countries, which motivates research on rainfall associated with a forested area.
In this study, the mathematical model used is the TFR (temperature, forest cover, and rainfall) model, which describes global temperature, forest cover, and seasonal rainfall. The model is first discretized and then estimated with the Ensemble Kalman Filter (EnKF). The results show that the more ensemble members are used in the estimation, the better the result. The accuracy of the simulation is also influenced by the choice of measurement variable: variables for which measurement data are available are estimated more accurately.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhRvD..93i4010N','NASAADS'); return false;" 
href="http://adsabs.harvard.edu/abs/2016PhRvD..93i4010N"><span>Regge spectra of excited mesons, harmonic confinement, and QCD vacuum structure</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nedelko, Sergei N.; Voronin, Vladimir E.</p> <p>2016-05-01</p> <p>An approach to the QCD vacuum as a medium describable in terms of a statistical ensemble of almost-everywhere homogeneous Abelian (anti-)self-dual gluon fields is briefly reviewed. These fields play the role of the confining medium for color-charged fields and underlie the mechanism by which the chiral SU_L(N_f)×SU_R(N_f) and U_A(1) symmetries are realized. A hadronization formalism based on this ensemble leads to a manifestly defined quantum effective meson action. Strong, electromagnetic, and weak interactions of mesons are represented in the action in terms of nonlocal n-point interaction vertices given by the quark-gluon loops averaged over the background ensemble. New systematic results for the mass spectrum and decay constants of radially excited light mesons, heavy-light mesons, and heavy quarkonia are presented. 
The interrelation between the present approach, models based on ideas of soft-wall anti-de Sitter/QCD, light-front holographic QCD, and the picture of harmonic confinement is outlined.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26819595','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26819595"><span>A Fuzzy Integral Ensemble Method in Visual P300 Brain-Computer Interface.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cavrini, Francesco; Bianchi, Luigi; Quitadamo, Lucia Rita; Saggio, Giovanni</p> <p>2016-01-01</p> <p>We evaluate the applicability of classifier combination based on fuzzy measures and integrals to electroencephalography-based Brain-Computer Interfaces (BCIs). In particular, we present an ensemble method that can be applied to a variety of systems and evaluate it in the context of a visual P300-based BCI. Offline analysis of data from five subjects suggests that the proposed classification strategy is suitable for BCI. Indeed, the achieved performance is significantly greater than the average of the base classifiers and, broadly speaking, similar to that of the best one. The proposed methodology thus allows the realization of systems that can be used by different subjects without the need for a preliminary configuration phase in which the best classifier for each user has to be identified. 
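As a concrete illustration of the kind of fuzzy-integral fusion this entry describes, here is a minimal discrete Choquet integral in Python. The classifier names, scores, and the additive fuzzy measure in the example are hypothetical placeholders; the paper's actual measures are derived from training data.

```python
def choquet_integral(scores, g):
    """Fuse per-classifier confidence scores with a discrete Choquet integral.

    scores: dict classifier_name -> confidence in [0, 1]
    g: fuzzy measure mapping a frozenset of classifier names to [0, 1],
       monotone, with g(set of all names) = 1.
    """
    # Sort scores in descending order; the score below the last one is 0.
    items = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    tail = [v for _, v in items[1:]] + [0.0]
    total, coalition = 0.0, frozenset()
    for (name, val), nxt in zip(items, tail):
        coalition = coalition | {name}       # top-i coalition A_i
        total += (val - nxt) * g(coalition)  # (x_(i) - x_(i+1)) * g(A_i)
    return total
```

With scores {lda: 0.9, svm: 0.6, knn: 0.2} and an additive measure built from weights {0.5, 0.3, 0.2}, the integral reduces to the weighted mean 0.67; non-additive measures let coalitions of classifiers count for more (or less) than the sum of their individual weights, which is the point of the fuzzy-integral approach.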
Moreover, the ensemble is often capable of detecting uncertain situations and turning them from misclassifications into abstentions, thereby improving the level of safety in BCI for environmental or device control.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29795785','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29795785"><span>Conformational ensembles of RNA oligonucleotides from integrating NMR and molecular simulations.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bottaro, Sandro; Bussi, Giovanni; Kennedy, Scott D; Turner, Douglas H; Lindorff-Larsen, Kresten</p> <p>2018-05-01</p> <p>RNA molecules are key players in numerous cellular processes and are characterized by a complex relationship between structure, dynamics, and function. Despite their apparent simplicity, RNA oligonucleotides are very flexible molecules, and understanding their internal dynamics is particularly challenging using experimental data alone. We show how to reconstruct the conformational ensemble of four RNA tetranucleotides by combining atomistic molecular dynamics simulations with nuclear magnetic resonance spectroscopy data. The goal is achieved by reweighting simulations using a maximum entropy/Bayesian approach. In this way, we overcome problems of current simulation methods, as well as in interpreting ensemble- and time-averaged experimental data. We determine the populations of different conformational states by considering several nuclear magnetic resonance parameters and point toward properties that are not captured by state-of-the-art molecular force fields. 
Although our approach is applied to a set of model systems, it is fully general and may be used to study the conformational dynamics of flexible biomolecules and to detect inaccuracies in molecular dynamics force fields.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1368077-cooling-mechanical-resonator-nitrogen-vacancy-centres-using-room-temperature-excited-state-spin-strain-interaction','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1368077-cooling-mechanical-resonator-nitrogen-vacancy-centres-using-room-temperature-excited-state-spin-strain-interaction"><span>Cooling a Mechanical Resonator with Nitrogen-Vacancy Centres Using a Room Temperature Excited State Spin-Strain Interaction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>MacQuarrie, E. R.; Otten, M.; Gray, S. K.; ...</p> <p>2017-02-06</p> <p>Cooling a mechanical resonator mode to a sub-thermal state has been a long-standing challenge in physics. This pursuit has recently found traction in the field of optomechanics, in which a mechanical mode is coupled to an optical cavity. An alternative method is to couple the resonator to a well-controlled two-level system. Here we propose a protocol to dissipatively cool a room-temperature mechanical resonator using a nitrogen-vacancy centre ensemble. The spin ensemble is coupled to the resonator through its orbitally averaged excited state, which has a spin-strain interaction that has not been previously studied. We experimentally demonstrate that the spin-strain coupling in the excited state is 13.5 ± 0.5 times stronger than the ground-state spin-strain coupling. 
Finally, we show theoretically that this interaction, combined with a high-density spin ensemble, enables the cooling of a mechanical resonator from room temperature to a fraction of its thermal phonon occupancy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018CNSNS..55..225L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018CNSNS..55..225L"><span>Investigation of stickiness influence in the anomalous transport and diffusion for a non-dissipative Fermi-Ulam model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Livorati, André L. P.; Palmero, Matheus S.; Díaz-I, Gabriel; Dettmann, Carl P.; Caldas, Iberê L.; Leonel, Edson D.</p> <p>2018-02-01</p> <p>We study the dynamics of an ensemble of noninteracting particles constrained by two infinitely heavy walls, one of which moves periodically in time while the other is fixed. The system presents mixed dynamics, where the accessible region for the particle to diffuse chaotically is bordered by an invariant spanning curve. Statistical analysis of the root-mean-square velocity, considering high- and low-velocity ensembles, leads the dynamics to the same steady-state plateau for long times. A transport investigation of the dynamics via escape basins reveals that, depending on the initial velocity ensemble, the decay rates of the survival probability present different shapes and bumps, in a mix of exponential, power-law, and stretched-exponential decays. 
After an analysis of step-size averages, we found that the stable manifolds play the role of a preferential path for faster escape, being responsible for the bumps and different shapes of the survival probability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012AGUFMGC43C1037V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012AGUFMGC43C1037V"><span>Climate Model Ensemble Methodology: Rationale and Challenges</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vezer, M. A.; Myrvold, W.</p> <p>2012-12-01</p> <p>A tractable model of the Earth's atmosphere, or, indeed, any large, complex system, is inevitably unrealistic in a variety of ways. This will have an effect on the model's output. Nonetheless, we want to be able to rely on certain features of the model's output in studies aiming to detect, attribute, and project climate change. For this, we need assurance that these features reflect the target system, and are not artifacts of the unrealistic assumptions that go into the model. One technique for overcoming these limitations is to study ensembles of models which employ different simplifying assumptions and different methods of modelling. One then either takes as reliable certain outputs on which models in the ensemble agree, or takes the average of these outputs as the best estimate. Since the Intergovernmental Panel on Climate Change's Fourth Assessment Report (IPCC AR4) modellers have aimed to improve ensemble analysis by developing techniques to account for dependencies among models, and to ascribe unequal weights to models according to their performance. 
The goal of this paper is to present as clearly and cogently as possible the rationale for climate model ensemble methodology, the motivation of modellers to account for model dependencies, and their efforts to ascribe unequal weights to models. The method of our analysis is as follows. We will consider a simpler, well-understood case of taking the mean of a number of measurements of some quantity. Contrary to what is sometimes said, it is not a requirement of this practice that the errors of the component measurements be independent; one must, however, compensate for any lack of independence. We will also extend the usual accounts to include cases of unknown systematic error. We draw parallels between this simpler illustration and the more complex example of climate model ensembles, detailing how ensembles can provide more useful information than any of their constituent models. This account emphasizes the epistemic importance of considering degrees of model dependence, and the practice of ascribing unequal weights to models of unequal skill.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.A12H..05M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.A12H..05M"><span>Variability of North Atlantic Hurricane Frequency in a Large Ensemble of High-Resolution Climate Simulations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mei, W.; Kamae, Y.; Xie, S. P.</p> <p>2017-12-01</p> <p>Forced and internal variability of North Atlantic hurricane frequency during 1951-2010 is studied using a large ensemble of climate simulations by a 60-km atmospheric general circulation model that is forced by observed sea surface temperatures (SSTs). 
The simulations well capture the interannual-to-decadal variability of hurricane frequency in best track data, and further suggest a possible underestimate of hurricane counts in the current best track data prior to 1966 when satellite measurements were unavailable. A genesis potential index (GPI) averaged over the Main Development Region (MDR) accounts for more than 80% of the forced variations in hurricane frequency, with potential intensity and vertical wind shear being the dominant factors. In line with previous studies, the difference between MDR SST and tropical mean SST is a simple but useful predictor; a one-degree increase in this SST difference produces 7.1±1.4 more hurricanes. The hurricane frequency also exhibits internal variability that is comparable in magnitude to the interannual variability. The 100-member ensemble allows us to address the following important questions: (1) Are the observations equivalent to one realization of such a large ensemble? (2) How many ensemble members are needed to reproduce the variability in observations and in the forced component of the simulations? The sources of the internal variability in hurricane frequency will be identified and discussed. 
The results provide an explanation for the relatively weak correlation (∼0.6) between MDR GPI and hurricane frequency on interannual timescales in observations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=ideal+AND+gas&pg=6&id=EJ239382','ERIC'); return false;" href="https://eric.ed.gov/?q=ideal+AND+gas&pg=6&id=EJ239382"><span>Teaching Classical Statistical Mechanics: A Simulation Approach.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Sauer, G.</p> <p>1981-01-01</p> <p>Describes a one-dimensional model for an ideal gas to study development of disordered motion in Newtonian mechanics. A Monte Carlo procedure for simulation of the statistical ensemble of an ideal gas with fixed total energy is developed. Compares both approaches for a pseudoexperimental foundation of statistical mechanics. (Author/JN)</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24987464','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24987464"><span>Using beta binomials to estimate classification uncertainty for ensemble models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin</p> <p>2014-01-01</p> <p>Quantitative structure-activity relationship (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. 
Submodels in an ensemble model which have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification - one using vote tallies and the other averaging individual network outputs - we have found that the distribution of predictions across positive vote tallies can be reasonably well-modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets comprised of logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distribution of predictions and errors for large external validation sets, even when the number of positive and negative examples in the training pool were not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool. Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent submodels. Further, ensemble uncertainty estimation can often be improved by adjusting the voting or classification threshold based on the parameters of the error distribution. 
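The vote-tally profiling described in this entry can be sketched in a few lines of Python. The beta-binomial parameters, ensemble size, and overall error rate below are invented placeholders for illustration, not values fitted by the authors:

```python
from math import exp, lgamma

def betabinom_pmf(k, n, a, b):
    """Beta-binomial pmf, C(n,k) * B(k+a, n-k+b) / B(a,b), via log-gamma."""
    log_comb = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    log_num = lgamma(k + a) + lgamma(n - k + b) - lgamma(n + a + b)
    log_den = lgamma(a) + lgamma(b) - lgamma(a + b)
    return exp(log_comb + log_num - log_den)

N_NETS = 33        # hypothetical number of networks in the ensemble
ERROR_RATE = 0.15  # hypothetical overall error fraction in the training pool

def p_error_given_votes(k, pred=(2.0, 3.0), err=(1.2, 6.0)):
    """Estimated probability that a prediction with k positive votes is wrong:
    expected errors at tally k divided by expected predictions at tally k,
    with both tally distributions modeled as beta-binomials."""
    p_pred = betabinom_pmf(k, N_NETS, *pred)
    p_err = betabinom_pmf(k, N_NETS, *err)
    return min(1.0, ERROR_RATE * p_err / p_pred)
```

Because the error-tally distribution concentrates at low consensus, the estimated error probability falls as the positive-vote count rises, which is exactly the behavior the abstract exploits when adjusting the classification threshold.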
Finally, the profiles for models whose predictive uncertainty estimates are not reliable provide clues to that effect without the need for comparison to an external test set.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA424381','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA424381"><span>Experimental and Computational Analysis of Modes in a Partially Constrained Plate</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2004-03-01</p> <p>way to quantify a structure. One technique utilizing an energy method is the Statistical Energy Analysis (SEA). The SEA process involves regarding... B.R. Mace, "Statistical Energy Analysis of Two Edge-Coupled Rectangular Plates: Ensemble Averages," Journal of Sound and Vibration, 193(4): 793-822</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29251441','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29251441"><span>Monitoring of the Conformational Space of Dipeptides by Generative Topographic Mapping.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Horvath, Dragos; Marcou, Gilles; Varnek, Alexandre</p> <p>2018-01-01</p> <p>This work describes a procedure to build generative topographic maps (GTMs) as 2D representations of the conformational space (CS) of dipeptides. GTMs with excellent propensities to support highly predictive landscapes of various conformational properties were reported for three dipeptides (AA, KE and KR). CS monitoring via GTM proceeds through the projection of conformer ensembles onto the map, producing cumulated responsibility (CR) vectors characteristic of the CS areas covered by the ensemble. 
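The overlap between two such CR vectors can be quantified with a continuous Tanimoto coefficient; a minimal generic implementation is sketched below, not necessarily the exact normalization used by the authors:

```python
def tanimoto(cr_a, cr_b):
    """Continuous Tanimoto similarity between two cumulated-responsibility
    vectors: dot / (|a|^2 + |b|^2 - dot). Equals 1 for identical vectors
    and 0 for vectors with disjoint support."""
    dot = sum(x * y for x, y in zip(cr_a, cr_b))
    denom = sum(x * x for x in cr_a) + sum(y * y for y in cr_b) - dot
    return dot / denom if denom else 1.0  # two zero vectors: treat as identical
```

Two simulations that populate the same map regions with similar weights score near 1; simulations visiting disjoint regions score near 0.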
Overlap of the CS areas visited by two distinct simulations can be expressed by the Tanimoto coefficient Tc of the associated CRs. This idea was used to monitor the reproducibility of the stochastic evolutionary conformer generation process implemented in S4MPLE. It could be shown that conformers produced by <500 S4MPLE runs reproducibly cover the relevant CS zone at a given setup of the driving force field. The propensity of a simulation to visit the native CS zone can thus be quantitatively estimated as the Tc score with respect to the “native” CR, defined by the ensemble of dipeptide geometries extracted from PDB proteins. Low-energy CS regions were indeed found to fall within the native zone. The Tc overlap score behaved as a smooth function of force-field parameters. This opens the perspective of a novel force-field parameter tuning procedure, bound to simultaneously optimize the behavior of the in silico simulations for every possible dipeptide. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EGUGA..15.8055T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EGUGA..15.8055T"><span>Evaluation of annual, global seismicity forecasts, including ensemble models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Taroni, Matteo; Zechar, Jeremy; Marzocchi, Warner</p> <p>2013-04-01</p> <p>In 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) initiated a prototype global earthquake forecast experiment. Three models participated in this experiment for 2009, 2010 and 2011; each model forecast the number of earthquakes above magnitude 6 in 1x1 degree cells that span the globe. 
Here we use likelihood-based metrics to evaluate the consistency of the forecasts with the observed seismicity. We compare model performance with statistical tests and a new method based on the peer-to-peer gambling score. The results of the comparisons are used to build ensemble models that are a weighted combination of the individual models. Notably, in these experiments the ensemble model always performs significantly better than the single best-performing model. Our results indicate the following: i) time-varying forecasts, if not updated after each major shock, may not provide significant advantages with respect to time-invariant models in 1-year forecast experiments; ii) the spatial distribution seems to be the most important feature for characterizing the different forecasting performances of the models; iii) the interpretation of consistency tests may be misleading, because some good models may be rejected while trivial models pass; iv) proper ensemble modeling seems to be a valuable procedure for obtaining the best-performing model for practical purposes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012EGUGA..14.2099C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012EGUGA..14.2099C"><span>When will European countries exceed the 2°C temperature increase?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Caminade, C.; Morse, A. P.</p> <p>2012-04-01</p> <p>Climatologists agree that an increase of 2°C at the global scale could have serious socio-economic consequences. The Cancun agreement in 2010 officially stated that "With a view to reducing global greenhouse gas emissions so as to hold the increase in global average temperature below 2 °C above pre-industrial levels ... 
Parties should take urgent action to meet this long-term goal." Recent studies have highlighted that this threshold is likely to be reached by 2060 at the global scale under the higher greenhouse-gas emission scenarios. However, it might be crossed earlier over land, by 2040, for Europe, Asia, North Africa and Canada. This study aims to determine when this threshold might be reached at the country level for member states of the European Union. A large ensemble of regional climate model simulations driven by the SRESA1B emission scenario, carried out within the ENSEMBLES project framework for the European continent, is employed for this task. Results corroborate that the European continent is likely to warm faster than the global average, with the multi-model ensemble mean crossing the 2°C threshold by 2045-2055. Regionally, Eastern Europe, Scandinavia and the Mediterranean basin are likely to cross the threshold earlier than northwestern/central Europe. As an example of these regional differences, Cyprus is likely to experience a 2°C increase during the mid-2040s, while this might happen over Ireland only during the late 21st century.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5702678','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5702678"><span>Time-course, negative-stain electron microscopy–based analysis for investigating protein–protein interactions at the single-molecule level</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Nogal, Bartek; Bowman, Charles A.; Ward, Andrew B.</p> <p>2017-01-01</p> <p>Several biophysical approaches are available to study protein–protein interactions. 
Most approaches are conducted in bulk solution, and are therefore limited to an average measurement of the ensemble of molecular interactions. Here, we show how single-particle EM can enrich our understanding of protein–protein interactions at the single-molecule level and potentially capture states that are unobservable with ensemble methods because they are below the limit of detection or not conducted on an appropriate time scale. Using the HIV-1 envelope glycoprotein (Env) and its interaction with receptor CD4-binding site neutralizing antibodies as a model system, we both corroborate ensemble kinetics-derived parameters and demonstrate how time-course EM can further dissect stoichiometric states of complexes that are not readily observable with other methods. Visualization of the kinetics and stoichiometry of Env–antibody complexes demonstrated the applicability of our approach to qualitatively and semi-quantitatively differentiate two highly similar neutralizing antibodies. Furthermore, implementation of machine-learning techniques for sorting class averages of these complexes into discrete subclasses of particles helped reduce human bias. Our data provide proof of concept that single-particle EM can be used to generate a “visual” kinetic profile that should be amenable to studying many other protein–protein interactions, is relatively simple and complementary to well-established biophysical approaches. Moreover, our method provides critical insights into broadly neutralizing antibody recognition of Env, which may inform vaccine immunogen design and immunotherapeutic development. 
PMID:28972148</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3528515','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3528515"><span>Quantum teleportation between remote atomic-ensemble quantum memories</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei</p> <p>2012-01-01</p> <p>Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a “quantum channel,” quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895–1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ∼108 rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. 
Besides its fundamental interest as teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing. PMID:23144222</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.A21F2221M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.A21F2221M"><span>Probabilistic Near and Far-Future Climate Scenarios of Precipitation and Surface Temperature for the North American Monsoon Region Under a Weighted CMIP5-GCM Ensemble Approach.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Montero-Martinez, M. J.; Colorado, G.; Diaz-Gutierrez, D. E.; Salinas-Prieto, J. A.</p> <p>2017-12-01</p> <p>It is well known that the North American Monsoon (NAM) region is already very dry and under considerable stress due to the scarcity of water resources at multiple locations in the area. Remarkably, even under those conditions, the Mexican part of the NAM region is the most agriculturally productive in Mexico. It is therefore very important to have realistic climate scenarios for variables such as temperature, precipitation, relative humidity, and radiation. This study tackles that problem by generating probabilistic climate scenarios using a weighted CMIP5-GCM ensemble approach based on the technique of Xu et al. (2010), itself an improvement on the better-known Reliability Ensemble Averaging (REA) algorithm of Giorgi and Mearns (2002). In addition, the individual performances of the 20-plus GCMs and of the weighted ensemble are compared against observed data (CRU TS2.1) using different metrics and Taylor diagrams. 
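Since the entry above builds on Reliability Ensemble Averaging, a minimal sketch of a Giorgi-and-Mearns-style REA iteration may be useful: each model's weight combines a bias factor (distance from observations) and a convergence factor (distance from the weighted ensemble mean), iterated to a fixed point. The inputs below are invented toy numbers, not CMIP5 output.

```python
def rea_mean(changes, biases, eps, iters=50):
    """changes: simulated change signals, one per model; biases: model-minus-
    observations biases; eps: natural-variability scale below which a model
    is not penalized. Returns the REA-weighted ensemble mean."""
    r_bias = [eps / max(abs(b), eps) for b in biases]  # bias reliability <= 1
    mean = sum(changes) / len(changes)                 # start from plain mean
    for _ in range(iters):
        # Convergence reliability: penalize models far from the current mean.
        r_conv = [eps / max(abs(c - mean), eps) for c in changes]
        w = [rb * rc for rb, rc in zip(r_bias, r_conv)]
        mean = sum(wi * c for wi, c in zip(w, changes)) / sum(w)
    return mean
```

With toy inputs changes=[2.0, 2.2, 3.5], biases=[0.1, 0.2, 1.0] and eps=0.3, the biased, outlying third model is down-weighted and the REA mean settles near 2.15, below the plain mean of roughly 2.57.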
This study focuses on the probability of reaching given thresholds, since such products are of potential use for agricultural applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28123032','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28123032"><span>Role of Dorsomedial Striatum Neuronal Ensembles in Incubation of Methamphetamine Craving after Voluntary Abstinence.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Caprioli, Daniele; Venniro, Marco; Zhang, Michelle; Bossert, Jennifer M; Warren, Brandon L; Hope, Bruce T; Shaham, Yavin</p> <p>2017-01-25</p> <p>We recently developed a rat model of incubation of methamphetamine craving after choice-based voluntary abstinence. Here, we studied the role of dorsolateral striatum (DLS) and dorsomedial striatum (DMS) in this incubation. We trained rats to self-administer palatable food pellets (6 d, 6 h/d) and methamphetamine (12 d, 6 h/d). We then assessed relapse to methamphetamine seeking under extinction conditions after 1 and 21 abstinence days. Between tests, the rats underwent voluntary abstinence (using a discrete choice procedure between methamphetamine and food; 20 trials/d) for 19 d. We used in situ hybridization to measure the colabeling of the activity marker Fos with Drd1 and Drd2 in DMS and DLS after the tests. Based on the in situ hybridization colabeling results, we tested the causal role of DMS D1- and D2-family receptors, and DMS neuronal ensembles in "incubated" methamphetamine seeking, using selective dopamine receptor antagonists (SCH39166 or raclopride) and the Daun02 chemogenetic inactivation procedure, respectively. Methamphetamine seeking was higher after 21 d of voluntary abstinence than after 1 d (incubation of methamphetamine craving).
The incubated response was associated with increased Fos expression in DMS but not in DLS; Fos was colabeled with both Drd1 and Drd2. DMS injections of SCH39166 or raclopride selectively decreased methamphetamine seeking after 21 abstinence days. In Fos-lacZ transgenic rats, selective inactivation of relapse test-activated Fos neurons in DMS on abstinence day 18 decreased incubated methamphetamine seeking on day 21. Results demonstrate a role of DMS dopamine D1 and D2 receptors in the incubation of methamphetamine craving after voluntary abstinence and that DMS neuronal ensembles mediate this incubation. In human addicts, abstinence is often self-imposed and relapse can be triggered by exposure to drug-associated cues that induce drug craving. We recently developed a rat model of incubation of methamphetamine craving after choice-based voluntary abstinence. Here, we used classical pharmacology, in situ hybridization, immunohistochemistry, and the Daun02 inactivation procedure to demonstrate a critical role of dorsomedial striatum neuronal ensembles in this new form of incubation of drug craving. Copyright © 2017 the authors 0270-6474/17/371014-14$15.00/0.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014BGD....11.1443E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014BGD....11.1443E"><span>An ensemble approach to simulate CO2 emissions from natural fires</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Eliseev, A. V.; Mokhov, I. I.; Chernokulsky, A. V.</p> <p>2014-01-01</p> <p>This paper presents ensemble simulations with the global climate model developed at the A. M. Obukhov Institute of Atmospheric Physics, Russian Academy of Sciences (IAP RAS CM).
These simulations were forced by historical reconstruction of external forcings for 850-2005 AD and by the Representative Concentration Pathways (RCP) scenarios till year 2300. Different ensemble members were constructed by varying the governing parameters of the IAP RAS CM module to simulate natural fires. These members are constrained by the GFED-3.1 observational data set and further subjected to Bayesian averaging. This approach allows us to select only those changes in fire characteristics which are robust within the constrained ensemble. In our simulations, the present-day (1998-2011 AD) global area burnt due to natural fires is (2.1 ± 0.4) × 10⁶ km² yr⁻¹ (ensemble means and intra-ensemble standard deviations are presented), and the respective CO2 emissions in the atmosphere are (1.4 ± 0.2) PgC yr⁻¹. The latter value is in agreement with the corresponding observational estimates. Regionally, the model underestimates CO2 emissions in the tropics; in the extra-tropics, it underestimates these emissions in north-east Eurasia and overestimates them in Europe. In the 21st century, the ensemble mean global burnt area increases by 13% (28%, 36%, 51%) under scenario RCP 2.6 (RCP 4.5, RCP 6.0, RCP 8.5). The corresponding global emissions increase is 14% (29%, 37%, 42%). In the 22nd-23rd centuries, under the mitigation scenario RCP 2.6 the ensemble mean global burnt area and respective CO2 emissions slightly decrease, both by 5% relative to their values in year 2100. Under other RCP scenarios, these variables continue to increase. Under scenario RCP 8.5 (RCP 6.0, RCP 4.5) the ensemble mean burnt area in year 2300 is higher by 83% (44%, 15%) than its value in year 2100, and the ensemble mean CO2 emissions are correspondingly higher by 31% (19%, 9%). All changes of natural fire characteristics in the 21st-23rd centuries are associated mostly with the corresponding changes in boreal regions of Eurasia and North America.
However, under the RCP 8.5 scenario, the increases in burnt area and CO2 emissions in boreal regions during the 22nd-23rd centuries are accompanied by respective decreases in the tropics and subtropics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7Be9ee8494-4816-4989-a73f-a766f77839c1%7D','PESTICIDES'); return false;" href="https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7Be9ee8494-4816-4989-a73f-a766f77839c1%7D"><span>EnviroAtlas - Minimum Temperature 1950 - 2099 for the Conterminous United States</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.epa.gov/pesticides/search.htm">EPA Pesticide Factsheets</a></p> <p></p> <p></p> <p>The EnviroAtlas Climate Scenarios were generated from NASA Earth Exchange (NEX) Downscaled Climate Projections (NEX-DCP30) ensemble averages (the average of over 30 available climate models) for each of the four representative concentration pathways (RCP) for the contiguous U.S. at 30 arc-second (approx. 800 m) spatial resolution. NEX-DCP30 mean monthly minimum temperature for the 4 RCPs (2.6, 4.5, 6.0, 8.5) were organized by season (Winter, Spring, Summer, and Fall) and annually for the years 2006–2099. Additionally, mean monthly minimum temperature for the ensemble average of all historic runs is organized similarly for the years 1950–2005. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service.
Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li class="active"><span>22</span></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li class="active"><span>23</span></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>25</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7Ba206699d-da21-4624-b55e-7064a2791ae6%7D','PESTICIDES'); return false;" href="https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7Ba206699d-da21-4624-b55e-7064a2791ae6%7D"><span>EnviroAtlas - Precipitation 1950 - 2099 for the Conterminous United States</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.epa.gov/pesticides/search.htm">EPA Pesticide Factsheets</a></p> <p></p> 
<p></p> <p>The EnviroAtlas Climate Scenarios were generated from NASA Earth Exchange (NEX) Downscaled Climate Projections (NEX-DCP30) ensemble averages (the average of over 30 available climate models) for each of the four representative concentration pathways (RCP) for the contiguous U.S. at 30 arc-second (approx. 800 m) spatial resolution. NEX-DCP30 mean monthly precipitation rate for the 4 RCPs (2.6, 4.5, 6.0, 8.5) were organized by season (Winter, Spring, Summer, and Fall) and annually for the years 2006–2099. Additionally, mean monthly precipitation rate for the ensemble average of all historic runs is organized similarly for the years 1950–2005. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service.
Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7Ba5660221-17a0-4cbf-ac85-d3684e7b5271%7D','PESTICIDES'); return false;" href="https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7Ba5660221-17a0-4cbf-ac85-d3684e7b5271%7D"><span>EnviroAtlas - Maximum Temperature 1950 - 2099 for the Conterminous United States</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.epa.gov/pesticides/search.htm">EPA Pesticide Factsheets</a></p> <p></p> <p></p> <p>The EnviroAtlas Climate Scenarios were generated from NASA Earth Exchange (NEX) Downscaled Climate Projections (NEX-DCP30) ensemble averages (the average of over 30 available climate models) for each of the four representative concentration pathways (RCP) for the contiguous U.S. at 30 arc-second (approx. 800 m) spatial resolution. NEX-DCP30 mean monthly maximum temperature for the 4 RCPs (2.6, 4.5, 6.0, 8.5) were organized by season (Winter, Spring, Summer, and Fall) and annually for the years 2006–2099. Additionally, mean monthly maximum temperature for the ensemble average of all historic runs is organized similarly for the years 1950–2005. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service.
Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JQSRT.211..179M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JQSRT.211..179M"><span>Scattering and extinction by spherical particles immersed in an absorbing host medium</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mishchenko, Michael I.; Dlugach, Janna M.</p> <p>2018-05-01</p> <p>Many applications of electromagnetic scattering involve particles immersed in an absorbing rather than lossless medium, thereby making the conventional scattering theory potentially inapplicable. To analyze this issue quantitatively, we employ the FORTRAN program developed recently on the basis of the first-principles electromagnetic theory to study far-field scattering by spherical particles embedded in an absorbing infinite host medium. We further examine the phenomenon of negative extinction identified recently for monodisperse spheres and uncover additional evidence in favor of its interference origin. We identify the main effects of increasing the width of the size distribution on the ensemble-averaged extinction efficiency factor and show that negative extinction can be eradicated by averaging over a very narrow size distribution. We also analyze, for the first time, the effects of absorption inside the host medium and ensemble averaging on the phase function and other elements of the Stokes scattering matrix. It is shown in particular that increasing absorption significantly suppresses the interference structure and can result in a dramatic expansion of the areas of positive polarization. 
Furthermore, the phase functions computed for larger effective size parameters can develop a very deep minimum at side-scattering angles bracketed by a strong diffraction peak in the forward direction and a pronounced backscattering maximum.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..MAR.A5005G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..MAR.A5005G"><span>Transmembrane protein CD93 diffuses by a continuous time random walk.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Goiko, Maria; de Bruyn, John; Heit, Bryan</p> <p></p> <p>Molecular motion within the cell membrane is a poorly-defined process. In this study, we characterized the diffusion of the transmembrane protein CD93. By careful analysis of the dependence of the ensemble-averaged mean squared displacement (EA-MSD, ⟨r²⟩) on time t and the ensemble-averaged, time-averaged MSD (EA-TAMSD, ⟨δ²⟩) on lag time τ and total measurement time T, we showed that the motion of CD93 is well-described by a continuous-time random walk (CTRW). CD93 tracks were acquired using single particle tracking. The tracks were classified as confined or free, and the behavior of the MSD analyzed. EA-MSDs of both populations grew non-linearly with t, indicative of anomalous diffusion. Their EA-TAMSDs were found to depend on both τ and T, indicating non-ergodicity. Free molecules had ⟨r²⟩ ∼ t^α and ⟨δ²⟩ ∼ τ/T^(1−α), with α ≈ 0.5, consistent with a CTRW. Mean maximal excursion analysis supported this result. Confined CD93 had ⟨r²⟩ ∼ t⁰ and ⟨δ²⟩ ∼ (τ/T)^α, with α ≈ 0.3, consistent with a confined CTRW. CTRWs are described by a series of random jumps interspersed with power-law distributed waiting times, and may arise due to the interactions of CD93 with the endocytic machinery.
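The sublinear EA-MSD scaling characteristic of a CTRW can be reproduced with a minimal simulation. This is a sketch under assumed parameters (unit jumps, Pareto waiting times with tail exponent α = 0.5), not the authors' single-particle-tracking analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrw_tracks(n_tracks=2000, n_jumps=400, alpha=0.5):
    """1-D CTRW: unit jumps separated by power-law waiting times.

    With tail exponent alpha < 1 the mean waiting time diverges, and the
    ensemble-averaged MSD grows sublinearly, <r^2> ~ t**alpha.
    """
    waits = rng.pareto(alpha, size=(n_tracks, n_jumps)) + 1.0
    times = np.cumsum(waits, axis=1)                   # epochs of the jumps
    steps = rng.choice([-1.0, 1.0], size=(n_tracks, n_jumps))
    pos = np.cumsum(steps, axis=1)                     # position after each jump
    return times, pos

def ea_msd(times, pos, t_grid):
    """Ensemble-averaged MSD evaluated on a fixed grid of lab times."""
    out = []
    for t in t_grid:
        k = (times <= t).sum(axis=1)                   # jumps completed by t
        x = np.where(k > 0, pos[np.arange(len(pos)), np.maximum(k - 1, 0)], 0.0)
        out.append(np.mean(x ** 2))
    return np.array(out)

times, pos = ctrw_tracks()
t_grid = np.logspace(1, 3, 10)
msd = ea_msd(times, pos, t_grid)
alpha_hat = np.polyfit(np.log(t_grid), np.log(msd), 1)[0]
print(f"fitted EA-MSD exponent: {alpha_hat:.2f}")      # sublinear, near 0.5
```

Because each track's position variance equals its number of completed jumps, the log-log slope of the EA-MSD recovers the waiting-time exponent, which is the diagnostic used to classify the free-diffusing population above.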
NSERC.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1360147-experimental-investigation-gas-fuel-injection-ray-radiography','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1360147-experimental-investigation-gas-fuel-injection-ray-radiography"><span>An experimental investigation of gas fuel injection with X-ray radiography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Swantek, Andrew B.; Duke, D. J.; Kastengren, A. L.; ...</p> <p>2017-04-21</p> <p>In this paper, an outward-opening compressed natural gas, direct injection fuel injector has been studied with single-shot x-ray radiography. Three dimensional simulations have also been performed to complement the x-ray data. Argon was used as a surrogate gas for experimental and safety reasons. This technique allows the acquisition of a quantitative mapping of the ensemble-average and standard deviation of the projected density throughout the injection event. Two dimensional, ensemble average and standard deviation data are presented to investigate the quasi-steady-state behavior of the jet. Upstream of the stagnation zone, minimal shot-to-shot variation is observed. Downstream of the stagnation zone, bulk mixing is observed as the jet transitions to a subsonic turbulent jet. From the time averaged data, individual slices at all downstream locations are extracted and an Abel inversion was performed to compute the radial density distribution, which was interpolated to create three dimensional visualizations. The Abel reconstructions reveal that upstream of the stagnation zone, the gas forms an annulus with high argon density and large density gradients. Inside this annulus, a recirculation region with low argon density exists. Downstream, the jet transitions to a fully turbulent jet with Gaussian argon density distributions.
This experimental data is intended to serve as a quantitative benchmark for simulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005JPCM...17S4287B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005JPCM...17S4287B"><span>Occupation times and ergodicity breaking in biased continuous time random walks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bel, Golan; Barkai, Eli</p> <p>2005-12-01</p> <p>Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold.
(d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1360147','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1360147"><span>An experimental investigation of gas fuel injection with X-ray radiography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Swantek, Andrew B.; Duke, D. J.; Kastengren, A. L.</p> <p></p> <p>In this paper, an outward-opening compressed natural gas, direct injection fuel injector has been studied with single-shot x-ray radiography. Three dimensional simulations have also been performed to complement the x-ray data. Argon was used as a surrogate gas for experimental and safety reasons. This technique allows the acquisition of a quantitative mapping of the ensemble-average and standard deviation of the projected density throughout the injection event. Two dimensional, ensemble average and standard deviation data are presented to investigate the quasi-steady-state behavior of the jet. Upstream of the stagnation zone, minimal shot-to-shot variation is observed. Downstream of the stagnation zone, bulk mixing is observed as the jet transitions to a subsonic turbulent jet. From the time averaged data, individual slices at all downstream locations are extracted and an Abel inversion was performed to compute the radial density distribution, which was interpolated to create three dimensional visualizations. The Abel reconstructions reveal that upstream of the stagnation zone, the gas forms an annulus with high argon density and large density gradients. Inside this annulus, a recirculation region with low argon density exists.
Downstream, the jet transitions to a fully turbulent jet with Gaussian argon density distributions. This experimental data is intended to serve as a quantitative benchmark for simulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27387228','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27387228"><span>A potato model intercomparison across varying climates and productivity levels.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fleisher, David H; Condori, Bruno; Quiroz, Roberto; Alva, Ashok; Asseng, Senthold; Barreda, Carolina; Bindi, Marco; Boote, Kenneth J; Ferrise, Roberto; Franke, Angelinus C; Govindakrishnan, Panamanna M; Harahagazwe, Dieudonne; Hoogenboom, Gerrit; Naresh Kumar, Soora; Merante, Paolo; Nendel, Claas; Olesen, Jorgen E; Parker, Phillip S; Raes, Dirk; Raymundo, Rubi; Ruane, Alex C; Stockle, Claudio; Supit, Iwan; Vanuytrecht, Eline; Wolf, Joost; Woli, Prem</p> <p>2017-03-01</p> <p>A potato crop multimodel assessment was conducted to quantify variation among models and evaluate responses to climate change. Nine modeling groups simulated agronomic and climatic responses at low-input (Chinoli, Bolivia and Gisozi, Burundi)- and high-input (Jyndevad, Denmark and Washington, United States) management sites. Two calibration stages were explored, partial (P1), where experimental dry matter data were not provided, and full (P2). The median model ensemble response outperformed any single model in terms of replicating observed yield across all locations. Uncertainty in simulated yield decreased from 38% to 20% between P1 and P2. Model uncertainty increased with interannual variability, and predictions for all agronomic variables were significantly different from one model to another (P < 0.001). Uncertainty averaged 15% higher for low- vs.
high-input sites, with larger differences observed for evapotranspiration (ET), nitrogen uptake, and water use efficiency as compared to dry matter. A minimum of five partial, or three full, calibrated models was required for an ensemble approach to keep variability below that of common field variation. Model variation was not influenced by change in carbon dioxide (C), but increased as much as 41% and 23% for yield and ET, respectively, as temperature (T) or rainfall (W) moved away from historical levels. Increases in T accounted for the highest amount of uncertainty, suggesting that methods and parameters for T sensitivity represent a considerable unknown among models. Using median model ensemble values, yield increased on average 6% per 100-ppm C, declined 4.6% per °C, and declined 2% for every 10% decrease in rainfall (for nonirrigated sites). Differences in predictions due to model representation of light utilization were significant (P < 0.01). These are the first reported results quantifying uncertainty for tuber/root crops and suggest modeling assessments of climate change impact on potato may be improved using an ensemble approach. © 2016 John Wiley & Sons Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.A43B0271S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.A43B0271S"><span>Improvement to microphysical schemes in WRF Model based on observed data, part I: size distribution function</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shan, Y.; Eric, W.; Gao, L.; Zhao, T.; Yin, Y.</p> <p>2015-12-01</p> <p>In this study, we have evaluated the performance of size distribution functions (SDF) with 2- and 3-moments in fitting the observed size distribution of rain droplets at three different heights. 
The goal is to improve the microphysics schemes in meso-scale models, such as Weather Research and Forecast (WRF). Rain droplets were observed during eight periods of different rain types at three stations on the Yellow Mountain in East China. The SDFs examined in this study were the M-P distribution, the Gamma SDF with a fixed shape parameter (FSP), Gamma SDFs whose shape parameter was diagnosed following Milbrandt (2010; denoted DSPM10), Milbrandt (2005; denoted DSPM05), and Seifert (2008; denoted DSPS08) or obtained by solving for the shape parameter (SSP), and the Lognormal SDF. Based on the preliminary experiments, three ensemble methods for deciding the Gamma SDF were also developed and assessed. The magnitude of the average relative error caused by applying a FSP was 10⁻² for fitting the 0-order moment of the observed rain droplet distribution, and it changed to 10⁻¹ and 10⁰ respectively for the 1-4 order moments and the 5-6 order moments. To different extents, the DSPM10, DSPM05, DSPS08, SSP, and ensemble methods could improve fitting accuracies for the 0-6 order moments, especially the one coupling the SSP and DSPS08 methods, which provided an average relative error of 6.46% for the 1-4 order moments and 11.90% for the 5-6 order moments, respectively. The relative error of fitting three moments using the Lognormal SDF was much larger than that of the Gamma SDF. The threshold value of the shape parameter ranged from 0 to 8, because values beyond this range could cause overflow in the calculation. When the average diameter of rain droplets was less than 2 mm, the possibility of an unavailable shape parameter value (USPV) increased with decreasing droplet size. Fitting accuracy was strongly sensitive to the choice of moment group.
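Moment-based fitting of a Gamma SDF can be sketched as a round-trip: compute a few moments of N(D) = N₀ D^μ exp(−λD) analytically, then recover (N₀, μ, λ) from them. The (0, 3, 6) moment group and all parameter values below are hypothetical, and this is a generic method-of-moments sketch rather than the DSPM/DSPS diagnosis code:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import brentq

def gamma_moment(n0, mu, lam, k):
    """k-th moment of N(D) = n0 * D**mu * exp(-lam*D), integrated over D > 0."""
    return n0 * np.exp(gammaln(mu + k + 1) - (mu + k + 1) * np.log(lam))

def fit_gamma_036(m0, m3, m6):
    """Recover (n0, mu, lam) from the 0th, 3rd, and 6th moments.

    The ratio m3**2 / (m0 * m6) depends only on mu, so mu is found by
    root-finding; lam and n0 then follow in closed form.
    """
    g_obs = m3 ** 2 / (m0 * m6)
    f = lambda mu: np.exp(2 * gammaln(mu + 4) - gammaln(mu + 1) - gammaln(mu + 7)) - g_obs
    mu = brentq(f, -0.99, 40.0)
    lam = (m0 / m3 * np.exp(gammaln(mu + 4) - gammaln(mu + 1))) ** (1.0 / 3.0)
    n0 = m0 * lam ** (mu + 1) / np.exp(gammaln(mu + 1))
    return n0, mu, lam

# Round-trip check with hypothetical parameters
moments = [gamma_moment(8000.0, 2.0, 3.0, k) for k in (0, 3, 6)]
fitted = fit_gamma_036(*moments)
print(fitted)  # ~ (8000.0, 2.0, 3.0)
```

The same structure applies to other moment groups such as 1-3-5; only the moment orders in the ratio change, which is why the choice of moment group affects fitting accuracy.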
When the ensemble method coupling SSP and DSPS08 was used, fitting the 1-3-5 moment group of the SDF gave better results than fitting the 0-3-6 moment group.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930006396','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930006396"><span>Turbulent fluid motion IV-averages, Reynolds decomposition, and the closure problem</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Deissler, Robert G.</p> <p>1992-01-01</p> <p>Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived.
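The distinction between ensemble averages and time averages, and the Reynolds decomposition built on them, can be illustrated numerically. This is a minimal sketch using a synthetic stationary "signal" with Gaussian fluctuations, not an actual turbulence field:

```python
import numpy as np

rng = np.random.default_rng(1)

# An "ensemble" of realizations of a statistically stationary signal:
# mean component U plus zero-mean Gaussian fluctuations u'.
n_real, n_t = 500, 4000
U = 2.0
u = U + rng.normal(0.0, 0.3, size=(n_real, n_t))     # u = U + u'

ens_mean = u.mean(axis=0)     # ensemble average at each time instant
time_mean = u.mean(axis=1)    # time average over each single realization

# Reynolds decomposition: subtract the ensemble mean to get the fluctuation
# field; by construction its overall average is (numerically) zero.
u_prime = u - ens_mean
print(abs(u_prime.mean()))                  # ~0
print(ens_mean.mean(), time_mean.mean())    # both ~2.0: ergodic in this toy case
```

For this stationary toy process the two kinds of average agree, which is the ergodic situation; the equivalent-ensemble test described at the top of this page probes exactly whether real measured records behave this way.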
Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25474476','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25474476"><span>Non-universal tracer diffusion in crowded media of non-inert obstacles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ghosh, Surya K; Cherstvy, Andrey G; Metzler, Ralf</p> <p>2015-01-21</p> <p>We study the diffusion of a tracer particle, which moves in continuum space between a lattice of excluded volume, immobile non-inert obstacles. In particular, we analyse how the strength of the tracer-obstacle interactions and the volume occupancy of the crowders alter the diffusive motion of the tracer. From the details of partitioning of the tracer diffusion modes between trapping states when bound to obstacles and bulk diffusion, we examine the degree of localisation of the tracer in the lattice of crowders. We study the properties of the tracer diffusion in terms of the ensemble and time averaged mean squared displacements, the trapping time distributions, the amplitude variation of the time averaged mean squared displacements, and the non-Gaussianity parameter of the diffusing tracer. We conclude that tracer-obstacle adsorption and binding triggers a transient anomalous diffusion. From a very narrow spread of recorded individual time averaged trajectories we exclude continuous type random walk processes as the underlying physical model of the tracer diffusion in our system. For moderate tracer-crowder attraction the motion is found to be fully ergodic, while at stronger attraction strength a transient disparity between ensemble and time averaged mean squared displacements occurs. 
We also put our results into perspective with findings from experimental single-particle tracking and simulations of the diffusion of tagged tracers in dense crowded suspensions. Our results have implications for the diffusion, transport, and spreading of chemical components in highly crowded environments inside living cells and other structured liquids.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JGRA..122.9652D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JGRA..122.9652D"><span>Local ensemble transform Kalman filter for ionospheric data assimilation: Observation influence analysis during a geomagnetic storm event</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Durazo, Juan A.; Kostelich, Eric J.; Mahalov, Alex</p> <p>2017-09-01</p> <p>We propose a targeted observation strategy, based on the influence matrix diagnostic, that optimally selects where additional observations may be placed to improve ionospheric forecasts. This strategy is applied in data assimilation observing system experiments, where synthetic electron density vertical profiles, which represent those of Constellation Observing System for Meteorology, Ionosphere, and Climate/Formosa satellite 3, are assimilated into the Thermosphere-Ionosphere-Electrodynamics General Circulation Model using the local ensemble transform Kalman filter during the 26 September 2011 geomagnetic storm. During each analysis step, the observation vector is augmented with five synthetic vertical profiles optimally placed to target electron density errors, using our targeted observation strategy. 
Forecast improvement due to assimilation of augmented vertical profiles is measured with the root-mean-square error (RMSE) of analyzed electron density, averaged over 600 km regions centered around the augmented vertical profile locations. Assimilating vertical profiles with targeted locations yields about a 60%-80% reduction in electron density RMSE, compared to a 15% average reduction when assimilating randomly placed vertical profiles. Assimilating vertical profiles whose locations target the zonal component of neutral winds (Un) yields on average a 25% RMSE reduction in Un estimates, compared to a 2% average improvement obtained with randomly placed vertical profiles. These results demonstrate that our targeted strategy can improve data assimilation efforts during extreme events by detecting regions where additional observations would provide the largest benefit to the forecast.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhLA..382.1516Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhLA..382.1516Z"><span>Spectral density of mixtures of random density matrices for qubits</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhang, Lin; Wang, Jiamei; Chen, Zhihua</p> <p>2018-06-01</p> <p>We derive the spectral density of the equiprobable mixture of two random density matrices of a two-level quantum system. We also work out the spectral density of the mixture under the so-called quantum addition rule. We use the spectral densities to calculate the average entropy of mixtures of random density matrices, and show that the average entropy of the arithmetic-mean-state of n qubit density matrices randomly chosen from the Hilbert-Schmidt ensemble never decreases with the number n. We also get the exact value of the average squared fidelity. 
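The entropy growth under mixing can be checked numerically. A Monte Carlo sketch for the equiprobable mixture of two Hilbert-Schmidt random qubit states, assuming the standard G G-dagger construction of that ensemble; the sample size is illustrative and this is a numerical check, not the paper's analytic derivation:

```python
import math, random

def random_qubit_state(rng):
    """Hilbert-Schmidt random qubit: rho = G G^dagger / Tr(G G^dagger)."""
    g = [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(2)]
         for _ in range(2)]
    rho = [[sum(g[i][k] * g[j][k].conjugate() for k in range(2))
            for j in range(2)] for i in range(2)]
    tr = (rho[0][0] + rho[1][1]).real
    return [[rho[i][j] / tr for j in range(2)] for i in range(2)]

def entropy(rho):
    """von Neumann entropy of a 2x2 density matrix via trace and determinant."""
    det = (rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0]).real
    lam = 0.5 + math.sqrt(max(0.25 - det, 0.0))  # eigenvalues lam and 1 - lam
    return -sum(p * math.log(p) for p in (lam, 1 - lam) if p > 1e-12)

rng = random.Random(7)
gains = []
for _ in range(2000):
    r1, r2 = random_qubit_state(rng), random_qubit_state(rng)
    mix = [[(r1[i][j] + r2[i][j]) / 2 for j in range(2)] for i in range(2)]
    gains.append(entropy(mix) - 0.5 * (entropy(r1) + entropy(r2)))
avg_gain = sum(gains) / len(gains)
# Concavity of the entropy: mixing never lowers it, so every gain is >= 0.
```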
Some conjectures and open problems related to von Neumann entropy are also proposed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20060028978&hterms=kalman+filter&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Dkalman%2Bfilter','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20060028978&hterms=kalman+filter&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Dkalman%2Bfilter"><span>An optimal modification of a Kalman filter for time scales</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Greenhall, C. A.</p> <p>2003-01-01</p> <p>The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20180001937','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20180001937"><span>Asteroid Impact Risk: Ground Hazard versus Impactor Size</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Mathias, Donovan; Wheeler, Lorien; Dotson, Jessie; Aftosmis, Michael; Tarano, Ana</p> <p>2017-01-01</p> <p>We utilized a probabilistic asteroid impact risk (PAIR) model to stochastically assess the impact risk due to an ensemble population of Near-Earth Objects (NEOs). Concretely, we present the variation of risk with impactor size. 
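The dominance of rare large impactors in an ensemble-averaged risk measure can be illustrated with a toy population: the power-law size distribution and cubic damage scaling below are assumptions for illustration, not the PAIR model itself.

```python
import random

rng = random.Random(13)

def sample_diameter(u, d_min=0.01, alpha=2.0):
    """Pareto-distributed impactor diameter in km: P(D > d) = (d_min/d)**alpha.
    A steep power law (illustrative) gives many small impactors, few large."""
    return d_min * (1.0 - u) ** (-1.0 / alpha)

diameters = [sample_diameter(rng.random()) for _ in range(200_000)]
# Toy ground-damage proxy: impact energy scales roughly with diameter cubed.
damages = [d ** 3 for d in diameters]

total_risk = sum(damages)
large = [dmg for d, dmg in zip(diameters, damages) if d > 0.14]  # > ~140 m
frac_large_events = len(large) / len(diameters)
frac_large_risk = sum(large) / total_risk
# Well under 1% of sampled impactors carry most of the summed risk.
```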
Results suggest that large impactors dominate the average risk, even when only considering the subset of undiscovered NEOs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4234426','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4234426"><span>Infinitely Dilute Partial Molar Properties of Proteins from Computer Simulation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2015-01-01</p> <p>A detailed understanding of temperature and pressure effects on an infinitely dilute protein’s conformational equilibrium requires knowledge of the corresponding infinitely dilute partial molar properties. Established molecular dynamics methodologies generally have not provided a way to calculate these properties without either a loss of thermodynamic rigor, the introduction of nonunique parameters, or a loss of information about which solute conformations specifically contributed to the output values. Here we implement a simple method that is thermodynamically rigorous and possesses none of the above disadvantages, and we report on the method’s feasibility and computational demands. We calculate infinitely dilute partial molar properties for two proteins and attempt to distinguish the thermodynamic differences between a native and a denatured conformation of a designed miniprotein. We conclude that simple ensemble average properties can be calculated with very reasonable amounts of computational power. In contrast, properties corresponding to fluctuating quantities are computationally demanding to calculate precisely, although they can be obtained more easily by following the temperature and/or pressure dependence of the corresponding ensemble averages. 
PMID:25325571</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016Chaos..26b3103P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016Chaos..26b3103P"><span>Evaluating gambles using dynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Peters, O.; Gell-Mann, M.</p> <p>2016-02-01</p> <p>Gambles are random variables that model possible changes in wealth. Classic decision theory transforms money into utility through a utility function and defines the value of a gamble as the expectation value of utility changes. Utility functions aim to capture individual psychological characteristics, but their generality limits predictive power. Expectation value maximizers are defined as rational in economics, but expectation values are only meaningful in the presence of ensembles or in systems with ergodic properties, whereas decision-makers have no access to ensembles, and the variables representing wealth in the usual growth models do not have the relevant ergodic properties. Simultaneously addressing the shortcomings of utility and those of expectations, we propose to evaluate gambles by averaging wealth growth over time. No utility function is needed, but a dynamic must be specified to compute time averages. Linear and logarithmic "utility functions" appear as transformations that generate ergodic observables for purely additive and purely multiplicative dynamics, respectively. We highlight inconsistencies throughout the development of decision theory, whose correction clarifies that our perspective is legitimate. 
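The distinction between expectation values and time-average growth can be reproduced with the standard multiplicative coin-toss gamble; the payout numbers are illustrative:

```python
import math, random

up, down = 1.5, 0.6   # win +50% or lose -40%, each with probability 1/2

# Ensemble view: the expected one-round growth factor exceeds 1 ...
expected_factor = 0.5 * up + 0.5 * down                  # = 1.05

# ... but the time-average growth rate along a trajectory is negative.
time_growth = 0.5 * math.log(up) + 0.5 * math.log(down)  # = 0.5*ln(0.9) < 0

rng = random.Random(0)
wealth, rounds = 1.0, 10_000
for _ in range(rounds):
    wealth *= up if rng.random() < 0.5 else down
empirical_rate = math.log(wealth) / rounds
# One long trajectory decays even though the ensemble expectation grows.
```

The logarithm here plays exactly the role the abstract assigns to it: it is the transformation that turns a multiplicative dynamic into an ergodic observable.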
These inconsistencies invalidate a commonly cited argument for bounded utility functions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20010098882&hterms=statistics&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dstatistics','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20010098882&hterms=statistics&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dstatistics"><span>The Effect of Stochastic Perturbation of Fuel Distribution on the Criticality of a One Speed Reactor and the Development of Multi-Material Multinomial Line Statistics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jahshan, S. N.; Singleterry, R. C.</p> <p>2001-01-01</p> <p>The effect of random fuel redistribution on the eigenvalue of a one-speed reactor is investigated. An ensemble of such reactors that are identical to a homogeneous reference critical reactor except for the fissile isotope density distribution is constructed such that it meets a set of well-posed redistribution requirements. The average eigenvalue, <k(sub eff)>, is evaluated when the total fissile loading per ensemble element, or realization, is conserved. The perturbation is proven to increase the reactor criticality on average when it is uniformly distributed. The various causes of the change in reactivity and their relative effects are identified and ranked. From this, a path towards identifying the causes and relative effects of reactivity fluctuations for the energy-dependent problem is outlined. The perturbation method of using multinomial distributions for representing the perturbed reactor is developed. This method has some advantages that can be of use in other stochastic problems. 
Finally, some of the features of this perturbation problem are related to other techniques that have been used for addressing similar problems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140012057','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140012057"><span>The North American Multi-Model Ensemble (NMME): Phase-1 Seasonal to Interannual Prediction, Phase-2 Toward Developing Intra-Seasonal Prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kirtman, Ben P.; Min, Dughong; Infanti, Johnna M.; Kinter, James L., III; Paolino, Daniel A.; Zhang, Qin; vandenDool, Huug; Saha, Suranjana; Mendez, Malaquias Pena; Becker, Emily; et al.</p> <p>2013-01-01</p> <p>The recent US National Academies report "Assessment of Intraseasonal to Interannual Climate Prediction and Predictability" was unequivocal in recommending the need for the development of a North American Multi-Model Ensemble (NMME) operational predictive capability. Indeed, this effort is required to meet the specific tailored regional prediction and decision support needs of a large community of climate information users. 
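The benefit of multi-model averaging can be sketched with a toy forecast experiment in which several synthetic "models" carry independent biases and noise; all numbers are hypothetical, not NMME output:

```python
import random

rng = random.Random(42)
truth = [rng.gauss(0, 1) for _ in range(2000)]   # synthetic verifying data

# Three synthetic "models": each adds its own bias and noise (numbers invented).
specs = [(0.3, 0.8), (-0.4, 1.0), (0.1, 1.2)]    # (bias, noise std)
forecasts = [[t + b + rng.gauss(0, s) for t in truth] for b, s in specs]

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

single = [rmse(f, truth) for f in forecasts]
multi = rmse([sum(fs) / len(fs) for fs in zip(*forecasts)], truth)
# Biases and independent noise partially cancel in the mean, so the
# multi-model forecast beats every individual model in this setup.
```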
The multi-model ensemble approach has proven extremely effective at quantifying prediction uncertainty due to uncertainty in model formulation, and has proven to produce better prediction quality (on average) than any single-model ensemble. This multi-model approach is the basis for several international collaborative prediction research efforts and an operational European system, and there are numerous examples of how this multi-model ensemble approach yields superior forecasts compared to any single model. Based on two NOAA Climate Test Bed (CTB) NMME workshops (February 18 and April 8, 2011), a collaborative and coordinated implementation strategy for an NMME prediction system has been developed and is currently delivering real-time seasonal-to-interannual predictions on the NOAA Climate Prediction Center (CPC) operational schedule. The hindcast and real-time prediction data are readily available (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and in graphical format from CPC (http://origin.cpc.ncep.noaa.gov/products/people/wd51yf/NMME/index.html). Moreover, the NMME forecasts are already being used as guidance for operational forecasters. This paper describes the new NMME effort, presents an overview of the multi-model forecast quality, and describes the complementary skill associated with individual models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1996JChPh.105.4211S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1996JChPh.105.4211S"><span>Green-Kubo relations for the viscosity of biaxial nematic liquid crystals</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sarman, Sten</p> <p>1996-09-01</p> <p>We derive Green-Kubo relations for the viscosities of a biaxial nematic liquid crystal. 
In this system there are seven shear viscosities, three twist viscosities, and three cross coupling coefficients between the antisymmetric strain rate and the symmetric traceless pressure tensor. According to the Onsager reciprocity relations these couplings are equal to the cross couplings between the symmetric traceless strain rate and the antisymmetric pressure. Our method is based on a comparison of the microscopic linear response generated by the SLLOD equations of motion for planar Couette flow (so named because of their close connection to the Doll's tensor Hamiltonian) and the macroscopic linear phenomenological relations between the pressure tensor and the strain rate. In order to obtain simple Green-Kubo relations we employ an equilibrium ensemble where the angular velocities of the directors are identically zero. This is achieved by adding constraint torques to the equations for the molecular angular accelerations. One finds that all the viscosity coefficients can be expressed as linear combinations of time correlation function integrals (TCFIs). This is much simpler compared to the expressions in the conventional canonical ensemble, where the viscosities are complicated rational functions of the TCFIs. The reason is that, in the constrained angular velocity ensemble, the thermodynamic forces are given as external parameters, whereas the thermodynamic fluxes are ensemble averages of phase functions. This is not the case in the canonical ensemble. 
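The general shape of such Green-Kubo estimates, a transport coefficient obtained as the integral (here, lag sum) of a time correlation function, can be sketched on a synthetic flux with known autocorrelation; the AR(1) process below is a stand-in with an exact answer, not a liquid-crystal simulation:

```python
import math, random

# AR(1) surrogate for a flux time series: autocovariance c(k) = a**k with
# unit variance, so the integrated correlation is 1 + 2*sum(a**k) = (1+a)/(1-a).
a, n = 0.9, 100_000
rng = random.Random(5)
noise_std = math.sqrt(1 - a * a)   # makes the stationary variance equal to 1
v, series = 0.0, []
for _ in range(n):
    v = a * v + noise_std * rng.gauss(0, 1)
    series.append(v)

m = sum(series) / n

def autocov(x, lag, mean):
    """Autocovariance estimate at one lag."""
    return sum((x[t] - mean) * (x[t + lag] - mean)
               for t in range(len(x) - lag)) / (len(x) - lag)

c = [autocov(series, k, m) for k in range(50)]
gk_estimate = c[0] + 2 * sum(c[1:])   # Green-Kubo style lag sum
exact = (1 + a) / (1 - a)             # = 19 for a = 0.9
```

As the abstract notes for real TCFIs, the statistical error of such integrals is dominated by the slowly decaying tail of the correlation function.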
The simplest way of obtaining numerical estimates of viscosity coefficients of a particular molecular model system is to evaluate these fluctuation relations by equilibrium molecular dynamics simulations.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1375303-metrics-diurnal-cycle-precipitation-toward-routine-benchmarks-climate-models','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1375303-metrics-diurnal-cycle-precipitation-toward-routine-benchmarks-climate-models"><span>Metrics for the Diurnal Cycle of Precipitation: Toward Routine Benchmarks for Climate Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE 
PAGES</a></p> <p>Covey, Curt; Gleckler, Peter J.; Doutriaux, Charles; ...</p> <p>2016-06-08</p> <p>In this paper, metrics are proposed—that is, a few summary statistics that condense large amounts of data from observations or model simulations—encapsulating the diurnal cycle of precipitation. Vector area averaging of Fourier amplitude and phase produces useful information in a reasonably small number of harmonic dial plots, a procedure familiar from atmospheric tide research. The metrics cover most of the globe but down-weight high-latitude wintertime ocean areas where baroclinic waves are most prominent. This enables intercomparison of a large number of climate models with observations and with each other. The diurnal cycle of precipitation has features not encountered in typical climate model intercomparisons, notably the absence of meaningful “average model” results that can be displayed in a single two-dimensional map. Displaying one map per model guides development of the metrics proposed here by making it clear that land and ocean areas must be averaged separately, but interpreting maps from all models becomes problematic as the size of a multimodel ensemble increases. Global diurnal metrics provide quick comparisons with observations and among models, using the most recent version of the Coupled Model Intercomparison Project (CMIP). This includes, for the first time in CMIP, spatial resolutions comparable to global satellite observations. 
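The vector averaging of Fourier amplitude and phase described above can be sketched for a 24-hour cycle; the grid values and peak hours below are invented for illustration:

```python
import cmath, math

def diurnal_harmonic(hourly):
    """First Fourier harmonic of a 24-value diurnal cycle.
    Returns (amplitude, local hour of the harmonic's maximum)."""
    n = len(hourly)
    coeff = sum(p * cmath.exp(-2j * math.pi * h / n)
                for h, p in enumerate(hourly)) / n
    amplitude = 2 * abs(coeff)
    phase_hour = (-cmath.phase(coeff)) * n / (2 * math.pi) % n
    return amplitude, phase_hour

# Three invented grid cells whose rainfall peaks at 15, 16 and 17 local time.
cells = [[1.0 + 0.5 * math.cos(2 * math.pi * (h - peak) / 24) for h in range(24)]
         for peak in (15.0, 16.0, 17.0)]

# Vector (complex) averaging, as on a harmonic dial: cells with opposing
# phases cancel instead of inflating the mean amplitude.
vecs = []
for hourly in cells:
    amp, ph = diurnal_harmonic(hourly)
    vecs.append(amp * cmath.exp(2j * math.pi * ph / 24))
mean_vec = sum(vecs) / len(vecs)
mean_amp = abs(mean_vec)
mean_phase = cmath.phase(mean_vec) * 24 / (2 * math.pi) % 24   # ~16 h
```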
Finally, consistent with earlier studies of resolution versus parameterization of the diurnal cycle, the longstanding tendency of models to produce rainfall too early in the day persists in the high-resolution simulations, as expected if the error is due to subgrid-scale physics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22598958-brownian-relaxation-inelastic-sphere-air','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22598958-brownian-relaxation-inelastic-sphere-air"><span>Brownian relaxation of an inelastic sphere in air</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Bird, G. A., E-mail: gab@gab.com.au</p> <p>2016-06-15</p> <p>The procedures that are used to calculate the forces and moments on an aerodynamic body in the rarefied gas of the upper atmosphere are applied to a small sphere of the size of an aerosol particle at sea level. While the gas-surface interaction model that provides accurate results for macroscopic bodies may not be appropriate for bodies that are comprised of only about a thousand atoms, it provides a limiting case that is more realistic than the elastic model. The paper concentrates on the transfer of energy from the air to an initially stationary sphere as it acquires Brownian motion. Individual particle trajectories vary wildly, but a clear relaxation process emerges from an ensemble average over tens of thousands of trajectories. The translational and rotational energies in equilibrium Brownian motion are determined. 
Empirical relationships are obtained for the mean translational and rotational relaxation times, the mean initial power input to the particle, the mean rates of energy transfer between the particle and air, and the diffusivity. These relationships are functions of the ratio of the particle mass to an average air molecule mass and the Knudsen number, which is the ratio of the mean free path in the air to the particle diameter. The ratio of the molecular radius to the particle radius also enters as a correction factor. The implications of Brownian relaxation for the second law of thermodynamics are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EL....10538004S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EL....10538004S"><span>Credit risk and the instability of the financial system: An ensemble approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Schmitt, Thilo A.; Chetalova, Desislava; Schäfer, Rudi; Guhr, Thomas</p> <p>2014-02-01</p> <p>The instability of the financial system as experienced in recent years and in previous periods is often linked to credit defaults, i.e., to the failure of obligors to make promised payments. Given the large number of credit contracts, this problem is amenable to be treated with approaches developed in statistical physics. We introduce the idea of ensemble averaging and thereby uncover generic features of credit risk. We then show that the often advertised concept of diversification, i.e., reducing the risk by distributing it, is deeply flawed when it comes to credit risk. 
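Why correlations defeat diversification can be seen in a minimal one-factor default model, a common sketch in credit risk; the parameters below are hypothetical, not taken from the paper:

```python
import math, random
from statistics import pstdev

def loss_fractions(n_obligors, rho, n_scenarios=2000, seed=11):
    """One-factor default model: obligor i defaults when
    sqrt(rho)*Z + sqrt(1-rho)*eps_i < threshold, with Z a common economic
    factor. The threshold -1.6449 puts the single-name default rate near 5%."""
    rng = random.Random(seed)
    thresh = -1.6449
    out = []
    for _ in range(n_scenarios):
        z = rng.gauss(0, 1)                       # shared factor this scenario
        defaults = sum(
            1 for _ in range(n_obligors)
            if math.sqrt(rho) * z + math.sqrt(1 - rho) * rng.gauss(0, 1) < thresh
        )
        out.append(defaults / n_obligors)         # portfolio loss fraction
    return out

indep = loss_fractions(200, rho=0.0)   # independent: diversification works
corr = loss_fractions(200, rho=0.3)    # correlated: the loss spread stays wide
```

With independent obligors the loss distribution narrows as the portfolio grows; with a common factor the scenario-to-scenario spread, and hence the extreme-loss tail, does not diversify away.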
The risk of extreme losses remains due to the ever present correlations, implying a substantial and persistent intrinsic danger to the financial system.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009PhRvB..80k5301S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009PhRvB..80k5301S"><span>Theory of nonlinear optical response of ensembles of double quantum dots</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sitek, Anna; Machnikowski, Paweł</p> <p>2009-09-01</p> <p>We study theoretically the time-resolved four-wave mixing (FWM) response of an ensemble of pairs of quantum dots undergoing radiative recombination. At short (picosecond) delay times, the response signal shows beats that may be dominated by the subensemble of resonant pairs, which gives access to the information on the interdot coupling. At longer delay times, the decay of the FWM signal is governed by two rates which result from the collective interaction between the two dots and the radiation modes. The two rates correspond to the subradiant and super-radiant components in the radiative decay. 
Coupling between the dots enhances the collective effects and makes them observable even when the average energy mismatch between the dots is relatively large.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22196718','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22196718"><span>Predicting the need for CT imaging in children with minor head injury using an ensemble of Naive Bayes classifiers.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Klement, William; Wilk, Szymon; Michalowski, Wojtek; Farion, Ken J; Osmond, Martin H; Verter, Vedat</p> <p>2012-03-01</p> <p>Using an automatic data-driven approach, this paper develops a prediction model that achieves more balanced performance (in terms of sensitivity and specificity) than the Canadian Assessment of Tomography for Childhood Head Injury (CATCH) rule, when predicting the need for computed tomography (CT) imaging of children after a minor head injury. CT is widely considered an effective tool for evaluating patients with minor head trauma who have potentially suffered serious intracranial injury. However, its use poses possible harmful effects, particularly for children, due to exposure to radiation. Safety concerns, along with issues of cost and practice variability, have led to calls for the development of effective methods to decide when CT imaging is needed. Clinical decision rules represent such methods and are normally derived from the analysis of large prospectively collected patient data sets. The CATCH rule was created by a group of Canadian pediatric emergency physicians to support the decision of referring children with minor head injury to CT imaging. The goal of the CATCH rule was to maximize the sensitivity of predictions of potential intracranial lesion while keeping specificity at a reasonable level. 
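A standard way to build a Naive Bayes ensemble on class-imbalanced data of this kind is to train each member on a balanced resample and combine by majority vote. The sketch below uses synthetic one-dimensional data and hypothetical helper names (`GaussianNB1D`, `balanced_ensemble`); it illustrates the rebalancing idea only and is not the CATCH model:

```python
import math, random
from statistics import mean, pstdev

def gauss_logpdf(x, mu, sd):
    return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd)

class GaussianNB1D:
    """Single-feature Gaussian Naive Bayes with class priors."""
    def fit(self, xs, ys):
        self.stats, self.logprior = {}, {}
        for c in (0, 1):
            xc = [x for x, y in zip(xs, ys) if y == c]
            self.stats[c] = (mean(xc), pstdev(xc) + 1e-9)
            self.logprior[c] = math.log(len(xc) / len(xs))
        return self
    def predict(self, x):
        score = {c: self.logprior[c] + gauss_logpdf(x, *self.stats[c])
                 for c in (0, 1)}
        return max(score, key=score.get)

def balanced_ensemble(xs, ys, n_members=15, seed=2):
    """Each member sees all positives plus an equal-size bootstrap of negatives."""
    rng = random.Random(seed)
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    members = []
    for _ in range(n_members):
        sub = [rng.choice(neg) for _ in range(len(pos))]
        members.append(GaussianNB1D().fit(pos + sub,
                                          [1] * len(pos) + [0] * len(sub)))
    return members

def vote(members, x):
    return int(sum(m.predict(x) for m in members) > len(members) / 2)

# Imbalanced synthetic data: ~5% positives with a slightly shifted feature.
rng = random.Random(9)
xs, ys = [], []
for _ in range(2000):
    y = 1 if rng.random() < 0.05 else 0
    xs.append(rng.gauss(1.0 if y else 0.0, 1.0))
    ys.append(y)

single = GaussianNB1D().fit(xs, ys)
ens = balanced_ensemble(xs, ys)
positives = [x for x, y in zip(xs, ys) if y == 1]
sens_single = mean(single.predict(x) for x in positives)
sens_ens = mean(vote(ens, x) for x in positives)
# The skewed priors push the single model toward "never positive";
# the balanced ensemble recovers most of the sensitivity, at some
# cost in specificity - the trade-off discussed in the abstract.
```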
After extensive analysis of the CATCH data set, characterized by severe class imbalance, and after a thorough evaluation of several data mining methods, we derived an ensemble of multiple Naive Bayes classifiers as the prediction model for CT imaging decisions. In the first phase of the experiment we compared the proposed ensemble model to other ensemble models employing rule-, tree- and instance-based member classifiers. Our prediction model demonstrated the best performance in terms of AUC, G-mean and sensitivity measures. In the second phase, using a bootstrapping experiment similar to that reported by the CATCH investigators, we showed that the proposed ensemble model achieved a more balanced predictive performance than the CATCH rule with an average sensitivity of 82.8% and an average specificity of 74.4% (vs. 98.1% and 50.0% for the CATCH rule respectively). Automatically derived prediction models cannot replace a physician's acumen. However, they help establish reference performance indicators for the purpose of developing clinical decision rules so the trade-off between prediction sensitivity and specificity is better understood. Copyright © 2011 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017DSRI..126..148R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017DSRI..126..148R"><span>Comparison of modeling methods to predict the spatial distribution of deep-sea coral and sponge in the Gulf of Alaska</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.</p> <p>2017-08-01</p> <p>Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters, and are associated with many different species of fishes and invertebrates. 
These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with random forest models having the best overall performance measured by the area under the receiver operating characteristic curve (AUC). The models also performed well on the test data for presence and absence with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models), and the tree-based models. The boosted regression tree and random forest models out-performed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance (~50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. We conclude that where data conforms well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. 
For data with highly zero-inflated distributions and non-normal distributions such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types performed better than the GLM or GAM models but slightly poorer than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27627355','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27627355"><span>Genuine non-self-averaging and ultraslow convergence in gelation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cho, Y S; Mazza, M G; Kahng, B; Nagler, J</p> <p>2016-08-01</p> <p>In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. 
Our framework may be helpful in understanding and controlling gelation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1353188-ultrafast-electronic-relaxation-through-conical-intersection-nonadiabatic-dynamics-disentangled-through-oscillator-strength-based-diabatization-framework','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1353188-ultrafast-electronic-relaxation-through-conical-intersection-nonadiabatic-dynamics-disentangled-through-oscillator-strength-based-diabatization-framework"><span>Ultrafast Electronic Relaxation through a Conical Intersection: Nonadiabatic Dynamics Disentangled through an Oscillator Strength-Based Diabatization Framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Medders, Gregory R.; Alguire, Ethan C.; Jain, Amber; ...</p> <p>2017-01-18</p> <p>Here, we employ surface hopping trajectories to model the short-time dynamics of gas-phase and partially solvated 4-(N,N-dimethylamino)benzonitrile (DMABN), a dual fluorescent molecule that is known to undergo a nonadiabatic transition through a conical intersection. To compare theory with time-resolved fluorescence measurements, we calculate the mixed quantum–classical density matrix and the ensemble averaged transition dipole moment. We introduce a diabatization scheme based on the oscillator strength to convert the TDDFT adiabatic states into diabatic states of La and Lb character. Somewhat surprisingly, we find that the rate of relaxation reported by emission to the ground state is almost 50% slower than the adiabatic population relaxation. 
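Numerically, the ensemble-averaged observables mentioned above (density matrix, transition dipole) reduce to averaging a per-trajectory signal over the swarm at each time step. A toy sketch with synthetic trajectories and random hop times, not the authors' TDDFT workflow:

```python
import random

def ensemble_average(trajectories):
    """Average an observable over an ensemble of trajectories at each time step."""
    n = len(trajectories)
    return [sum(traj[t] for traj in trajectories) / n
            for t in range(len(trajectories[0]))]

# Toy ensemble: each trajectory's "emission" switches off after a random hop
# time, mimicking how ensemble averaging turns discrete per-trajectory hops
# into a smooth decay curve.
random.seed(0)
trajs = []
for _ in range(100):
    hop = random.randint(2, 8)
    trajs.append([1.0 if t < hop else 0.0 for t in range(10)])

avg = ensemble_average(trajs)
print(avg[0], avg[-1])  # starts at 1.0, decays to 0.0
```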
Although our calculated adiabatic rates are largely consistent with previous theoretical calculations and no obvious effects of decoherence are seen, the diabatization procedure introduced here enables an explicit picture of dynamics in the branching plane, raising tantalizing questions about geometric phase effects in systems with dozens of atoms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19880032656&hterms=oxygen+consumption&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Doxygen%2Bconsumption','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19880032656&hterms=oxygen+consumption&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Doxygen%2Bconsumption"><span>A model-free method for mass spectrometer response correction. [for oxygen consumption and cardiac output calculation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Shykoff, Barbara E.; Swanson, Harvey T.</p> <p>1987-01-01</p> <p>A new method for correction of mass spectrometer output signals is described. Response-time distortion is reduced independently of any model of mass spectrometer behavior. The delay of the system is found first from the cross-correlation function of a step change and its response. A two-sided time-domain digital correction filter (deconvolution filter) is generated next from the same step response data using a regression procedure. Other data are corrected using the filter and delay. The mean squared error between a step response and a step is reduced considerably more after the use of a deconvolution filter than after the application of a second-order model correction. O2 consumption and CO2 production values calculated from data corrupted by a simulated dynamic process return to near the uncorrupted values after correction. 
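A rough illustration of the delay-estimation step described above (synthetic signals; this is not the authors' regression-generated deconvolution filter): differentiating both signals turns the step into an impulse, so the cross-correlation peak lands cleanly on the delay.

```python
def diff(x):
    # First difference: a step edge becomes a single impulse.
    return [b - a for a, b in zip(x, x[1:])]

def estimate_delay(stimulus, response):
    """Estimate system delay as the lag maximizing the cross-correlation
    of the differentiated input step and measured response."""
    ds, dr = diff(stimulus), diff(response)
    n = len(ds)
    def xcorr(lag):
        return sum(ds[i] * dr[i + lag] for i in range(n - lag))
    return max(range(n // 2), key=xcorr)

# Synthetic step and a copy delayed by 5 samples.
step = [0.0] * 10 + [1.0] * 40
delayed = [0.0] * 5 + step[:-5]
print(estimate_delay(step, delayed))  # -> 5
```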
Although a clean step response or the ensemble average of several responses contaminated with noise is needed for the generation of the filter, random noise of magnitude not above 0.5 percent added to the response to be corrected does not impair the correction severely.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28714698','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28714698"><span>MicroRNA Intercellular Transfer and Bioelectrical Regulation of Model Multicellular Ensembles by the Gap Junction Connectivity.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cervera, Javier; Meseguer, Salvador; Mafe, Salvador</p> <p>2017-08-17</p> <p>We have studied theoretically the microRNA (miRNA) intercellular transfer through voltage-gated gap junctions in terms of a biophysically grounded system of coupled differential equations. Instead of modeling a specific system, we use a general approach describing the interplay between the genetic mechanisms and the single-cell electric potentials. The dynamics of the multicellular ensemble are simulated under different conditions including spatially inhomogeneous transcription rates and local intercellular transfer of miRNAs. These processes result in spatiotemporal changes of miRNA, mRNA, and ion channel protein concentrations that eventually modify the bioelectrical states of small multicellular domains because of the ensemble average nature of the electrical potential. The simulations allow a qualitative understanding of the context-dependent nature of the effects observed when specific signaling molecules are transferred through gap junctions. 
The results suggest that an efficient miRNA intercellular transfer could permit the spatiotemporal control of small cellular domains by the conversion of single-cell genetic and bioelectric states into multicellular states regulated by the gap junction interconnectivity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27690054','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27690054"><span>Ensemble of One-Class Classifiers for Personal Risk Detection Based on Wearable Sensor Data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Rodríguez, Jorge; Barrera-Animas, Ari Y; Trejo, Luis A; Medina-Pérez, Miguel Angel; Monroy, Raúl</p> <p>2016-09-29</p> <p>This study introduces the One-Class K-means with Randomly-projected features Algorithm (OCKRA). OCKRA is an ensemble of one-class classifiers built over multiple projections of a dataset according to random feature subsets. Algorithms found in the literature spread over a wide range of applications where ensembles of one-class classifiers have been satisfactorily applied; however, none is oriented to the area under our study: personal risk detection. OCKRA has been designed with the aim of improving the detection performance in the problem posed by the Personal RIsk DEtection(PRIDE) dataset. PRIDE was built based on 23 test subjects, where the data for each user were captured using a set of sensors embedded in a wearable band. The performance of OCKRA was compared against support vector machine and three versions of the Parzen window classifier. On average, experimental results show that OCKRA outperformed the other classifiers for at least 0.53% of the area under the curve (AUC). 
In addition, OCKRA achieved an AUC above 90% for more than 57% of the users.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5087407','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5087407"><span>Ensemble of One-Class Classifiers for Personal Risk Detection Based on Wearable Sensor Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Rodríguez, Jorge; Barrera-Animas, Ari Y.; Trejo, Luis A.; Medina-Pérez, Miguel Angel; Monroy, Raúl</p> <p>2016-01-01</p> <p>This study introduces the One-Class K-means with Randomly-projected features Algorithm (OCKRA). OCKRA is an ensemble of one-class classifiers built over multiple projections of a dataset according to random feature subsets. Algorithms found in the literature spread over a wide range of applications where ensembles of one-class classifiers have been satisfactorily applied; however, none is oriented to the area under our study: personal risk detection. OCKRA has been designed with the aim of improving the detection performance in the problem posed by the Personal RIsk DEtection(PRIDE) dataset. PRIDE was built based on 23 test subjects, where the data for each user were captured using a set of sensors embedded in a wearable band. The performance of OCKRA was compared against support vector machine and three versions of the Parzen window classifier. On average, experimental results show that OCKRA outperformed the other classifiers for at least 0.53% of the area under the curve (AUC). In addition, OCKRA achieved an AUC above 90% for more than 57% of the users. 
PMID:27690054</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19485687','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19485687"><span>Seeing the mean: ensemble coding for sets of faces.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Haberman, Jason; Whitney, David</p> <p>2009-06-01</p> <p>We frequently encounter groups of similar objects in our visual environment: a bed of flowers, a basket of oranges, a crowd of people. How does the visual system process such redundancy? Research shows that rather than code every element in a texture, the visual system favors a summary statistical representation of all the elements. The authors demonstrate that although it may facilitate texture perception, ensemble coding also occurs for faces-a level of processing well beyond that of textures. Observers viewed sets of faces varying in emotionality (e.g., happy to sad) and assessed the mean emotion of each set. Although observers retained little information about the individual set members, they had a remarkably precise representation of the mean emotion. Observers continued to discriminate the mean emotion accurately even when they viewed sets of 16 faces for 500 ms or less. Modeling revealed that perceiving the average facial expression in groups of faces was not due to noisy representation or noisy discrimination. These findings support the hypothesis that ensemble coding occurs extremely fast at multiple levels of visual analysis. 
(c) 2009 APA, all rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29297620','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29297620"><span>Development of full-field optical spatial coherence tomography system for automated identification of malaria using the multilevel ensemble classifier.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Singla, Neeru; Srivastava, Vishal; Mehta, Dalip Singh</p> <p>2018-05-01</p> <p>Malaria is a life-threatening infectious blood disease of humans and other animals, caused by parasitic protozoans of the genus Plasmodium and especially prevalent in developing countries. The gold-standard method for the detection of malaria is microscopic examination of chemically stained blood smears. We developed an automated optical spatial coherence tomographic system that uses a machine learning approach for fast identification of malaria-infected cells. In this study, 28 samples (15 healthy and 13 malaria-infected red blood cell samples) were imaged by the developed system and 13 features were extracted. We designed a multilevel ensemble-based classifier for the quantitative prediction of the different stages of the malaria-infected cells. The proposed classifier was evaluated with repeated k-fold cross-validation and achieved a high average accuracy of 97.9% for identifying cells in the late trophozoite stage of malaria infection. Overall, our proposed system and multilevel ensemble model have substantial quantifiable potential to detect the different stages of malaria infection without staining or expert intervention. © 2018 WILEY-VCH Verlag GmbH & Co. 
KGaA, Weinheim.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016RvGeo..54..336K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016RvGeo..54..336K"><span>A review of multimodel superensemble forecasting for weather, seasonal climate, and hurricanes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Krishnamurti, T. N.; Kumar, V.; Simon, A.; Bhardwaj, A.; Ghosh, T.; Ross, R.</p> <p>2016-06-01</p> <p>This review provides a summary of work in the area of ensemble forecasts for weather, climate, oceans, and hurricanes. This includes a combination of multiple forecast model results that does not dwell on the ensemble mean but uses a unique collective bias reduction procedure. A theoretical framework for this procedure is provided, utilizing a suite of models that is constructed from the well-known Lorenz low-order nonlinear system. A tutorial that includes a walk-through table and illustrates the inner workings of the multimodel superensemble's principle is provided. Systematic errors in a single deterministic model arise from a host of features that range from the model's initial state (data assimilation), resolution, representation of physics, dynamics, and ocean processes, local aspects of orography, water bodies, and details of the land surface. Models, in their diversity of representation of such features, end up leaving unique signatures of systematic errors. The multimodel superensemble utilizes as many as 10 million weights to take into account the bias errors arising from these diverse features of multimodels. The design of a single deterministic forecast model that utilizes multiple features derived from this large volume of weights is provided here. 
This has led to a better understanding of the error growth and the collective bias reductions for several of the physical parameterizations within diverse models, such as cumulus convection, planetary boundary layer physics, and radiative transfer. A number of examples for weather, seasonal climate, hurricanes, and subsurface oceanic forecast skills of member models, the ensemble mean, and the superensemble are provided.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFMEP43A0951A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFMEP43A0951A"><span>Aerodynamic Surface Stress Intermittency and Conditionally Averaged Turbulence Statistics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Anderson, W.</p> <p>2015-12-01</p> <p>Aeolian erosion of dry, flat, semi-arid landscapes is induced (and sustained) by kinetic energy fluxes in the aloft atmospheric surface layer. During saltation -- the mechanism responsible for surface fluxes of dust and sediment -- briefly suspended sediment grains undergo a ballistic trajectory before impacting and `splashing' smaller-diameter (dust) particles vertically. Conceptual models typically indicate that sediment flux, q (via saltation or drift), scales with imposed aerodynamic (basal) stress raised to some exponent, n, where n > 1. Since basal stress (in fully rough, inertia-dominated flows) scales with the incoming velocity squared, u^2, it follows that q ~ u^2n (where u is some relevant component of the above flow field, u(x,t)). Thus, even small (turbulent) deviations of u from its time-averaged value may play an enormously important role in aeolian activity on flat, dry landscapes. 
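The scaling argument q ~ u^2n implies that turbulent fluctuations raise the time-averaged flux above the flux computed from the mean flow alone, a convexity (Jensen's inequality) effect. A quick numerical check with synthetic Gaussian velocities and an illustrative n = 2:

```python
import random

random.seed(2)
n = 2                      # flux exponent: q ~ u^(2n), n > 1
u_mean, u_std = 5.0, 1.5   # synthetic turbulent velocity statistics

u = [random.gauss(u_mean, u_std) for _ in range(100_000)]

# Time-averaged flux including turbulent fluctuations...
q_turbulent = sum(ui ** (2 * n) for ui in u) / len(u)
# ...versus the flux computed from the time-averaged velocity alone.
q_mean_flow = u_mean ** (2 * n)

print(q_turbulent > q_mean_flow)  # -> True: fluctuations enhance the flux
```

For a Gaussian, the gap is exact: E[u^4] = mu^4 + 6 mu^2 sigma^2 + 3 sigma^4 > mu^4, so even modest turbulence intensity inflates the mean flux substantially.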
The importance of this argument is further augmented given that turbulence in the atmospheric surface layer exhibits maximum Reynolds stresses in the fluid immediately above the landscape. In order to illustrate the importance of surface stress intermittency, we have used conditional averaging predicated on aerodynamic surface stress during large-eddy simulation of atmospheric boundary layer flow over a flat landscape with momentum roughness length appropriate for the Llano Estacado in west Texas (a flat agricultural region that is notorious for dust transport). By using data from a field campaign to measure diurnal variability of aeolian activity and prevailing winds on the Llano Estacado, we have retrieved the threshold friction velocity (which can be used to compute threshold surface stress under the geostrophic balance with the Monin-Obukhov similarity theory). This averaging procedure provides an ensemble-mean visualization of flow structures responsible for erosion `events'. Preliminary evidence indicates that surface stress peaks are associated with the passage of inclined, high-momentum regions flanked by adjacent low-momentum regions. 
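The conditional-averaging step itself is simple: average the flow field only over samples where the surface stress exceeds the threshold. A schematic with synthetic, correlated stress and velocity series (the threshold and all statistics below are arbitrary illustrations):

```python
import random

random.seed(3)

def conditional_average(signal, condition):
    """Ensemble-average signal samples only where the condition holds."""
    selected = [s for s, c in zip(signal, condition) if c]
    return sum(selected) / len(selected)

# Synthetic surface stress and streamwise velocity: high-momentum passages
# raise both, so conditioning on stress peaks picks out the fast flow.
stress = [random.gauss(0.1, 0.03) for _ in range(10_000)]
velocity = [8.0 + 20.0 * (s - 0.1) + random.gauss(0, 0.1) for s in stress]

threshold = 0.13  # illustrative threshold stress ("erosion event" criterion)
events = [s > threshold for s in stress]

cond = conditional_average(velocity, events)
overall = sum(velocity) / len(velocity)
print(cond > overall)  # -> True: event-conditioned flow is faster than average
```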
We will characterize geometric attributes of such structures and explore streamwise and vertical vorticity distribution within the conditionally averaged flow field.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H41A1418H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H41A1418H"><span>Ensemble Simulation of Sierra Nevada Snowmelt Runoff Using a Regional Climate Modeling Approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Holtzman, N.; Pavelsky, T.; Wrzesien, M.</p> <p>2017-12-01</p> <p>The snowmelt-dominated watersheds on the western slopes of the California Sierra Nevada drain into reservoirs that generate electricity and help irrigate Central Valley farms. At the end of the wet season of each year, around April 1, most of the water that will become runoff in these basins is stored as snow at high elevations. Snow measurements provide a good estimate of the total annual runoff to come. For efficient water management, however, it is also useful to know the timing of runoff. When and how large will the peak flow into a reservoir be, and how fast will the flow decline after it peaks? We address such questions using a coupled regional climate and land surface model, WRF and Noah-MP, to dynamically downscale the North American Regional Reanalysis (NARR) with an ensemble approach. First, we assess several methods of deriving melt-season runoff from WRF. We run WRF for a complete water year, and also test initializing WRF snow from observation-based datasets at the approximate date of peak snow water equivalent. By aggregating the modeled runoffs over the drainage basins of reservoirs and comparing to naturalized flow data, we can assess the basin-scale snow accumulation accuracy of WRF and the other datasets in the Sierra. 
After choosing a procedure to set the model snow at the end of the wet season, we apply in WRF the melt-season meteorology from 20 different past years of NARR to produce an ensemble of simulations, each with modeled flows into 8 reservoirs spanning the Sierra. We use the ensemble to characterize the likely spread in the timing and magnitude of hydrologic outcomes during the melt season. Probabilistic forecasts can help water-energy systems operate more efficiently. The ensemble also shows the effect of warm-season temperature extremes on flow timing, allowing human systems to prepare for those possibilities. Finally, the ensemble provides a baseline estimate of the maximum variability in runoff timing that could be generated by past conditions. If future runoff patterns consistently exceed the extremes found in the ensemble, nonstationary hydroclimate can be inferred.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15768404','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15768404"><span>Automated use of mutagenesis data in structure prediction.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nanda, Vikas; DeGrado, William F</p> <p>2005-05-15</p> <p>In the absence of experimental structural determination, numerous methods are available to indirectly predict or probe the structure of a target molecule. Genetic modification of a protein sequence is a powerful tool for identifying key residues involved in binding reactions or protein stability. Mutagenesis data is usually incorporated into the modeling process either through manual inspection of model compatibility with empirical data, or through the generation of geometric constraints linking sensitive residues to a binding interface. 
We present an approach derived from statistical studies of lattice models for introducing mutation information directly into the fitness score. The approach takes into account the phenotype of mutation (neutral or disruptive) and calculates the energy for a given structure over an ensemble of sequences. The structure prediction procedure searches for the optimal conformation where neutral sequences either have no impact or improve stability and disruptive sequences reduce stability relative to wild type. We examine three types of sequence ensembles: information from saturation mutagenesis, scanning mutagenesis, and homologous proteins. Incorporating multiple sequences into a statistical ensemble serves to energetically separate the native state and misfolded structures. As a result, the prediction of structure with a poor force field is sufficiently enhanced by mutational information to improve accuracy. Furthermore, by separating misfolded conformations from the target score, the ensemble energy serves to speed up conformational search algorithms such as Monte Carlo-based methods. Copyright 2005 Wiley-Liss, Inc.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMGC44B..05V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMGC44B..05V"><span>The Role of Ocean and Atmospheric Heat Transport in the Arctic Amplification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vargas Martes, R. M.; Kwon, Y. O.; Furey, H. H.</p> <p>2017-12-01</p> <p>Observational data and climate model projections have suggested that the Arctic region is warming around twice faster than the rest of the globe, which has been referred as the Arctic Amplification (AA). While the local feedbacks, e.g. 
sea ice-albedo feedback, are often suggested as the primary driver of AA by previous studies, the role of meridional heat transport by ocean and atmosphere is less clear. This study uses the Community Earth System Model version 1 Large Ensemble simulation (CESM1-LE) to seek a deeper understanding of the role that meridional oceanic and atmospheric heat transports play in AA. The simulation consists of 40 ensemble members with the same physics and external forcing using a single fully coupled climate model. Each ensemble member spans two time periods: the historical period from 1920 to 2005 using the Coupled Model Intercomparison Project Phase 5 (CMIP5) historical forcing and the future period from 2006 to 2100 using the CMIP5 Representative Concentration Pathways 8.5 (RCP8.5) scenario. Each of the ensemble members is initialized with slightly different air temperatures. As the CESM1-LE uses a single model, unlike the CMIP5 multi-model ensemble, the internal variability and the externally forced components can be separated more clearly. The projections are calculated by comparing the period 2081-2100 relative to the period 2001-2020. The CESM1-LE projects an AA of 2.5-2.8 times faster than the global average, which is within the range of those from the CMIP5 multi-model ensemble. However, the spread of AA from the CESM1-LE, which is attributed to the internal variability, is 2-3 times smaller than that of the CMIP5 ensemble, which may also include the inter-model differences. The CESM1-LE projects a decrease in the atmospheric heat transport into the Arctic and an increase in the oceanic heat transport. The atmospheric heat transport is further decomposed into moisture transport and dry static energy transport. 
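The separation a single-model large ensemble affords can be sketched numerically: with identical forcing across members, the ensemble mean estimates the forced response while the member-to-member spread estimates internal variability. All numbers below are synthetic stand-ins:

```python
import random

random.seed(4)

def forced_and_internal(members):
    """Split an ensemble of scalar projections into its mean (forced response)
    and its standard deviation (internal variability)."""
    n = len(members)
    mean = sum(members) / n
    spread = (sum((m - mean) ** 2 for m in members) / n) ** 0.5
    return mean, spread

# 40 synthetic members: a common forced warming signal of 2.6 K plus
# member-specific noise from perturbed initial conditions.
members = [2.6 + random.gauss(0, 0.3) for _ in range(40)]
mean, spread = forced_and_internal(members)
print(round(mean, 1), round(spread, 2))  # mean near 2.6; spread near 0.3
```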
Also, the oceanic heat transport is decomposed into the Pacific and Atlantic contributions.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22255211-quantitative-study-fluctuation-effects-fast-lattice-monte-carlo-simulations-compression-grafted-homopolymers','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22255211-quantitative-study-fluctuation-effects-fast-lattice-monte-carlo-simulations-compression-grafted-homopolymers"><span>Quantitative study of fluctuation effects by fast lattice Monte Carlo simulations: Compression of grafted homopolymers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical 
Information (OSTI.GOV)</a></p> <p>Zhang, Pengfei; Wang, Qiang, E-mail: q.wang@colostate.edu</p> <p>2014-01-28</p> <p>Using fast lattice Monte Carlo (FLMC) simulations [Q. Wang, Soft Matter 5, 4564 (2009)] and the corresponding lattice self-consistent field (LSCF) calculations, we studied a model system of grafted homopolymers, in both the brush and mushroom regimes, in an explicit solvent compressed by an impenetrable surface. Direct comparisons between FLMC and LSCF results, both of which are based on the same Hamiltonian (thus without any parameter-fitting between them), unambiguously and quantitatively reveal the fluctuations/correlations neglected by the latter. We studied both the structure (including the canonical-ensemble averages of the height and the mean-square end-to-end distances of grafted polymers) and thermodynamics (including the ensemble-averaged reduced energy density and the related internal energy per chain, the differences in the Helmholtz free energy and entropy per chain from the uncompressed state, and the pressure due to compression) of the system. In particular, we generalized the method for calculating pressure in lattice Monte Carlo simulations proposed by Dickman [J. Chem. Phys. 87, 2246 (1987)], and combined it with the Wang-Landau–Optimized Ensemble sampling [S. Trebst, D. A. Huse, and M. Troyer, Phys. Rev. E 70, 046701 (2004)] to efficiently and accurately calculate the free energy difference and the pressure due to compression. 
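As a reference point for the canonical-ensemble averages discussed above, here is a plain Metropolis estimate of a two-level system's excited-state occupancy, checked against the exact Boltzmann value. This generic sampler is only an illustration; it is not the Wang-Landau-optimized scheme used in the paper:

```python
import math
import random

random.seed(5)

def metropolis_two_level(beta, delta_e, steps=200_000):
    """Canonical-ensemble average occupancy of the excited state of a
    two-level system, sampled with the Metropolis acceptance rule."""
    state, occ = 0, 0
    for _ in range(steps):
        trial = 1 - state
        de = delta_e if trial == 1 else -delta_e
        # Accept downhill moves always, uphill moves with probability e^(-beta*dE).
        if de <= 0 or random.random() < math.exp(-beta * de):
            state = trial
        occ += state
    return occ / steps

beta, delta_e = 1.0, 1.0
exact = 1.0 / (1.0 + math.exp(beta * delta_e))  # Boltzmann result, ~0.269
est = metropolis_two_level(beta, delta_e)
print(abs(est - exact) < 0.01)  # the sampled average converges to the exact one
```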
While we mainly examined the effects of the degree of compression, the distance between the nearest-neighbor grafting points, the reduced number of chains grafted at each grafting point, and the system fluctuations/correlations in an athermal solvent, the θ-solvent is also considered in some cases.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25698176','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25698176"><span>Method for exploratory cluster analysis and visualisation of single-trial ERP ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Williams, N J; Nasuto, S J; Saddy, J D</p> <p>2015-07-30</p> <p>The validity of ensemble averaging on event-related potential (ERP) data has been questioned, due to its assumption that the ERP is identical across trials. Thus, there is a need for preliminary testing for cluster structure in the data. We propose a complete pipeline for the cluster analysis of ERP data. To increase the signal-to-noise ratio (SNR) of the raw single trials, we used a denoising method based on Empirical Mode Decomposition (EMD). Next, we used a bootstrap-based method to determine the number of clusters, through a measure called the Stability Index (SI). We then used a clustering algorithm based on a Genetic Algorithm (GA) to define initial cluster centroids for subsequent k-means clustering. Finally, we visualised the clustering results through a scheme based on Principal Component Analysis (PCA). After validating the pipeline on simulated data, we tested it on data from two experiments: a P300 speller paradigm on a single subject and a language processing study on 25 subjects. Results revealed evidence for the existence of 6 clusters in one experimental condition from the language processing study. 
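The k-means stage of the pipeline above can be sketched in one dimension; here two well-separated synthetic "trial" populations stand in for distinct ERP clusters, and a deterministic spread-out initialization replaces the paper's GA-based seeding:

```python
import random

random.seed(6)

def kmeans_1d(xs, k, iters=50):
    """Plain k-means in 1-D: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its group."""
    lo, hi = min(xs), max(xs)
    centroids = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda i: abs(x - centroids[i]))
            groups[nearest].append(x)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return sorted(centroids)

# Two synthetic trial populations centred at 0.0 and 2.0.
trials = [random.gauss(0.0, 0.2) for _ in range(200)] + \
         [random.gauss(2.0, 0.2) for _ in range(200)]
c = kmeans_1d(trials, 2)
print(c)  # centroids near 0.0 and 2.0
```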
Further, a two-way chi-square test revealed an influence of subject on cluster membership. Our analysis operates on denoised single trials, the number of clusters is determined in a principled manner, and the results are presented through an intuitive visualisation. Given the cluster structure in some experimental conditions, we suggest application of cluster analysis as a preliminary step before ensemble averaging. Copyright © 2015 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28972148','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28972148"><span>Time-course, negative-stain electron microscopy-based analysis for investigating protein-protein interactions at the single-molecule level.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nogal, Bartek; Bowman, Charles A; Ward, Andrew B</p> <p>2017-11-24</p> <p>Several biophysical approaches are available to study protein-protein interactions. Most approaches are conducted in bulk solution, and are therefore limited to an average measurement of the ensemble of molecular interactions. Here, we show how single-particle EM can enrich our understanding of protein-protein interactions at the single-molecule level and potentially capture states that are unobservable with ensemble methods because they are below the limit of detection or not conducted on an appropriate time scale. Using the HIV-1 envelope glycoprotein (Env) and its interaction with receptor CD4-binding site neutralizing antibodies as a model system, we both corroborate ensemble kinetics-derived parameters and demonstrate how time-course EM can further dissect stoichiometric states of complexes that are not readily observable with other methods. 
Visualization of the kinetics and stoichiometry of Env-antibody complexes demonstrated the applicability of our approach to qualitatively and semi-quantitatively differentiate two highly similar neutralizing antibodies. Furthermore, implementation of machine-learning techniques for sorting class averages of these complexes into discrete subclasses of particles helped reduce human bias. Our data provide proof of concept that single-particle EM can be used to generate a "visual" kinetic profile; the approach should be amenable to studying many other protein-protein interactions and is relatively simple and complementary to well-established biophysical approaches. Moreover, our method provides critical insights into broadly neutralizing antibody recognition of Env, which may inform vaccine immunogen design and immunotherapeutic development. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70048754','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70048754"><span>Climate change and watershed mercury export: a multiple projection and model analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Golden, Heather E.; Knightes, Christopher D.; Conrads, Paul; Feaster, Toby D.; Davis, Gary M.; Benedict, Stephen T.; Bradley, Paul M.</p> <p>2013-01-01</p> <p>Future shifts in climatic conditions may impact watershed mercury (Hg) dynamics and transport. An ensemble of watershed models was applied in the present study to simulate and evaluate the responses of hydrological and total Hg (THg) fluxes from the landscape to the watershed outlet and in-stream THg concentrations to contrasting climate change projections for a watershed in the southeastern coastal plain of the United States.
Simulations were conducted under stationary atmospheric deposition and land cover conditions to explicitly evaluate the effect of projected precipitation and temperature on watershed Hg export (i.e., the flux of Hg at the watershed outlet). Based on downscaled inputs from 2 global circulation models that capture extremes of projected wet (Community Climate System Model, Ver 3 [CCSM3]) and dry (ECHAM4/HOPE-G [ECHO]) conditions for this region, watershed model simulation results suggest a decrease of approximately 19% in ensemble-averaged mean annual watershed THg fluxes using the ECHO climate-change model and an increase of approximately 5% in THg fluxes with the CCSM3 model. Ensemble-averaged mean annual ECHO in-stream THg concentrations increased 20%, while those of CCSM3 decreased by 9% between the baseline and projected simulation periods. Watershed model simulation results using both climate change models suggest that monthly watershed THg fluxes increase during the summer, when projected flow is higher than baseline conditions. The present study's multiple watershed model approach underscores the uncertainty associated with climate change response projections and their use in climate change management decisions. 
Thus, single-model predictions can be misleading, particularly in developmental stages of watershed Hg modeling.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JCoPh.270...70K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JCoPh.270...70K"><span>Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad</p> <p>2014-08-01</p> <p>Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation by more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs.
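The reweighting idea underlying this kind of extrapolation can be illustrated with a toy canonical ensemble. The sketch below assumes a 1-D harmonic potential (not the Lennard-Jones systems of the paper) so that the reweighted average can be checked against the exact result; samples at inverse temperature β are reused to estimate an average at a neighboring β′ without re-simulating:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, beta_new = 1.0, 1.2   # original and target inverse temperatures

# Canonical samples for a 1-D harmonic potential U(x) = x^2 / 2 at inverse
# temperature beta: p(x) ∝ exp(-beta*U) is Gaussian with variance 1/beta.
x = rng.normal(0.0, np.sqrt(1.0 / beta), 200_000)
U = 0.5 * x**2

# Reweight the existing chain to beta_new instead of re-simulating:
# w_i ∝ exp(-(beta_new - beta) U_i),  <A>_new = sum(w_i A_i) / sum(w_i).
logw = -(beta_new - beta) * U
w = np.exp(logw - logw.max())          # shift for numerical stability
U_new = np.sum(w * U) / np.sum(w)

print(U_new, 0.5 / beta_new)           # estimate vs exact <U> = 1/(2*beta_new)
```

Reweighting only works well when β′ is close enough to β that the two Boltzmann distributions overlap, which is why the abstract speaks of *neighboring* thermodynamic conditions.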
Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity, and isothermal compressibility were extrapolated along isochores, isotherms, and paths of changing temperature and density from the original simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28140332','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28140332"><span>Robust electroencephalogram phase estimation with applications in brain-computer interface systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Seraj, Esmaeil; Sameni, Reza</p> <p>2017-03-01</p> <p>In this study, a robust method is developed for frequency-specific electroencephalogram (EEG) phase extraction using the analytic representation of the EEG. Based on recent theoretical findings in this area, it is shown that some of the phase variations (previously attributed to the brain response) are systematic side effects of the methods used for EEG phase calculation, especially during low analytical amplitude segments of the EEG. With this insight, the proposed method generates randomized ensembles of the EEG phase using minor perturbations in the zero-pole loci of narrow-band filters, followed by phase estimation using the signal's analytical form and ensemble averaging over the randomized ensembles to obtain a robust EEG phase and frequency.
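A minimal sketch of this randomized-ensemble phase estimator follows. It assumes jitter on the band edges as a simple stand-in for the zero-pole perturbations described above, a synthetic 10 Hz oscillation in place of real EEG, and a circular mean across the filter ensemble; all parameter values are invented:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(2)
fs = 250.0                                   # sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
# Noisy stand-in for an EEG channel with a 10 Hz (alpha-band) component.
x = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)

def ensemble_phase(x, fs, band=(8.0, 12.0), n_filters=20, jitter=0.25, seed=0):
    """Instantaneous phase averaged over an ensemble of slightly perturbed
    band-pass filters (band-edge jitter as a proxy for zero-pole jitter)."""
    rng = np.random.default_rng(seed)
    phases = []
    for _ in range(n_filters):
        lo = band[0] + rng.uniform(-jitter, jitter)
        hi = band[1] + rng.uniform(-jitter, jitter)
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        # filtfilt is zero-phase; hilbert gives the analytic signal.
        phases.append(np.angle(hilbert(filtfilt(b, a, x))))
    # Circular mean across the randomized ensemble -> robust phase estimate.
    return np.angle(np.mean(np.exp(1j * np.array(phases)), axis=0))

phi = ensemble_phase(x, fs)
```

For a clean 10 Hz component, the unwrapped phase should advance at roughly 2π·10 rad/s away from the signal edges.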
This Monte Carlo estimation method is shown to be very robust to noise and to minor changes of the filter parameters, and it reduces the effect of spurious EEG phase jumps that do not have a cerebral origin. As proof of concept, the proposed method is used for extracting EEG phase features for a brain-computer interface (BCI) application. The results show significant improvement in classification rates using rather simple phase-related features and standard K-nearest-neighbors and random-forest classifiers, over a standard BCI dataset. The average performance improved by 4-7% (in the absence of additive noise) and 8-12% (in the presence of additive noise). The significance of these improvements was statistically confirmed by a paired-sample t-test, with p-values of 0.01 and 0.03, respectively. The proposed method for EEG phase calculation is very generic and may be applied to other EEG phase-based studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20180000551&hterms=Scheme&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3DScheme','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20180000551&hterms=Scheme&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3DScheme"><span>Sensitivity of CONUS Summer Rainfall to the Selection of Cumulus Parameterization Schemes in NU-WRF Seasonal Simulations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Iguchi, Takamichi; Tao, Wei-Kuo; Wu, Di; Peters-Lidard, Christa; Santanello, Joseph A.; Kemp, Eric; Tian, Yudong; Case, Jonathan; Wang, Weile; Ferraro, Robert</p> <p>2017-01-01</p> <p>This study investigates the sensitivity of daily rainfall rates in regional seasonal simulations over the contiguous United States (CONUS) to different cumulus parameterization schemes. Daily rainfall fields were simulated at 24-km resolution using the NASA-Unified Weather Research and Forecasting (NU-WRF) Model for June-August 2000. Four cumulus parameterization schemes and two options for shallow cumulus components in a specific scheme were tested. The spread in the domain-mean rainfall rates across the parameterization schemes was generally consistent between the entire CONUS and most subregions. The selection of the shallow cumulus component in a specific scheme had more impact than that of the four cumulus parameterization schemes. Regional variability in the performance of each scheme was assessed by calculating optimally weighted ensembles that minimize full root-mean-square errors against reference datasets. The spatial pattern of the seasonally averaged rainfall was insensitive to the selection of cumulus parameterization over mountainous regions because of the topographical pattern constraint, so that the simulation errors were mostly attributed to the overall bias there. In contrast, the spatial patterns over the Great Plains regions as well as the temporal variation over most parts of the CONUS were relatively sensitive to cumulus parameterization selection. Overall, for the seasonally averaged daily rainfall, adopting a single simulation result was preferable to generating a weighted ensemble, as long as the candidate simulations' overall biases shared the same sign.
However, an ensemble of multiple simulation results was more effective in reducing errors when temporal variation was also considered.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009JChPh.130u4904M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009JChPh.130u4904M"><span>In the eye of the beholder: Inhomogeneous distribution of high-resolution shapes within the random-walk ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Müller, Christian L.; Sbalzarini, Ivo F.; van Gunsteren, Wilfred F.; Žagrović, Bojan; Hünenberger, Philippe H.</p> <p>2009-06-01</p> <p>The concept of high-resolution shapes (also referred to as folds or states, depending on the context) of a polymer chain plays a central role in polymer science, structural biology, bioinformatics, and biopolymer dynamics. However, although the idea of shape is intuitively very useful, there is no unambiguous mathematical definition for this concept. In the present work, the distributions of high-resolution shapes within the ideal random-walk ensembles with N = 3,…,6 beads (or up to N = 10 for some properties) are investigated using a systematic (grid-based) approach based on a simple working definition of shapes relying on the root-mean-square atomic positional deviation as a metric (i.e., to define the distance between pairs of structures) and a single cutoff criterion for the shape assignment. Although the random-walk ensemble appears to represent the paramount of homogeneity and randomness, this analysis reveals that the distribution of shapes within this ensemble, i.e., in the total absence of interatomic interactions characteristic of a specific polymer (beyond the generic connectivity constraint), is significantly inhomogeneous.
In particular, a specific (densest) shape occurs with a local probability that is 1.28, 1.79, 2.94, and 10.05 times (N = 3,…,6) higher than the corresponding average over all possible shapes (these results can tentatively be extrapolated to a factor as large as about 10^28 for N = 100). The qualitative results of this analysis lead to a few rather counterintuitive suggestions, namely, that, e.g., (i) a fold classification analysis applied to the random-walk ensemble would lead to the identification of random-walk "folds;" (ii) a clustering analysis applied to the random-walk ensemble would also lead to the identification of random-walk "states" and associated relative free energies; and (iii) a random-walk ensemble of polymer chains could lead to well-defined diffraction patterns in hypothetical fiber or crystal diffraction experiments. The inhomogeneous nature of the shape probability distribution identified here for random walks may represent a significant underlying baseline effect in the analysis of real polymer chain ensembles (i.e., in the presence of specific interatomic interactions).
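The working definition of shapes (an RMSD metric plus a single cutoff) can be sketched for ideal random walks. The snippet below is an illustrative leader-style assignment with an invented cutoff and walk count, not the systematic grid-based procedure of the paper; the RMSD uses the standard Kabsch best-fit rotation:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_walk(n_beads):
    """Ideal random walk: unit-length steps in uniformly random 3-D directions."""
    steps = rng.standard_normal((n_beads - 1, 3))
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)
    return np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

def rmsd(a, b):
    """Best-fit RMSD after centering and optimal (Kabsch) rotation."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(u @ vt))        # avoid improper rotations
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    return np.sqrt(np.mean(np.sum((a @ rot - b) ** 2, axis=1)))

# Assign each walk to the first existing "shape" within the cutoff
# (a simple leader algorithm), mimicking the single-cutoff criterion.
walks = [random_walk(6) for _ in range(300)]
cutoff, leaders, counts = 0.7, [], []
for w in walks:
    for i, l in enumerate(leaders):
        if rmsd(w, l) < cutoff:
            counts[i] += 1
            break
    else:
        leaders.append(w)
        counts.append(1)

# Inspect the occupancy distribution over the discovered shapes.
print(len(leaders), max(counts), min(counts))
```

Even in this crude version, the occupancy counts over shapes are visibly non-uniform, echoing the inhomogeneity described in the abstract.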
As a consequence, a part of what is called a polymer shape may actually reside just "in the eye of the beholder" rather than in the nature of the interactions between the constituting atoms, and the corresponding observation-related bias should be taken into account when drawing conclusions from shape analyses as applied to real structural ensembles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ACP....1713103H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ACP....1713103H"><span>Ensemble prediction of air quality using the WRF/CMAQ model system for health effect studies in China</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hu, Jianlin; Li, Xun; Huang, Lin; Ying, Qi; Zhang, Qiang; Zhao, Bin; Wang, Shuxiao; Zhang, Hongliang</p> <p>2017-11-01</p> <p>Accurate exposure estimates are required for health effect analyses of severe air pollution in China. Chemical transport models (CTMs) are widely used to provide spatial distribution, chemical composition, particle size fractions, and source origins of air pollutants. The accuracy of air quality predictions in China is greatly affected by the uncertainties of emission inventories. The Community Multiscale Air Quality (CMAQ) model with meteorological inputs from the Weather Research and Forecasting (WRF) model was used in this study to simulate air pollutants in China in 2013. Four simulations were conducted with four different anthropogenic emission inventories, including the Multi-resolution Emission Inventory for China (MEIC), the Emission Inventory for China by School of Environment at Tsinghua University (SOE), the Emissions Database for Global Atmospheric Research (EDGAR), and the Regional Emission inventory in Asia version 2 (REAS2).
Model performance of each simulation was evaluated against available observation data from 422 sites in 60 cities across China. Model predictions of O3 and PM2.5 generally meet the model performance criteria, but performance differences exist in different regions, for different pollutants, and among inventories. Ensemble predictions were calculated by linearly combining the results from different inventories to minimize the sum of the squared errors between the ensemble results and the observations in all cities. The ensemble concentrations show improved agreement with observations in most cities. The mean fractional bias (MFB) and mean fractional errors (MFEs) of the ensemble annual PM2.5 in the 60 cities are -0.11 and 0.24, respectively, which are better than the MFB (-0.25 to -0.16) and MFE (0.26-0.31) of individual simulations. The ensemble annual daily maximum 1 h O3 (O3-1h) concentrations are also improved, with mean normalized bias (MNB) of 0.03 and mean normalized errors (MNE) of 0.14, compared to MNB of 0.06-0.19 and MNE of 0.16-0.22 of the individual predictions. The ensemble predictions agree better with observations with daily, monthly, and annual averaging times in all regions of China for both PM2.5 and O3-1h. The study demonstrates that combining predictions from individual emission inventories into an ensemble can improve the accuracy of predicted temporal and spatial distributions of air pollutants.
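The linear-combination step can be sketched with ordinary least squares: because each individual prediction lies in the span of the combined columns, the fitted ensemble can never have a larger training RMSE than the best single member. All data below are synthetic stand-ins for the inventory predictions and observations:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical daily PM2.5 "observations" and predictions from 4 inventories,
# each with its own multiplicative bias, offset, and noise.
truth = 50 + 10 * rng.standard_normal(365)
preds = np.column_stack([truth * s + b + 3 * rng.standard_normal(365)
                         for s, b in [(0.8, 5), (1.1, -4), (0.9, 8), (1.2, -10)]])

# Linear-combination weights minimizing the sum of squared errors vs observations.
w, *_ = np.linalg.lstsq(preds, truth, rcond=None)
ensemble = preds @ w

rmse = lambda y: np.sqrt(np.mean((y - truth) ** 2))
print([round(rmse(preds[:, j]), 2) for j in range(4)], round(rmse(ensemble), 2))
```

In practice the weights would be fit on a training period and validated on held-out data, since least-squares weights can overfit small samples.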
This study is the first ensemble model study in China using multiple emission inventories, and the results are publicly available for future health effect studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MAP...128..429M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MAP...128..429M"><span>Meta-heuristic CRPS minimization for the calibration of short-range probabilistic forecasts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mohammadi, Seyedeh Atefeh; Rahmani, Morteza; Azadi, Majid</p> <p>2016-08-01</p> <p>This paper deals with probabilistic short-range temperature forecasts over synoptic meteorological stations across Iran using non-homogeneous Gaussian regression (NGR). NGR creates a Gaussian forecast probability density function (PDF) from the ensemble output. The mean of the normal predictive PDF is a bias-corrected weighted average of the ensemble members and its variance is a linear function of the raw ensemble variance. The coefficients for the mean and variance are estimated by minimizing the continuous ranked probability score (CRPS) during a training period. CRPS is a scoring rule for distributional forecasts. In the paper of Gneiting et al. (Mon Weather Rev 133:1098-1118, 2005), the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is used to minimize the CRPS. Since BFGS is a conventional optimization method with its own limitations, we suggest using particle swarm optimization (PSO), a robust meta-heuristic method, to minimize the CRPS. The ensemble prediction system used in this study consists of nine different configurations of the Weather Research and Forecasting model for 48-h forecasts of temperature during autumn and winter 2011 and 2012.
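The closed-form CRPS of a Gaussian predictive distribution, which NGR minimizes during training, can be written down and optimized directly. The sketch below uses a generic Nelder-Mead optimizer in place of BFGS or PSO, a simplified two-parameter calibration (additive bias and fixed spread rather than the full NGR coefficients), and synthetic forecasts with a known +1.5 bias:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of a Gaussian predictive N(mu, sigma^2) at observation y:
    sigma * [ z(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ],  z = (y - mu)/sigma."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

rng = np.random.default_rng(5)
obs = rng.normal(10.0, 2.0, 500)                  # training observations
ens_mean = obs + 1.5 + rng.normal(0.0, 1.0, 500)  # biased ensemble-mean forecast

# Calibrate additive bias a and spread s by minimizing the mean CRPS,
# a simplified stand-in for fitting the NGR coefficients.
objective = lambda p: np.mean(crps_gaussian(obs, ens_mean + p[0], abs(p[1]) + 1e-6))
res = minimize(objective, x0=[0.0, 1.0], method="Nelder-Mead")
a, s = res.x
print(a, abs(s))   # the fitted bias correction should roughly cancel the +1.5 bias
```

Because CRPS is a strictly proper scoring rule, the minimizing spread also tends toward the true forecast-error spread, not just a small value.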
The probabilistic forecasts were evaluated using several common verification scores including Brier score, attribute diagram and rank histogram. Results show that both BFGS and PSO find the optimal solution and show the same evaluation scores, but PSO can do this with a feasible random first guess and much less computational complexity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018CMaPh.tmp.1311F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018CMaPh.tmp.1311F"><span>On Statistics of Bi-Orthogonal Eigenvectors in Real and Complex Ginibre Ensembles: Combining Partial Schur Decomposition with Supersymmetry</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fyodorov, Yan V.</p> <p>2018-06-01</p> <p>We suggest a method of studying the joint probability density (JPD) of an eigenvalue and the associated `non-orthogonality overlap factor' (also known as the `eigenvalue condition number') of the left and right eigenvectors for non-selfadjoint Gaussian random matrices of size {N× N} . First we derive the general finite N expression for the JPD of a real eigenvalue {λ} and the associated non-orthogonality factor in the real Ginibre ensemble, and then analyze its `bulk' and `edge' scaling limits. The ensuing distribution is maximally heavy-tailed, so that all integer moments beyond normalization are divergent. A similar calculation for a complex eigenvalue z and the associated non-orthogonality factor in the complex Ginibre ensemble is presented as well and yields a distribution with the finite first moment. Its `bulk' scaling limit yields a distribution whose first moment reproduces the well-known result of Chalker and Mehlig (Phys Rev Lett 81(16):3367-3370, 1998), and we provide the `edge' scaling distribution for this case as well. 
Our method involves evaluating the ensemble average of products and ratios of integer and half-integer powers of characteristic polynomials for Ginibre matrices, which we perform in the framework of a supersymmetry approach. Our paper complements recent studies by Bourgade and Dubach (The distribution of overlaps between eigenvectors of Ginibre matrices, 2018. arXiv:1801.01219).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015SPIE.9413E..42Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015SPIE.9413E..42Z"><span>Identifying the optimal segmentors for mass classification in mammograms</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhang, Yu; Tomuro, Noriko; Furst, Jacob; Raicu, Daniela S.</p> <p>2015-03-01</p> <p>In this paper, we present the results of our investigation on identifying the optimal segmentor(s) from an ensemble of weak segmentors, used in a Computer-Aided Diagnosis (CADx) system which classifies suspicious masses in mammograms as benign or malignant. This is an extension of our previous work, where we applied various parameter settings of image-enhancement techniques to each suspicious mass (region of interest (ROI)) to obtain several enhanced images, then applied segmentation to each image to obtain several contours of a given mass. Each segmentation in this ensemble is essentially a "weak segmentor" because no single segmentation can produce the optimal result for all images. Then, after shape features were computed from the segmented contours, the final classification model was built using logistic regression. The work in this paper focuses on identifying the optimal segmentor(s) from an ensemble mix of weak segmentors.
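Scoring each segmentor by the average weight of its features in the fitted logistic regression, as described above, can be sketched as follows; the feature counts, group layout, and planted signal are all hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
# Hypothetical shape features from 3 weak segmentors (4 features each) for 200 ROIs.
n, n_seg, n_feat = 200, 3, 4
y = rng.integers(0, 2, n)                      # benign (0) vs malignant (1)
X = rng.standard_normal((n, n_seg * n_feat))
X[:, :n_feat] += 1.5 * y[:, None]              # segmentor 0 carries the real signal

clf = LogisticRegression(max_iter=1000).fit(X, y)
# Average absolute coefficient per segmentor as its "contribution" score.
scores = np.abs(clf.coef_[0]).reshape(n_seg, n_feat).mean(axis=1)
print(scores)   # segmentor 0 should dominate
```

This matches the abstract's observation that a segmentor's classification contribution (feature weight) need not track its raw segmentation precision.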
For our purpose, optimal segmentors are those in the ensemble mix which contribute the most to the overall classification rather than the ones that produced high precision segmentation. To measure the segmentors' contribution, we examined weights on the features in the derived logistic regression model and computed the average feature weight for each segmentor. The result showed that, while in general the segmentors with higher segmentation success rates had higher feature weights, some segmentors with lower segmentation rates had high classification feature weights as well.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26618792','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26618792"><span>Structural Insights into the Calcium-Mediated Allosteric Transition in the C-Terminal Domain of Calmodulin from Nuclear Magnetic Resonance Measurements.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kukic, Predrag; Lundström, Patrik; Camilloni, Carlo; Evenäs, Johan; Akke, Mikael; Vendruscolo, Michele</p> <p>2016-01-12</p> <p>Calmodulin is a two-domain signaling protein that becomes activated upon binding cooperatively two pairs of calcium ions, leading to large-scale conformational changes that expose its binding site. Despite significant advances in understanding the structural biology of calmodulin functions, the mechanistic details of the conformational transition between closed and open states have remained unclear. To investigate this transition, we used a combination of molecular dynamics simulations and nuclear magnetic resonance (NMR) experiments on the Ca(2+)-saturated E140Q C-terminal domain variant. 
Using chemical shift restraints in replica-averaged metadynamics simulations, we obtained a high-resolution structural ensemble consisting of two conformational states and validated such an ensemble against three independent experimental data sets, namely, interproton nuclear Overhauser enhancements, (15)N order parameters, and chemical shift differences between the exchanging states. Through a detailed analysis of this structural ensemble and of the corresponding statistical weights, we characterized a calcium-mediated conformational transition whereby the coordination of Ca(2+) by just one oxygen of the bidentate ligand E140 triggers a concerted movement of the two EF-hands that exposes the target binding site. This analysis provides atomistic insights into a possible Ca(2+)-mediated activation mechanism of calmodulin that cannot be achieved from static structures alone or from ensemble NMR measurements of the transition between conformations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=pass+AND+year+AND+paper&pg=6&id=EJ818205','ERIC'); return false;" href="https://eric.ed.gov/?q=pass+AND+year+AND+paper&pg=6&id=EJ818205"><span>A Formal Derivation of the Gibbs Entropy for Classical Systems Following the Schrodinger Quantum Mechanical Approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Santillan, M.; Zeron, E. S.; Del Rio-Correa, J. L.</p> <p>2008-01-01</p> <p>In the traditional statistical mechanics textbooks, the entropy concept is first introduced for the microcanonical ensemble and then extended to the canonical and grand-canonical cases. 
However, in the authors' experience, this procedure makes it difficult for the student to see the bigger picture and, although quite ingenious, the subtleness of…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22026179','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22026179"><span>Research in the Laboratory of Supramolecular Chemistry: functional nanostructures, sensors, and catalysts.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Severin, Kay</p> <p>2011-01-01</p> <p>This article summarizes research activities in the Laboratory of Supramolecular Chemistry (LCS) at the EPFL. Three topics will be discussed: a) the construction of functional nanostructures by multicomponent self-assembly processes, b) the development of chemosensors using specific receptors or ensembles of cross-reactive sensors, and c) the investigation of novel synthetic procedures with organometallic catalysts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29728250','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29728250"><span>Minimal ensemble based on subset selection using ECG to diagnose categories of CAN.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Abawajy, Jemal; Kelarev, Andrei; Yi, Xun; Jelinek, Herbert F</p> <p>2018-07-01</p> <p>Early diagnosis of cardiac autonomic neuropathy (CAN) is critical for reversing or decreasing its progression and preventing complications. Diagnostic accuracy or precision is one of the core requirements of CAN detection.
As the standard Ewing battery tests suffer from a number of shortcomings, research in automating and improving the early detection of CAN has recently received serious attention, with efforts to identify additional clinical variables and to design advanced ensembles of classifiers to improve the accuracy or precision of CAN diagnostics. Although large ensembles are commonly proposed for the automated diagnosis of CAN, they are characterized by slow processing speed and high computational complexity. This paper applies ECG features and proposes a new ensemble-based approach for the diagnosis of CAN progression. We introduce a Minimal Ensemble Based On Subset Selection (MEBOSS) for the diagnosis of all categories of CAN including early, definite and atypical CAN. MEBOSS is based on a novel multi-tier architecture applying classifier subset selection as well as training-subset selection during several steps of its operation. Our experiments determined the diagnostic accuracy or precision obtained in 5 × 2 cross-validation for various options employed in MEBOSS and other classification systems. The experiments demonstrate the operation of the MEBOSS procedure invoking the most effective classifiers available in the open-source software environment SageMath. The results of our experiments show that for the large DiabHealth database of CAN-related parameters, MEBOSS outperformed other classification systems available in SageMath and achieved 94% to 97% precision in 5 × 2 cross-validation, correctly distinguishing any two of up to five CAN categories, including control, early, definite, severe and atypical CAN. These results show that the MEBOSS architecture is effective and can be recommended for practical implementations in systems for the diagnosis of CAN progression. Copyright © 2018 Elsevier B.V.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28950518','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28950518"><span>Effects of correlations and fees in random multiplicative environments: Implications for portfolio management.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Alper, Ofer; Somekh-Baruch, Anelia; Pirvandy, Oz; Schaps, Malka; Yaari, Gur</p> <p>2017-08-01</p> <p>Geometric Brownian motion (GBM) is frequently used to model the price dynamics of financial assets, and a weighted average of multiple GBMs is commonly used to model a financial portfolio. Diversified portfolios can achieve higher exponential growth than a single asset by reducing the effective noise. The sum of GBM processes is no longer log-normal and has complex statistical properties. The nonergodicity of the weighted-average process results in constant degradation of the exponential growth from the ensemble average toward the time average. One way to stay closer to the ensemble average is to maintain a balanced portfolio: keep the relative weights of the different assets constant over time. To keep these proportions constant, whenever asset values change, it is necessary to rebalance their relative weights, exposing this strategy to fees (transaction costs). Two strategies suggested in the past for cases involving fees are rebalancing the portfolio periodically and rebalancing it only partially. In this paper, we study these two strategies in the presence of correlations and fees. We show that, using periodic and partial rebalancing strategies, it is possible to maintain steady exponential growth while minimizing the losses due to fees. 
We also demonstrate how well these redistribution strategies perform on real-world market data, even though not all assumptions of the model hold in these real-world systems. Our results have important implications for stochastic dynamics in general and for portfolio management in particular, as we show that there is a superior alternative to the common buy-and-hold strategy, even in the presence of correlations and fees.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvE..96b2305A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvE..96b2305A"><span>Effects of correlations and fees in random multiplicative environments: Implications for portfolio management</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Alper, Ofer; Somekh-Baruch, Anelia; Pirvandy, Oz; Schaps, Malka; Yaari, Gur</p> <p>2017-08-01</p> <p>Geometric Brownian motion (GBM) is frequently used to model the price dynamics of financial assets, and a weighted average of multiple GBMs is commonly used to model a financial portfolio. Diversified portfolios can achieve higher exponential growth than a single asset by reducing the effective noise. The sum of GBM processes is no longer log-normal and has complex statistical properties. The nonergodicity of the weighted-average process results in constant degradation of the exponential growth from the ensemble average toward the time average. One way to stay closer to the ensemble average is to maintain a balanced portfolio: keep the relative weights of the different assets constant over time. To keep these proportions constant, whenever asset values change, it is necessary to rebalance their relative weights, exposing this strategy to fees (transaction costs). 
Two strategies suggested in the past for cases involving fees are rebalancing the portfolio periodically and rebalancing it only partially. In this paper, we study these two strategies in the presence of correlations and fees. We show that, using periodic and partial rebalancing strategies, it is possible to maintain steady exponential growth while minimizing the losses due to fees. We also demonstrate how well these redistribution strategies perform on real-world market data, even though not all assumptions of the model hold in these real-world systems. Our results have important implications for stochastic dynamics in general and for portfolio management in particular, as we show that there is a superior alternative to the common buy-and-hold strategy, even in the presence of correlations and fees.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3125385','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3125385"><span>A new transform for the analysis of complex fractionated atrial electrograms</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2011-01-01</p> <p>Background Representation of independent biophysical sources using Fourier analysis can be inefficient because the basis is sinusoidal and general. When complex fractionated atrial electrograms (CFAE) are acquired during atrial fibrillation (AF), the electrogram morphology depends on the mix of distinct nonsinusoidal generators. Identification of these generators using efficient methods of representation and comparison would be useful for targeting catheter ablation sites to prevent arrhythmia reinduction. 
Method A data-driven basis and transform are described which utilize the ensemble average of signal segments to identify and distinguish CFAE morphologic components and frequencies. Calculation of the dominant frequency (DF) of actual CFAE, and identification of simulated independent generator frequencies and morphologies embedded in CFAE, is performed using a total of 216 recordings from 10 paroxysmal and 10 persistent AF patients. The transform is tested against Fourier analysis for detecting spectral components in the presence of phase noise and interference. Correspondence is shown between the ensemble basis vectors of highest power and the corresponding synthetic drivers embedded in CFAE. Results The ensemble basis is orthogonal and efficient for representing CFAE components as compared with Fourier analysis (p ≤ 0.002). When three synthetic drivers with additive phase noise and interference were decomposed, the top three peaks in the ensemble power spectrum corresponded to the driver frequencies more closely than the top Fourier power spectrum peaks (p ≤ 0.005). The synthesized drivers with phase noise and interference were extractable from their corresponding ensemble basis with a mean error of less than 10%. Conclusions The new transform is able to efficiently identify CFAE features using DF calculation and by discerning morphologic differences. Unlike the Fourier transform method, it does not distort CFAE signals prior to analysis, and it is relatively robust to jitter in periodic events. Thus the ensemble method can provide a useful alternative for quantitative characterization of CFAE during clinical study. 
PMID:21569421</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container --> </body> </html>