Sample records for ensemble averaging technique

  1. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2014-11-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They account for different stages of crop growth through empirical crop coefficients that adapt evapotranspiration over the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty for estimating irrigation water requirement. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit of 400 mm imposed by water rights, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.

  2. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2015-04-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They account for different stages of crop growth through empirical crop coefficients that adapt evapotranspiration over the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty among the reference ET models is far more important than the parametric uncertainty introduced by the crop coefficients. These crop coefficients are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit of 400 mm imposed by water rights, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
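
    The two records above lean on the Reliability Ensemble Averaging (REA) weighting of Giorgi and Mearns (2002). Below is a minimal sketch of that idea in Python, assuming scalar predictions, equal exponents on the bias and convergence factors, and a single reference observation; the paper's spatial SPARE:WATER implementation differs.

```python
import numpy as np

def rea_average(preds, obs, eps, n_iter=20):
    """Reliability Ensemble Averaging, after Giorgi & Mearns (2002).

    preds: (n_models,) predictions of one quantity
    obs:   reference value used to score model bias
    eps:   natural-variability scale; reliability factors are capped at 1
    """
    weights = np.ones_like(preds, dtype=float)
    for _ in range(n_iter):
        avg = np.sum(weights * preds) / np.sum(weights)
        r_bias = np.minimum(1.0, eps / np.maximum(np.abs(preds - obs), 1e-12))
        r_conv = np.minimum(1.0, eps / np.maximum(np.abs(preds - avg), 1e-12))
        weights = r_bias * r_conv  # per-model reliability factor
    return np.sum(weights * preds) / np.sum(weights), weights

# toy example: six "models" predicting irrigation water requirement (mm)
preds = np.array([380.0, 420.0, 405.0, 500.0, 390.0, 410.0])
rea_mean, w = rea_average(preds, obs=400.0, eps=25.0)
print(rea_mean, w)  # the outlier at 500 mm is strongly down-weighted
```

    The weighting is iterated because the convergence factor is measured against the REA average itself, which changes as the weights change.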

  3. Genetic programming based ensemble system for microarray data classification.

    PubMed

    Liu, Kun-Hong; Tong, Muchenxuan; Xie, Shu-Tong; Yee Ng, Vincent To

    2015-01-01

    Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a new genetic programming (GP) based ensemble system (named GPES) that can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework, with three combination operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and the individuals become more and more accurate over the evolutionary process. A feature selection technique and a balanced subsampling technique are applied to increase the diversity within each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting the data automatically. The performance of GPES is evaluated using five binary-class and six multiclass microarray datasets, and the results show that the algorithm achieves better results in most cases compared with some other ensemble systems. By using more elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved.

  4. Genetic Programming Based Ensemble System for Microarray Data Classification

    PubMed Central

    Liu, Kun-Hong; Tong, Muchenxuan; Xie, Shu-Tong; Yee Ng, Vincent To

    2015-01-01

    Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a new genetic programming (GP) based ensemble system (named GPES) that can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework, with three combination operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and the individuals become more and more accurate over the evolutionary process. A feature selection technique and a balanced subsampling technique are applied to increase the diversity within each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting the data automatically. The performance of GPES is evaluated using five binary-class and six multiclass microarray datasets, and the results show that the algorithm achieves better results in most cases compared with some other ensemble systems. By using more elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved. PMID:25810748
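
    A hedged sketch of the combination step the two GPES records describe: decision-tree base classifiers trained on random feature subsets (standing in for the paper's feature selection and balanced subsampling), whose class-probability outputs are fused with the Min, Max, and Average operators. The GP evolution and forward-search committee selection are omitted, and scikit-learn is an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# toy "microarray": many features, few samples
X, y = make_classification(n_samples=120, n_features=200, n_informative=20,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# pool of decision trees on random feature subsets (diversity stand-in)
rng = np.random.default_rng(0)
pool = []
for _ in range(7):
    feats = rng.choice(X.shape[1], size=25, replace=False)
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xtr[:, feats], ytr)
    pool.append((clf, feats))

probas = np.stack([clf.predict_proba(Xte[:, f]) for clf, f in pool])

# the three GPES combination operators applied to class probabilities;
# argmax over classes does not require renormalizing the Min/Max outputs
for name, fused in [("Min", probas.min(0)), ("Max", probas.max(0)),
                    ("Average", probas.mean(0))]:
    print(name, (fused.argmax(1) == yte).mean())
```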

  5. Large ensemble modeling of the last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert

    2016-05-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~20 000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
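
    A minimal sketch of the simpler of the record's two statistical methods, score-weighted ensemble averaging, on toy data. The map from aggregate misfit score to weight (exp(-score) here) is an assumption, and the study's Gaussian-process emulation and MCMC calibration are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_times = 625, 100
# esl[i, t]: equivalent sea-level-rise curve of run i (toy data)
esl = np.linspace(0.0, 5.0, n_times) + rng.normal(0.0, 1.0, (n_runs, 1))
score = rng.gamma(2.0, 1.0, n_runs)  # aggregate model-data misfit, lower is better

w = np.exp(-score)  # misfit-to-weight map: an assumption, not the paper's form
w /= w.sum()

mean_esl = w @ esl                           # weighted ensemble-mean curve
spread = np.sqrt(w @ (esl - mean_esl) ** 2)  # weighted ensemble spread
lo, hi = mean_esl - 2 * spread, mean_esl + 2 * spread  # uncertainty envelope
```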

  6. Creation of the BMA ensemble for SST using a parallel processing technique

    NASA Astrophysics Data System (ADS)

    Kim, Kwangjin; Lee, Yang Won

    2013-10-01

    Although they serve the same purpose, satellite products of the same quantity differ in value because of their inescapable uncertainties. Such products have also been generated over long periods, and their variety and volume are enormous, so efforts to reduce uncertainty and to handle very large data volumes are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) product using MODIS Aqua, MODIS Terra and COMS (Communication, Ocean and Meteorological Satellite). We use Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density function (PDF) using posterior probabilities as weights; the posterior probabilities are estimated using the EM algorithm, and the BMA PDF is obtained as the weighted average. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA to satellite data ensembles. As future work, parallel processing techniques using the Hadoop framework will be adopted for more efficient computation of very large satellite data sets.
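
    A minimal sketch of the BMA step described above: the predictive PDF is a weighted mixture of Gaussians centered on the member products, with weights and a shared variance fit by EM. This is a simplified form of Raftery et al.'s (2005) scheme, without bias correction, and the toy SST numbers are made up.

```python
import numpy as np
from scipy.stats import norm

def bma_em(F, y, n_iter=200):
    """Fit BMA weights for the predictive PDF
    p(y) = sum_k w_k * N(y; F[:, k], sigma^2) by EM (simplified: no bias
    correction, one shared variance across members)."""
    n, K = F.shape
    w, sigma = np.full(K, 1.0 / K), y.std()
    for _ in range(n_iter):
        z = w * norm.pdf(y[:, None], loc=F, scale=sigma)  # E-step
        z /= z.sum(axis=1, keepdims=True)
        w = z.mean(axis=0)                                # M-step
        sigma = np.sqrt((z * (y[:, None] - F) ** 2).sum() / n)
    return w, sigma

rng = np.random.default_rng(0)
truth = rng.normal(20.0, 2.0, 500)  # made-up "true" SST values
F = truth[:, None] + rng.normal([0.5, -0.3, 1.0], [0.8, 1.5, 0.6], (500, 3))
w, sigma = bma_em(F, truth)
print(w, sigma)  # the BMA mean product would be F @ w
```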

  7. A Simple Ensemble Simulation Technique for Assessment of Future Variations in Specific High-Impact Weather Events

    NASA Astrophysics Data System (ADS)

    Taniguchi, Kenji

    2018-04-01

    To investigate future variations in high-impact weather events, numerous samples are required. For detailed assessment in a specific region, high spatial resolution is also required. A simple ensemble simulation technique is proposed in this paper. In the proposed technique, new ensemble members are generated from one basic state vector and two perturbation vectors, which are obtained from lagged average forecasting simulations. Sensitivity experiments with different numbers of ensemble members, different simulation lengths, and different perturbation magnitudes were performed. An experimental application to a global warming study was also implemented for a typhoon event. Ensemble-mean results and ensemble spreads of total precipitation and atmospheric conditions showed similar characteristics across the sensitivity experiments. The frequencies of the maximum total and hourly precipitation also showed similar distributions. These results indicate the robustness of the proposed technique. On the other hand, considerable ensemble spread was found in each ensemble experiment. In addition, the results of the application to a global warming study showed possible variations in the future. These results indicate that the proposed technique is useful for investigating various meteorological phenomena and the impacts of global warming. The results of the ensemble simulations also enable the stochastic evaluation of differences in high-impact weather events. In addition, the impacts of a spectral nudging technique were examined. The tracks of a typhoon were quite different between cases with and without spectral nudging; however, the ranges of the tracks among ensemble members were comparable. This indicates that spectral nudging does not necessarily suppress ensemble spread.
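
    A hedged sketch of the member-generation idea in the record above: new members built from one basic state vector plus linear combinations of two perturbation vectors obtained from lagged average forecasting runs. The coefficient choice (a random phase on a circle) and the toy state vectors are illustration-only assumptions, not the paper's exact scheme.

```python
import numpy as np

def generate_members(x_basic, p1, p2, n_members, magnitude=1.0, seed=0):
    """New members = basic state + a linear combination of two perturbation
    vectors (e.g., differences between lagged forecasts valid at the same
    time). Random-phase coefficients are an assumption for illustration."""
    rng = np.random.default_rng(seed)
    thetas = rng.uniform(0.0, 2.0 * np.pi, n_members)
    return np.array([x_basic + magnitude * (np.cos(t) * p1 + np.sin(t) * p2)
                     for t in thetas])

# x_basic, p1, p2 would be full model state vectors; toy 5-vectors here
members = generate_members(np.zeros(5), np.ones(5), np.arange(5.0), n_members=8)
print(members.shape)  # (8, 5)
```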

  8. Clustering cancer gene expression data by projective clustering ensemble

    PubMed Central

    Yu, Xianxue; Yu, Guoxian

    2017-01-01

    Gene expression data analysis has paramount implications for gene treatments, cancer diagnosis and other domains. Clustering is an important and promising tool for analyzing gene expression data. Gene expression data are often characterized by a large number of genes but limited samples, so various projective clustering techniques and ensemble techniques have been suggested to address these challenges. However, it is rather challenging to combine these two kinds of techniques in a way that avoids the curse of dimensionality and boosts the performance of gene expression data clustering. In this paper, we employ a projective clustering ensemble (PCE) to integrate the advantages of projective clustering and ensemble clustering, and to avoid the dilemma of combining multiple projective clusterings. Our experimental results on publicly available cancer gene expression data show that PCE can improve the quality of clustering gene expression data by at least 4.5% (on average) relative to other related techniques, including dimensionality-reduction-based single clustering and ensemble approaches. The empirical study demonstrates that, to further boost the performance of clustering cancer gene expression data, it is necessary and promising to combine projective clustering with ensemble clustering. PCE can serve as an effective alternative technique for clustering gene expression data. PMID:28234920

  9. Optimal averaging of soil moisture predictions from ensemble land surface model simulations

    USDA-ARS?s Scientific Manuscript database

    The correct interpretation of ensemble information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance. Here we propose a new technique for obtaining such information using an instrumental variabl...

  10. A comparison between EDA-EnVar and ETKF-EnVar data assimilation techniques using radar observations at convective scales through a case study of Hurricane Ike (2008)

    NASA Astrophysics Data System (ADS)

    Shen, Feifei; Xu, Dongmei; Xue, Ming; Min, Jinzhong

    2017-07-01

    This study examines the impacts of assimilating radar radial velocity (Vr) data for the simulation of Hurricane Ike (2008) with two different ensemble generation techniques in the framework of the hybrid ensemble-variational (EnVar) data assimilation system of the Weather Research and Forecasting model. For the generation of ensemble perturbations we apply two techniques, the ensemble transform Kalman filter (ETKF) and the ensemble of data assimilation (EDA). For the ETKF-EnVar, the forecast ensemble perturbations are updated by the ETKF, while for the EDA-EnVar, the hybrid is employed to update each ensemble member with perturbed observations. The ensemble mean is analyzed by the hybrid method with flow-dependent ensemble covariance for both EnVar schemes. The sensitivity of analyses and forecasts to the two applied ensemble generation techniques is investigated in the current study. It is found that the EnVar system is rather stable across the different ensemble update techniques in terms of its skill in improving the analyses and forecasts. The EDA-EnVar-based ensemble perturbations are likely to include slightly less organized spatial structures than those in ETKF-EnVar, the perturbations of the latter being constructed more dynamically. Detailed diagnostics reveal that both EnVar schemes not only produce positive temperature increments around the hurricane center but also systematically adjust the hurricane location through the hurricane-specific error covariance. On average, the analyses and forecasts from the ETKF-EnVar have slightly smaller errors than those from the EDA-EnVar in terms of track, intensity, and precipitation. Moreover, ETKF-EnVar yields better forecasts when verified against conventional observations.

  11. Optimal averaging of soil moisture predictions from ensemble land surface model simulations

    USDA-ARS?s Scientific Manuscript database

    The correct interpretation of ensemble soil moisture information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance. Here we propose a new technique for obtaining such information using an inst...

  12. Quantifying rapid changes in cardiovascular state with a moving ensemble average.

    PubMed

    Cieslak, Matthew; Ryan, William S; Babenko, Viktoriya; Erro, Hannah; Rathbun, Zoe M; Meiring, Wendy; Kelsey, Robert M; Blascovich, Jim; Grafton, Scott T

    2018-04-01

    MEAP, the moving ensemble analysis pipeline, is a new open-source tool designed to perform multisubject preprocessing and analysis of cardiovascular data, including electrocardiogram (ECG), impedance cardiogram (ICG), and continuous blood pressure (BP). In addition to traditional ensemble averaging, MEAP implements a moving ensemble averaging method that allows for the continuous estimation of indices related to cardiovascular state, including cardiac output, preejection period, heart rate variability, and total peripheral resistance, among others. Here, we define the moving ensemble technique mathematically, highlighting its differences from fixed-window ensemble averaging. We describe MEAP's interface and features for signal processing, artifact correction, and cardiovascular-based fMRI analysis. We demonstrate the accuracy of MEAP's novel B point detection algorithm on a large collection of hand-labeled ICG waveforms. As a proof of concept, two subjects completed a series of four physical and cognitive tasks (cold pressor, Valsalva maneuver, video game, random dot kinematogram) on 3 separate days while ECG, ICG, and BP were recorded. Critically, the moving ensemble method reliably captures the rapid cyclical cardiovascular changes related to the baroreflex during the Valsalva maneuver and the classic cold pressor response. Cardiovascular measures were seen to vary considerably within repetitions of the same cognitive task for each individual, suggesting that a carefully designed paradigm could be used to capture fast-acting event-related changes in cardiovascular state. © 2017 Society for Psychophysiological Research.
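
    A minimal sketch of the moving ensemble average, as contrasted with fixed-window ensemble averaging in the record above: each beat gets its own average over a centered window of neighboring beats, so beat-to-beat indices can be scored continuously. This illustrates the idea only, not MEAP's implementation; the window size and toy data are assumptions.

```python
import numpy as np

def moving_ensemble_average(beats, window=15):
    """beats: (n_beats, n_samples) array of R-peak-aligned waveforms.
    Returns one averaged waveform per beat, each the mean over a centered
    window of neighboring beats (clipped at the series edges). A fixed-window
    ensemble average would instead return a single mean waveform."""
    half = window // 2
    out = np.empty(beats.shape, dtype=float)
    for i in range(len(beats)):
        lo, hi = max(0, i - half), min(len(beats), i + half + 1)
        out[i] = beats[lo:hi].mean(axis=0)
    return out

beats = np.random.default_rng(0).normal(size=(300, 500))  # toy ICG beats
smoothed = moving_ensemble_average(beats)  # indices like PEP scored per beat
```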

  13. Decadal climate predictions improved by ocean ensemble dispersion filtering

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. In recent years, more precise initialization techniques for coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.

    Plain Language Summary: Decadal predictions aim to predict the climate several years in advance, and atmosphere-ocean interaction plays an important role in such forecasts; the ocean's memory, due to its heat capacity, holds large potential skill. In recent years, more precise initialization techniques for coupled Earth system models (including atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: slightly perturbing a prediction to trigger the famous butterfly effect yields an ensemble, and evaluating the whole ensemble through its average, rather than a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Our study shows that the climate prediction skill of an Earth system model can be improved by shifting the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, which applies the averaging during the model run and is called the ensemble dispersion filter, yields more accurate results than the standard prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show increased skill up to 5 years ahead, and the technique outperforms predictions with larger ensembles and higher resolution.
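
    A minimal sketch of the ensemble dispersion filter as the record describes it: at seasonal intervals, each member's ocean state is shifted toward the ensemble mean. The strength parameter and the model_advance stand-in are illustration-only assumptions.

```python
import numpy as np

def dispersion_filter(states, strength=1.0):
    """Shift each member's ocean state toward the ensemble mean.
    states: (n_members, n_state) array. strength=1.0 sets every member to
    the mean; fractional values, a partial relaxation, are an assumption."""
    mean = states.mean(axis=0)
    return states + strength * (mean - states)

# inside the forecast loop, applied at seasonal intervals, e.g.:
# for season in range(n_seasons):
#     states = model_advance(states, months=3)  # model_advance: hypothetical
#     states = dispersion_filter(states)
```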

  14. Estimation of Uncertainties in the Global Distance Test (GDT_TS) for CASP Models

    PubMed

    Li, Wenlin; Schaeffer, R Dustin; Otwinowski, Zbyszek; Grishin, Nick V

    2016-01-01

    The Critical Assessment of techniques for protein Structure Prediction (CASP) is a community-wide blind test experiment to reveal the best accomplishments of structure modeling. Assessors have been using the Global Distance Test (GDT_TS) measure to quantify prediction performance since CASP3 in 1998. However, identifying significant score differences between close models is difficult because of the lack of uncertainty estimations for this measure. Here, we utilized the atomic fluctuations caused by structural flexibility to estimate the uncertainty of GDT_TS scores. Structures determined by nuclear magnetic resonance are deposited as ensembles of alternative conformers that reflect the structural flexibility, whereas standard X-ray refinement produces the static structure averaged over time and space for the dynamic ensembles. To recapitulate the structurally heterogeneous ensemble in the crystal lattice, we performed time-averaged refinement for X-ray datasets to generate structural ensembles for our GDT_TS uncertainty analysis. Using those generated ensembles, our study demonstrates that the time-averaged refinements produced structure ensembles in better agreement with the experimental datasets than the averaged X-ray structures with B-factors. The uncertainty of the GDT_TS scores, quantified by their standard deviations (SDs), increases for scores lower than 50 and 70, with maximum SDs of 0.3 and 1.23 for X-ray and NMR structures, respectively. We also applied our procedure to the high-accuracy version of the GDT-based score and produced similar results with slightly higher SDs. To facilitate score comparisons by the community, we developed a user-friendly web server that produces structure ensembles for NMR and X-ray structures and is accessible at http://prodata.swmed.edu/SEnCS. Our work helps to identify the significance of GDT_TS score differences, as well as to provide structure ensembles for estimating SDs of any scores.

  15. Equilibrium energy spectrum of point vortex motion with remarks on ensemble choice and ergodicity

    NASA Astrophysics Data System (ADS)

    Esler, J. G.

    2017-01-01

    The dynamics and statistical mechanics of N chaotically evolving point vortices in the doubly periodic domain are revisited. The selection of the correct microcanonical ensemble for the system is first investigated. The numerical results of Weiss and McWilliams [Phys. Fluids A 3, 835 (1991), 10.1063/1.858014], who argued that the point vortex system with N = 6 is nonergodic because of an apparent discrepancy between ensemble averages and dynamical time averages, are shown to be due to an incorrect ensemble definition. When the correct microcanonical ensemble is sampled, accounting for the vortex momentum constraint, time averages obtained from direct numerical simulation agree with ensemble averages within the sampling error of each calculation, i.e., there is no numerical evidence for nonergodicity. Further, in the N → ∞ limit it is shown that the vortex momentum no longer constrains the long-time dynamics and therefore that the correct microcanonical ensemble for statistical mechanics is that associated with the entire constant-energy hypersurface in phase space. Next, a recently developed technique is used to generate an explicit formula for the density of states function for the system, including for arbitrary distributions of vortex circulations. Exact formulas for the equilibrium energy spectrum, and for the probability density function of the energy in each Fourier mode, are then obtained. Results are compared with a series of direct numerical simulations with N = 50 and excellent agreement is found, confirming the relevance of the results for interpretation of quantum and classical two-dimensional turbulence.

  16. Applications of Bayesian Procrustes shape analysis to ensemble radar reflectivity nowcast verification

    NASA Astrophysics Data System (ADS)

    Fox, Neil I.; Micheas, Athanasios C.; Peng, Yuqiang

    2016-07-01

    This paper introduces the use of Bayesian full Procrustes shape analysis in object-oriented meteorological applications. In particular, the Procrustes methodology is used to generate mean forecast precipitation fields from a set of ensemble forecasts. This approach has advantages over other ensemble averaging techniques in that it can produce a forecast that retains the morphological features of the precipitation structures and present the range of forecast outcomes represented by the ensemble. The production of the ensemble mean avoids the problems of smoothing that result from simple pixel or cell averaging, while producing credible sets that retain information on ensemble spread. Also in this paper, the full Bayesian Procrustes scheme is used as an object verification tool for precipitation forecasts. This is an extension of a previously presented Procrustes shape analysis based verification approach into a full Bayesian format designed to handle the verification of precipitation forecasts that match objects from an ensemble of forecast fields to a single truth image. The methodology is tested on radar reflectivity nowcasts produced in the Warning Decision Support System - Integrated Information (WDSS-II) by varying parameters in the K-means cluster tracking scheme.

  17. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single-model predictions are generally better than any single-member model predictions, even the best calibrated single-model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporate bias-correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
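
    A hedged sketch contrasting the two simplest combination schemes from the record above, the Simple Multi-model Average (SMA) and a Weighted Average Method (WAM). Inverse-MSE training weights are one common choice and an assumption here, as are the toy streamflow series; the DMIP study's exact weighting may differ.

```python
import numpy as np

def sma(Q):
    """Simple Multi-model Average of member streamflow series (rows of Q)."""
    return Q.mean(axis=0)

def wam(Q, obs):
    """Weighted Average Method with inverse-MSE weights learned against
    observations over a training period (assumed form, not DMIP's)."""
    mse = ((Q - obs) ** 2).mean(axis=1)
    w = (1.0 / mse) / (1.0 / mse).sum()
    return w @ Q

rng = np.random.default_rng(0)
obs = np.sin(np.linspace(0.0, 6.0, 200)) + 2.0              # toy observed flow
Q = obs + rng.normal(0.0, [[0.2], [0.5], [1.0]], (3, 200))  # three "models"
print(((sma(Q) - obs) ** 2).mean(), ((wam(Q, obs) - obs) ** 2).mean())
```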

  18. Large ensemble modeling of last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.

    2015-11-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well-defined parametric uncertainty bounds.

  19. Climatic Models Ensemble-based Mid-21st Century Runoff Projections: A Bayesian Framework

    NASA Astrophysics Data System (ADS)

    Achieng, K. O.; Zhu, J.

    2017-12-01

    There are a number of North American Regional Climate Change Assessment Program (NARCCAP) climatic models that have been used to project surface runoff in the mid-21st century. Statistical model selection techniques are often used to select the model that best fits the data; however, different selection techniques often lead to different conclusions. In this study, ten models are averaged in a Bayesian paradigm to project runoff. Bayesian Model Averaging (BMA) is used to project runoff and to identify the effect of model uncertainty on future runoff projections. Baseflow separation - a two-parameter digital filter, also called the Eckhardt filter - is used to separate USGS streamflow (total runoff) into two components: baseflow and surface runoff. We use this surface runoff as the a priori runoff when conducting BMA of runoff simulated from the ten RCM models. The primary objective of this study is to evaluate how well RCM multi-model ensembles simulate surface runoff, in a Bayesian framework. Specifically, we investigate and discuss the following questions: How well does the ten-model RCM ensemble jointly simulate surface runoff when averaged over all the models using BMA, given a priori surface runoff? What are the effects of model uncertainty on surface runoff simulation?

  20. Determination of ensemble-average pairwise root mean-square deviation from experimental B-factors

    PubMed

    Kuzmanic, Antonija; Zagrovic, Bojan

    2010-03-03

    Root mean-square deviation (RMSD) after roto-translational least-squares fitting is a commonly used measure of global structural similarity of macromolecules. On the other hand, experimental X-ray B-factors are used frequently to study local structural heterogeneity and dynamics in macromolecules by providing direct information about root mean-square fluctuations (RMSF) that can also be calculated from molecular dynamics simulations. We provide a mathematical derivation showing that, given a set of conservative assumptions, the root mean-square ensemble average of an all-against-all distribution of pairwise RMSD for a single molecular species, <RMSD²>^(1/2), is directly related to the average B-factor (<B>) and <RMSF²>^(1/2). We show this relationship and explore its limits of validity on a heterogeneous ensemble of structures taken from molecular dynamics simulations of villin headpiece generated using distributed-computing techniques and the Folding@Home cluster. Our results provide a basis for quantifying global structural diversity of macromolecules in crystals directly from X-ray experiments, and we show this on a large set of structures taken from the Protein Data Bank. In particular, we show that the ensemble-average pairwise backbone RMSD for a microscopic ensemble underlying a typical protein X-ray structure is approximately 1.1 Å, under the assumption that the principal contribution to experimental B-factors is conformational variability. © 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
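
    The record's central relation can be sketched numerically. Assuming the standard crystallographic identity B = (8π²/3)·<RMSF²> and that deviations of two independent ensemble members add, so <RMSD²> ≈ 2·<RMSF²>, one recovers the paper's ballpark figure; the exact prefactors and conservative assumptions are the paper's own.

```python
import numpy as np

def pairwise_rmsd_from_b(mean_b):
    """<RMSD^2>^(1/2) from an average B-factor, using B = (8*pi^2/3)*<RMSF^2>
    and <RMSD^2> ~ 2*<RMSF^2> for independent deviations (sketch only)."""
    rmsf_sq = 3.0 * mean_b / (8.0 * np.pi ** 2)
    return np.sqrt(2.0 * rmsf_sq)

print(pairwise_rmsd_from_b(16.0))  # ~1.1 Angstrom for a typical backbone <B>
```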

  21. Determination of Ensemble-Average Pairwise Root Mean-Square Deviation from Experimental B-Factors

    PubMed Central

    Kuzmanic, Antonija; Zagrovic, Bojan

    2010-01-01

    Root mean-square deviation (RMSD) after roto-translational least-squares fitting is a commonly used measure of global structural similarity of macromolecules. On the other hand, experimental X-ray B-factors are used frequently to study local structural heterogeneity and dynamics in macromolecules by providing direct information about root mean-square fluctuations (RMSF) that can also be calculated from molecular dynamics simulations. We provide a mathematical derivation showing that, given a set of conservative assumptions, the root mean-square ensemble average of an all-against-all distribution of pairwise RMSD for a single molecular species, <RMSD²>^(1/2), is directly related to the average B-factor (<B>) and <RMSF²>^(1/2). We show this relationship and explore its limits of validity on a heterogeneous ensemble of structures taken from molecular dynamics simulations of villin headpiece generated using distributed-computing techniques and the Folding@Home cluster. Our results provide a basis for quantifying global structural diversity of macromolecules in crystals directly from X-ray experiments, and we show this on a large set of structures taken from the Protein Data Bank. In particular, we show that the ensemble-average pairwise backbone RMSD for a microscopic ensemble underlying a typical protein X-ray structure is ~1.1 Å, under the assumption that the principal contribution to experimental B-factors is conformational variability. PMID:20197040

  22. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    NASA Astrophysics Data System (ADS)

    Soltanzadeh, I.; Azadi, M.; Vakili, G. A.

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited-area models (WRF, MM5 and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.

  23. Variable diffusion in stock market fluctuations

    NASA Astrophysics Data System (ADS)

    Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2015-02-01

    We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. Each of the five most actively traded stocks contains two time intervals during the day where the variance of increments can be fit by power-law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with a linear variable diffusion coefficient as a lowest-order approximation to the real dynamics of financial markets, and we test the effects of the time-averaging techniques typically used for financial time series analysis. We find that our model replicates major stylized facts associated with empirical financial time series. We also find that ensemble averaging techniques can be used to identify the underlying dynamics correctly, whereas time averages fail at this task. Our work indicates that ensemble average approaches will yield new insight into the study of financial market dynamics, and our proposed model provides new insight into the modeling of financial market dynamics on microscopic time scales.
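
    A toy numerical check of the stock-market record's point about nonstationary increments: the variance growth of such a process is visible to an ensemble average across independent paths at fixed time, while a time average along a single path collapses it to one number. The process below is illustrative only, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_t = 5000, 400
t = np.arange(1, n_t + 1)

# toy process with nonstationary increments: increment scale grows with t
x = (rng.normal(size=(n_paths, n_t)) * t**0.25).cumsum(axis=1)

ens_var = x.var(axis=0)          # ensemble variance at fixed t: shows growth
time_var = np.diff(x[0]).var()   # time average along one path: one number,
                                 # blind to the time dependence
print(ens_var[[49, 199, 399]], time_var)
```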

  24. On averaging aspect ratios and distortion parameters over ice crystal population ensembles for estimating effective scattering asymmetry parameters

    PubMed Central

    van Diedenhoven, Bastiaan; Ackerman, Andrew S.; Fridlind, Ann M.; Cairns, Brian

    2017-01-01

    The use of ensemble-average values of aspect ratio and distortion parameter of hexagonal ice prisms for the estimation of ensemble-average scattering asymmetry parameters is evaluated. Using crystal aspect ratios greater than unity generally leads to ensemble-average values of aspect ratio that are inconsistent with the ensemble-average asymmetry parameters. When a definition of aspect ratio is used that limits the aspect ratio to below unity (α ≤ 1) for both hexagonal plates and columns, the effective asymmetry parameters calculated using ensemble-average aspect ratios are generally consistent with ensemble-average asymmetry parameters, especially if aspect ratios are geometrically averaged. Ensemble-average distortion parameters generally also yield effective asymmetry parameters that are largely consistent with ensemble-average asymmetry parameters. In the case of mixtures of plates and columns, it is recommended to geometrically average the α ≤ 1 aspect ratios and to subsequently calculate the effective asymmetry parameter using a column or plate geometry when the contribution by columns to a given mixture's total projected area is greater or lower than 50%, respectively. In addition, we show that ensemble-average aspect ratios, distortion parameters and asymmetry parameters can generally be retrieved accurately from simulated multi-directional polarization measurements based on mixtures of varying columns and plates. However, such retrievals tend to be somewhat biased toward yielding column-like aspect ratios. Furthermore, generally large retrieval errors can occur for mixtures with approximately equal contributions of columns and plates and for ensembles with strong contributions of thin plates. PMID:28983127

  25. Program for narrow-band analysis of aircraft flyover noise using ensemble averaging techniques

    NASA Technical Reports Server (NTRS)

    Gridley, D.

    1982-01-01

    A package of computer programs was developed for analyzing acoustic data from an aircraft flyover. The package assumes the aircraft is flying at constant altitude and constant velocity in a fixed attitude over a linear array of ground microphones. Aircraft position is provided by radar, and an option exists for including the effects of the aircraft's rigid-body attitude relative to the flight path. Time synchronization between radar and acoustic recording stations permits ensemble averaging techniques to be applied to the acoustic data, thereby increasing the statistical accuracy of the acoustic results. Measured layered meteorological data obtained during the flyovers are used to compute propagation effects through the atmosphere. Final results are narrow-band spectra and directivities corrected for the flight environment to an equivalent static condition at a specified radius.

  26. Random-matrix approach to the statistical compound nuclear reaction at low energies using the Monte-Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko

    2015-11-10

    This theoretical treatment of low-energy compound nucleus reactions begins with the Bohr hypothesis, with corrections, and various statistical theories. The author investigates the statistical properties of the scattering matrix containing a Gaussian Orthogonal Ensemble (GOE) Hamiltonian in the propagator. The following conclusions are reached. For all parameter values studied, the numerical average of Monte Carlo-generated cross sections coincides with the result of the Verbaarschot, Weidenmueller, Zirnbauer triple-integral formula. Energy average and ensemble average agree reasonably well when the width Γ is one or two orders of magnitude larger than the average resonance spacing d. In the strong-absorption limit, the channel degree of freedom ν_a is 2. The direct reaction increases the inelastic cross sections while the elastic cross section is reduced.

  27. Quantifying Nucleic Acid Ensembles with X-ray Scattering Interferometry

    PubMed

    Shi, Xuesong; Bonilla, Steve; Herschlag, Daniel; Harbury, Pehr

    2015-01-01

    The conformational ensemble of a macromolecule is the complete description of the macromolecule's solution structures and can reveal important aspects of macromolecular folding, recognition, and function. However, most experimental approaches determine an average or predominant structure, or follow transitions between states that can each only be described by an average structure. Ensembles have been extremely difficult to characterize experimentally. We present the unique advantages and capabilities of a new biophysical technique, X-ray scattering interferometry (XSI), for probing and quantifying structural ensembles. XSI measures the interference of scattered waves from two heavy-metal probes attached site-specifically to a macromolecule. A Fourier transform of the interference pattern gives the fractional abundance of different probe separations, directly representing the multiple conformational states populated by the macromolecule. These probe-probe distance distributions can then be used to define the structural ensemble of the macromolecule. XSI provides accurate, calibrated distances in a model-independent fashion with angstrom-scale sensitivity. XSI data can be compared in a straightforward manner to atomic coordinates determined experimentally or predicted by molecular dynamics simulations. We describe the conceptual framework for XSI and provide a detailed protocol for carrying out an XSI experiment. © 2015 Elsevier Inc. All rights reserved.

  28. A translating stage system for µ-PIV measurements surrounding the tip of a migrating semi-infinite bubble

    PubMed

    Smith, B J; Yamaguchi, E; Gaver, D P

    2010-01-01

    We have designed, fabricated and evaluated a novel translating stage system (TSS) that augments a conventional micro particle image velocimetry (µ-PIV) system. The TSS has been used to enhance the ability to measure flow fields surrounding the tip of a migrating semi-infinite bubble in a glass capillary tube under both steady and pulsatile reopening conditions. With conventional µ-PIV systems, observations near the bubble tip are challenging because the forward progress of the bubble rapidly sweeps the air-liquid interface across the microscopic field of view. The translating stage mechanically cancels the mean bubble tip velocity, keeping the interface within the microscope field of view and providing a tenfold increase in data collection efficiency compared to fixed-stage techniques. This dramatic improvement allows nearly continuous observation of the flow field over long propagation distances. A large (136-frame) ensemble-averaged velocity field recorded with the TSS near the tip of a steadily migrating bubble is shown to compare well with fixed-stage results under identical flow conditions. Use of the TSS allows the ensemble-averaged measurement of pulsatile bubble propagation flow fields, which would be practically impossible using conventional fixed-stage techniques. We demonstrate our ability to analyze these time-dependent two-phase flows using the ensemble-averaged flow field at four points in the oscillatory cycle.

  29. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    NASA Astrophysics Data System (ADS)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in a more robust way than estimation from ensemble-averaged data, although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements, including single-molecule time-series trajectories.

  30. Individual differences in ensemble perception reveal multiple, independent levels of ensemble representation

    PubMed

    Haberman, Jason; Brady, Timothy F; Alvarez, George A

    2015-04-01

    Ensemble perception, including the ability to "see the average" from a group of items, operates in numerous feature domains (size, orientation, speed, facial expression, etc.). Although the ubiquity of ensemble representations is well established, the large-scale cognitive architecture of this process remains poorly defined. We address this using an individual differences approach. In a series of experiments, observers saw groups of objects and reported either a single item from the group or the average of the entire group. High-level ensemble representations (e.g., average facial expression) showed complete independence from low-level ensemble representations (e.g., average orientation). In contrast, low-level ensemble representations (e.g., orientation and color) were correlated with each other, but not with high-level ensemble representations (e.g., facial expression and person identity). These results suggest that there is not a single domain-general ensemble mechanism, and that the relationship among various ensemble representations depends on how proximal they are in representational space. © 2015 APA, all rights reserved.

  31. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large-scale velocity fields is used to propose an ensemble-averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble-averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LESs provides statistics of the large-scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble-averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time-developing plane wake. It is found that the results are almost independent of the number of LESs in the statistical ensemble provided that the ensemble contains at least 16 realizations.

  32. Evidence for Dynamic Chemical Kinetics at Individual Molecular Ruthenium Catalysts

    PubMed

    Easter, Quinn T; Blum, Suzanne A

    2018-02-05

    Catalytic cycles are typically depicted as possessing time-invariant steps with fixed rates. Yet the true behavior of individual catalysts with respect to time is unknown, hidden by the ensemble averaging inherent to bulk measurements. Evidence is presented for variable chemical kinetics at individual catalysts, with a focus on ring-opening metathesis polymerization catalyzed by the second-generation Grubbs ruthenium catalyst. Fluorescence microscopy is used to probe the chemical kinetics of the reaction because the technique possesses sufficient sensitivity for the detection of single chemical reactions. Insertion reactions in submicron regions likely occur at groups of many (not single) catalysts, yet not so many that their unique kinetic behavior is ensemble-averaged. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
  34. Evidence for Dynamic Chemical Kinetics at Individual Molecular Ruthenium Catalysts.

    PubMed

    Easter, Quinn T; Blum, Suzanne A

    2018-02-05

    Catalytic cycles are typically depicted as possessing time-invariant steps with fixed rates. Yet the true behavior of individual catalysts with respect to time is unknown, hidden by the ensemble averaging inherent to bulk measurements. Evidence is presented for variable chemical kinetics at individual catalysts, with a focus on ring-opening metathesis polymerization catalyzed by the second-generation Grubbs' ruthenium catalyst. Fluorescence microscopy is used to probe the chemical kinetics of the reaction because the technique possesses sufficient sensitivity for the detection of single chemical reactions. Insertion reactions in submicron regions likely occur at groups of many (not single) catalysts, yet not so many that their unique kinetic behavior is ensemble averaged. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  35. Ensemble averaged structure–function relationship for nanocrystals: effective superparamagnetic Fe clusters with catalytically active Pt skin [Ensemble averaged structure–function relationship for composite nanocrystals: magnetic bcc Fe clusters with catalytically active fcc Pt skin]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petkov, Valeri; Prasai, Binay; Shastri, Sarvjit

    Practical applications require the production and usage of metallic nanocrystals (NCs) in large ensembles. Moreover, due to their cluster-bulk solid duality, metallic NCs exhibit a large degree of structural diversity. This poses the question as to what atomic-scale basis should be used when the structure–function relationship for metallic NCs is to be quantified precisely. Here we address this question by studying bi-functional Fe core-Pt skin type NCs optimized for practical applications. In particular, the cluster-like Fe core and skin-like Pt surface of the NCs exhibit superparamagnetic properties and a superb catalytic activity for the oxygen reduction reaction, respectively. We determine the atomic-scale structure of the NCs by non-traditional resonant high-energy X-ray diffraction coupled to atomic pair distribution function analysis. Using the experimental structure data, we explain the observed magnetic and catalytic behavior of the NCs in a quantitative manner. Lastly, we demonstrate that NC ensemble-averaged 3D positions of atoms obtained by advanced X-ray scattering techniques are a very proper basis for not only establishing but also quantifying the structure–function relationship for the increasingly complex metallic NCs explored for practical applications.
  36. Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions

    NASA Astrophysics Data System (ADS)

    Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad

    2014-08-01

    Accurate determination of the thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, in particular its requirement of fewer tuned parameters and yet better predictive capability; however, it is well known that molecular simulation is very CPU-expensive compared with equation-of-state approaches. We have recently introduced an efficient, thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points pre-computed with expensive classical simulation. This technique can speed up the simulation by more than a factor of a million, making the regenerated molecular simulation almost as fast as equation-of-state approaches. In this paper, the technique is first briefly reviewed and then numerically investigated for its capability to predict ensemble averages of primary quantities at thermodynamic conditions neighboring the originally simulated MCMCs. Moreover, the extrapolation technique is extended to predict second-derivative properties (e.g., heat capacity and fluid compressibility). The method works by reweighting and reconstructing the generated MCMCs in the canonical ensemble for Lennard-Jones particles. The system's potential energy, pressure, isochoric heat capacity, and isothermal compressibility along isochores, isotherms, and paths of changing temperature and density were extrapolated from the originally simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models is proposed for methane, nitrogen, and carbon monoxide.
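    The reweighting idea behind the record above can be illustrated generically. The sketch below is plain Boltzmann reweighting between neighboring temperatures in the canonical ensemble, not the authors' specific chain-reconstruction scheme; the reduced units and synthetic sample arrays are assumptions.

        import numpy as np

        kB = 1.0  # Boltzmann constant in reduced units (assumption)

        def reweighted_average(U, A, T0, T1):
            """Estimate the canonical average <A> at temperature T1 from
            configurations sampled at T0 by weighting each sample with
            exp(-(1/(kB*T1) - 1/(kB*T0)) * U). Reliable only while the
            energy distributions at T0 and T1 still overlap."""
            dbeta = 1.0 / (kB * T1) - 1.0 / (kB * T0)
            logw = -dbeta * np.asarray(U)
            logw -= logw.max()                 # stabilize the exponentials
            w = np.exp(logw)
            return np.sum(w * np.asarray(A)) / np.sum(w)

        # Synthetic stand-ins for a pre-computed Markov chain at T0 = 1.0.
        rng = np.random.default_rng(0)
        U = rng.normal(-500.0, 10.0, size=20_000)  # potential energies
        P = rng.normal(80.0, 5.0, size=20_000)     # a primary quantity (pressure)
        p_at_T1 = reweighted_average(U, P, T0=1.0, T1=1.05)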
  37. Cell population modelling of yeast glycolytic oscillations.

    PubMed Central

    Henson, Michael A; Müller, Dirk; Reuss, Matthias

    2002-01-01

    We investigated a cell-population modelling technique in which the population is constructed from an ensemble of individual cell models. The average value or the number distribution of any intracellular property captured by the individual cell model can be calculated by simulating a sufficient number of individual cells. The proposed method is applied to a simple model of yeast glycolytic oscillations in which synchronization of the cell population is mediated by the action of an excreted metabolite. We show that smooth one-dimensional distributions can be obtained with ensembles comprising 1000 individual cells. Random variations in the state and/or structure of individual cells are shown to produce complex dynamic behaviours which cannot be adequately captured by small ensembles. PMID:12206713

  38. Reliability ensemble averaging of 21st century projections of terrestrial net primary productivity reduces global and regional uncertainties

    NASA Astrophysics Data System (ADS)

    Exbrayat, Jean-François; Bloom, A. Anthony; Falloon, Pete; Ito, Akihiko; Smallman, T. Luke; Williams, Mathew

    2018-02-01

    Multi-model averaging techniques provide opportunities to extract additional information from large ensembles of simulations. In particular, present-day model skill can be used to evaluate the models' potential performance in future climate simulations. Multi-model averaging methods have been used extensively in the climate and hydrological sciences, but they have not been used to constrain projected plant productivity responses to climate change, which is a major uncertainty in Earth system modelling. Here, we use three global observationally orientated estimates of current net primary productivity (NPP) to apply a reliability ensemble averaging (REA) method to 30 global simulations of 21st century change in NPP based on the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) "business as usual" emissions scenario. We find that the three REA methods support an increase in global NPP by the end of the 21st century (2095-2099) compared to 2001-2005 that is 2-3% stronger than the ensemble ISIMIP mean value of 24.2 Pg C yr-1. Using REA also leads to a 45-68% reduction in the global uncertainty of the 21st century NPP projection, which strengthens confidence in the resilience of the CO2 fertilization effect to climate change. This reduction in uncertainty is especially clear for boreal ecosystems, although it may be an artefact of the lack of representation of nutrient limitations on NPP in most models. Conversely, the large uncertainty that remains in the sign of the NPP response in semi-arid regions points to the need for better observations and model development in these regions.
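    The REA recipe referenced above weights each model by a performance factor (agreement with present-day observations) and a convergence factor (closeness to the weighted ensemble mean), iterated to self-consistency. The Python toy below follows that Giorgi-and-Mearns-style recipe in simplified scalar form; the eps scale, the iteration cap, and the example numbers are assumptions, not the study's configuration.

        import numpy as np

        def rea_average(projections, present_bias, eps=1.0, tol=1e-6):
            """Simplified scalar reliability ensemble averaging."""
            projections = np.asarray(projections, float)
            bias = np.abs(np.asarray(present_bias, float))
            w = np.full(projections.size, 1.0 / projections.size)
            perf = eps / np.maximum(bias, eps)       # performance criterion
            for _ in range(200):
                mean = np.sum(w * projections)
                conv = eps / np.maximum(np.abs(projections - mean), eps)
                w_new = perf * conv                  # combine the two criteria
                w_new /= w_new.sum()
                if np.max(np.abs(w_new - w)) < tol:
                    break
                w = w_new
            return np.sum(w * projections), w

        # Five hypothetical projected NPP changes and present-day biases.
        change, weights = rea_average([1.2, 0.8, 2.5, 1.0, 1.4],
                                      [0.1, 0.3, 2.0, 0.2, 0.5])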
  39. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    NASA Astrophysics Data System (ADS)

    Anderson, Gemma J.; Lucas, Donald D.

    2018-05-01

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed-parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.

  40. An ensemble rank learning approach for gene prioritization.

    PubMed

    Lee, Po-Feng; Soo, Von-Wun

    2013-01-01

    Several different computational approaches have been developed to solve the gene prioritization problem. We use ensemble boosting learning techniques to combine various computational approaches for gene prioritization in order to improve the overall performance. In particular, we add a heuristic weighting function to the RankBoost algorithm according to: 1) the absolute ranks generated by the adopted methods for a certain gene, and 2) the ranking relationship between all gene pairs from each prioritization result. We select 13 known prostate cancer genes in the OMIM database as the training set and protein-coding gene data in the HGNC database as the test set. We adopt a leave-one-out strategy for the ensemble rank boosting learning. The experimental results show that our ensemble learning approach outperforms the four gene-prioritization methods in the ToppGene suite in the ranking results for the 13 known genes in terms of mean average precision, ROC, and AUC measures.
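    RankBoost itself takes a few dozen lines, but the flavor of combining several prioritization results can be shown with a weighted average of ranks (a Borda-style stand-in for the boosted learner described above; the toy rankings and uniform weights are assumptions).

        import numpy as np

        def ensemble_rank(rank_lists, weights=None):
            """Combine rankings of the same genes by weighted average rank;
            lower combined score means higher priority."""
            ranks = np.asarray(rank_lists, float)   # (n_methods, n_genes)
            if weights is None:
                weights = np.ones(ranks.shape[0])
            weights = np.asarray(weights, float) / np.sum(weights)
            score = weights @ ranks
            return np.argsort(score)                # gene indices, best first

        # Three methods ranking five genes (rank 0 = best).
        order = ensemble_rank([[0, 1, 2, 3, 4],
                               [1, 0, 3, 2, 4],
                               [0, 2, 1, 4, 3]])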
  41. Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.

    2014-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimate of the spread or uncertainty in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits, and using the actual arrival times, an average absolute error of 10.0 hours (RMSE = 11.4 hours) was found for all 28 ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrival is not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
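    The verification statistics quoted above (whether the observed arrival falls inside the ensemble range, and the absolute error of the ensemble-average prediction) are simple to compute per event; the arrival-time errors below are hypothetical, not values from the study.

        import numpy as np

        # Predicted-minus-observed arrival times (hours) for one ensemble.
        errors = np.array([-6.5, -2.0, 1.5, 3.0, 8.0, 12.5])

        hit_in_range = errors.min() <= 0.0 <= errors.max()  # observed inside spread?
        abs_error_of_mean = abs(errors.mean())              # error of the mean prediction
        rmse = np.sqrt(np.mean(errors ** 2))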
  42. Experimental and Computational Analysis of Modes in a Partially Constrained Plate

    DTIC Science & Technology

    2004-03-01

    ...way to quantify a structure. One technique utilizing an energy method is Statistical Energy Analysis (SEA). The SEA process involves regarding... B.R. Mace, "Statistical Energy Analysis of Two Edge-Coupled Rectangular Plates: Ensemble Averages," Journal of Sound and Vibration, 193(4): 793-822.

  43. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, even the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum that realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of a protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that the average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on the conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used data from solid-state NMR spectroscopy, the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989
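    The key trick in the preceding record, restraining only the ensemble-average structure, fits in a toy energy term. The sketch below is an illustration, not the authors' protocol: a harmonic penalty on the deviation of the copy-averaged coordinates from a reference structure, with an arbitrarily chosen force constant. Because the average spreads the gradient over n copies, each copy feels only 1/n of the pull and remains nearly free to move.

        import numpy as np

        def ensemble_restraint(coords, ref, k=100.0):
            """coords: (n_copies, n_atoms, 3); ref: (n_atoms, 3).
            Energy = 0.5 * k * |mean(coords) - ref|^2, so
            dE/dx_i = k * (mean - ref) / n_copies for every copy i."""
            n = coords.shape[0]
            diff = coords.mean(axis=0) - ref
            energy = 0.5 * k * np.sum(diff ** 2)
            force_per_copy = -k * diff / n        # restoring force, shared by copies
            forces = np.broadcast_to(force_per_copy, coords.shape)
            return energy, forces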
  44. Moisture Damage Modeling in Lime and Chemically Modified Asphalt at Nanolevel Using Ensemble Computational Intelligence

    PubMed Central

    2018-01-01

    This paper measures the adhesion/cohesion force among asphalt molecules at the nanoscale using atomic force microscopy (AFM) and models the moisture damage by applying state-of-the-art computational intelligence (CI) techniques (e.g., artificial neural network (ANN), support vector regression (SVR), and an adaptive neuro-fuzzy inference system (ANFIS)). Various combinations of lime and chemicals, as well as dry and wet environments, are used to produce different asphalt samples. The parameters varied to generate the different asphalt samples and measure the corresponding adhesion/cohesion forces are the percentage of antistripping agents (e.g., lime and Unichem), AFM tip K values, and AFM tip types. The CI methods are trained to model the adhesion/cohesion forces given the variation in the values of the above parameters. To achieve enhanced performance, statistical combinations such as the average, weighted average, and regression of the outputs generated by the CI techniques are used. The experimental results show that, of the three individual CI methods, ANN models moisture damage to lime- and chemically modified asphalt better than the other two CI techniques for both wet and dry conditions. Moreover, the ensemble of CI methods along with statistical combination provides better accuracy than any of the individual CI techniques. PMID:29849551
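    The statistical combination step in the record (average, weighted average, or regression over the CI models' outputs) reduces to a few lines. This sketch weights each model by its inverse validation error; the prediction arrays are hypothetical stand-ins for the ANN, SVR, and ANFIS outputs.

        import numpy as np

        def combine_predictions(preds, y_val=None):
            """Ensemble of regressor outputs, preds shape (n_models, n_samples).
            With validation targets, weight models by inverse validation MSE
            (weighted average); without them, fall back to the plain average."""
            preds = np.asarray(preds, float)
            if y_val is None:
                return preds.mean(axis=0)
            mse = np.mean((preds - y_val) ** 2, axis=1)
            w = 1.0 / np.maximum(mse, 1e-12)
            w /= w.sum()
            return w @ preds

        # Toy adhesion-force predictions from three hypothetical CI models.
        ann, svr, anfis = [2.1, 2.4, 1.9], [2.0, 2.6, 2.2], [2.3, 2.2, 2.0]
        y_val = np.array([2.2, 2.3, 2.0])
        blended = combine_predictions([ann, svr, anfis], y_val)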
  45. Cosmological ensemble and directional averages of observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonvin, Camille; Clarkson, Chris; Durrer, Ruth

    We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing: observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.

  46. Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity

    DOE PAGES

    Gordiz, Kiarash; Singh, David J.; Henry, Asegun

    2015-01-29

    In this report we compare time sampling and ensemble averaging as two different methods for phase space sampling. For the comparison, we calculate the thermal conductivities of solid argon and silicon structures using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach and show that both can reduce the total simulation time compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first-principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially, so phase space averaging is achieved through sequential operations. With ensemble averaging, on the other hand, phase space sampling can be achieved through parallel operations, since each ensemble member is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times while exhibiting similar overall computational effort.
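    The trade-off described above, sequential time averaging versus embarrassingly parallel ensemble averaging, can be seen with a correlated toy signal standing in for an MD observable (the AR(1) process and its parameters are assumptions):

        import numpy as np

        rng = np.random.default_rng(1)

        def ar1(n, phi=0.95):
            """Correlated toy 'observable'; in a real study this would be,
            e.g., a heat-flux time series from equilibrium MD."""
            x = np.empty(n)
            x[0] = rng.standard_normal()
            for t in range(1, n):
                x[t] = phi * x[t - 1] + rng.standard_normal()
            return x

        # Time sampling: one long, inherently sequential trajectory.
        time_avg = ar1(100_000).mean()

        # Ensemble sampling: many short, independent members; each could
        # run on its own node, so the wall-clock time is much shorter.
        ens_avg = np.mean([ar1(5_000).mean() for _ in range(20)])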
  47. Multi-criterion model ensemble of CMIP5 surface air temperature over China

    NASA Astrophysics Data System (ADS)

    Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming

    2018-05-01

    Global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes, and, therefore, supporting the preparation of national climate adaptation plans. However, different GCMs are not always in agreement with each other over various regions, because their configurations, module characteristics, and dynamic forcings vary from one model to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), correlation coefficient (CC, or R), and uncertainty are commonly used statistics for evaluating the performance of GCMs, but simultaneously satisfactory values of all of these statistics cannot be guaranteed by many model ensemble techniques. In this paper, we propose a multi-model ensemble framework that uses a state-of-the-art evolutionary multi-objective optimization algorithm (termed MOSPD) to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for the different model ensemble solutions. A case study optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data cover the period from 1900 to 2100, and the SAT projections are analyzed with regard to three statistical indices (RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to the different statistics. The comparison over the historical period (1900-2005) shows that the optimized solutions are superior to the simple model average as well as to any single GCM output, with improvements that vary across China's climatic regions. Future projections (2006-2100) with the proposed ensemble method identify that the largest (smallest) temperature changes will occur in South Central China (Inner Mongolia), North Eastern China (South Central China), and North Western China (South Central China) under the RCP 2.6, RCP 4.5, and RCP 8.5 scenarios, respectively.
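    A minimal illustration of the multi-objective selection idea (random search over weight vectors plus a Pareto filter, far simpler than the MOSPD algorithm named above) might look as follows; the synthetic model series and observations are assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        def objectives(w, models, obs):
            """Score one weight vector: RMSE (minimize) and negative
            correlation (minimize, i.e. maximize correlation)."""
            pred = w @ models
            rmse = np.sqrt(np.mean((pred - obs) ** 2))
            cc = np.corrcoef(pred, obs)[0, 1]
            return np.array([rmse, -cc])

        def pareto_front(points):
            """Indices of non-dominated points (all objectives minimized)."""
            keep = []
            for i, p in enumerate(points):
                if not any(np.all(q <= p) and np.any(q < p)
                           for j, q in enumerate(points) if j != i):
                    keep.append(i)
            return keep

        models = rng.standard_normal((5, 120))        # five hypothetical GCM series
        obs = models.mean(axis=0) + 0.2 * rng.standard_normal(120)
        ws = rng.dirichlet(np.ones(5), size=500)      # candidate weight vectors
        objs = np.array([objectives(w, models, obs) for w in ws])
        front = pareto_front(objs)                    # the trade-off solutions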
  48. A short-term ensemble wind speed forecasting system for wind power applications

    NASA Astrophysics Data System (ADS)

    Baidya Roy, S.; Traiteur, J. J.; Callicutt, D.; Smith, M.

    2011-12-01

    This study develops an adaptive, blended forecasting system to provide accurate wind speed forecasts one hour ahead of time for wind power applications. The system consists of an ensemble of 21 forecasts produced with different configurations of the Weather Research and Forecasting Single Column Model (WRFSCM) and a persistence model. The ensemble is calibrated against observations for a two-month period (June-July 2008) at a potential wind farm site in Illinois using the Bayesian model averaging (BMA) technique. The forecasting system is evaluated against observations for August 2008 at the same site. The calibrated ensemble forecasts significantly outperform forecasts from the uncalibrated ensemble while significantly reducing forecast uncertainty under all environmental stability conditions. The system also generates significantly better forecasts than persistence, autoregressive (AR), and autoregressive moving average (ARMA) models during the morning transition and the diurnal convective regimes. This forecasting system is computationally more efficient than traditional numerical weather prediction models and can generate a calibrated forecast, including model runs and calibration, in approximately one minute. Currently, hour-ahead wind speed forecasts are almost exclusively produced using statistical models. However, numerical models have several distinct advantages over statistical models, including the potential to provide turbulence forecasts. Hence, there is an urgent need to explore the role of numerical models in short-term wind speed forecasting. This work is a step in that direction and is likely to trigger a debate within the wind speed forecasting community.
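    The BMA calibration in the record above fits a weighted mixture of forecast densities. The sketch below just evaluates such a Gaussian mixture for assumed weights and spread; in a real system the weights and spread are fit to training data (typically by expectation-maximization) and the member forecasts are bias-corrected first.

        import numpy as np

        def bma_density(x, member_forecasts, weights, sigma):
            """BMA predictive density: a weighted mixture of Gaussian
            kernels centered on the member forecasts, common spread sigma."""
            f = np.asarray(member_forecasts, float)
            w = np.asarray(weights, float)
            kernels = np.exp(-0.5 * ((x - f[:, None]) / sigma) ** 2) \
                      / (sigma * np.sqrt(2.0 * np.pi))
            return w @ kernels

        # Hypothetical hour-ahead wind speed members (m/s), weights, spread.
        x = np.linspace(0.0, 20.0, 401)
        pdf = bma_density(x, [7.2, 8.1, 9.0], [0.5, 0.3, 0.2], sigma=1.1)
        mean_forecast = np.dot([0.5, 0.3, 0.2], [7.2, 8.1, 9.0])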
  49. Multi-Model Ensemble Wake Vortex Prediction

    NASA Technical Reports Server (NTRS)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between the National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include reliability ensemble averaging, Bayesian model averaging, and Monte Carlo simulations. The methodologies are evaluated using data from wake vortex field experiments.

  50. Reduced set averaging of face identity in children and adolescents with autism.

    PubMed

    Rhodes, Gillian; Neumann, Markus F; Ewing, Louise; Palermo, Romina

    2015-01-01

    Individuals with autism have difficulty abstracting and updating average representations from their diet of faces. These averages function as perceptual norms for coding faces, and poorly calibrated norms may contribute to face recognition difficulties in autism. Another kind of average, known as an ensemble representation, can be abstracted from briefly glimpsed sets of faces. Here we show for the first time that children and adolescents with autism also have difficulty abstracting ensemble representations from sets of faces. On each trial, participants saw a study set of four identities and then indicated whether a test face was present. The test face could be a set average or a set identity, from either the study set or another set. Recognition of set averages was reduced in participants with autism relative to age- and ability-matched typically developing participants. This difference, which actually represents more accurate responding, indicates weaker set averaging and thus weaker ensemble representations of face identity in autism. Our finding adds to the growing evidence for atypical abstraction of average face representations from experience in autism. Weak ensemble representations may have negative consequences for face processing in autism, given the importance of ensemble representations in dealing with processing capacity limitations.
  51. Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.

    2013-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimate of the spread or uncertainty in CME arrival time predictions due to uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real time. The observed CME arrival was within the range of ensemble arrival time predictions for 5 out of the 12 ensemble runs containing hits. The average arrival time prediction was computed for each of the twelve ensembles predicting hits, and using the actual arrival times, an average absolute error of 8.20 hours was found for all twelve ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling setup was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME.

  52. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    NASA Astrophysics Data System (ADS)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averaging is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for drawing conclusions in major coordinated studies, i.e., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to the reduced computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: multi-model ensemble, ensemble analysis, ERF, regional climate modeling
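    Operationally, ERF amounts to averaging the driving fields once, before the regional run, instead of running the RCM once per GCM and averaging the outputs afterwards. A schematic numpy version (array shapes and values are placeholders, not real GCM output) is:

        import numpy as np

        # Hypothetical lateral boundary conditions from six GCMs, already
        # regridded to a common grid: (gcm, time, level, lat, lon).
        rng = np.random.default_rng(3)
        ibcs = rng.standard_normal((6, 4, 10, 32, 64))

        # Conventional MME: run the RCM six times, then average the outputs.
        # ERF: average the forcings first and run the RCM a single time.
        erf_ibc = ibcs.mean(axis=0)   # one IBC set -> one RCM integration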
  53. Assessment of Surface Air Temperature over China Using Multi-criterion Model Ensemble Framework

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhu, Q.; Su, L.; He, X.; Zhang, X.

    2017-12-01

    General circulation models (GCMs) are designed to simulate the present climate and project future trends. It has been noticed that the performance of GCMs is not always consistent from one region to another. Model ensemble techniques have been developed to post-process GCM outputs and improve their prediction reliability. Root-mean-square error, correlation coefficient, and uncertainty are commonly used statistical measures for evaluating GCM performance, but simultaneously satisfactory values of all of these statistics cannot be guaranteed when using many model ensemble techniques. Meanwhile, uncertainties and future scenarios are critical for water-energy management and operation. In this study, a new multi-model ensemble framework is proposed. It uses a state-of-the-art evolutionary multi-objective optimization algorithm, termed Multi-Objective Complex Evolution Global Optimization with Principal Component Analysis and Crowding Distance (MOSPD), to derive optimal GCM ensembles and demonstrate the trade-offs among the various solutions. Such trade-off information is further analyzed with a robust Pareto front with respect to the different statistical measures. A case study was conducted to optimize the surface air temperature (SAT) ensemble solutions over seven geographical regions of China for the historical period (1900-2005) and future projections (2006-2100). The results show that the ensemble solutions derived with the MOSPD algorithm are superior to the simple model average and to any single model output during the historical simulation period. For the future projections, the proposed ensemble framework identifies the largest SAT change in South Central China under the RCP 2.6 scenario, North Eastern China under RCP 4.5, and North Western China under RCP 8.5, and the smallest SAT change in Inner Mongolia under RCP 2.6, South Central China under RCP 4.5, and South Central China under RCP 8.5.
  54. Life under the Microscope: Single-Molecule Fluorescence Highlights the RNA World.

    PubMed

    Ray, Sujay; Widom, Julia R; Walter, Nils G

    2018-04-25

    The emergence of single-molecule (SM) fluorescence techniques has opened up a vast new toolbox for exploring the molecular basis of life. The ability to monitor individual biomolecules in real time enables complex, dynamic folding pathways to be interrogated without the averaging effect of ensemble measurements. In parallel, modern biology has been revolutionized by our emerging understanding of the many functions of RNA. In this comprehensive review, we survey SM fluorescence approaches and discuss how the application of these tools to RNA and RNA-containing macromolecular complexes in vitro has yielded significant insights into the underlying biology. Topics covered include the three-dimensional folding landscapes of a plethora of isolated RNA molecules, their assembly and interactions in RNA-protein complexes, and the relation of these properties to their biological functions. In all of these examples, the use of SM fluorescence methods has revealed critical information beyond the reach of ensemble averages.

  55. Almost sure convergence in quantum spin glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buzinski, David; Meckes, Elizabeth

    2015-12-15

    Recently, Keating, Linden, and Wells [Markov Processes Relat. Fields 21(3), 537-555 (2015)] showed that the density of states measure of a nearest-neighbor quantum spin glass model is approximately Gaussian when the number of particles is large. The density of states measure is the ensemble average of the empirical spectral measure of a random matrix; in this paper, we use concentration of measure and entropy techniques together with the result of Keating, Linden, and Wells to show that the empirical spectral measure of such a random matrix is in fact almost surely approximately Gaussian itself, with no ensemble averaging. We also extend this result to a spherical quantum spin glass model and to the more general coupling geometries investigated by Erdős and Schröder [Math. Phys., Anal. Geom. 17(3-4), 441-464 (2014)].
  56. Typical performance of approximation algorithms for NP-hard problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-11-01

    The typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed, and a theoretical framework is presented. Three approximation algorithms are examined: linear-programming relaxation, loopy belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in their typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide conditions for the classification of the graph ensembles and demonstrate explicitly some examples of the differences in thresholds.
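    Of the three algorithms, leaf removal is the simplest to state: repeatedly pick a degree-one vertex, put its unique neighbor into the cover, and delete both. The generic sketch below (toy graph included) also returns the leftover "core" on which the heuristic stalls; the emergence of an extensive core is what lies behind the threshold behavior discussed above.

        def leaf_removal_vertex_cover(adj):
            """Leaf-removal heuristic for minimum vertex cover.
            adj: dict node -> set of neighbors (undirected graph)."""
            adj = {u: set(vs) for u, vs in adj.items()}
            cover = set()
            leaves = [u for u, vs in adj.items() if len(vs) == 1]
            while leaves:
                u = leaves.pop()
                if u not in adj or len(adj[u]) != 1:
                    continue                      # stale leaf entry
                v = next(iter(adj[u]))
                cover.add(v)                      # neighbor covers the leaf's edge
                for w in adj.pop(v, set()):       # delete v and its edges
                    if w in adj:
                        adj[w].discard(v)
                        if len(adj[w]) == 1:
                            leaves.append(w)
                adj.pop(u, None)
                for z in [z for z, vs in adj.items() if not vs]:
                    adj.pop(z)                    # drop isolated vertices
            core = {u: vs for u, vs in adj.items() if vs}
            return cover, core

        cover, core = leaf_removal_vertex_cover(
            {1: {2}, 2: {1, 3, 4}, 3: {2}, 4: {2, 5}, 5: {4}})
        # cover == {2, 4}; core is empty, so the heuristic found an optimum.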
  57. Unveiling Inherent Degeneracies in Determining Population-weighted Ensembles of Inter-domain Orientational Distributions Using NMR Residual Dipolar Couplings: Application to RNA Helix Junction Helix Motifs

    PubMed Central

    Yang, Shan; Al-Hashimi, Hashim M.

    2016-01-01

    A growing number of studies employ time-averaged experimental data to determine dynamic ensembles of biomolecules. While it is well known that different ensembles can satisfy experimental data to within error, the extent and nature of these degeneracies, and their impact on the accuracy of the ensemble determination, remain poorly understood. Here, we use simulations and a recently introduced metric for assessing ensemble similarity to explore degeneracies in determining ensembles using NMR residual dipolar couplings (RDCs), with specific application to A-form helices in RNA. Various target ensembles were constructed representing different domain-domain orientational distributions that are confined to a topologically restricted (<10%) conformational space. Five independent sets of ensemble-averaged RDCs were then computed for each target ensemble, and a "sample and select" scheme was used to identify degenerate ensembles that satisfy the RDCs to within experimental uncertainty. We find that ensembles with different ensemble sizes that can differ significantly from the target ensemble (by as much as ΣΩ ~ 0.4, where ΣΩ varies between 0 and 1 for maximum and minimum ensemble similarity, respectively) can satisfy the ensemble-averaged RDCs. These deviations increase with the number of unique conformers and the breadth of the target distribution, and they result in significant uncertainty in determining conformational entropy (as large as 5 kcal/mol at T = 298 K). Nevertheless, the RDC-degenerate ensembles are biased towards populated regions of the target ensemble and capture other essential features of the distribution, including its shape. Our results identify ensemble size as a major source of uncertainty in determining ensembles and suggest that NMR interactions such as RDCs and spin relaxation, on their own, do not carry the information needed to determine conformational entropy at a useful level of precision. The framework introduced here provides a general approach for exploring degeneracies in ensemble determination for different types of experimental data. PMID:26131693

  58. Ensemble perception of color in autistic adults.

    PubMed

    Maule, John; Stanworth, Kirstie; Pellicano, Elizabeth; Franklin, Anna

    2017-05-01

    Dominant accounts of visual processing in autism posit that autistic individuals have enhanced access to details of scenes (e.g., weak central coherence), which is reflected in a general bias toward local processing. Furthermore, the attenuated priors account of autism predicts that the updating and use of summary representations is reduced in autism. Ensemble perception describes the extraction of global summary statistics of a visual feature from a heterogeneous set (e.g., of faces, sizes, colors), often in the absence of local item representation. The present study investigated ensemble perception in autistic adults using a rapidly presented (500 msec) ensemble of four, eight, or sixteen elements representing four different colors. We predicted that autistic individuals would be less accurate when averaging the ensembles, but more accurate in recognizing individual ensemble colors. The results were consistent with the predictions. Averaging was impaired in autism, but only when ensembles contained four elements; ensembles of eight or sixteen elements were averaged equally accurately across groups. The autistic group also showed a corresponding advantage in rejecting colors that were not originally seen in the ensemble. The results demonstrate the local processing bias in autism, but also suggest that the global perceptual averaging mechanism may be compromised under some conditions. The theoretical implications of the findings and future avenues for research on summary statistics in autism are discussed. Autism Res 2017, 10: 839-851. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
  59. Ensemble perception of color in autistic adults

    PubMed Central

    Stanworth, Kirstie; Pellicano, Elizabeth; Franklin, Anna

    2016-01-01

    Dominant accounts of visual processing in autism posit that autistic individuals have enhanced access to details of scenes (e.g., weak central coherence), which is reflected in a general bias toward local processing. Furthermore, the attenuated priors account of autism predicts that the updating and use of summary representations is reduced in autism. Ensemble perception describes the extraction of global summary statistics of a visual feature from a heterogeneous set (e.g., of faces, sizes, colors), often in the absence of local item representation. The present study investigated ensemble perception in autistic adults using a rapidly presented (500 msec) ensemble of four, eight, or sixteen elements representing four different colors. We predicted that autistic individuals would be less accurate when averaging the ensembles, but more accurate in recognizing individual ensemble colors. The results were consistent with the predictions. Averaging was impaired in autism, but only when ensembles contained four elements; ensembles of eight or sixteen elements were averaged equally accurately across groups. The autistic group also showed a corresponding advantage in rejecting colors that were not originally seen in the ensemble. The results demonstrate the local processing bias in autism, but also suggest that the global perceptual averaging mechanism may be compromised under some conditions. The theoretical implications of the findings and future avenues for research on summary statistics in autism are discussed. Autism Res 2017, 10: 839-851. © 2016 The Authors. Autism Research published by Wiley Periodicals, Inc. on behalf of the International Society for Autism Research. PMID:27874263

  60. Multimodel Ensemble Methods for Prediction of Wake-Vortex Transport and Decay

    NASA Technical Reports Server (NTRS)

    Korner, Stephan; Ahmad, Nashat N.; Holzapfel, Frank; VanValkenburg, Randal L.

    2017-01-01

    Several multimodel ensemble methods are selected and further developed to improve the deterministic and probabilistic prediction skill of individual wake-vortex transport and decay models. The different multimodel ensemble methods are introduced, and their suitability for wake applications is demonstrated. The selected methods include direct ensemble averaging, Bayesian model averaging, and Monte Carlo simulation. The different methodologies are evaluated employing data from wake-vortex field measurement campaigns conducted in the United States and Germany.
  61. Scale-invariant Green-Kubo relation for time-averaged diffusivity

    NASA Astrophysics Data System (ADS)

    Meyer, Philipp; Barkai, Eli; Kantz, Holger

    2017-12-01

    In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time-averaged and ensemble-averaged mean-squared displacements are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between the time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by $\langle \overline{\delta^2} \rangle \sim 2 D_\nu t^{\beta} \Delta^{\nu-\beta}$, where $t$ is the total measurement time and $\Delta$ is the lag time. Here $\nu$ is the anomalous diffusion exponent obtained from ensemble-averaged measurements $\langle x^2 \rangle \sim t^\nu$, while $\beta \ge -1$ marks the growth or decline of the kinetic energy, $\langle v^2 \rangle \sim t^\beta$. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function, and similarly for the transport constant $D_\nu$. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite ($\beta = 0$), the time scalings of $\langle \overline{\delta^2} \rangle$ and $\langle x^2 \rangle$ are identical; however, the time-averaged transport coefficient $D_\nu$ is not identical to the corresponding ensemble-averaged diffusion constant.
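    The distinction drawn above is between the time-averaged MSD, computed along one long trajectory as a function of the lag Δ, and the ensemble-averaged MSD, computed across many trajectories at a fixed time. Both estimators are a few lines of numpy; the Brownian toy trajectories (the ergodic reference case) are an assumption of the example.

        import numpy as np

        rng = np.random.default_rng(4)

        def ta_msd(x, lag):
            """Time-averaged MSD of one trajectory at lag Δ:
            mean over t of [x(t + Δ) - x(t)]^2, the usual
            single-particle-tracking estimator."""
            d = x[lag:] - x[:-lag]
            return np.mean(d ** 2)

        # 200 Brownian trajectories, 10,000 steps each.
        trajs = np.cumsum(rng.standard_normal((200, 10_000)), axis=1)

        lag = 50
        ta = np.array([ta_msd(x, lag) for x in trajs])    # one value per particle
        ea = np.mean((trajs[:, lag] - trajs[:, 0]) ** 2)  # ensemble MSD at t = lag
        # For Brownian motion ta.mean() and ea agree; for ageing systems with
        # growing or decaying kinetic energy (β != 0) they scale differently,
        # as in the relation quoted in the abstract above.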
    If the averaged kinetic energy is finite, β = 0, the time scalings of 〈δ²(Δ)〉 and 〈x²〉 are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.
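The two averages compared in this record are straightforward to compute from trajectory data. A minimal sketch for ordinary Brownian motion, the ergodic baseline (β = 0, ν = 1) where the two estimators agree; for the scale-invariant nonstationary processes treated in the paper the same two estimators would diverge. Function names and parameters are assumptions:

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged MSD of one trajectory x at a given lag (in steps)."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

def eamsd(X, lag):
    """Ensemble-averaged MSD over trajectories X with shape (N, T)."""
    return np.mean((X[:, lag] - X[:, 0]) ** 2)

rng = np.random.default_rng(1)
N, T = 200, 10_000
X = np.cumsum(rng.normal(size=(N, T)), axis=1)   # Brownian motion, D = 1/2

for lag in (10, 100, 1000):
    ta = np.mean([tamsd(x, lag) for x in X])     # TAMSD, then averaged over members
    ea = eamsd(X, lag)
    print(lag, ta, ea)   # for ergodic Brownian motion the two estimates agree
```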
  62. Creating "Intelligent" Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, Noel; Taylor, Patrick

    2014-05-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean-state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles, with the intention of producing improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables, e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and the Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations, including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to inform better methods for ensemble averaging and to create better climate predictions.
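A performance-weighted ensemble mean of this kind can be sketched in a few lines. The choice of inverse-squared-error weights, the OLR-versus-surface-temperature slope as the metric, and all numbers below are assumptions for illustration, not the authors' framework:

```python
import numpy as np

def process_metric_weights(slopes_model, slope_obs, eps=1e-6):
    """Weight models by closeness of a process-based metric (e.g. the
    regression slope between OLR and surface temperature) to observations.
    Inverse-squared-error weighting is one simple, assumed choice."""
    err = np.abs(np.asarray(slopes_model) - slope_obs)
    w = 1.0 / (err ** 2 + eps)
    return w / w.sum()

# Toy usage: three models' metric values against an observed value.
slopes = [1.8, 2.3, 3.1]                   # model-derived metrics (hypothetical)
w = process_metric_weights(slopes, slope_obs=2.0)
projections = np.array([2.9, 3.4, 4.2])   # e.g. warming projections (hypothetical)
print(w, w @ projections)                 # weighted ("intelligent") ensemble average
```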
  63. Measurement of Single Macromolecule Orientation by Total Internal Reflection Fluorescence Polarization Microscopy

    PubMed Central

    Forkey, Joseph N.; Quinlan, Margot E.; Goldman, Yale E.

    2005-01-01

    A new approach is presented for measuring the three-dimensional orientation of individual macromolecules using single molecule fluorescence polarization (SMFP) microscopy. The technique uses the unique polarizations of evanescent waves generated by total internal reflection to excite the dipole moment of individual fluorophores. To evaluate the new SMFP technique, single molecule orientation measurements from sparsely labeled F-actin are compared to ensemble-averaged orientation data from similarly prepared densely labeled F-actin. Standard deviations of the SMFP measurements taken at 40 ms time intervals indicate that the uncertainty for individual measurements of axial and azimuthal angles is ~10° at 40 ms time resolution. Comparison with ensemble data shows there are no substantial systematic errors associated with the single molecule measurements. In addition to evaluating the technique, the data also provide a new measurement of the torsional rigidity of F-actin. These measurements support the smaller of two values of the torsional rigidity of F-actin previously reported. PMID:15894632

  64. EMC Global Climate and Weather Modeling Branch Personnel

    Science.gov Websites

    Comparison statistics, which include: NCEP raw and bias-corrected ensemble domain-averaged bias; NCEP raw and bias-corrected ensemble domain-averaged bias reduction (percents); CMC raw and bias-corrected control forecast domain-averaged bias; CMC raw and bias-corrected control forecast domain-averaged bias reduction.

  65. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished by segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. Weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of the random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted using turbulent velocity signals.
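The equivalent-ensemble construction translates directly into code: segment one long record into sample records, treat them as ensemble members, and check that ensemble averages do not depend on the within-segment time. A crude sketch follows; the segment count, the 3-sigma criterion, and the surrogate signal are my assumptions, and the paper's variance tests are more specific than this:

```python
import numpy as np

def equivalent_ensemble(x, n_segments):
    """Segment one long time history into equal, finite sample records."""
    L = len(x) // n_segments
    return x[:n_segments * L].reshape(n_segments, L)

def stationarity_check(x, n_segments=32):
    """Crude weak-stationarity check: the ensemble mean and mean square,
    taken across segments, should not depend on the within-segment time."""
    ens = equivalent_ensemble(x, n_segments)
    mean_t = ens.mean(axis=0)             # equivalent-ensemble average vs. time
    msq_t = (ens ** 2).mean(axis=0)
    # Compare time variation against the sampling scatter expected from
    # averaging n_segments independent records.
    scatter = ens.std(axis=0, ddof=1) / np.sqrt(n_segments)
    return np.abs(mean_t - mean_t.mean()) < 3 * scatter, msq_t

rng = np.random.default_rng(2)
signal = rng.normal(size=2 ** 15)         # surrogate "turbulent" record
ok, _ = stationarity_check(signal)
print(ok.mean())                          # fraction of times passing the check
```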
  66. Decorrelation correction for nanoparticle tracking analysis of dilute polydisperse suspensions in bulk flow

    NASA Astrophysics Data System (ADS)

    Hartman, John; Kirby, Brian

    2017-03-01

    Nanoparticle tracking analysis, a multiprobe single particle tracking technique, is a widely used method to quickly determine the concentration and size distribution of colloidal particle suspensions. Many popular tools remove non-Brownian components of particle motion by subtracting the ensemble-average displacement at each time step, which is termed dedrifting. Though critical for accurate size measurements, dedrifting is shown here to introduce significant biasing error and can fundamentally limit the dynamic range of particle size that can be measured for dilute heterogeneous suspensions such as biological extracellular vesicles. We report a more accurate estimate of particle mean-square displacement, which we call decorrelation analysis, that accounts for correlations between individual and ensemble particle motion, which are spuriously introduced by dedrifting. Particle tracking simulation and experimental results show that this approach more accurately determines particle diameters for low-concentration polydisperse suspensions when compared with standard dedrifting techniques.

  67. Climate Model Ensemble Methodology: Rationale and Challenges

    NASA Astrophysics Data System (ADS)

    Vezer, M. A.; Myrvold, W.

    2012-12-01

    A tractable model of the Earth's atmosphere, or, indeed, any large, complex system, is inevitably unrealistic in a variety of ways. This will have an effect on the model's output. Nonetheless, we want to be able to rely on certain features of the model's output in studies aiming to detect, attribute, and project climate change. For this, we need assurance that these features reflect the target system and are not artifacts of the unrealistic assumptions that go into the model. One technique for overcoming these limitations is to study ensembles of models which employ different simplifying assumptions and different methods of modelling. One then either takes as reliable certain outputs on which models in the ensemble agree, or takes the average of these outputs as the best estimate. Since the Intergovernmental Panel on Climate Change's Fourth Assessment Report (IPCC AR4), modellers have aimed to improve ensemble analysis by developing techniques to account for dependencies among models and to ascribe unequal weights to models according to their performance. The goal of this paper is to present as clearly and cogently as possible the rationale for climate model ensemble methodology, the motivation of modellers to account for model dependencies, and their efforts to ascribe unequal weights to models. The method of our analysis is as follows. We consider a simpler, well-understood case of taking the mean of a number of measurements of some quantity. Contrary to what is sometimes said, it is not a requirement of this practice that the errors of the component measurements be independent; one must, however, compensate for any lack of independence. We also extend the usual accounts to include cases of unknown systematic error. We draw parallels between this simpler illustration and the more complex example of climate model ensembles, detailing how ensembles can provide more useful information than any of their constituent models. This account emphasizes the epistemic importance of considering degrees of model dependence, and the practice of ascribing unequal weights to models of unequal skill.

  68. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine-grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass-density-weighted, ensemble-averaged turbulent mean variables. The transport equation for the APDF can be derived in two ways. One is the traditional way that starts from the transport equation of the FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of the APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of the APDF is to start directly from the ensemble-averaged Navier-Stokes equations. The resulting transport equation of the APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow.
    It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  69. MSEBAG: a dynamic classifier ensemble generation based on 'minimum-sufficient ensemble' and bagging

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.

  70. Single-Molecule Fluorescence Imaging for Studying Organic, Organometallic, and Inorganic Reaction Mechanisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blum, Suzanne A.

    2016-05-24

    The reactive behavior of individual molecules is seldom observed, because we usually measure the average properties of billions of molecules. What we miss is important: the catalytic activity of less than 1% of the molecules under observation can dominate the outcome of a chemical reaction seen at a macroscopic level. Currently available techniques to examine reaction mechanisms (such as nuclear magnetic resonance spectroscopy and mass spectrometry) study molecules as an averaged ensemble. These ensemble techniques are unable to detect minor components (under ~1%) in mixtures or determine which components in the mixture are responsible for reactivity and catalysis.
    In the field of mechanistic chemistry, there is a resulting heuristic device that if an intermediate is very reactive in catalysis, it often cannot be observed (termed "Halpern's Rule"). Ultimately, the development of single-molecule imaging technology could be a powerful tool to observe these "unobservable" intermediates and active catalysts. Single-molecule techniques have already transformed biology and the understanding of biochemical processes. The potential of single-molecule fluorescence microscopy to address diverse chemical questions, such as the chemical reactivity of organometallic or inorganic systems with discrete metal complexes, however, has not yet been realized. In this respect, its application to chemical systems lags significantly behind its application to biophysical systems. This transformative imaging technique has broad, multidisciplinary impact with the potential to change the way the chemistry community studies reaction mechanisms and reactivity distributions, especially in the core area of catalysis.

  71. Impact of Bias-Correction Type and Conditional Training on Bayesian Model Averaging over the Northeast United States

    Treesearch

    Michael J. Erickson; Brian A. Colle; Joseph J. Charney

    2012-01-01

    The performance of a multimodel ensemble over the northeast United States is evaluated before and after applying bias correction and Bayesian model averaging (BMA). The 13-member Stony Brook University (SBU) ensemble at 0000 UTC is combined with the 21-member National Centers for Environmental Prediction (NCEP) Short-Range Ensemble Forecast (SREF) system at 2100 UTC....

  72. The Weighted-Average Lagged Ensemble

    PubMed

    DelSole, T.; Trenary, L.; Tippett, M. K.

    2017-11-01

    A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to the different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time.
    The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
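The sum-to-one weights of a weighted-average lagged ensemble have a closed form once the error covariance C across lead times is known: minimizing the expected squared error of the weighted mean with a Lagrange multiplier gives w = C⁻¹1 / (1ᵀC⁻¹1). A small sketch reproducing both regimes described above; the covariance matrices are invented for illustration:

```python
import numpy as np

def optimal_lagged_weights(C):
    """Weights minimizing E[(sum_i w_i e_i)^2] subject to sum(w) = 1,
    where C[i, j] = E[e_i e_j] is the error covariance across lead times.
    Lagrange multipliers give w = C^{-1} 1 / (1^T C^{-1} 1)."""
    ones = np.ones(C.shape[0])
    a = np.linalg.solve(C, ones)
    return a / (ones @ a)

# Uncorrelated errors growing with lead time: weights are positive and
# decay monotonically, as in the idealized limit described above.
print(optimal_lagged_weights(np.diag([1.0, 2.0, 4.0])))

# Rapidly growing, highly correlated errors: a negative weight appears.
C = np.array([[1.0, 1.9, 0.0],
              [1.9, 4.0, 0.0],
              [0.0, 0.0, 9.0]])
print(optimal_lagged_weights(C))   # ~[1.69, -0.72, 0.04]
```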
  73. Ionospheric Storm Reconstructions with a Multimodel Ensemble Prediction System (MEPS) of Data Assimilation Models: Mid and Low Latitude Dynamics

    NASA Astrophysics Data System (ADS)

    Schunk, R. W.; Scherliess, L.; Eccles, V.; Gardner, L. C.; Sojka, J. J.; Zhu, L.; Pi, X.; Mannucci, A. J.; Komjathy, A.; Wang, C.; Rosen, G.

    2016-12-01

    As part of the NASA-NSF Space Weather Modeling Collaboration, we created a Multimodel Ensemble Prediction System (MEPS) for the Ionosphere-Thermosphere-Electrodynamics system that is based on data assimilation (DA) models. MEPS is composed of seven physics-based data assimilation models that cover the globe. Ensemble modeling can be conducted for the mid-low latitude ionosphere using the four GAIM data assimilation models, including the Gauss Markov (GM), Full Physics (FP), Band Limited (BL) and 4DVAR DA models. These models can assimilate Total Electron Content (TEC) from a constellation of satellites, bottom-side electron density profiles from digisondes, in situ plasma densities, occultation data and ultraviolet emissions. The four GAIM models were run for the March 16-17, 2013, geomagnetic storm period with the same data, but we also systematically added new data types and re-ran the GAIM models to see how the different data types affected the GAIM results, with the emphasis on elucidating differences in the underlying ionospheric dynamics and thermospheric coupling. Also, for each scenario the outputs from the four GAIM models were used to produce an ensemble mean for TEC, NmF2, and hmF2. A simple average of the models was used in the ensemble averaging to see if there was an improvement of the ensemble average over the individual models. For the scenarios considered, the ensemble average yielded better specifications than the individual GAIM models. The model differences and averages, and the consequent differences in ionosphere-thermosphere coupling and dynamics, will be discussed.

  74. Using ensembles in water management: forecasting dry and wet episodes

    NASA Astrophysics Data System (ADS)

    van het Schip-Haverkamp, Tessa; van den Berg, Wim; van de Beek, Remco

    2015-04-01

    Extreme weather situations such as droughts and extensive precipitation are becoming more frequent, which makes it more important to obtain accurate weather forecasts for the short and long term. Ensembles can provide a solution in terms of scenario forecasts. MeteoGroup uses ensembles in a new forecasting technique which presents a number of weather scenarios for a dynamical water management project, called Water-Rijk, in which water storage and water retention play a large role. The Water-Rijk is part of Park Lingezegen, which is located between Arnhem and Nijmegen in the Netherlands. In collaboration with the University of Wageningen, Alterra and Eijkelkamp, a forecasting system was developed for this area which can provide water boards with a number of weather and hydrology scenarios in order to assist in the decision whether or not water retention or water storage is necessary in the near future. In order to forecast drought and extensive precipitation, the difference 'precipitation − evaporation' is used as a measure of drought in the weather forecasts. In case of an upcoming drought this difference will take larger negative values. In case of a wet episode, this difference will be positive. The Makkink potential evaporation is used, which gives the most accurate potential evaporation values during the summer, when evaporation plays an important role in the availability of surface water. Scenarios are determined by reducing the large number of forecasts in the ensemble to a number of averaged members, each with its own likelihood of occurrence. For the Water-Rijk project, 5 scenario forecasts are calculated: extreme dry, dry, normal, wet and extreme wet. These scenarios are constructed for two forecasting periods, each using its own ensemble technique: up to 48 hours ahead and up to 15 days ahead. The 48-hour forecast uses an ensemble constructed from forecasts of multiple high-resolution regional models: UKMO's Euro4 model, the ECMWF model, WRF and Hirlam. Using multiple model runs and additional post-processing, an ensemble can be created from non-ensemble models. The 15-day forecast uses the ECMWF Ensemble Prediction System forecast, from which scenarios can be deduced directly. A combination of the ensembles from the two forecasting periods is used in order to have the highest possible resolution for the first 48 hours, followed by the lower-resolution long-term forecast.
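One way to reduce an ensemble to a small set of weighted scenarios, as described here, is to rank members on the cumulative 'precipitation − evaporation' balance and average within probability bands. The band fractions, the surrogate ensemble, and the function below are assumptions, not MeteoGroup's operational procedure:

```python
import numpy as np

def scenario_reduction(members, bands=(0.1, 0.25, 0.3, 0.25, 0.1)):
    """Collapse an ensemble (n_members, n_leadtimes) into len(bands)
    scenarios by sorting members on their final P - E balance and
    averaging within consecutive probability bands (bands sum to 1)."""
    order = np.argsort(members[:, -1])            # driest ... wettest
    sorted_m = members[order]
    edges = np.cumsum((0,) + bands) * len(members)
    scenarios = [sorted_m[int(a):int(b)].mean(axis=0)
                 for a, b in zip(edges[:-1], edges[1:])]
    return np.array(scenarios), np.array(bands)   # scenario tracks + likelihoods

rng = np.random.default_rng(3)
pe = np.cumsum(rng.normal(-0.2, 1.0, size=(51, 15)), axis=1)  # mm/day balance
scen, prob = scenario_reduction(pe)
print(scen[:, -1], prob)   # extreme dry ... extreme wet endpoints
```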
  75. An interplanetary magnetic field ensemble at 1 AU

    NASA Technical Reports Server (NTRS)

    Matthaeus, W. H.; Goldstein, M. L.; King, J. H.

    1985-01-01

    A method for calculating ensemble averages from magnetic field data is described. A data set comprising approximately 16 months of nearly continuous ISEE-3 magnetic field data is used in this study. Individual subintervals of these data, ranging from 15 hours to 15.6 days, comprise the ensemble. The sole condition for including each subinterval in the averages is the degree to which it represents a weakly time-stationary process. Averages obtained by this method are appropriate for a turbulence description of the interplanetary medium. The ensemble average correlation length obtained from all subintervals is found to be 4.9 × 10^11 cm. The average values of the variances of the magnetic field components are in the approximate ratio 8:9:10, where the third component is along the local mean field direction. The correlation lengths and variances are found to have a systematic variation with subinterval duration, reflecting the important role of low-frequency fluctuations in the interplanetary medium.
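A turbulence correlation length of the kind quoted in this record is commonly estimated by integrating the normalized autocorrelation of a field component up to its first zero crossing, then converting the correlation time to a distance with the solar wind speed (Taylor's hypothesis). The sketch below is a generic estimator of that type, not the authors' exact procedure; the cutoff rule, the AR(1) surrogate, and the 400 km/s speed are assumptions:

```python
import numpy as np

def correlation_length(b, dt, speed=4e7):
    """Estimate a correlation length (cm) from one field component b,
    sampled every dt seconds, with solar wind speed in cm/s (~400 km/s).
    Integrates the normalized autocorrelation up to its first zero."""
    f = b - b.mean()
    acf = np.correlate(f, f, mode="full")[len(f) - 1:]
    acf /= acf[0]                                  # normalize to R(0) = 1
    zero = np.argmax(acf <= 0.0)                   # first zero crossing
    if zero == 0:                                  # no crossing found
        zero = len(acf)
    tau_c = np.trapz(acf[:zero], dx=dt)            # correlation time (s)
    return speed * tau_c

rng = np.random.default_rng(4)
# Surrogate signal: AR(1) with correlation time dt/(1 - r) = 6000 s,
# so the estimate should come out near 4e7 * 6000 ~ 2.4e11 cm.
r, n = 0.99, 20_000
noise = rng.normal(size=n)
b = np.empty(n); b[0] = 0.0
for i in range(1, n):
    b[i] = r * b[i - 1] + noise[i]
print(f"{correlation_length(b, dt=60.0):.3e} cm")
```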
  76. NIMEFI: gene regulatory network inference using multiple ensemble feature importance algorithms

    PubMed

    Ruyssinck, Joeri; Huynh-Thu, Vân Anh; Geurts, Pierre; Dhaene, Tom; Demeester, Piet; Saeys, Yvan

    2014-01-01

    One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems for each gene in the network, in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene, and a high feature importance is considered as putative evidence of a regulatory link existing between both genes. The contribution of this work is twofold. First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach which allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As second contribution, we explore the effect of using rankwise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that this approach outperforms all individual methods in general, although on a specific network a single method can perform better. An implementation of NIMEFI has been made publicly available.

  77. Perceived Average Orientation Reflects Effective Gist of the Surface

    PubMed

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  78. Long-time correlation for the chaotic orbit in the two-wave Hamiltonian

    NASA Astrophysics Data System (ADS)

    Hatori, Tadatsugu; Irie, Haruyuki

    1987-03-01

    The time correlation function of velocity is found to decay with a power law for the orbit governed by the Hamiltonian H = v²/2 − M cos x − P cos(k(x − t)). The renormalization group technique can predict the power of decay for the correlation function defined by the ensemble average. The power spectrum becomes of the 1/f type for a special case.

  79. Center for Advanced Propulsion Systems

    DTIC Science & Technology

    1993-02-01

    breakup model for two chamber pressures. 7.5.4 Exciplex images for a single main injection. Images are 126 ensemble averaged for 8 individual images. Times...to obtain data using an electronic fuel injector (UCORS).
    Exciplex fluorescence and photographic imaging were used to study liquid and vapor...later paper, (Bower and Foster, 1993) in the same combustion bomb, the authors applied Exciplex fluorescence techniques to visualize fuel liquid and fuel

  80. Ensemble averaged structure–function relationship for nanocrystals: effective superparamagnetic Fe clusters with catalytically active Pt skin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petkov, Valeri; Prasai, Binay; Shastri, Sarvjit

    2017-09-12

    Practical applications require the production and usage of metallic nanocrystals (NCs) in large ensembles. Besides, due to their cluster-bulk solid duality, metallic NCs exhibit a large degree of structural diversity. This poses the question as to what atomic-scale basis is to be used when the structure–function relationship for metallic NCs is to be quantified precisely. In this paper, we address the question by studying bi-functional Fe core-Pt skin type NCs optimized for practical applications. In particular, the cluster-like Fe core and skin-like Pt surface of the NCs exhibit superparamagnetic properties and a superb catalytic activity for the oxygen reduction reaction, respectively. We determine the atomic-scale structure of the NCs by non-traditional resonant high-energy X-ray diffraction coupled to atomic pair distribution function analysis. Using the experimental structure data we explain the observed magnetic and catalytic behavior of the NCs in a quantitative manner.
    Lastly, we demonstrate that NC ensemble-averaged 3D positions of atoms obtained by advanced X-ray scattering techniques are a proper basis not only for establishing but also for quantifying the structure–function relationship for the increasingly complex metallic NCs explored for practical applications.

  81. Ensemble averaging and stacking of ARIMA and GSTAR model for rainfall forecasting

    NASA Astrophysics Data System (ADS)

    Anggraeni, D.; Kurnia, I. F.; Hadi, A. F.

    2018-04-01

    Unpredictable rainfall changes can affect human activities, such as agriculture, aviation and shipping, which depend on weather forecasts. Therefore, we need forecasting tools with high accuracy in predicting future rainfall. This research focuses on local forecasting of rainfall at Jember from 2005 until 2016, from 77 rainfall stations. Rainfall here is related not only to previous occurrences at the same station, but also to occurrences at other stations; this is called the spatial effect. The aim of this research is to apply the GSTAR model to determine whether there are correlations of the spatial effect between one station and another. The GSTAR model is an expansion of the space-time model that combines time-related effects, the effects of time series at other locations (stations), and the location itself. The GSTAR model is also compared to the ARIMA model, which completely ignores the spatial effect. The forecast values of the ARIMA and GSTAR models are then combined using ensemble forecasting techniques. The averaging and stacking methods of ensemble forecasting provide the best model here: a model with higher accuracy and a smaller RMSE (root mean square error) value.
    Finally, with the best model we can offer better local rainfall forecasting in Jember for the future.
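Ensemble averaging and stacking of point forecasts, as combined in this record, reduce to a mean and a regression. A minimal sketch with stand-in member models; the data and members below merely play the roles of the ARIMA and GSTAR fits and are not the paper's models:

```python
import numpy as np

def ensemble_average(F):
    """Unweighted ensemble mean of member forecasts F with shape (T, M)."""
    return F.mean(axis=1)

def stack_weights(F, y):
    """Stacking: least-squares combination weights fit on a validation set."""
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    return w

rng = np.random.default_rng(5)
truth = np.sin(np.linspace(0, 8, 240)) + 0.1 * rng.normal(size=240)
# Two stand-in "models": one biased, one noisy.
F = np.column_stack([truth + 0.3,
                     truth + 0.4 * rng.normal(size=240)])

w = stack_weights(F[:120], truth[:120])          # learn weights on first half
avg = ensemble_average(F[120:])                  # equal-weight combination
stk = F[120:] @ w                                # stacked combination
rmse = lambda p: np.sqrt(np.mean((p - truth[120:]) ** 2))
print(rmse(avg), rmse(stk))
```

Stacking generalizes the equal-weight average: when one member is systematically biased, the learned weights usually beat the plain mean on RMSE, which mirrors the comparison reported in the record.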
  82. A two-dimensional numerical study of the flow inside the combustion chambers of a motored rotary engine

    NASA Technical Reports Server (NTRS)

    Shih, T. I. P.; Yang, S. L.; Schock, H. J.

    1986-01-01

    A numerical study was performed to investigate the unsteady, multidimensional flow inside the combustion chambers of an idealized, two-dimensional, rotary engine under motored conditions. The numerical study was based on the time-dependent, two-dimensional, density-weighted, ensemble-averaged conservation equations of mass, species, momentum, and total energy valid for two-component ideal gas mixtures. The ensemble-averaged conservation equations were closed by a K-epsilon model of turbulence. This K-epsilon model of turbulence was modified to account for some of the effects of compressibility, streamline curvature, low Reynolds number, and preferential stress dissipation. Numerical solutions to the conservation equations were obtained by the highly efficient implicit-factored method of Beam and Warming. The grid system needed to obtain solutions was generated by an algebraic grid generation technique based on transfinite interpolation. Results of the numerical study are presented in graphical form illustrating the flow patterns during intake, compression, gaseous fuel injection, expansion, and exhaust.

  83. A two-dimensional numerical study of the flow inside the combustion chamber of a motored rotary engine

    NASA Technical Reports Server (NTRS)

    Shih, T. I-P.; Yang, S. L.; Schock, H. J.

    1986-01-01

    A numerical study was performed to investigate the unsteady, multidimensional flow inside the combustion chambers of an idealized, two-dimensional, rotary engine under motored conditions. The numerical study was based on the time-dependent, two-dimensional, density-weighted, ensemble-averaged conservation equations of mass, species, momentum, and total energy valid for two-component ideal gas mixtures. The ensemble-averaged conservation equations were closed by a K-epsilon model of turbulence. This K-epsilon model of turbulence was modified to account for some of the effects of compressibility, streamline curvature, low Reynolds number, and preferential stress dissipation. Numerical solutions to the conservation equations were obtained by the highly efficient implicit-factored method of Beam and Warming. The grid system needed to obtain solutions was generated by an algebraic grid generation technique based on transfinite interpolation. Results of the numerical study are presented in graphical form illustrating the flow patterns during intake, compression, gaseous fuel injection, expansion, and exhaust.

  84. Simulation studies of the fidelity of biomolecular structure ensemble recreation

    NASA Astrophysics Data System (ADS)

    Lätzer, Joachim; Eastwood, Michael P.; Wolynes, Peter G.

    2006-12-01

    We examine the ability of Bayesian methods to recreate structural ensembles for partially folded molecules from averaged data. Specifically, we test the ability of various algorithms to recreate different transition state ensembles for folding proteins using a multiple replica simulation algorithm, with input from "gold standard" reference ensembles that were first generated with a Gō-like Hamiltonian having nonpairwise additive terms. A set of low resolution data, which function as the "experimental" ϕ values, was first constructed from this reference ensemble. The resulting ϕ values were then treated as one would treat laboratory experimental data and were used as input in the replica reconstruction algorithm. The resulting ensembles of structures obtained by the replica algorithm were compared to the gold standard reference ensemble from which those "data" were, in fact, obtained. It is found that for a unimodal transition state ensemble with a low barrier, the multiple replica algorithm does recreate the reference ensemble fairly successfully when no experimental error is assumed. The Kolmogorov-Smirnov test as well as principal component analysis show that the overlap of the recovered and reference ensembles is significantly enhanced when multiple replicas are used. Reduction of the multiple replica ensembles by clustering successfully yields subensembles with close similarity to the reference ensembles. On the other hand, for a high barrier transition state with two distinct transition state ensembles, the single replica algorithm only samples a few structures of one of the reference ensemble basins. This is due to the fact that the ϕ values are intrinsically ensemble averaged quantities. The replica algorithm with multiple copies does sample both reference ensemble basins. In contrast to the single replica case, the multiple replicas are constrained to reproduce the average ϕ values, but allow fluctuations in ϕ for each individual copy. These fluctuations facilitate a more faithful sampling of the reference ensemble basins. Finally, we test how robustly the reconstruction algorithm can function by introducing errors in ϕ comparable in magnitude to those suggested by some authors. In this circumstance we observe that the chances of ensemble recovery with the replica algorithm are poor using a single replica, but are improved when multiple copies are used.
    A multimodal transition state ensemble, however, turns out to be more sensitive to large errors in ϕ (if appropriately gauged), and attempts at successful recreation of the reference ensemble with simple replica algorithms can fall short.

  85. Implicit ligand theory for relative binding free energies

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung Hai; Minh, David D. L.

    2018-03-01

    Implicit ligand theory enables noncovalent binding free energies to be calculated based on an exponential average of the binding potential of mean force (BPMF), the binding free energy between a flexible ligand and a rigid receptor, over a precomputed ensemble of receptor configurations. In the original formalism, receptor configurations were drawn from or reweighted to the apo ensemble. Here we show that BPMFs averaged over a holo ensemble yield binding free energies relative to the reference ligand that specifies the ensemble. When using receptor snapshots from an alchemical simulation with a single ligand, the new statistical estimator outperforms the original.

  86. Reproducing the Ensemble Average Polar Solvation Energy of a Protein from a Single Structure: Gaussian-Based Smooth Dielectric Function for Macromolecular Modeling

    PubMed

    Chakravorty, Arghya; Jia, Zhe; Li, Lin; Zhao, Shan; Alexov, Emil

    2018-02-13

    Typically, the ensemble average polar component of the solvation energy (ΔG_solv^polar) of a macromolecule is computed using molecular dynamics (MD) or Monte Carlo (MC) simulations to generate a conformational ensemble, and then a single/rigid-conformation solvation energy calculation is performed on each snapshot. The primary objective of this work is to demonstrate that a Poisson-Boltzmann (PB)-based approach using a Gaussian-based smooth dielectric function for macromolecular modeling, previously developed by us (Li et al., J. Chem. Theory Comput. 2013, 9 (4), 2126-2136), can reproduce the ensemble average ΔG_solv^polar of a protein from a single structure. We show that the Gaussian-based dielectric model reproduces the ensemble average ΔG_solv^polar (⟨ΔG_solv^polar⟩) from an energy-minimized structure of a protein regardless of the minimization environment (structure minimized in vacuo, implicit or explicit waters, or crystal structure); the best case, however, is when it is paired with an in vacuo-minimized structure. In other minimization environments (implicit or explicit waters or crystal structure), the traditional two-dielectric model can still be selected, with which the model produces correct solvation energies.
    Our observations from this work reflect how the ability to appropriately mimic the motion of residues, especially the salt-bridge residues, influences a dielectric model's ability to reproduce the ensemble average value of the polar solvation free energy from a single in vacuo-minimized structure.

  87. HIPPI: highly accurate protein family classification with ensembles of HMMs

    PubMed

    Nguyen, Nam-Phuong; Nute, Michael; Mirarab, Siavash; Warnow, Tandy

    2016-11-11

    Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile hidden Markov models can better represent multiple sequence alignments than a single profile hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp .

  88. Ensemble predictive model for more accurate soil organic carbon spectroscopic estimation

    NASA Astrophysics Data System (ADS)

    Vašát, Radim; Kodešová, Radka; Borůvka, Luboš

    2017-07-01

    A myriad of signal pre-processing strategies and multivariate calibration techniques has been explored in attempt to improve the spectroscopic prediction of soil organic carbon (SOC) over the last few decades. Coming up with a novel, more powerful and accurate predictive approach has therefore become a challenging task.
    One way forward is to combine several individual predictions into a single final one (following ensemble learning theory). As this approach performs best when combining inherently different predictive algorithms calibrated with structurally different predictor variables, we tested predictors of two different kinds: 1) reflectance values (or transforms) at each wavelength and 2) absorption feature parameters. Consequently, we applied four different calibration techniques, two for each type of predictor: a) partial least squares regression and support vector machines for type 1, and b) multiple linear regression and random forest for type 2. The weights to be assigned to individual predictions within the ensemble model (constructed as a weighted average) were determined by an automated procedure that ensured the best solution among all possible was selected. The approach was tested on soil samples taken from the surface horizons of four sites differing in the prevailing soil units. By employing the ensemble predictive model, the prediction accuracy of SOC improved at all four sites. The coefficient of determination in cross-validation (R²cv) increased from 0.849, 0.611, 0.811 and 0.644 (the best individual predictions) to 0.864, 0.650, 0.824 and 0.698 for Sites 1, 2, 3 and 4, respectively. Generally, the ensemble model affected the final prediction such that the maximal deviations of predicted vs. observed values of the individual predictions were reduced, and thus the correlation cloud became thinner, as desired.

  89. Ergodicity Breaking in Geometric Brownian Motion

    NASA Astrophysics Data System (ADS)

    Peters, O.; Klein, W.

    2013-03-01

    Geometric Brownian motion (GBM) is a model for systems as varied as financial instruments and populations. The statistical properties of GBM are complicated by nonergodicity, which can lead to ensemble averages exhibiting exponential growth while any individual trajectory collapses according to its time average. A common tactic for bringing time averages closer to ensemble averages is diversification. In this Letter, we study the effects of diversification using the concept of ergodicity breaking.
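The ensemble/time-average split in GBM can be demonstrated in a few lines: the ensemble mean grows like exp(μt), while the typical trajectory, which the time average tracks, goes like exp((μ − σ²/2)t) and collapses whenever σ²/2 > μ. The parameter values below are assumptions chosen to make both effects visible:

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma, T, N = 0.05, 0.4, 25.0, 10_000

# Terminal log-value of GBM started at x0 = 1:
# log x(T) ~ Normal((mu - sigma^2/2) T, sigma^2 T)
log_x = (mu - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * rng.normal(size=N)

print(np.exp(log_x).mean())      # ensemble average: near exp(mu*T) ~ 3.5, grows
print(np.exp(np.median(log_x)))  # typical trajectory: near exp((mu - sigma^2/2)*T) ~ 0.47
```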
  90. A Statistical Description of Neural Ensemble Dynamics

    PubMed Central

    Long, John D.; Carmena, Jose M.

    2011-01-01

    The growing use of multi-channel neural recording techniques in behaving animals has produced rich datasets that hold immense potential for advancing our understanding of how the brain mediates behavior. One limitation of these techniques is that they do not provide important information about the underlying anatomical connections among the recorded neurons within an ensemble. Inferring these connections is often intractable because the set of possible interactions grows exponentially with ensemble size. This is a fundamental challenge one confronts when interpreting these data. Unfortunately, the combination of expert knowledge and ensemble data is often insufficient for selecting a unique model of these interactions. Our approach shifts away from modeling the network diagram of the ensemble toward analyzing changes in the dynamics of the ensemble as they relate to behavior. Our contribution consists of adapting techniques from signal processing and Bayesian statistics to track the dynamics of ensemble data on time-scales comparable with behavior. We employ a Bayesian estimator to weigh prior information against the available ensemble data, and use an adaptive quantization technique to aggregate poorly estimated regions of the ensemble data space. Importantly, our method is capable of detecting changes in both the magnitude and structure of correlations among neurons missed by firing rate metrics. We show that this method is scalable across a wide range of time-scales and ensemble sizes. Lastly, the performance of this method on both simulated and real ensemble data is used to demonstrate its utility. PMID:22319486

  91. Prediction of dosage-based parameters from the puff dispersion of airborne materials in urban environments using the CFD-RANS methodology

    NASA Astrophysics Data System (ADS)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.

    2018-02-01

    One of the key issues of recent research on dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters that are related to puff dispersion. RANS simulations with the ADREA-HF code were therefore performed, where a single puff was released in each case. The present method is validated against datasets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released, from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model presented a better performance for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of the validation metrics were inside established acceptance limits.
A Statistical Description of Neural Ensemble Dynamics

PubMed Central

Long, John D.; Carmena, Jose M.

2011-01-01

The growing use of multi-channel neural recording techniques in behaving animals has produced rich datasets that hold immense potential for advancing our understanding of how the brain mediates behavior. One limitation of these techniques is that they do not provide important information about the underlying anatomical connections among the recorded neurons within an ensemble. Inferring these connections is often intractable because the set of possible interactions grows exponentially with ensemble size. This is a fundamental challenge one confronts when interpreting these data. Unfortunately, the combination of expert knowledge and ensemble data is often insufficient for selecting a unique model of these interactions. Our approach shifts away from modeling the network diagram of the ensemble toward analyzing changes in the dynamics of the ensemble as they relate to behavior. Our contribution consists of adapting techniques from signal processing and Bayesian statistics to track the dynamics of ensemble data on time-scales comparable with behavior. We employ a Bayesian estimator to weigh prior information against the available ensemble data, and use an adaptive quantization technique to aggregate poorly estimated regions of the ensemble data space. Importantly, our method is capable of detecting changes in both the magnitude and structure of correlations among neurons that are missed by firing rate metrics. We show that this method is scalable across a wide range of time-scales and ensemble sizes. Lastly, the performance of this method on both simulated and real ensemble data is used to demonstrate its utility. PMID:22319486

Prediction of dosage-based parameters from the puff dispersion of airborne materials in urban environments using the CFD-RANS methodology

NASA Astrophysics Data System (ADS)

Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.

2018-02-01

One of the key issues of recent research on dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters related to puff dispersion. RANS simulations with the ADREA-HF code were therefore performed, where a single puff was released in each case. The method is validated against data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released, from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model presented a better performance for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of the validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the ADREA-HF code is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. It is also able to predict the ensemble-average dosage, although the dosage results should be treated with some caution, as in one case the observed ensemble-average dosage was under-estimated slightly beyond the acceptance criteria. The ensemble-average peak concentration was systematically underpredicted by the model, to a degree greater than allowed by the acceptance criteria, in one of the two wind-tunnel experiments. The model performance depended on the positions of the examined sensors relative to the emission source and the building configuration. The work presented in this paper was carried out (partly) within the scope of COST Action ES1006 "Evaluation, improvement, and guidance for the use of local-scale emergency prediction and response tools for airborne hazards in built environments".
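The three validation metrics named in this record have standard definitions in the dispersion-model evaluation literature; the sketch below assumes those conventional forms (the paper's exact conventions may differ slightly).

    import numpy as np

    def fractional_bias(obs, pred):
        """FB = 0 for unbiased predictions; bounded by +/-2."""
        return 2.0 * np.mean(obs - pred) / (np.mean(obs) + np.mean(pred))

    def nmse(obs, pred):
        """Normalized mean square error; 0 for perfect predictions."""
        return np.mean((obs - pred) ** 2) / (np.mean(obs) * np.mean(pred))

    def fac2(obs, pred):
        """Fraction of predictions within a factor of two of observations."""
        ratio = pred / obs
        return np.mean((ratio >= 0.5) & (ratio <= 2.0))

Here obs and pred would hold the ensemble-averaged observed and predicted quantities (dosage, peak concentration, or the temporal parameters) at each sensor position.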
A non-parametric postprocessor for bias-correcting multi-model ensemble forecasts of hydrometeorological and hydrologic variables

NASA Astrophysics Data System (ADS)

Brown, James; Seo, Dong-Jun

2010-05-01

Operational forecasts of hydrometeorological and hydrologic variables often contain large uncertainties, for which ensemble techniques are increasingly used. However, the utility of ensemble forecasts depends on the unbiasedness of the forecast probabilities. We describe a technique for quantifying and removing biases from ensemble forecasts of hydrometeorological and hydrologic variables, intended for use in operational forecasting. The technique makes no a priori assumptions about the distributional form of the variables, which is often unknown or difficult to model parametrically. The aim is to estimate the conditional cumulative distribution function (ccdf) of the observed variable given a (possibly biased) real-time ensemble forecast from one or several forecasting systems (multi-model ensembles). The technique is based on Bayesian optimal linear estimation of indicator variables and is analogous to indicator cokriging (ICK) in geostatistics. By developing linear estimators for the conditional expectation of the observed variable at many thresholds, ICK provides a discrete approximation of the full ccdf. Since ICK minimizes the conditional error variance of the indicator expectation at each threshold, it effectively minimizes the Continuous Ranked Probability Score (CRPS) when infinitely many thresholds are employed. However, the ensemble members used as predictors in ICK, and in other bias-correction techniques, are often highly cross-correlated, both within and between models. We therefore propose an orthogonal transform of the predictors used in ICK, which is analogous to using their principal components in the linear system of equations. This leads to a well-posed problem in which a minimum number of predictors is used to provide maximum information content in terms of the total variance explained. The technique is used to bias-correct precipitation ensemble forecasts from the NCEP Global Ensemble Forecast System (GEFS), for which independent validation results are presented. An extension to multi-model ensembles from the NCEP GFS and Short Range Ensemble Forecast (SREF) systems is also proposed.
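As a rough illustration of the indicator idea (a sketch only, not the authors' Bayesian optimal linear estimator): over a training archive, one can regress the indicator I(observation <= threshold) on principal components of the ensemble members, then evaluate the fitted linear model for a new forecast to get a discrete approximation of the conditional CDF.

    import numpy as np

    def fit_indicator_model(ens_train, obs_train, thresholds, n_pc=3):
        """ens_train: (n_cases, n_members); obs_train: (n_cases,)."""
        mean = ens_train.mean(axis=0)
        centered = ens_train - mean
        # Orthogonal transform of cross-correlated predictors (PCA).
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        pcs = centered @ vt[:n_pc].T
        X = np.column_stack([np.ones(len(pcs)), pcs])
        coefs = []
        for t in thresholds:
            y = (obs_train <= t).astype(float)   # indicator variable
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            coefs.append(beta)
        return mean, vt[:n_pc], np.array(coefs)

    def predict_cdf(model, ens_new):
        mean, comps, coefs = model
        z = np.concatenate([[1.0], comps @ (ens_new - mean)])
        return np.clip(coefs @ z, 0.0, 1.0)   # CDF estimate at each threshold

A real implementation would also enforce monotonicity of the estimated CDF across thresholds; the clip here is only a crude guard.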
Reprint of "Investigating ensemble perception of emotions in autistic and typical children and adolescents".

PubMed

Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth

2018-01-01

Ensemble perception, the ability to automatically assess summary statistics of the large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

Ensemble perception of emotions in autistic and typical children and adolescents.

PubMed

Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth

2017-04-01

Ensemble perception, the ability to automatically assess summary statistics of the large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Supermodeling With A Global Atmospheric Model

NASA Astrophysics Data System (ADS)

Wiegerinck, Wim; Burgers, Willem; Selten, Frank

2013-04-01

In weather and climate prediction studies it often turns out that the multi-model ensemble mean prediction has the best prediction skill scores. One possible explanation is that the major part of the model error is random and is averaged out in the ensemble mean. In the standard multi-model ensemble approach, the models are integrated in time independently and the predicted states are combined a posteriori. Recently, an alternative ensemble prediction approach has been proposed in which the models exchange information during the simulation and synchronize on a common solution that is closer to the truth than any of the individual model solutions in the standard multi-model ensemble approach, or a weighted average of these. This approach is called the supermodeling approach (SUMO). Its potential has been demonstrated in the context of simple, low-order, chaotic dynamical systems. The information exchange takes the form of linear nudging terms in the dynamical equations that nudge the solution of each model toward the solutions of all other models in the ensemble. With a suitable choice of the connection strengths, the models synchronize on a common solution that is indeed closer to the true system than any of the individual model solutions without nudging. This approach is called connected SUMO. An alternative is to integrate a weighted-average model, called weighted SUMO: at each time step all models in the ensemble calculate their tendencies, these tendencies are averaged with weights, and the state is integrated one time step into the future with this weighted-average tendency. It was shown that, in case the connected SUMO synchronizes perfectly, it follows the weighted-average trajectory and both approaches yield the same solution. In this study we pioneer both approaches in the context of a global, quasi-geostrophic, three-level atmosphere model that is capable of simulating quite realistically the extra-tropical circulation in the Northern Hemisphere winter.
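The tendency-averaging step of weighted SUMO is easy to show on a toy system. The sketch below is an illustration with imperfect Lorenz-63 models and made-up weights, not the quasi-geostrophic setup of the study: a single shared state is integrated with the weighted-average tendency of all ensemble members.

    import numpy as np

    def lorenz_tendency(state, sigma, rho, beta):
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    # Three imperfect models: (sigma, rho, beta) perturbed around the truth.
    params = [(9.5, 29.0, 8 / 3), (10.5, 27.5, 2.9), (10.2, 28.5, 2.5)]
    weights = np.array([0.4, 0.3, 0.3])   # assumed trained weights, sum to 1

    state = np.array([1.0, 1.0, 1.0])
    dt = 0.01
    for _ in range(5_000):
        tendencies = np.array([lorenz_tendency(state, *p) for p in params])
        state = state + dt * weights @ tendencies   # Euler step on averaged tendency

    print("final supermodel state:", state)

In connected SUMO the members would instead keep separate states coupled by nudging terms; in the perfectly synchronized limit the two formulations coincide, as the abstract notes.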
A comparison of breeding and ensemble transform vectors for global ensemble generation

NASA Astrophysics Data System (ADS)

Deng, Guo; Tian, Hua; Li, Xiaoli; Chen, Jing; Gong, Jiandong; Jiao, Meiyan

2012-02-01

To compare the initial perturbation techniques using breeding vectors and ensemble transform vectors, three ensemble prediction systems using both initial perturbation methods but with different ensemble member sizes, based on the spectral model T213/L31, are constructed at the National Meteorological Center, China Meteorological Administration (NMC/CMA). A series of ensemble verification scores, such as forecast skill of the ensemble mean, ensemble resolution, and ensemble reliability, are introduced to identify the most important attributes of ensemble forecast systems. The results indicate that the ensemble transform technique is superior to the breeding vector method in light of the anomaly correlation coefficient (ACC), a deterministic attribute of the ensemble mean; the root-mean-square error (RMSE) and spread, which are probabilistic attributes; and the continuous ranked probability score (CRPS) and its decomposition. The advantage of the ensemble transform approach is attributed to the orthogonality among its ensemble perturbations as well as its consistency with the data assimilation system. This study may therefore serve as a reference for configuring the best operational ensemble prediction system.

Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework

NASA Astrophysics Data System (ADS)

Baker, N. C.; Taylor, P. C.

2014-12-01

The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean-state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables, e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and the Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. The weighted ensemble for this metric indicates lower simulated precipitation (up to 0.7 mm/day) in tropical regions than the unweighted ensemble: since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models which simulate more realistic precipitation. Ultimately, the goal of the framework is to identify performance metrics for informing better methods of ensemble averaging and thereby create better climate predictions.

Single-cell epigenomics: techniques and emerging applications.

PubMed

Schwartzman, Omer; Tanay, Amos

2015-12-01

Epigenomics is the study of the physical modifications, associations and conformations of genomic DNA sequences, with the aim of linking these with epigenetic memory, cellular identity and tissue-specific functions. While current techniques in the field characterize average epigenomic features across large cell ensembles, increasing interest in the epigenetics of complex and heterogeneous tissues is driving the development of single-cell epigenomics. We review emerging single-cell methods for capturing DNA methylation, chromatin accessibility, histone modifications, chromosome conformation and replication dynamics. Together, these techniques are rapidly becoming a powerful tool in studies of cellular plasticity and diversity, as seen in stem cells and cancer.
Ensemble coding remains accurate under object and spatial visual working memory load.

PubMed

Epstein, Michael L; Emmanouil, Tatiana A

2017-10-01

A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However, the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual-task design to test the effect of object and spatial visual working memory load on size-averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall, our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.

Adaptive correction of ensemble forecasts

NASA Astrophysics Data System (ADS)

Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane

2017-04-01

Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members, but the parameters of the regression equations are retrieved by exploiting the second-order statistics of the forecast ensemble. We compare the new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of also correcting the ensemble spread, although it needs more training data.
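To give the flavor of a sequential Kalman-filter correction of the kind described above, here is a deliberately simplified scalar sketch. The actual method retrieves regression parameters from the ensemble's second-order statistics; the noise variances q and r below, and the synthetic data, are illustrative assumptions.

    import numpy as np

    def kalman_bias_update(bias, p, ens_mean, obs, q=0.01, r=1.0):
        """One sequential update of an estimated forecast bias."""
        p = p + q                            # predict: uncertainty grows
        innovation = (ens_mean - obs) - bias
        k = p / (p + r)                      # Kalman gain
        bias = bias + k * innovation         # update the bias estimate
        p = (1.0 - k) * p
        return bias, p

    bias, p = 0.0, 1.0
    rng = np.random.default_rng(1)
    for day in range(100):                   # synthetic forecast/observation stream
        truth = 20.0 + 5.0 * np.sin(day / 10.0)
        ens = truth + 1.5 + 0.8 * rng.standard_normal(20)   # ensemble with +1.5 bias
        bias, p = kalman_bias_update(bias, p, ens.mean(), truth)

    corrected = ens.mean() - bias            # bias-corrected latest ensemble mean
    print(f"estimated bias: {bias:.2f}")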
Quantum teleportation between remote atomic-ensemble quantum memories.

PubMed

Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei

2012-12-11

Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a "quantum channel," quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895-1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ∼10^8 rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing.
Ensemble coding of face identity is present but weaker in congenital prosopagnosia.

PubMed

Robson, Matthew K; Palermo, Romina; Jeffery, Linda; Neumann, Markus F

2018-03-01

Individuals with congenital prosopagnosia (CP) are impaired at identifying individual faces but do not appear to show impairments in extracting the average identity from a group of faces (known as ensemble coding). However, possible deficits in ensemble coding in a previous study (n = 4 CPs) may have been masked because the CPs relied on pictorial (image) cues rather than identity cues. Here we asked whether a larger sample of CPs (n = 11) would show intact ensemble coding of identity when the availability of image cues was minimized. Participants viewed a "set" of four faces and then judged whether a subsequent individual test face, either an exemplar or a "set average", was in the preceding set. Ensemble coding occurred when matching (vs. mismatching) averages were mistakenly endorsed as set members. We assessed both image- and identity-based ensemble coding by varying whether test faces were the same or different images of the identities in the set. CPs showed significant ensemble coding in both tasks, indicating that their performance was independent of image cues. As a group, CPs' ensemble coding was weaker than that of controls in both tasks, consistent with evidence that perceptual processing of face identity is disrupted in CP. This effect was driven by CPs (n = 3) who, in addition to having impaired face memory, also performed particularly poorly on a measure of face perception (CFPT). Future research, using larger samples, should examine whether deficits in ensemble coding may be restricted to CPs who also have substantial face perception deficits. Copyright © 2018 Elsevier Ltd. All rights reserved.

Bayesian ensemble refinement by replica simulations and reweighting.

PubMed

Hummer, Gerhard; Köfinger, Jürgen

2015-12-28

We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
Bayesian ensemble refinement by replica simulations and reweighting

NASA Astrophysics Data System (ADS)

Hummer, Gerhard; Köfinger, Jürgen

2015-12-01

We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
Predictability of tropical cyclone events on intraseasonal timescales with the ECMWF monthly forecast model

NASA Astrophysics Data System (ADS)

Elsberry, Russell L.; Jordan, Mary S.; Vitart, Frederic

2010-05-01

The objective of this study is to provide evidence of predictability on intraseasonal time scales (10-30 days) for western North Pacific tropical cyclone formation and subsequent tracks, using the 51-member ECMWF 32-day forecasts made once a week from 5 June through 25 December 2008. Ensemble storms are defined by grouping ensemble member vortices whose positions are within a specified separation distance that is equal to 180 n mi at the initial forecast time and increases linearly to 420 n mi at Day 14, after which it is constant. The 12-h track segments are calculated with a weighted-mean vector motion technique in which the weighting factor is inversely proportional to the distance from the endpoint of the previous 12-h motion vector. Seventy-six percent of the ensemble storms had five or fewer member vortices. On average, the ensemble storms begin 2.5 days before the first entry in the Joint Typhoon Warning Center (JTWC) best-track file, tend to translate too slowly in the deep tropics, and persist for longer periods over land. A strict objective technique for matching with the JTWC storms is combined with a second, subjective procedure to identify nearby ensemble storms that would indicate a greater likelihood of a tropical cyclone developing in that region with that track orientation. The ensemble storms identified in the ECMWF 32-day forecasts provided guidance on intraseasonal timescales for the formation and tracks of the three strongest typhoons and two other typhoons, but not for two early-season typhoons and the late-season Dolphin. Four strong tropical storms were predicted consistently over Week 1 through Week 4, as was one weak tropical storm. Two other weak tropical storms, three tropical cyclones that developed from precursor baroclinic systems, and three other tropical depressions were not predicted on intraseasonal timescales. At least for the strongest tropical cyclones during the peak season, the ECMWF 32-day ensemble provides guidance on formation and tracks on 10-30-day timescales.
Multiple-Swarm Ensembles: Improving the Predictive Power and Robustness of Predictive Models and Its Use in Computational Biology.

PubMed

Alves, Pedro; Liu, Shuang; Wang, Daifeng; Gerstein, Mark

2018-01-01

Machine learning is an integral part of computational biology, and has already shown its use in various applications, such as prognostic tests. In the last few years, in the non-biological machine learning community, ensembling techniques have shown their power in data mining competitions such as the Netflix challenge; however, such methods have not found wide use in computational biology. In this work, we endeavor to show how ensembling techniques can be applied to practical problems, including problems in the field of bioinformatics, and how they often outperform other machine learning techniques in both predictive power and robustness. Furthermore, we develop a methodology of ensembling, Multi-Swarm Ensemble (MSWE), that uses multiple particle swarm optimizations, and we demonstrate its ability to further enhance the performance of ensembles.

Multi-model analysis in hydrological prediction

NASA Astrophysics Data System (ADS)

Lanthier, M.; Arsenault, R.; Brissette, F.

2017-12-01

Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions thus obtained do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities with reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which in simulation mode generally produce a more accurate hydrograph than the best of the individual models. This new predictive combined hydrograph is added to the ensemble, creating a larger ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods (2 weeks, 1 month, 3 months and 6 months) using a PIT histogram of the percentiles of the observed flow volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model combination or for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been largely corrected for short-term predictions. For the longer term, the addition of the multi-model member has been beneficial to the quality of the predictions, although it is too early to determine whether the gain comes simply from adding a member or from the added value of the multi-model member itself.
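A PIT (probability integral transform) histogram of the kind used in the study above takes only a few lines to compute. The sketch below uses synthetic data (an assumption, not the study's flows) to show the U-shaped signature of an under-dispersed ensemble.

    import numpy as np

    def pit_values(ensembles, observations):
        """Fraction of ensemble members at or below each observation.

        ensembles: (n_cases, n_members); observations: (n_cases,).
        A flat histogram of these values indicates a reliable ensemble;
        a U-shape indicates under-dispersion, a central hump over-dispersion.
        """
        return np.mean(ensembles <= observations[:, None], axis=1)

    rng = np.random.default_rng(42)
    forecast = rng.normal(size=500)
    obs = forecast + rng.normal(size=500)                 # true spread 1.0
    members = forecast[:, None] + 0.4 * rng.normal(size=(500, 20))  # spread 0.4

    counts, _ = np.histogram(pit_values(members, obs), bins=10, range=(0, 1))
    print(counts)   # expect piled-up counts in the outer bins (U-shape)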
Decadal climate prediction in the large ensemble limit

NASA Astrophysics Data System (ADS)

Yeager, S. G.; Rosenbloom, N. A.; Strand, G.; Lindsay, K. T.; Danabasoglu, G.; Karspeck, A. R.; Bates, S. C.; Meehl, G. A.

2017-12-01

In order to quantify the benefits of initialization for climate prediction on decadal timescales, two parallel sets of historical simulations are required: one "initialized" ensemble that incorporates observations of past climate states, and one "uninitialized" ensemble whose internal climate variations evolve freely and without synchronicity. In the large ensemble limit, ensemble averaging isolates potentially predictable forced and internal variance components in the "initialized" set, but only the forced variance remains after averaging the "uninitialized" set. The ensemble size needed to achieve this variance decomposition, and to robustly distinguish initialized from uninitialized decadal predictions, remains poorly constrained. We examine a large ensemble (LE) of initialized decadal prediction (DP) experiments carried out using the Community Earth System Model (CESM). This 40-member CESM-DP-LE set of experiments represents the "initialized" complement to the CESM large ensemble of 20th century runs (CESM-LE) documented in Kay et al. (2015). Both simulation sets share the same model configuration, historical radiative forcings, and large ensemble sizes. The twin experiments afford an unprecedented opportunity to explore the sensitivity of DP skill assessment, and in particular the skill enhancement associated with initialization, to ensemble size. This talk will highlight the benefits of a large ensemble size for initialized predictions of seasonal climate over land in the Atlantic sector, as well as predictions of shifts in the likelihood of climate extremes that have large societal impact.
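The variance decomposition described above amounts to splitting each member into the ensemble mean (the forced component) and a residual (the internal component). A hedged numpy sketch with a synthetic 40-member ensemble standing in for the hindcasts:

    import numpy as np

    rng = np.random.default_rng(7)
    n_members, n_years = 40, 60
    forced = 0.02 * np.arange(n_years)                     # common forced trend
    internal = rng.standard_normal((n_members, n_years))   # member-specific noise
    ensemble = forced + internal

    forced_estimate = ensemble.mean(axis=0)           # ensemble mean -> forced part
    internal_component = ensemble - forced_estimate   # residuals -> internal part

    print("forced variance:", forced_estimate.var())
    print("internal variance:", internal_component.var())

With few members the ensemble mean still contains leftover internal noise of order 1/sqrt(n_members), which is one reason the required ensemble size is, as the abstract says, poorly constrained.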
Single-ping ADCP measurements in the Strait of Gibraltar

NASA Astrophysics Data System (ADS)

Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo

2016-04-01

In most Acoustic Doppler Current Profiler (ADCP) user manuals, it is recommended to apply ensemble averaging of the single-ping measurements in order to obtain reliable observations of the current speed. The random error of a single-ping measurement is typically too high for direct use, while averaging reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored at the western exit of the Strait of Gibraltar, part of the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. Ensemble averaging was disabled, while the internal coordinate conversion made by the instrument was maintained, and a series of single-ping measurements was collected every 36 seconds over a period of approximately 5 months. The huge amount of data was handled smoothly by the instrument, and no abnormal battery consumption was recorded. In return, a long and unique series of very-high-frequency current measurements was collected. Results of this novel approach have been exploited in a dual way. From a statistical point of view, the availability of single-ping measurements allows a real (a posteriori) estimate of the ensemble-average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ˜2 cm s-1 for a 50-ping ensemble, the value obtained by a posteriori averaging is ˜15 cm s-1, with asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g., turbulence) of higher magnitude than the internal sources (the ADCP's intrinsic precision), which cannot be reduced by ensemble averaging. On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained from the empirical computation of the zonal Reynolds stress (along the predominant direction of the current) and the rates of production and dissipation of turbulent kinetic energy. All parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides, during the maxima of the Mediterranean outflow current.
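The a posteriori error estimate discussed above can be sketched by grouping a stream of single-ping velocities into ensembles of N pings and computing the standard deviation of the ensemble means as a function of N. The data here are synthetic stand-ins for the instrument record.

    import numpy as np

    rng = np.random.default_rng(3)
    pings = 10.0 + 15.0 * rng.standard_normal(100_000)   # cm/s single-ping noise

    for n in (5, 10, 25, 50):
        means = pings[: len(pings) // n * n].reshape(-1, n).mean(axis=1)
        print(f"N={n:3d}: std of ensemble mean = {means.std():.2f} cm/s")

For purely random instrumental error the printed values fall like 1/sqrt(N); in the real record, a floor in this curve would point to external error sources such as turbulence, as the abstract argues.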
Interpolation of property-values between electron numbers is inconsistent with ensemble averaging

DOE Office of Scientific and Technical Information (OSTI.GOV)

Miranda-Quintana, Ramón Alain; Ayers, Paul W.

2016-06-28

In this work we explore the physical foundations of models that study the variation of the ground-state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer numbers of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem's surroundings in determining its properties.

The random coding bound is tight for the average code.

NASA Technical Reports Server (NTRS)

Gallager, R. G.

1973-01-01

The random coding bound of information theory provides a well-known upper bound on the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upper-bounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second, lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upper-bounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
NIMEFI: Gene Regulatory Network Inference using Multiple Ensemble Feature Importance Algorithms

PubMed Central

Ruyssinck, Joeri; Huynh-Thu, Vân Anh; Geurts, Pierre; Dhaene, Tom; Demeester, Piet; Saeys, Yvan

2014-01-01

One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges, the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems for each gene in the network, in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene, and a high feature importance is considered putative evidence of a regulatory link between the two genes. The contribution of this work is twofold. First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach which allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As a second contribution, we explore the effect of using rank-wise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that it outperforms all individual methods in general, although on a specific network a single method can perform better. An implementation of NIMEFI has been made publicly available. PMID:24667482
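The subsampling construction described above, which turns any feature ranker into an ensemble feature-importance method, can be sketched as follows. scikit-learn and the elastic net are stand-in assumptions here; the paper evaluates several rankers, and its exact pipeline differs.

    import numpy as np
    from sklearn.linear_model import ElasticNet

    def ensemble_rank(X, y, n_rounds=50, sample_frac=0.8, seed=0):
        """Average feature ranks over repeated random subsamples."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        rank_sum = np.zeros(p)
        for _ in range(n_rounds):
            idx = rng.choice(n, size=int(sample_frac * n), replace=False)
            model = ElasticNet(alpha=0.1).fit(X[idx], y[idx])
            order = np.argsort(-np.abs(model.coef_))   # best feature first
            ranks = np.empty(p)
            ranks[order] = np.arange(p)
            rank_sum += ranks
        return np.argsort(rank_sum)   # features sorted by average rank

    # In a GENIE3-style decomposition, X would hold expression values of the
    # candidate regulators and y the expression of one target gene; the top
    # of the returned ranking suggests putative regulatory links.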
Accurate Identification of Cancerlectins through Hybrid Machine Learning Technology.

PubMed

Zhang, Jieru; Ju, Ying; Lu, Huijuan; Xuan, Ping; Zou, Quan

2016-01-01

Cancerlectins are cancer-related proteins that function as lectins. They have been identified through computational identification techniques, but these techniques have sometimes failed to identify proteins because of sequence diversity among the cancerlectins. Advanced machine learning identification methods, such as support vector machines and basic sequence features (n-grams), have also been used to identify cancerlectins. In this study, various protein fingerprint features and advanced classifiers, including ensemble learning techniques, were utilized to identify this group of proteins. We improved the prediction accuracy of the original feature extraction methods and classification algorithms by more than 10% on average. Our work provides a basis for the computational identification of cancerlectins and reveals the power of hybrid machine learning techniques in computational proteomics.

On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

NASA Astrophysics Data System (ADS)

Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

2017-12-01

Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula for the mean of this distribution of true error variances given an ensemble sample variance, using (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble-based covariance models. Univariate data assimilation and multivariate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
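The formula itself is not reproduced in this abstract, so the sketch below only illustrates its stated form, a hybrid weighted average of the climatological forecast error variance and the ensemble sample variance; the least-squares fit of the weight from an archive of data pairs is a placeholder assumption, not the authors' derivation.

    import numpy as np

    def hybrid_weight(sq_innovations, ens_variances, obs_error_var=0.0):
        """Fit w in: E[true error var | s2] = w * clim_var + (1 - w) * s2.

        sq_innovations: squared observation-minus-forecast values;
        ens_variances: matching ensemble sample variances s2.
        """
        true_var_proxy = sq_innovations - obs_error_var   # debiased (o - f)^2
        clim_var = true_var_proxy.mean()
        # Least squares for w on: proxy - s2 = w * (clim_var - s2) + noise
        x = clim_var - ens_variances
        y = true_var_proxy - ens_variances
        return (x @ y) / (x @ x), clim_var

    def hybrid_variance(w, clim_var, ens_variance):
        """Hybrid weighted average of static and flow-dependent variances."""
        return w * clim_var + (1.0 - w) * ens_variance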
The Effect of Stochastic Perturbation of Fuel Distribution on the Criticality of a One Speed Reactor and the Development of Multi-Material Multinomial Line Statistics

NASA Technical Reports Server (NTRS)

Jahshan, S. N.; Singleterry, R. C.

2001-01-01

The effect of random fuel redistribution on the eigenvalue of a one-speed reactor is investigated. An ensemble of such reactors, identical to a homogeneous reference critical reactor except for the fissile isotope density distribution, is constructed such that it meets a set of well-posed redistribution requirements. The average eigenvalue, <k_eff>, is evaluated when the total fissile loading per ensemble element, or realization, is conserved. The perturbation is proven to increase the reactor criticality on average when it is uniformly distributed. The various causes of the change in reactivity, and their relative effects, are identified and ranked. From this, a path toward identifying the causes and relative effects of reactivity fluctuations for the energy-dependent problem is pointed to. The perturbation method of using multinomial distributions to represent the perturbed reactor is developed. This method has some advantages that can be of use in other stochastic problems. Finally, some features of this perturbation problem are related to other techniques that have been used to address similar problems.
Ensemble of surrogates-based optimization for identifying an optimal surfactant-enhanced aquifer remediation strategy at heterogeneous DNAPL-contaminated sites

NASA Astrophysics Data System (ADS)

Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin

2015-11-01

The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL), based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as the case study. A new kind of surrogate-based SEAR optimization, employing an ensemble surrogate (ES) model together with a genetic algorithm (GA), is presented. Four methods, namely radial basis function artificial neural networks (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was in turn compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, which indicates that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, with the developed ES model embedded as a constraint condition. GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theory and techniques for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.

Ensemble of Surrogates-based Optimization for Identifying an Optimal Surfactant-enhanced Aquifer Remediation Strategy at Heterogeneous DNAPL-contaminated Sites

NASA Astrophysics Data System (ADS)

Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.

2015-12-01

The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL), based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as the case study. A new kind of surrogate-based SEAR optimization, employing an ensemble surrogate (ES) model together with a genetic algorithm (GA), is presented. Four methods, namely radial basis function artificial neural networks (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was in turn compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, which indicates that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, with the developed ES model embedded as a constraint condition. GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theory and techniques for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.
  118. An efficient ensemble learning method for gene microarray classification.

    PubMed

    Osareh, Alireza; Shadgar, Bita

    2013-01-01

    Gene microarray analysis and classification have demonstrated an effective way to diagnose diseases and cancers. However, it has also been revealed that basic classification techniques have intrinsic drawbacks in achieving accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques, which preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, 5 different feature selection algorithms are considered. To assess the efficiency of RotBoost, other non-ensemble/ensemble techniques including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging are also deployed. Experimental results have revealed that the combination of the fast correlation-based feature selection method with the ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also the classifiers generated by two widely used conventional ensemble learning methods, that is, Bagging and AdaBoost.

  119. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final globally uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has superior performance: faster convergence and an enhanced signal-to-noise ratio.
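A minimal Python sketch of the ASA selection-and-averaging step described above, on synthetic data; the "below the median spread" cutoff is an assumed stand-in for the paper's actual criterion.

    import numpy as np

    rng = np.random.default_rng(1)
    n_ens, n_grid = 20, 500

    # Hypothetical spatially varying posterior parameter estimates: one value
    # per ensemble member per grid point, around a true value of 0.7, with a
    # noise magnitude that varies in space.
    true_value = 0.7
    noise_scale = rng.uniform(0.01, 0.5, size=n_grid)
    estimates = true_value + noise_scale * rng.standard_normal((n_ens, n_grid))

    # ASA-style selection: keep grid points whose ensemble spread is small
    # (here: below the median spread; the exact criterion is an assumption).
    spread = estimates.std(axis=0)
    good = spread < np.median(spread)

    # Final globally uniform posterior parameter: average of the "good" values.
    posterior = estimates[:, good].mean()
    print(f"adaptive spatial average estimate: {posterior:.3f}")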
  120. Application Bayesian Model Averaging method for ensemble system for Poland

    NASA Astrophysics Data System (ADS)

    Guzikowski, Jakub; Czerwinska, Agnieszka

    2014-05-01

    The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) model and calibrating these data by means of the Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using temperature data generated by nine WRF model configurations. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test case we chose a period of heat-wave and convective weather conditions over Poland from 23 July to 1 August 2013. From 23 July to 29 July 2013, temperatures fluctuated around 30 °C at many weather stations and new temperature records were set. During this time an increase in hospitalizations for cardiovascular problems was registered. On 29 July 2013, advection of moist tropical air masses over Poland caused a strong convective event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure and buildings, downed trees, injuries, and a direct threat to life. The meteorological data from the ensemble system are compared with data recorded at 74 weather stations in Poland, from which we prepare a set of model-observation pairs. The data from the individual ensemble members and the median of the WRF BMA system are then evaluated using deterministic error statistics: root mean square error (RMSE) and mean absolute error (MAE). To evaluate the probabilistic forecasts, the Brier score (BS) and the continuous ranked probability score (CRPS) were used.
    Finally, a comparison between the BMA-calibrated forecasts and the raw ensemble members is presented.
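The BMA predictive PDF described above is a weighted average of member PDFs. A small Python sketch under common assumptions (Gaussian member kernels with a shared spread; the forecasts, weights, and sigma are made up for illustration):

    import numpy as np

    def bma_pdf(t, member_forecasts, weights, sigma):
        """BMA predictive density: a weighted average of member PDFs.

        Each member contributes a Gaussian centred on its forecast; the
        Gaussian form and the common sigma are simplifying assumptions.
        """
        member_forecasts = np.asarray(member_forecasts, dtype=float)
        kernels = np.exp(-0.5 * ((t - member_forecasts) / sigma) ** 2) / (
            sigma * np.sqrt(2.0 * np.pi))
        return float(np.dot(weights, kernels))

    # Nine hypothetical WRF member temperature forecasts (deg C) and
    # skill-based weights, mirroring the nine-member setup above.
    forecasts = [29.1, 30.4, 31.0, 28.7, 30.9, 29.8, 31.5, 30.2, 29.5]
    weights = np.array([2, 1, 1, 1, 3, 2, 1, 2, 1], dtype=float)
    weights /= weights.sum()

    print(bma_pdf(30.0, forecasts, weights, sigma=1.2))

In the full method the weights and sigma are fitted to past model-observation pairs (typically by maximum likelihood); here they are fixed only to show the mixture structure.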
    PMID:25114056

  121. Characterizing RNA ensembles from NMR data with kinematic models

    PubMed Central

    Fonseca, Rasmus; Pachov, Dimitar V.; Bernauer, Julie; van den Bedem, Henry

    2014-01-01

    Functional mechanisms of biomolecules often manifest themselves precisely in transient conformational substates. Researchers have long sought to structurally characterize dynamic processes in non-coding RNA, combining experimental data with computer algorithms. However, adequate exploration of conformational space for these highly dynamic molecules, starting from static crystal structures, remains challenging. Here, we report a new conformational sampling procedure, KGSrna, which can efficiently probe the native ensemble of RNA molecules in solution. We found that KGSrna ensembles accurately represent the conformational landscapes of 3D RNA encoded by NMR proton chemical shifts. KGSrna resolves motionally averaged NMR data into structural contributions; when coupled with residual dipolar coupling data, a KGSrna ensemble revealed a previously uncharacterized transient excited state of the HIV-1 trans-activation response element stem–loop. Ensemble-based interpretations of averaged data can aid in formulating and testing dynamic, motion-based hypotheses of functional mechanisms in RNAs, with broad implications for RNA engineering and therapeutic intervention. PMID:25114056

  122. Quantum teleportation between remote atomic-ensemble quantum memories

    PubMed Central

    Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei

    2012-01-01

    Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a "quantum channel," quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895–1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ∼10^8 rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as a teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing. PMID:23144222

  123. Probabilistic Near and Far-Future Climate Scenarios of Precipitation and Surface Temperature for the North American Monsoon Region Under a Weighted CMIP5-GCM Ensemble Approach.

    NASA Astrophysics Data System (ADS)

    Montero-Martinez, M. J.; Colorado, G.; Diaz-Gutierrez, D. E.; Salinas-Prieto, J. A.

    2017-12-01

    It is well known that the North American Monsoon (NAM) region is already a very dry region, under considerable stress from the lack of water resources at many locations. Nevertheless, even under those conditions, the Mexican part of the NAM region is the most agriculturally productive in Mexico. It is therefore very important to have realistic climate scenarios for variables such as temperature, precipitation, relative humidity, and radiation. This study tries to tackle that problem by generating probabilistic climate scenarios using a weighted CMIP5-GCM ensemble approach based on the Xu et al. (2010) technique, itself an improvement on the better-known reliability ensemble averaging algorithm of Giorgi and Mearns (2002). In addition, the individual performances of the 20-plus GCMs and of the weighted ensemble are compared against observed data (CRU TS2.1) using different metrics and Taylor diagrams. The study focuses on probabilistic results for reaching given thresholds, since such products are of potential use in agricultural applications.
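The weighting scheme referenced in the record above is not reproduced in the abstract. The sketch below follows the general spirit of a reliability-style weighted ensemble, combining a performance factor (small historical bias) with a convergence factor found by fixed-point iteration; the exponents, the natural-variability scale, and the simple iteration are illustrative assumptions rather than the authors' exact algorithm.

    import numpy as np

    def rea_weights(changes, biases, eps, m=1.0, n=1.0):
        """Reliability-style weights in the spirit of Giorgi & Mearns (2002).

        R_B rewards small bias against observations; R_D rewards convergence
        toward the weighted ensemble average. Exponents m, n and the
        fixed-point iteration are assumptions for illustration.
        """
        changes = np.asarray(changes, dtype=float)
        biases = np.abs(np.asarray(biases, dtype=float)) + 1e-12
        r_b = np.minimum(1.0, (eps / biases) ** m)
        w = r_b / r_b.sum()
        for _ in range(100):
            avg = np.dot(w, changes)
            dist = np.abs(changes - avg) + 1e-12
            r_d = np.minimum(1.0, (eps / dist) ** n)
            w_new = r_b * r_d
            w_new /= w_new.sum()
            if np.allclose(w_new, w, atol=1e-12):
                break
            w = w_new
        return w

    # Hypothetical projected changes (K) and historical biases (K) for 5 GCMs.
    dT = [2.1, 2.8, 1.7, 3.4, 2.4]
    bias = [0.4, 1.2, 0.9, 2.0, 0.3]
    w = rea_weights(dT, bias, eps=1.0)
    print(np.round(w, 3), "weighted change:", round(float(np.dot(w, dT)), 2))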
  124. Capturing Three-Dimensional Genome Organization in Individual Cells by Single-Cell Hi-C.

    PubMed

    Nagano, Takashi; Wingett, Steven W; Fraser, Peter

    2017-01-01

    Hi-C is a powerful method to investigate genome-wide, higher-order chromatin and chromosome conformations averaged from a population of cells. To expand the potential of Hi-C for single-cell analysis, we developed single-cell Hi-C. Similar to the existing "ensemble" Hi-C method, single-cell Hi-C detects proximity-dependent ligation events between cross-linked and restriction-digested chromatin fragments in cells. A major difference between the single-cell Hi-C and ensemble Hi-C protocols is that the proximity-dependent ligation is carried out in the nucleus. This allows the isolation of individual cells in which nearly the entire Hi-C procedure has been carried out, enabling the production of a Hi-C library and data from individual cells. With this new method, we studied genome conformations and found evidence for conserved topological domain organization from cell to cell, but highly variable interdomain contacts and chromosome folding genome-wide. In addition, we found that the single-cell Hi-C protocol provided cleaner results with less technical noise, suggesting it could be used to improve the ensemble Hi-C technique.

  125. An experimental investigation of gas fuel injection with X-ray radiography

    DOE PAGES

    Swantek, Andrew B.; Duke, D. J.; Kastengren, A. L.; ...

    2017-04-21

    In this paper, an outward-opening compressed natural gas, direct injection fuel injector has been studied with single-shot x-ray radiography. Three-dimensional simulations have also been performed to complement the x-ray data. Argon was used as a surrogate gas for experimental and safety reasons. This technique allows the acquisition of a quantitative mapping of the ensemble-average and standard deviation of the projected density throughout the injection event. Two-dimensional ensemble-average and standard-deviation data are presented to investigate the quasi-steady-state behavior of the jet. Upstream of the stagnation zone, minimal shot-to-shot variation is observed. Downstream of the stagnation zone, bulk mixing is observed as the jet transitions to a subsonic turbulent jet.
    From the time-averaged data, individual slices at all downstream locations are extracted and an Abel inversion is performed to compute the radial density distribution, which is interpolated to create three-dimensional visualizations. The Abel reconstructions reveal that upstream of the stagnation zone, the gas forms an annulus with high argon density and large density gradients. Inside this annulus, a recirculation region with low argon density exists. Downstream, the jet transitions to a fully turbulent jet with Gaussian argon density distributions. These experimental data are intended to serve as a quantitative benchmark for simulations.

  126. An experimental investigation of gas fuel injection with X-ray radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swantek, Andrew B.; Duke, D. J.; Kastengren, A. L.

    In this paper, an outward-opening compressed natural gas, direct injection fuel injector has been studied with single-shot x-ray radiography. Three-dimensional simulations have also been performed to complement the x-ray data. Argon was used as a surrogate gas for experimental and safety reasons. This technique allows the acquisition of a quantitative mapping of the ensemble-average and standard deviation of the projected density throughout the injection event. Two-dimensional ensemble-average and standard-deviation data are presented to investigate the quasi-steady-state behavior of the jet. Upstream of the stagnation zone, minimal shot-to-shot variation is observed. Downstream of the stagnation zone, bulk mixing is observed as the jet transitions to a subsonic turbulent jet. From the time-averaged data, individual slices at all downstream locations are extracted and an Abel inversion is performed to compute the radial density distribution, which is interpolated to create three-dimensional visualizations. The Abel reconstructions reveal that upstream of the stagnation zone, the gas forms an annulus with high argon density and large density gradients. Inside this annulus, a recirculation region with low argon density exists. Downstream, the jet transitions to a fully turbulent jet with Gaussian argon density distributions. These experimental data are intended to serve as a quantitative benchmark for simulations.
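The Abel inversion step mentioned in the two records above recovers a radial density profile from axisymmetric projected (line-of-sight) data. A basic numerical sketch in Python, using midpoint quadrature of the inverse Abel integral; real radiography data would need smoothing and noise handling first.

    import numpy as np

    def inverse_abel(F, y):
        """Numerical inverse Abel transform of a projected profile.

        F : projected density sampled on radii y (axisymmetry assumed).
        Returns the radial density f(r) on the same grid via
            f(r) = -(1/pi) * integral_r^R  F'(y) / sqrt(y^2 - r^2) dy,
        evaluated with a simple midpoint rule that avoids the singularity.
        """
        dFdy = np.gradient(F, y)
        f = np.zeros_like(F)
        for i, r in enumerate(y[:-1]):
            ym = 0.5 * (y[i:-1] + y[i + 1:])      # midpoints of outer bins
            dFm = 0.5 * (dFdy[i:-1] + dFdy[i + 1:])
            dy = np.diff(y[i:])
            f[i] = -np.sum(dFm * dy / np.sqrt(ym ** 2 - r ** 2)) / np.pi
        return f

    # Synthetic check: a Gaussian radial density has an analytic projection,
    # F(y) = sqrt(pi) * exp(-y^2).
    r = np.linspace(0.0, 3.0, 200)
    f_true = np.exp(-r ** 2)
    F_proj = np.sqrt(np.pi) * np.exp(-r ** 2)
    f_rec = inverse_abel(F_proj, r)
    print(np.max(np.abs(f_rec[:150] - f_true[:150])))   # small residual error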
  127. An adaptive incremental approach to constructing ensemble classifiers: application in an information-theoretic computer-aided decision system for detection of masses in mammograms.

    PubMed

    Mazurowski, Maciej A; Zurada, Jacek M; Tourassi, Georgia D

    2009-07-01

    Ensemble classifiers have been shown efficient in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC = 0.905 ± 0.024) in performance as compared to the original IT-CAD system (AUC = 0.865 ± 0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters.

  128. Load Balancing Using Time Series Analysis for Soft Real Time Systems with Statistically Periodic Loads

    NASA Technical Reports Server (NTRS)

    Hailperin, Max

    1993-01-01

    This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.
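The thesis abstract above turns on exploiting statistically periodic loads to estimate the instantaneous average load. A toy Python sketch of that idea, using same-phase averaging over past periods as a crude stand-in for the thesis's stochastic process models and Bayesian inference:

    import numpy as np

    rng = np.random.default_rng(2)
    n_sites, T, period = 64, 400, 50

    # Statistically periodic loads: a shared seasonal cycle plus per-site noise.
    t = np.arange(T)
    season = 10.0 + 4.0 * np.sin(2.0 * np.pi * t / period)
    loads = season[None, :] + rng.standard_normal((n_sites, T))

    # Naive estimate of the instantaneous average load: a site's current load.
    naive_err = np.mean((loads[:, period:] - season[None, period:]) ** 2)

    # Periodicity-aware estimate: average the loads observed at the same phase
    # of previous cycles (exploiting the statistical seasonality).
    est = np.empty((n_sites, T - period))
    for k in range(period, T):
        est[:, k - period] = loads[:, k - period::-period].mean(axis=1)
    phase_err = np.mean((est - season[None, period:]) ** 2)

    print(f"naive mse: {naive_err:.2f}, same-phase mse: {phase_err:.2f}")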
  129. Interfacing broadband photonic qubits to on-chip cavity-protected rare-earth ensembles

    PubMed Central

    Zhong, Tian; Kindem, Jonathan M.; Rochman, Jake; Faraon, Andrei

    2017-01-01

    Ensembles of solid-state optical emitters enable broadband quantum storage and transduction of photonic qubits, with applications in high-rate quantum networks for secure communications and interconnecting future quantum computers. To transfer quantum states using ensembles, rephasing techniques are used to mitigate fast decoherence resulting from inhomogeneous broadening, but these techniques generally limit the bandwidth, efficiency, and active times of the quantum interface. Here, we use a dense ensemble of neodymium rare-earth ions strongly coupled to a nanophotonic resonator to demonstrate a significant cavity protection effect at the single-photon level, a technique to suppress ensemble decoherence due to inhomogeneous broadening. The protected Rabi oscillations between the cavity field and the atomic super-radiant state enable ultra-fast transfer of photonic frequency qubits to the ions (∼50 GHz bandwidth) followed by retrieval with 98.7% fidelity. With the prospect of coupling to other long-lived rare-earth spin states, this technique opens the possibilities for broadband, always-ready quantum memories and fast optical-to-microwave transducers. PMID:28090078

  130. Interfacing broadband photonic qubits to on-chip cavity-protected rare-earth ensembles

    NASA Astrophysics Data System (ADS)

    Zhong, Tian; Kindem, Jonathan M.; Rochman, Jake; Faraon, Andrei

    2017-01-01

    Ensembles of solid-state optical emitters enable broadband quantum storage and transduction of photonic qubits, with applications in high-rate quantum networks for secure communications and interconnecting future quantum computers. To transfer quantum states using ensembles, rephasing techniques are used to mitigate fast decoherence resulting from inhomogeneous broadening, but these techniques generally limit the bandwidth, efficiency, and active times of the quantum interface. Here, we use a dense ensemble of neodymium rare-earth ions strongly coupled to a nanophotonic resonator to demonstrate a significant cavity protection effect at the single-photon level, a technique to suppress ensemble decoherence due to inhomogeneous broadening. The protected Rabi oscillations between the cavity field and the atomic super-radiant state enable ultra-fast transfer of photonic frequency qubits to the ions (∼50 GHz bandwidth) followed by retrieval with 98.7% fidelity. With the prospect of coupling to other long-lived rare-earth spin states, this technique opens the possibilities for broadband, always-ready quantum memories and fast optical-to-microwave transducers.

  131. Variety and volatility in financial markets

    NASA Astrophysics Data System (ADS)

    Lillo, Fabrizio; Mantegna, Rosario N.

    2000-11-01

    We study the price dynamics of stocks traded in a financial market by considering the statistical properties of both a single time series and an ensemble of stocks traded simultaneously. We use the n stocks traded on the New York Stock Exchange to form a statistical ensemble of daily stock returns.
    For each trading day of our database, we study the ensemble return distribution. We find that a typical ensemble return distribution exists on most trading days, with the exception of crash and rally days and the days following these extreme events. We analyze each ensemble return distribution by extracting its first two central moments. We observe that these moments fluctuate in time and are themselves stochastic processes. We characterize the statistical properties of the central moments of the ensemble return distribution by investigating their probability density functions and temporal correlation properties. In general, time-averaged and portfolio-averaged price returns have different statistical properties. We infer from these differences information about the relative strength of correlation between stocks and between different trading days. Last, we compare our empirical results with those predicted by the single-index model and conclude that this simple model cannot explain the statistical properties of the second moment of the ensemble return distribution.
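The core objects in the study above are cross-sectional ("ensemble") moments of daily returns, as opposed to time-averaged moments per stock. A short Python sketch with synthetic single-index-style returns showing the two averaging directions; all numbers are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(3)
    n_days, n_stocks = 1000, 300

    # Synthetic daily returns: a common market factor plus idiosyncratic
    # noise, in the spirit of the single-index model discussed in the paper.
    market = 0.01 * rng.standard_normal(n_days)
    beta = 0.5 + rng.random(n_stocks)
    returns = market[:, None] * beta[None, :] \
        + 0.02 * rng.standard_normal((n_days, n_stocks))

    # Ensemble (cross-sectional) moments: across stocks, day by day.
    ensemble_mean = returns.mean(axis=1)   # first central moment per day
    ensemble_std = returns.std(axis=1)     # the "variety" per day

    # Time-averaged moments: across days, stock by stock.
    time_mean = returns.mean(axis=0)
    time_std = returns.std(axis=0)

    # The two averaging directions have different statistics, as noted above.
    print(ensemble_std.mean(), ensemble_std.std())
    print(time_std.mean(), time_std.std())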
  132. Can decadal climate predictions be improved by ocean ensemble dispersion filtering?

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-12-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. The ocean memory, due to its heat capacity, holds large potential skill on the decadal scale. In recent years, more precise initialization techniques for coupled Earth system models (incl. atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect. Applying slightly perturbed predictions results in an ensemble; using and evaluating the whole ensemble, or its ensemble average, instead of a single prediction improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean. This study is part of MiKlip (fona-miklip.de), a major project on decadal climate prediction in Germany. We focus on the Max-Planck-Institute Earth System Model in its low-resolution version (MPI-ESM-LR) and MiKlip's basic initialization strategy, as used in the decadal climate forecast published in 2017: http://www.fona-miklip.de/decadal-forecast-2017-2026/decadal-forecast-for-2017-2026/ More information about this study in JAMES: DOI: 10.1002/2016MS000787

  133. Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case

    NASA Astrophysics Data System (ADS)

    Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann

    2017-04-01

    Short-term ocean analyses for sea surface temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on empirical orthogonal function filtering techniques is capable of preventing overfitting problems, although the best performance is achieved when correlation is added to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis root mean square error (RMSE) evaluated against observed satellite SST. Lower RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and the least-squares algorithm filtered a posteriori.

  134. Weak ergodicity breaking, irreproducibility, and ageing in anomalous diffusion processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metzler, Ralf

    2014-01-14

    Single-particle traces are standardly evaluated in terms of time averages of the second moment of the position time series r(t). For ergodic processes, one can interpret such results in terms of the known theories for the corresponding ensemble-averaged quantities.
    In anomalous diffusion processes, which are widely observed in nature over many orders of magnitude, the equivalence between (long) time and ensemble averages may be broken (weak ergodicity breaking), and these time averages may no longer be interpreted in terms of ensemble theories. Here we detail some recent results on weakly non-ergodic systems with respect to the time-averaged mean squared displacement, the inherent irreproducibility of individual measurements, and methods to determine the exact underlying stochastic process. We also address the phenomenon of ageing, the dependence of physical observables on the time span between the initial preparation of the system and the start of the measurement.
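The central quantity above is the time-averaged mean squared displacement and its scatter across trajectories. A compact Python sketch for an ergodic reference case (ordinary Brownian motion), where time and ensemble averages agree; for a weakly non-ergodic process the scatter in `ta` would remain broad even for long trajectories.

    import numpy as np

    rng = np.random.default_rng(4)
    n_traj, T = 200, 1000

    # Ordinary Brownian trajectories (an ergodic reference case).
    steps = rng.standard_normal((n_traj, T))
    x = np.cumsum(steps, axis=1)

    def ta_msd(traj, lag):
        """Time-averaged mean squared displacement at one lag."""
        d = traj[lag:] - traj[:-lag]
        return float(np.mean(d ** 2))

    lag = 10
    # Ensemble-averaged MSD at t = lag versus the time average per trajectory.
    ea = np.mean(x[:, lag - 1] ** 2)                   # <x^2(t)>, x(0) = 0
    ta = np.array([ta_msd(traj, lag) for traj in x])   # one value per path

    # For an ergodic process the two agree and the trajectory-to-trajectory
    # scatter of the time average is small.
    print(ea, ta.mean(), ta.std())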
  135. Ensemble flood simulation for a small dam catchment in Japan using 10 and 2 km resolution nonhydrostatic model rainfalls

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kenichiro; Otsuka, Shigenori; Apip; Saito, Kazuo

    2016-08-01

    This paper presents a study on short-term ensemble flood forecasting specifically for small dam catchments in Japan. Numerical ensemble simulations of rainfall from the Japan Meteorological Agency nonhydrostatic model (JMA-NHM) are used as the input data to a rainfall-runoff model for predicting river discharge into a dam. The ensemble weather simulations use a conventional 10 km and a high-resolution 2 km spatial resolution. A distributed rainfall-runoff model is constructed for the Kasahori dam catchment (approx. 70 km²) and applied with the ensemble rainfalls. The results show that the hourly maximum and cumulative catchment-average rainfalls of the 2 km resolution JMA-NHM ensemble simulation are more appropriate than the 10 km resolution rainfalls. All the simulated inflows based on the 2 and 10 km rainfalls become larger than the flood discharge of 140 m³ s⁻¹, a threshold value for flood control. The inflows with the 10 km resolution ensemble rainfall are all considerably smaller than the observations, while at least one simulated discharge out of the 11 ensemble members with the 2 km resolution rainfalls reproduces the first peak of the inflow at the Kasahori dam with amplitude similar to the observations, although there are spatiotemporal lags between simulation and observation. To take positional lags into account in the ensemble discharge simulation, the rainfall distribution in each ensemble member is shifted so that the catchment-averaged cumulative rainfall over the Kasahori dam catchment is maximized. The runoff simulation with the position-shifted rainfalls shows much better results than the original ensemble discharge simulations.

  136. Improving consensus structure by eliminating averaging artifacts

    PubMed Central

    KC, Dukka B

    2009-01-01

    Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo-energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which could also benefit from our approach. PMID:19267905
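The refinement idea above, driving a structure toward averaged coordinates with a harmonic pseudo-energy while penalizing unphysical geometry, can be sketched with a simple Metropolis Monte Carlo loop. Everything below (the energy terms, constants, and toy chain) is an illustrative assumption, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(5)
    n_atoms, bond = 20, 3.8       # C-alpha chain, ~3.8 Angstrom virtual bonds

    # Hypothetical "averaged" coordinates, possibly with distorted geometry.
    target = np.cumsum(rng.normal(0.0, 2.0, size=(n_atoms, 3)), axis=0)

    def energy(coords, k_harm=1.0, k_bond=100.0):
        """Harmonic pull toward the averaged structure plus a bond-length
        penalty that suppresses averaging artifacts (assumed functional form)."""
        e_harm = k_harm * np.sum((coords - target) ** 2)
        d = np.linalg.norm(np.diff(coords, axis=0), axis=1)
        e_bond = k_bond * np.sum((d - bond) ** 2)
        return e_harm + e_bond

    # Metropolis Monte Carlo from an extended starting chain.
    coords = np.column_stack([np.arange(n_atoms) * bond,
                              np.zeros(n_atoms), np.zeros(n_atoms)])
    e_old, beta = energy(coords), 1.0
    for step in range(20000):
        trial = coords.copy()
        trial[rng.integers(n_atoms)] += rng.normal(0.0, 0.2, size=3)
        e_new = energy(trial)
        if e_new < e_old or rng.random() < np.exp(-beta * (e_new - e_old)):
            coords, e_old = trial, e_new

    print("final pseudo-energy:", round(e_old, 1))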
    In high concentrations of denaturants, proteins behave like polymers in a good solvent, and ensembles for denatured proteins can be modeled by ignoring all interactions except excluded volume (EV) effects. To assay conformational preferences of highly denatured proteins, we quantify a variety of properties for EV-limit ensembles of 23 two-state proteins. We find that modeled denatured proteins can be best described as follows. Average shapes are consistent with prolate ellipsoids. Ensembles are characterized by large correlated fluctuations. Sequence-specific conformational preferences are restricted to local length scales that span five to nine residues. Beyond local length scales, chain properties follow well-defined power laws that are expected for generic polymers in the EV limit. The average available volume is filled inefficiently, and cavities of all sizes are found within the interiors of denatured proteins. All properties characterized from simulated ensembles match predictions from rigorous field theories. We use our results to resolve between conflicting proposals for structure in ensembles for highly denatured states. PMID:16766618

  138. Enhanced Sampling in the Well-Tempered Ensemble

    NASA Astrophysics Data System (ADS)

    Bonomi, M.; Parrinello, M.

    2010-05-01

    We introduce the well-tempered ensemble (WTE), which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as the collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi et al., J. Comput. Chem. 30, 1615 (2009)]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.
  139. Enhanced sampling in the well-tempered ensemble.

    PubMed

    Bonomi, M; Parrinello, M

    2010-05-14

    We introduce the well-tempered ensemble (WTE), which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as the collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi, J. Comput. Chem. 30, 1615 (2009)]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.

  140. SVD analysis of Aura TES spectral residuals

    NASA Technical Reports Server (NTRS)

    Beer, Reinhard; Kulawik, Susan S.; Rodgers, Clive D.; Bowman, Kevin W.

    2005-01-01

    Singular Value Decomposition (SVD) analysis is both a powerful diagnostic tool and an effective method of noise filtering. We present the results of an SVD analysis of an ensemble of spectral residuals acquired in September 2004 from a 16-orbit Aura Tropospheric Emission Spectrometer (TES) Global Survey and compare them to alternative methods such as zonal averages. In particular, the technique highlights issues such as the orbital variation of instrument response and incompletely modeled effects of surface emissivity and atmospheric composition.
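SVD-based noise filtering of an ensemble of spectral residuals, as in the record above, amounts to keeping the leading singular components. A self-contained Python sketch on synthetic residuals; the embedded patterns are invented stand-ins for coherent effects such as orbital variation of instrument response.

    import numpy as np

    rng = np.random.default_rng(6)
    n_spectra, n_channels = 500, 300

    # Synthetic ensemble of spectral residuals: two coherent patterns
    # buried in random noise.
    wn = np.linspace(0.0, 1.0, n_channels)
    patterns = np.vstack([np.sin(8 * np.pi * wn),
                          np.exp(-((wn - 0.5) / 0.05) ** 2)])
    amps = rng.standard_normal((n_spectra, 2))
    residuals = amps @ patterns \
        + 0.5 * rng.standard_normal((n_spectra, n_channels))

    # SVD: the leading singular vectors organize the coherent structure.
    U, s, Vt = np.linalg.svd(residuals, full_matrices=False)
    print("leading singular values:", np.round(s[:5], 1))

    # Noise filtering: reconstruct from the leading k components only.
    k = 2
    filtered = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
    print("rms before/after filtering:",
          residuals.std(), (residuals - filtered).std())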
  141. Inhomogeneous diffusion and ergodicity breaking induced by global memory effects

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2016-11-01

    We introduce a class of discrete random-walk models driven by global memory effects. At any time, the right-left transitions depend on the whole previous history of the walker, being defined by an urn-like memory mechanism. The characteristic function is calculated exactly, which allows us to demonstrate that the ensemble of realizations is ballistic. Asymptotically, each realization is equivalent to that of a biased Markovian diffusion process with transition rates that strongly differ from one trajectory to another. Using this "inhomogeneous diffusion" feature, the ergodic properties of the dynamics are studied analytically through the time-averaged moments. Even in the long-time regime, they remain random objects. While their average over realizations recovers the corresponding ensemble averages, the departure between time and ensemble averages is explicitly shown through their probability densities. For the density of the second time-averaged moment, the ergodic limit and the limit of infinite lag times do not commute. All these effects are induced by the memory. A generalized Einstein fluctuation-dissipation relation is also obtained for the time-averaged moments.

  142. Time-course, negative-stain electron microscopy–based analysis for investigating protein–protein interactions at the single-molecule level

    PubMed Central

    Nogal, Bartek; Bowman, Charles A.; Ward, Andrew B.

    2017-01-01

    Several biophysical approaches are available to study protein–protein interactions. Most approaches are conducted in bulk solution and are therefore limited to an average measurement of the ensemble of molecular interactions. Here, we show how single-particle EM can enrich our understanding of protein–protein interactions at the single-molecule level and potentially capture states that are unobservable with ensemble methods because they are below the limit of detection or not conducted on an appropriate time scale. Using the HIV-1 envelope glycoprotein (Env) and its interaction with receptor CD4-binding site neutralizing antibodies as a model system, we both corroborate ensemble kinetics-derived parameters and demonstrate how time-course EM can further dissect stoichiometric states of complexes that are not readily observable with other methods. Visualization of the kinetics and stoichiometry of Env–antibody complexes demonstrated the applicability of our approach to qualitatively and semi-quantitatively differentiate two highly similar neutralizing antibodies. Furthermore, implementation of machine-learning techniques for sorting class averages of these complexes into discrete subclasses of particles helped reduce human bias. Our data provide proof of concept that single-particle EM can be used to generate a "visual" kinetic profile that should be amenable to studying many other protein–protein interactions, is relatively simple, and is complementary to well-established biophysical approaches. Moreover, our method provides critical insights into broadly neutralizing antibody recognition of Env, which may inform vaccine immunogen design and immunotherapeutic development. PMID:28972148
  143. Constructing optimal ensemble projections for predictive environmental modelling in Northern Eurasia

    NASA Astrophysics Data System (ADS)

    Anisimov, Oleg; Kokorev, Vasily

    2013-04-01

    Large uncertainties in climate impact modelling are associated with the forcing climate data. This study is targeted at the evaluation of the quality of GCM-based climatic projections in the specific context of predictive environmental modelling in Northern Eurasia. To accomplish this task, we used the output from 36 CMIP5 GCMs in the IPCC AR5 database for the control period 1975-2005 and calculated several climatic characteristics and indexes that are most often used in impact models, i.e. the summer warmth index, duration of the vegetation growth period, precipitation sums, dryness index, thawing degree-day sums, and the annual temperature amplitude. We used data from 744 weather stations in Russia and neighbouring countries to analyze the spatial patterns of modern climatic change and to delineate 17 large regions with coherent temperature changes in the past few decades. GCM results and observational data were averaged over the coherent regions and compared with each other. Ultimately, we evaluated the skills of individual models, ranked them in the context of regional impact modelling, and identified top-end GCMs that "better than average" reproduce modern regional changes of the selected meteorological parameters and climatic indexes. Selected top-end GCMs were used to compose several ensembles, each combining results from a different number of models. The ensembles were ranked using the same algorithm and outliers were eliminated. We then used data from the top-end ensembles for the 2000-2100 period to construct climatic projections that are likely to be "better than average" in predicting the climatic parameters that govern the state of the environment in Northern Eurasia. The ultimate conclusions of our study are the following. • High-end GCMs that demonstrate excellent skills in conventional atmospheric model intercomparison experiments are not necessarily the best at replicating the climatic characteristics that govern the state of the environment in Northern Eurasia, and independent model evaluation at the regional level is necessary to identify "better than average" GCMs. • Each of the ensembles combining results from several "better than average" models replicates the selected meteorological parameters and climatic indexes better than any single GCM. The ensemble skills are parameter-specific and depend on the models it consists of. The best results are not necessarily those based on the ensemble comprising all "better than average" models. • Comprehensive evaluation of climatic scenarios using specific criteria narrows the range of uncertainties in environmental projections.
  144. Quantum canonical ensemble: A projection operator approach

    NASA Astrophysics Data System (ADS)

    Magnus, Wim; Lemmens, Lucien; Brosens, Fons

    2017-09-01

    Knowing the exact number of particles N, and taking this knowledge into account, the quantum canonical ensemble imposes a constraint on the occupation number operators. The constraint particularly hampers the systematic calculation of the partition function and any relevant thermodynamic expectation value for arbitrary but fixed N. On the other hand, fixing only the average number of particles, one may remove the above constraint and simply factorize the traces in Fock space into traces over single-particle states. As is well known, that would be the strategy of the grand-canonical ensemble, which, however, comes with an additional Lagrange multiplier to impose the average number of particles. The appearance of this multiplier can be avoided by invoking a projection operator that enables a constraint-free computation of the partition function and its derived quantities in the canonical ensemble, at the price of an angular or contour integration. Introduced in the recent past to handle various issues related to particle-number projected statistics, the projection operator approach proves beneficial to a wide variety of problems in condensed matter physics for which the canonical ensemble offers a natural and appropriate environment. In this light, we present a systematic treatment of the canonical ensemble that embeds the projection operator into the formalism of second quantization while explicitly fixing N, the very number of particles rather than the average. Being applicable to both bosonic and fermionic systems in arbitrary dimensions, transparent integral representations are provided for the partition function Z_N and the Helmholtz free energy F_N as well as for two- and four-point correlation functions. The chemical potential is not a Lagrange multiplier regulating the average particle number but can be extracted from F_{N+1} - F_N, as illustrated for a two-dimensional fermion gas.
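The projection-operator idea above has a compact numerical form: the canonical partition function is an angular integral of the grand-canonical product over single-particle levels, with the fugacity placed on the unit circle. A Python sketch for fermions, checked against brute-force enumeration; the level spectrum is made up for illustration.

    import numpy as np
    from itertools import combinations

    def canonical_Z_fermions(energies, N, beta, n_phi=512):
        """Canonical partition function for N fermions via the projection
        (angular) integral:
            Z_N = (1/2pi) Int dphi exp(-i N phi)
                  Prod_k (1 + exp(i phi - beta e_k)).
        Uniform quadrature over the circle is exact here, since the product
        is a polynomial in exp(i phi) of degree len(energies).
        """
        phi = 2.0 * np.pi * np.arange(n_phi) / n_phi
        grand = np.prod(
            1.0 + np.exp(1j * phi[:, None] - beta * energies[None, :]), axis=1)
        return float(np.real(np.mean(np.exp(-1j * N * phi) * grand)))

    # Check against explicit enumeration for a small spectrum.
    eps = np.array([0.0, 0.3, 0.7, 1.1, 1.6])
    beta, N = 2.0, 2
    z_proj = canonical_Z_fermions(eps, N, beta)
    z_exact = sum(np.exp(-beta * (eps[i] + eps[j]))
                  for i, j in combinations(range(len(eps)), 2))
    print(z_proj, z_exact)    # the two agree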
    They evaluate two general ways of constructing subclassifiers by resampling the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC = 0.905 ± 0.024) in performance as compared to the original IT-CAD system (AUC = 0.865 ± 0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters. PMID:19673196
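One of the adaptive incremental techniques above grows the ensemble until performance stops improving. A simplified Python sketch of that control loop with a toy weak learner; the data, learner, and stopping rule are illustrative assumptions, not the authors' IT-CAD system.

    import numpy as np

    rng = np.random.default_rng(7)

    # Toy two-class data standing in for the mammography case base.
    n, d = 600, 5
    X = rng.standard_normal((n, d))
    w_true = rng.standard_normal(d)
    y = (X @ w_true + 0.5 * rng.standard_normal(n)) > 0
    X_dev, y_dev, X_val, y_val = X[:400], y[:400], X[400:], y[400:]

    def train_subclassifier(X_sub, y_sub):
        """Stand-in weak learner: a nearest-centroid discriminant."""
        w = X_sub[y_sub].mean(axis=0) - X_sub[~y_sub].mean(axis=0)
        return lambda Z: Z @ w > 0

    # Adaptive incremental construction (simplified): add subclassifiers
    # trained on random selections of the development set until validation
    # accuracy stops improving for `patience` consecutive additions.
    ensemble, best, stall, patience = [], 0.0, 0, 5
    while stall < patience:
        idx = rng.choice(len(X_dev), size=200, replace=False)  # random selection
        ensemble.append(train_subclassifier(X_dev[idx], y_dev[idx]))
        votes = np.mean([c(X_val) for c in ensemble], axis=0) > 0.5
        acc = float(np.mean(votes == y_val))
        if acc > best:
            best, stall = acc, 0
        else:
            stall += 1

    print(f"ensemble size: {len(ensemble)}, validation accuracy: {best:.3f}")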
  146. A virtual pebble game to ensemble average graph rigidity

    PubMed

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure with an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where the integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG estimates the ensemble-averaged PG results well. A single VPG run is about 20% faster than a single PG run, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most accurate but slowest method of ensemble averaging over hundreds to thousands of independent PG runs, and the fastest but least accurate MCC.
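The mean-field substitution that defines the VPG, replacing binary present/absent constraints by their probabilities, can be previewed at the much simpler level of Maxwell constraint counting (the full pebble game adds the combinatorics that the VPG mimics with a continuous pebble flow). In this invented toy, each candidate interaction contributes five bars with some probability; the floor at the six trivial rigid-body degrees of freedom is the nonlinearity that makes the mean-field estimate differ slightly from explicit ensemble averaging.

```python
import numpy as np

rng = np.random.default_rng(1)

def dof_maxwell(n_bodies, n_bars):
    """Maxwell counting for a body-bar network: 6 DOF per rigid body, one DOF
    removed per bar, never below the 6 overall rigid-body motions."""
    return max(6.0, 6.0 * n_bodies - n_bars)

n_bodies = 50
bars = np.full(120, 5)          # 120 candidate H-bonds, 5 bars each (toy)
p = rng.uniform(0.2, 1.0, 120)  # probability that each H-bond is present

dof_meanfield = dof_maxwell(n_bodies, np.sum(p * bars))  # "virtual" estimate

samples = [dof_maxwell(n_bodies, np.sum(bars * (rng.random(120) < p)))
           for _ in range(2000)]  # average over sampled constraint topologies
print(dof_meanfield, np.mean(samples))  # close, but not identical
```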
  147. Advanced Atmospheric Ensemble Modeling Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckley, R.; Chiswell, S.; Kurzeja, R.

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied: a coastal release (SF6) and an inland release (Freon), which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating lead to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data are assimilated into the simulation, and it enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.

  148. Load balancing for massively-parallel soft-real-time systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hailperin, M.

    1988-09-01

    Global load balancing, if practical, would allow the effective use of massively-parallel ensemble architectures for large soft-real-time problems. The challenge is to replace quick global communication, which is impractical in a massively-parallel system, with statistical techniques. In this vein, the author proposes a novel approach to decentralized load balancing based on statistical time-series analysis. Each site estimates the system-wide average load using information about past loads of individual sites and attempts to match that average. This estimation process is practical because the soft-real-time systems of interest naturally exhibit loads that are periodic, in a statistical sense akin to seasonality in econometrics. It is shown how this load-characterization technique can be the foundation for a load-balancing system in an architecture employing cut-through routing and an efficient multicast protocol.
  149. Fourier descriptor analysis and unification of voice range profile contours: method and applications

    PubMed

    Pabon, Peter; Ternström, Sten; Lamarche, Anick

    2011-06-01

    To describe a method for unified description, statistical modeling, and comparison of voice range profile (VRP) contours, even from diverse sources. A morphologic modeling technique, based on Fourier descriptors (FDs), is applied to the VRP contour. The technique, which essentially involves resampling of the curve of the contour, is assessed and also compared to density-based VRP averaging methods that use the overlap count. VRP contours can be usefully described and compared using FDs. The method also permits the visualization of the local covariation along the contour average. For example, the FD-based analysis shows that the population variance for ensembles of VRP contours is usually smallest at the upper left part of the VRP. To illustrate the method's advantages and possible further application, graphs are given that compare the averaged contours from different authors and recording devices, for normal, trained, and untrained male and female voices as well as for child voices. The proposed technique allows any VRP shape to be brought to the same uniform base. On this uniform base, VRP contours or contour elements coming from a variety of sources may be placed within the same graph for comparison and for statistical analysis.
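The FD construction described here amounts to resampling the closed VRP contour at equal arc-length steps and taking the Fourier transform of the resulting complex-valued curve; the low-order coefficients then give a compact, comparable shape description. Below is a minimal numpy sketch of that idea; the resampling density and the number of retained descriptors are arbitrary choices, not values from the paper.

```python
import numpy as np

def fourier_descriptors(x, y, n_points=128, n_keep=10):
    """Resample a closed contour uniformly by arc length, then FFT the
    complex coordinates z = x + iy to obtain Fourier descriptors."""
    x = np.append(x, x[0])  # close the contour
    y = np.append(y, y[0])
    seg = np.hypot(np.diff(x), np.diff(y))
    s = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative arc length
    s_new = np.linspace(0.0, s[-1], n_points, endpoint=False)
    z = np.interp(s_new, s, x) + 1j * np.interp(s_new, s, y)
    Z = np.fft.fft(z) / n_points
    return Z[:n_keep]  # low-order coefficients capture the gross shape

# Toy "contour": an ellipse standing in for a VRP boundary
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
desc = fourier_descriptors(55 + 30 * np.cos(t), 70 + 25 * np.sin(t))
```

Because every contour is reduced to a fixed-length coefficient vector, contours from different sources can be averaged component-wise and their covariation examined, which is the "uniform base" the authors describe.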
  150. How well the Reliable Ensemble Averaging Method (REA) for 15 CMIP5 GCMs simulations works for Mexico?

    NASA Astrophysics Data System (ADS)

    Colorado, G.; Salinas, J. A.; Cavazos, T.; de Grau, P.

    2013-05-01

    Precipitation simulations from 15 CMIP5 GCMs were combined into a weighted ensemble using the Reliable Ensemble Averaging (REA) method, which yields a weight for each model. This was done for a historical period (1961-2000) and for future emissions based on low (RCP4.5) and high (RCP8.5) radiative forcing for the period 2075-2099. The annual cycles of the simple ensemble of the historical GCM simulations, the historical REA average, and the Climatic Research Unit (CRU TS3.1) database were compared in four zones of Mexico. For precipitation, the REA method brings clear improvements, especially in the two northern zones of Mexico, where the REA average is closer to the CRU observations than the simple average. In the southern zones there is an improvement, but it is not as large as in the north; in the southeast in particular, the REA average reproduces the annual cycle only qualitatively and greatly underestimates the mid-summer drought. The main reason is that all the models underestimate precipitation there, and in some models the mid-summer drought does not exist at all. In the REA average of the future scenarios, as expected, the most drastic decrease in precipitation was simulated under RCP8.5, especially in the monsoon area and in southern Mexico in summer and in winter. In central and southern Mexico, however, the same scenario simulates an increase of precipitation in autumn.
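As a rough illustration of how a REA-style weighted ensemble differs from the simple mean, the sketch below weights each model by a reliability factor that decays with its historical bias relative to observations. This is a simplified stand-in for the full REA criteria (which also include a model-convergence term), and all data are made up.

```python
import numpy as np

def rea_like_weights(model_hist, obs, eps):
    """Reliability factor R_i = min(1, eps / |bias_i|): models whose historical
    bias is below the natural-variability scale eps get full weight."""
    bias = np.abs(model_hist.mean(axis=1) - obs.mean())
    return np.minimum(1.0, eps / np.maximum(bias, 1e-12))

rng = np.random.default_rng(2)
obs = rng.normal(100.0, 10.0, 40)                  # observed series (toy)
model_hist = obs + rng.normal(0, 5, (15, 40)) \
           + rng.normal(0, 20, 15)[:, None]        # 15 GCMs with distinct biases

w = rea_like_weights(model_hist, obs, eps=5.0)
simple = model_hist.mean(axis=0)                   # equally weighted ensemble
rea = (w[:, None] * model_hist).sum(axis=0) / w.sum()
print(np.abs(simple - obs).mean(), np.abs(rea - obs).mean())  # REA tracks obs closer
```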
  151. Quantifying nonergodicity in nonautonomous dissipative dynamical systems: An application to climate change

    NASA Astrophysics Data System (ADS)

    Drótos, Gábor; Bódai, Tamás; Tél, Tamás

    2016-08-01

    In nonautonomous dynamical systems, like in climate dynamics, an ensemble of trajectories initiated in the remote past defines a unique probability distribution, the natural measure of a snapshot attractor, for any instant of time, but this distribution typically changes in time. In cases with an aperiodic driving, temporal averages taken along a single trajectory would differ from the corresponding ensemble averages even in the infinite-time limit: ergodicity does not hold. It is worth considering this difference, which we call the nonergodic mismatch, by taking time windows of finite length for temporal averaging. We point out that the probability distribution of the nonergodic mismatch is qualitatively different in ergodic and nonergodic cases: its average is zero and typically nonzero, respectively. A main conclusion is that the difference of the average from zero, which we call the bias, is a useful measure of nonergodicity, for any window length. In contrast, the standard deviation of the nonergodic mismatch, which characterizes the spread between different realizations, exhibits a power-law decrease with increasing window length in both ergodic and nonergodic cases, and this implies that temporal and ensemble averages differ in dynamical systems with finite window lengths. It is the average modulus of the nonergodic mismatch, which we call the ergodicity deficit, that represents the expected deviation from fulfilling the equality of temporal and ensemble averages. As an important finding, we demonstrate that the ergodicity deficit cannot be reduced arbitrarily in nonergodic systems. We illustrate via a conceptual climate model that the nonergodic framework may be useful in Earth system dynamics, within which we propose the measure of nonergodicity, i.e., the bias, as an order-parameter-like quantifier of climate change.
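The quantities defined in this abstract (nonergodic mismatch, bias, ergodicity deficit) are straightforward to compute for any ensemble of trajectories; the sketch below is one simple discretization of the definitions, using toy processes of my own choosing rather than the paper's climate model.

```python
import numpy as np

rng = np.random.default_rng(3)

def mismatch_stats(traj, window):
    """Nonergodic mismatch: finite-window time average of each realization
    minus the ensemble average over the same window. Returns (bias, deficit)."""
    time_avg = traj[:, :window].mean(axis=1)        # one value per realization
    ens_avg = traj[:, :window].mean(axis=0).mean()  # ensemble average
    mismatch = time_avg - ens_avg
    return mismatch.mean(), np.abs(mismatch).mean() # bias, ergodicity deficit

n_real, n_t = 500, 4000
noise = rng.normal(0, 1, (n_real, n_t))
ergodic = noise                                     # stationary, ergodic
nonergodic = noise + rng.normal(0, 1, (n_real, 1))  # frozen offset per realization

for w in (100, 1000, 4000):
    print(w, mismatch_stats(ergodic, w), mismatch_stats(nonergodic, w))
```

In this centered toy the bias stays near zero, but the deficit refuses to shrink as the window grows in the nonergodic case, illustrating the paper's point that the deficit cannot be reduced arbitrarily; in a genuinely driven system the time average converges to a value different from the ensemble average, and the bias itself becomes the signature of nonergodicity.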
  152. Ensemble representations: effects of set size and item heterogeneity on average size perception

    PubMed

    Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W

    2013-02-01

    Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases. However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean set size become less accurate when set size increases and the heterogeneity of the item sizes increases. This pattern can be explained by assuming that average size judgments are computed using a limited-capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display. Copyright © 2012 Elsevier B.V. All rights reserved.

  153. Identifying the optimal segmentors for mass classification in mammograms

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Tomuro, Noriko; Furst, Jacob; Raicu, Daniela S.

    2015-03-01

    In this paper, we present the results of our investigation on identifying the optimal segmentor(s) from an ensemble of weak segmentors, used in a Computer-Aided Diagnosis (CADx) system which classifies suspicious masses in mammograms as benign or malignant. This is an extension of our previous work, where we applied various parameter settings of image enhancement techniques to each suspicious mass (region of interest (ROI)) to obtain several enhanced images, then applied segmentation to each image to obtain several contours of a given mass. Each segmentation in this ensemble is essentially a "weak segmentor", because no single segmentation can produce the optimal result for all images. Then, after shape features are computed from the segmented contours, the final classification model is built using logistic regression. The work in this paper focuses on identifying the optimal segmentor(s) from the ensemble mix of weak segmentors. For our purpose, the optimal segmentors are those in the ensemble mix which contribute the most to the overall classification, rather than the ones that produce high-precision segmentation. To measure the segmentors' contribution, we examined the weights on the features in the derived logistic regression model and computed the average feature weight for each segmentor. The results showed that, while in general the segmentors with higher segmentation success rates had higher feature weights, some segmentors with lower segmentation rates had high classification feature weights as well.
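The contribution measure described here, the average magnitude of logistic-regression weights over the features contributed by each segmentor, can be sketched in a few lines. Feature counts, group sizes, and data are invented; standardizing the features first makes the coefficient magnitudes comparable across groups.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)

n_segmentors, n_feats = 8, 6          # 8 weak segmentors, 6 shape features each
X = rng.normal(size=(300, n_segmentors * n_feats))
y = rng.integers(0, 2, 300)           # benign / malignant labels (toy)

model = LogisticRegression(max_iter=1000).fit(StandardScaler().fit_transform(X), y)
weights = np.abs(model.coef_.ravel()).reshape(n_segmentors, n_feats)
contribution = weights.mean(axis=1)   # average |weight| per segmentor
ranking = np.argsort(contribution)[::-1]
print("segmentors ranked by classification contribution:", ranking)
```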
  154. Evaluation of TIGGE Ensemble Forecasts of Precipitation in Distinct Climate Regions in Iran

    NASA Astrophysics Data System (ADS)

    Aminyavari, Saleh; Saghafian, Bahram; Delavar, Majid

    2018-04-01

    The application of numerical weather prediction (NWP) products is increasing dramatically. Existing reports indicate that ensemble predictions have better skill than deterministic forecasts. In this study, numerical ensemble precipitation forecasts in the TIGGE database were evaluated using deterministic, dichotomous (yes/no), and probabilistic techniques over Iran for the period 2008-16. Thirteen rain gauges spread over eight homogeneous precipitation regimes were selected for evaluation. The Inverse Distance Weighting and Kriging methods were adopted for interpolation of the prediction values, downscaled to the stations at lead times of one to three days. To enhance the forecast quality, the NWP values were post-processed via Bayesian Model Averaging. The results showed that ECMWF had better scores than the other products. However, the products of all centers underestimated precipitation in high-precipitation regions while overestimating precipitation in other regions. This points to a systematic bias in the forecasts and demands the application of bias correction techniques. Based on the dichotomous evaluation, NCEP did better at most stations, although all centers overpredicted the number of precipitation events. Compared to those of ECMWF and NCEP, UKMO yielded higher scores in mountainous regions, but performed poorly at the other selected stations. Furthermore, the evaluations showed that all centers had better skill in wet than in dry seasons. The quality of the post-processed predictions was better than that of the raw predictions. In conclusion, the accuracy of the NWP predictions made by the selected centers can be classified as medium over Iran, and post-processing of the predictions is recommended to improve their quality.
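Bayesian Model Averaging in the spirit of Raftery et al. (2005) turns an ensemble into a predictive mixture, with member weights and a common spread estimated from training pairs by EM. The sketch below implements a bare-bones Gaussian version; precipitation normally calls for a mixed discrete-continuous variant (e.g., a gamma model with a point mass at zero), so treat this as the structure only, with synthetic data.

```python
import numpy as np
from scipy.stats import norm

def bma_em(F, y, n_iter=200):
    """F: (n_cases, n_members) bias-corrected forecasts; y: (n_cases,) obs.
    Returns BMA member weights w and a common Gaussian spread sigma."""
    n, k = F.shape
    w, sigma = np.full(k, 1.0 / k), y.std()
    for _ in range(n_iter):
        dens = w * norm.pdf(y[:, None], loc=F, scale=sigma)   # E-step
        r = dens / dens.sum(axis=1, keepdims=True)            # responsibilities
        w = r.mean(axis=0)                                    # M-step: weights
        sigma = np.sqrt((r * (y[:, None] - F) ** 2).sum() / n)
    return w, sigma

rng = np.random.default_rng(5)
y = rng.gamma(2.0, 5.0, 400)
F = y[:, None] + rng.normal(0, [2.0, 4.0, 8.0], (400, 3))  # 3 members, varied skill
w, sigma = bma_em(F, y)
print(w, sigma)  # the sharper members earn the larger weights
```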
  155. Assessing an ensemble docking-based virtual screening strategy for kinase targets by considering protein flexibility

    PubMed

    Tian, Sheng; Sun, Huiyong; Pan, Peichen; Li, Dan; Zhen, Xuechu; Li, Youyong; Hou, Tingjun

    2014-10-27

    In this study, to accommodate receptor flexibility, a novel ensemble docking protocol based on multiple receptor conformations was developed using the naïve Bayesian classification technique, and it was evaluated in terms of the prediction accuracy of docking-based virtual screening (VS) for three important targets in the kinase family: ALK, CDK2, and VEGFR2. First, for each target, representative crystal structures were selected by structural clustering, and the capability of molecular docking based on each representative structure to discriminate inhibitors from non-inhibitors was examined. Then, for each target, 50 ns molecular dynamics (MD) simulations were carried out to generate an ensemble of conformations, and multiple representative structures/snapshots were extracted from each MD trajectory by structural clustering. On average, the representative crystal structures outperform the representative structures extracted from the MD simulations in terms of their capability to separate inhibitors from non-inhibitors. Finally, by using the naïve Bayesian classification technique, an integrated VS strategy was developed to combine the prediction results of molecular docking based on the different representative conformations chosen from crystal structures and MD trajectories. It was encouraging to observe that the integrated VS strategy yields better performance than docking-based VS based on any single rigid conformation. This novel protocol may provide an improvement over existing strategies in the search for more diverse and promising active compounds for a target of interest.
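The integration step, combining docking results from several representative conformations with a naïve Bayesian classifier, can be sketched as follows. Here each compound is represented by its vector of docking scores against the chosen structures, and a Gaussian naïve Bayes model is trained to separate inhibitors from non-inhibitors; the data and the choice of Gaussian likelihoods are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

n_actives, n_decoys, n_structs = 200, 800, 6
# Toy docking scores (kcal/mol): actives score somewhat better (more negative)
scores = np.vstack([rng.normal(-9.0, 1.5, (n_actives, n_structs)),
                    rng.normal(-7.0, 1.5, (n_decoys, n_structs))])
labels = np.array([1] * n_actives + [0] * n_decoys)

X_tr, X_te, y_tr, y_te = train_test_split(scores, labels, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
# Rank test compounds by the predicted probability of being an inhibitor
vs_ranking = np.argsort(clf.predict_proba(X_te)[:, 1])[::-1]
```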
  156. Characteristics of ion flow in the quiet state of the inner plasma sheet

    NASA Technical Reports Server (NTRS)

    Angelopoulos, V.; Kennel, C. F.; Coroniti, F. V.; Pellat, R.; Spence, H. E.; Kivelson, M. G.; Walker, R. J.; Baumjohann, W.; Feldman, W. C.; Gosling, J. T.

    1993-01-01

    We use AMPTE/IRM and ISEE 2 data to study the properties of the high-beta plasma sheet, the inner plasma sheet (IPS). Bursty bulk flows (BBFs) are excised from the two databases, and the average flow pattern in the non-BBF (quiet) IPS is constructed. At local midnight this ensemble-average flow is predominantly duskward; closer to the flanks it is mostly earthward. The flow pattern agrees qualitatively with calculations based on the Tsyganenko (1987) model (T87), where the earthward flow is due to the ensemble-average cross-tail electric field and the duskward flow is the diamagnetic drift due to an inward pressure gradient. The IPS is on average in pressure equilibrium with the lobes. Because of its large variance, the average flow does not represent the instantaneous flow field. Case studies also show that the non-BBF flow is highly irregular and inherently unsteady, a reason why earthward convection can avoid a pressure balance inconsistency with the lobes. The ensemble distribution of velocities is a fundamental observable of the quiet plasma sheet flow field.

  157. Calculating phase equilibrium properties of plasma pseudopotential model using hybrid Gibbs statistical ensemble Monte-Carlo technique

    NASA Astrophysics Data System (ADS)

    Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.

    2015-11-01

    Earlier, a two-component pseudopotential plasma model, which we called a "shelf Coulomb" model, was developed. A Monte Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte Carlo technique to this model. First simulation results show qualitatively similar behavior in the critical point region for both methods. The Gibbs ensemble technique lets us estimate the position of the melting curve and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10^-4.

  158. Application of Generalized Feynman-Hellmann Theorem in Quantization of LC Circuit in Thermo Bath

    NASA Astrophysics Data System (ADS)

    Fan, Hong-Yi; Tang, Xu-Bing

    For the quantized LC electric circuit, when taking the Joule thermal effect into account, we think that physical observables should be evaluated in the context of ensemble averages. We then use the generalized Feynman-Hellmann theorem for ensemble averages to calculate them, which proves convenient. The fluctuations of observables in various LC electric circuits in the presence of a thermal bath are shown to grow with temperature.
  159. Calculating ensemble averaged descriptions of protein rigidity without sampling

    PubMed

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible numbers of distance constraints (or bars) that can form between a pair of rigid bodies are replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  160. Single Molecule Approaches in RNA-Protein Interactions

    PubMed

    Serebrov, Victor; Moore, Melissa J

    RNA-protein interactions govern every aspect of RNA metabolism, and aberrant RNA-binding proteins are the cause of hundreds of genetic diseases. Quantitative measurements of these interactions are necessary in order to understand the mechanisms leading to diseases and to develop efficient therapies. Existing methods of RNA-protein interactome capture can afford a comprehensive snapshot of RNA-protein interaction networks but lack the ability to characterize the dynamics of these interactions. As with all ensemble methods, their resolution is also limited by statistical averaging. Here we discuss recent advances in single molecule techniques that have the potential to tackle these challenges. We also provide a thorough overview of single molecule colocalization microscopy and the essential protein and RNA tagging and detection techniques.

  161. Photon-number discrimination without a photon counter and its application to reconstructing non-Gaussian states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chrzanowski, H. M.; Bernu, J.; Sparkes, B. M.

    2011-11-15

    The nonlinearity of a conditional photon-counting measurement can be used to "de-Gaussify" a Gaussian state of light. Here we present and experimentally demonstrate a technique for photon-number resolution using only homodyne detection. We then apply this technique to inform a conditional measurement, unambiguously reconstructing the statistics of the non-Gaussian one- and two-photon-subtracted squeezed vacuum states. Although our photon-number measurement relies on ensemble averages and cannot be used to prepare non-Gaussian states of light, its high efficiency, photon-number-resolving capabilities, and compatibility with the telecommunications band make it suitable for quantum-information tasks relying on the outcomes of mean values.

  162. Plasticity of the Binding Site of Renin: Optimized Selection of Protein Structures for Ensemble Docking

    PubMed

    Strecker, Claas; Meyer, Bernd

    2018-05-29

    Protein flexibility poses a major challenge to docking of potential ligands in that the binding site can adopt different shapes. Docking algorithms usually keep the protein rigid and only allow the ligand to be treated as flexible. However, a wrong assessment of the shape of the binding pocket can prevent a ligand from adopting a correct pose. Ensemble docking is a simple yet promising method to solve this problem: ligands are docked into multiple structures, and the results are subsequently merged. Selection of protein structures is a significant factor for this approach. In this work we perform a comprehensive and comparative study evaluating the impact of structure selection on ensemble docking. We perform ensemble docking with several crystal structures and with structures derived from molecular dynamics simulations of renin, an attractive target for antihypertensive drugs. Here, 500 ns of MD simulations revealed binding site shapes not found in any available crystal structure. We evaluate the importance of structure selection for ensemble docking by comparing binding pose prediction, the ability to rank actives above nonactives (screening utility), and scoring accuracy. As a result, for ensemble definition k-means clustering appears to be better suited than hierarchical clustering with average linkage. The best performing ensemble consists of four crystal structures and is able to reproduce the native ligand poses better than any individual crystal structure. Moreover, this ensemble outperforms 88% of all individual crystal structures in terms of screening utility as well as scoring accuracy. Similarly, ensembles of MD-derived structures perform on average better than 75% of the individual crystal structures in terms of scoring accuracy at all inspected ensemble sizes.
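Selecting an ensemble by k-means, as favored here over average-linkage hierarchical clustering, reduces to clustering some structural representation of the MD frames and keeping the frame nearest each centroid. The sketch below assumes the frames are already aligned and featurized as flattened binding-site coordinates; the feature choice and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
frames = rng.normal(size=(2500, 3 * 40))  # 2500 MD frames x 40 binding-site atoms (toy)

k = 4                                     # ensemble size, cf. the 4-structure ensemble
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(frames)

# Representative = the actual frame closest to each cluster centroid ("medoid")
dists = np.linalg.norm(frames[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
representatives = dists.argmin(axis=0)
print("dock against frames:", representatives)
```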
  163. Towards an improved ensemble precipitation forecast: A probabilistic post-processing approach

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Moradkhani, Hamid

    2017-03-01

    Recently, ensemble post-processing (EPP) has become a commonly used approach for reducing the uncertainty in forcing data and hence in hydrologic simulation. The procedure was introduced to build ensemble precipitation forecasts based on the statistical relationship between observations and forecasts. More specifically, the approach relies on a transfer function that is developed based on a bivariate joint distribution between the observations and the simulations in the historical period. The transfer function is used to post-process the forecast. In this study, we propose a Bayesian EPP approach based on copula functions (COP-EPP) to improve the reliability of the precipitation ensemble forecast. Evaluation of the copula-based method is carried out by comparing the performance of the generated ensemble precipitation with the outputs from an existing procedure, i.e. the mixed-type meta-Gaussian distribution. Monthly precipitation from the Climate Forecast System Reanalysis (CFS) and gridded observations from the Parameter-Elevation Regressions on Independent Slopes Model (PRISM) have been employed to generate the post-processed ensemble precipitation. Deterministic and probabilistic verification frameworks are utilized to evaluate the outputs from the proposed technique. Distributions of seasonal precipitation for the ensembles generated with the copula-based technique are compared to the observations and raw forecasts for three sub-basins located in the Western United States. Results show that both techniques are successful in producing reliable and unbiased ensemble forecasts; however, COP-EPP demonstrates considerable improvement in the ensemble forecast in both deterministic and probabilistic verification, in particular in characterizing extreme events in wet seasons.

  164. Toward canonical ensemble distribution from self-guided Langevin dynamics simulation

    NASA Astrophysics Data System (ADS)

    Wu, Xiongwu; Brooks, Bernard R.

    2011-04-01

    This work derives a quantitative description of the conformational distribution in self-guided Langevin dynamics (SGLD) simulations. SGLD simulations employ guiding forces calculated from local average momenta to enhance low-frequency motion. This enhancement in low-frequency motion dramatically accelerates conformational search efficiency, but also induces certain perturbations in the conformational distribution. Through the local averaging, we separate properties of molecular systems into low-frequency and high-frequency portions. The guiding-force effect on the conformational distribution is quantitatively described using these low-frequency and high-frequency properties. This quantitative relation provides a way to convert between a canonical ensemble and a self-guided ensemble. Using example systems, we demonstrate how to utilize the relation to obtain canonical ensemble properties and conformational distributions from SGLD simulations. This development makes SGLD not only an efficient approach for conformational searching, but also an accurate means for conformational sampling.

  165. Physiological correlates of mental workload

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.

    1980-01-01

    A literature review was conducted to assess the basis of and techniques for physiological assessment of mental workload. The study findings reviewed had shortcomings involving one or more of the following basic problems: (1) physiologic arousal can easily be driven by nonworkload factors, confounding any proposed metric; (2) the profound absence of underlying physiologic models has promulgated a multiplicity of seemingly arbitrary signal processing techniques; (3) the unspecified multidimensional nature of physiological "state" has given rise to a broad spectrum of competing noncommensurate metrics; and (4) the lack of an adequate definition of workload compels physiologic correlations to suffer either from the vagueness of implicit workload measures or from the variance of explicit subjective assessments. Using specific studies as examples, two basic signal-processing/data-reduction techniques in current use, time averaging and ensemble averaging, are discussed.
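For signals like evoked physiological responses, the two techniques differ in what they average over: time averaging smooths a single record along time, while ensemble averaging aligns repeated epochs on a stimulus marker and averages across repetitions, suppressing uncorrelated noise without blurring the response. A minimal numpy illustration, with an invented waveform and noise level:

```python
import numpy as np

rng = np.random.default_rng(8)

t = np.linspace(0, 1, 500)
response = np.exp(-((t - 0.3) ** 2) / 0.005)        # "evoked" waveform (toy)
epochs = response + rng.normal(0, 1.0, (200, 500))  # 200 stimulus-aligned trials

ensemble_avg = epochs.mean(axis=0)                  # across trials: keeps the shape
window = 50
time_avg = np.convolve(epochs[0], np.ones(window) / window, mode="same")

# Time-averaging one noisy trial also blurs the response itself:
print(np.corrcoef(ensemble_avg, response)[0, 1],
      np.corrcoef(time_avg, response)[0, 1])
```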
  166. Load Balancing Using Time Series Analysis for Soft Real Time Systems with Statistically Periodic Loads

    NASA Technical Reports Server (NTRS)

    Hailperin, M.

    1993-01-01

    This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. The thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the author's techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The author's method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.

  167. New technique for ensemble dressing combining Multimodel SuperEnsemble and precipitation PDF

    NASA Astrophysics Data System (ADS)

    Cane, D.; Milelli, M.

    2009-09-01

    The Multimodel SuperEnsemble technique (Krishnamurti et al., Science 285, 1548-1550, 1999) is a post-processing method for the estimation of weather forecast parameters that reduces direct model output errors. It differs from other ensemble analysis techniques by the use of an adequate weighting of the input forecast models to obtain a combined estimation of meteorological parameters. Weights are calculated by least-square minimization of the difference between the model and the observed field during a so-called training period. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, and mean sea level pressure (Cane and Milelli, Meteorologische Zeitschrift, 15, 2, 2006), the Multimodel SuperEnsemble also gives good results when applied to precipitation, a parameter that is quite difficult to handle with standard post-processing methods. Here we present our methodology for Multimodel precipitation forecasts, applied to a wide spectrum of results over the very dense non-GTS weather station network of Piemonte. We focus particularly on an accurate statistical method for bias correction and on ensemble dressing in agreement with the observed forecast-conditioned precipitation PDF. Acknowledgement: this work is supported by the Italian Civil Defence Department.
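The training step described here, least-squares weights minimizing the difference between the weighted model combination and the observed field over a training period, is a small linear-algebra exercise. Following the usual SuperEnsemble formulation, the sketch below works in anomalies about the training-period means; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)

n_train, n_models = 300, 5
obs = rng.normal(15.0, 3.0, n_train)
skill = np.array([0.5, 1.0, 2.0, 3.0, 5.0])            # per-model error scales (toy)
models = obs[None, :] + rng.normal(0.0, skill[:, None], (n_models, n_train))

# Training: regress observed anomalies on model anomalies (least squares)
A = (models - models.mean(axis=1, keepdims=True)).T    # (n_train, n_models)
b = obs - obs.mean()
weights, *_ = np.linalg.lstsq(A, b, rcond=None)

superensemble = obs.mean() + A @ weights               # in-sample reconstruction
print(weights)  # the accurate models should receive the larger weights
```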
  168. Time-course, negative-stain electron microscopy-based analysis for investigating protein-protein interactions at the single-molecule level

    PubMed

    Nogal, Bartek; Bowman, Charles A; Ward, Andrew B

    2017-11-24

    Several biophysical approaches are available to study protein-protein interactions. Most approaches are conducted in bulk solution and are therefore limited to an average measurement of the ensemble of molecular interactions. Here, we show how single-particle EM can enrich our understanding of protein-protein interactions at the single-molecule level and potentially capture states that are unobservable with ensemble methods because they are below the limit of detection or not conducted on an appropriate time scale. Using the HIV-1 envelope glycoprotein (Env) and its interaction with receptor CD4-binding site neutralizing antibodies as a model system, we both corroborate ensemble kinetics-derived parameters and demonstrate how time-course EM can further dissect stoichiometric states of complexes that are not readily observable with other methods. Visualization of the kinetics and stoichiometry of Env-antibody complexes demonstrated the applicability of our approach to qualitatively and semi-quantitatively differentiate two highly similar neutralizing antibodies. Furthermore, implementation of machine-learning techniques for sorting class averages of these complexes into discrete subclasses of particles helped reduce human bias. Our data provide proof of concept that single-particle EM can be used to generate a "visual" kinetic profile that should be amenable to studying many other protein-protein interactions, is relatively simple, and is complementary to well-established biophysical approaches. Moreover, our method provides critical insights into broadly neutralizing antibody recognition of Env, which may inform vaccine immunogen design and immunotherapeutic development. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
  169. On the v-representability of ensemble densities of electron systems

    NASA Astrophysics Data System (ADS)

    Gonis, A.; Däne, M.

    2018-05-01

    Analogously to the case at zero temperature, where the density of the ground state of an interacting many-particle system determines uniquely (within an arbitrary additive constant) the external potential acting on the system, the thermal average of the density over an ensemble defined by the Boltzmann distribution at the minimum of the thermodynamic potential, or the free energy, determines uniquely (and not just modulo a constant) the external potential acting on a system described by this thermodynamic potential or free energy. The paper describes a formal procedure that generates the domain of a constrained search over general ensembles (at zero or elevated temperatures) that lead to a given density, including as a special case a density thermally averaged at a given temperature, and in the case of a v-representable density determines the external potential leading to the ensemble density. As an immediate consequence of the general formalism, the concept of v-representability is extended beyond the hitherto discussed case of ground state densities to encompass excited states as well. Specific application to thermally averaged densities solves the v-representability problem in connection with the Mermin functional in a manner analogous to that in which this problem was recently settled with respect to the Hohenberg and Kohn functional. The main formalism is illustrated with numerical results for ensembles of one-dimensional, non-interacting systems of particles under a harmonic potential.

  170. On the v-representability of ensemble densities of electron systems

    DOE PAGES

    Gonis, A.; Dane, M.

    2017-12-30

    Analogously to the case at zero temperature, where the density of the ground state of an interacting many-particle system determines uniquely (within an arbitrary additive constant) the external potential acting on the system, the thermal average of the density over an ensemble defined by the Boltzmann distribution at the minimum of the thermodynamic potential, or the free energy, determines uniquely (and not just modulo a constant) the external potential acting on a system described by this thermodynamic potential or free energy. The study describes a formal procedure that generates the domain of a constrained search over general ensembles (at zero or elevated temperatures) that lead to a given density, including as a special case a density thermally averaged at a given temperature, and in the case of a v-representable density determines the external potential leading to the ensemble density. As an immediate consequence of the general formalism, the concept of v-representability is extended beyond the hitherto discussed case of ground state densities to encompass excited states as well. Specific application to thermally averaged densities solves the v-representability problem in connection with the Mermin functional in a manner analogous to that in which this problem was recently settled with respect to the Hohenberg and Kohn functional. Finally, the main formalism is illustrated with numerical results for ensembles of one-dimensional, non-interacting systems of particles under a harmonic potential.
  171. Post-processing method for wind speed ensemble forecast using wind speed and direction

    NASA Astrophysics Data System (ADS)

    Sofie Eide, Siri; Bjørnar Bremnes, John; Steinsland, Ingelin

    2017-04-01

    Statistical methods are widely applied to enhance the quality of both deterministic and ensemble NWP forecasts. In many situations, like wind speed forecasting, most of the predictive information is contained in one variable in the NWP models. However, in statistical calibration of deterministic forecasts it is often seen that including more variables can further improve forecast skill. For ensembles this is rarely taken advantage of, mainly because it is generally not straightforward to include multiple variables. In this study, it is demonstrated how multiple variables can be included in Bayesian model averaging (BMA) by using a flexible regression method for estimating the conditional means. The method is applied to wind speed forecasting at 204 Norwegian stations based on wind speed and direction forecasts from the ECMWF ensemble system. At about 85% of the sites, the ensemble forecasts were improved in terms of CRPS by adding wind direction as a predictor compared to only using wind speed. On average the improvements were about 5%, mainly for moderate to strong wind situations. For weak wind speeds, adding wind direction had a more or less neutral impact.

  172. Langevin equation with fluctuating diffusivity: A two-state model

    NASA Astrophysics Data System (ADS)

    Miyaguchi, Tomoshige; Akimoto, Takuma; Yamamoto, Eiji

    2016-07-01

    Recently, anomalous subdiffusion, aging, and scatter of the diffusion coefficient have been reported in many single-particle-tracking experiments, though the origins of these behaviors are still elusive. Here, as a model to describe such phenomena, we investigate a Langevin equation with diffusivity fluctuating between a fast and a slow state. Namely, the diffusivity follows a dichotomous stochastic process. We assume that the sojourn-time distributions of these two states are given by power laws. It is shown that, for a nonequilibrium ensemble, the ensemble-averaged mean-square displacement (MSD) shows transient subdiffusion. In contrast, the time-averaged MSD shows normal diffusion, but an effective diffusion coefficient transiently shows aging behavior. The propagator is non-Gaussian at short times and converges to a Gaussian distribution in the long-time limit; this convergence to Gaussian is extremely slow for some parameter values. For equilibrium ensembles, both the ensemble-averaged and time-averaged MSDs show only normal diffusion, and thus we cannot detect any traces of the fluctuating diffusivity with these MSDs. Therefore, as an alternative approach to characterizing the fluctuating diffusivity, the relative standard deviation (RSD) of the time-averaged MSD is utilized, and it is shown that the RSD exhibits slow relaxation as a signature of the long-time correlation in the fluctuating diffusivity. Furthermore, it is shown that the RSD is related to a non-Gaussian parameter of the propagator. To obtain these theoretical results, we develop a two-state renewal theory as an analytical tool.
  173. Cloudy Windows: What GCM Ensembles, Reanalyses and Observations Tell Us About Uncertainty in Greenland's Future Climate and Surface Melting

    NASA Astrophysics Data System (ADS)

    Reusch, D. B.

    2016-12-01

    Any analysis that uses a GCM-based scenario of future climate benefits from knowing how much uncertainty the GCM's inherent variability adds to the development of climate change predictions. This is especially relevant in the polar regions, due to the potential for global impacts (e.g., sea level rise) from local (ice sheet) climate changes such as more frequent/intense surface melting. High-resolution, regional-scale models using GCMs for boundary/initial conditions in future scenarios inherit a measure of GCM-derived, externally driven uncertainty. We investigate these uncertainties for the Greenland ice sheet using the 30-member CESM1.0-CAM5-BGC Large Ensemble (CESMLE) for recent (1981-2000) and future (2081-2100, RCP 8.5) decades. Recent simulations are skill-tested against the ERA-Interim reanalysis and AWS observations, with the results informing future scenarios. We focus on key variables influencing surface melting through decadal climatologies, nonlinear analysis of variability with self-organizing maps (SOMs), regional-scale modeling (Polar WRF), and simple melt models. Relative to the ensemble average, spatially averaged climatological July temperature anomalies over a Greenland ice-sheet/ocean domain are mostly between +/- 0.2 °C. The spatial average hides larger local anomalies of up to +/- 2 °C. The ensemble average itself is 2 °C cooler than ERA-Interim. SOMs extend our diagnostics by providing a concise, objective summary of model variability as a set of generalized patterns. For CESMLE, the SOM patterns summarize the variability of multiple realizations of climate. Changes in pattern frequency by ensemble member show the influence of initial conditions. For example, basic statistical analysis of pattern frequency yields interquartile ranges of 2-4% for individual patterns across the ensemble. In climate terms, this tells us about climate state variability through the range of the ensemble, a potentially significant source of melt-prediction uncertainty. SOMs can also capture the different trajectories of climate due to intramodel variability over time. Polar WRF provides higher-resolution regional modeling with improved, polar-centric model physics. Simple melt models allow us to characterize the impacts of the upstream uncertainties on estimates of surface melting.

  174. Comparative assessment of several post-processing methods for correcting evapotranspiration forecasts derived from TIGGE datasets

    NASA Astrophysics Data System (ADS)

    Tian, D.; Medina, H.

    2017-12-01

    Post-processing of medium-range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential of improving the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database, using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005), and Bayesian Model Averaging (BMA, Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, simple bias correction of the best model is commonly much more rewarding than using multimodel raw forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
  175. Impact of ensemble learning in the assessment of skeletal maturity

    PubMed

    Cunha, Pedro; Moura, Daniel C; Guevara López, Miguel Angel; Guerra, Conceição; Pinto, Daniela; Ramos, Isabel

    2014-09-01

    The assessment of bone age, or skeletal maturity, is an important task in pediatrics that measures the degree of maturation of children's bones. Nowadays, there is no standard clinical procedure for assessing bone age, and the most widely used approaches are the Greulich and Pyle and the Tanner and Whitehouse methods. Computer methods have been proposed to automate the process; however, there is a lack of exploration of how to combine the features of the different parts of the hand, and of how to take advantage of ensemble techniques for this purpose. This paper presents a study in which the use of ensemble techniques for improving bone age assessment is evaluated. A new computer method was developed that extracts descriptors for each joint of each finger, which are then combined using different ensemble schemes for obtaining a final bone age value. Three popular ensemble schemes are explored in this study: bagging, stacking, and voting. The best results were achieved by bagging with a rule-based regression (M5P), scoring a mean absolute error of 10.16 months. Results show that ensemble techniques improve the prediction performance of most of the evaluated regression algorithms, always achieving best or comparable-to-best results. The success of the ensemble methods therefore allows us to conclude that their use may improve computer-based bone age assessment, offering a scalable option for utilizing multiple regions of interest and combining their output.
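The winning scheme, bagging around a rule-based regressor, is easy to reproduce structurally with scikit-learn; since M5P lives in the Weka ecosystem, a decision-tree regressor stands in for it here, and the per-joint descriptors are simulated rather than extracted from hand radiographs.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)

X = rng.normal(size=(600, 19 * 4))    # toy descriptors for the finger joints
bone_age = rng.uniform(12, 216, 600)  # months
X[:, 0] = bone_age / 216 + rng.normal(0, 0.05, 600)  # give one feature signal

# (use base_estimator= instead of estimator= on scikit-learn < 1.2)
model = BaggingRegressor(estimator=DecisionTreeRegressor(), n_estimators=50,
                         random_state=0)
mae = -cross_val_score(model, X, bone_age,
                       scoring="neg_mean_absolute_error", cv=5).mean()
print(f"MAE: {mae:.1f} months")
```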
  176. Modality-Driven Classification and Visualization of Ensemble Variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers, as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that is paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
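Modality-driven classification can be approximated by fitting mixtures with increasing numbers of components to the ensemble's values at a grid location and letting an information criterion pick the modality; this is a generic stand-in for the paper's scheme, not its exact algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)

def modality(values, max_modes=3):
    """Classify an ensemble of predictions at one location by the number of
    Gaussian mixture components minimizing BIC (1 = unimodal, 2+ = multimodal)."""
    v = values.reshape(-1, 1)
    bics = [GaussianMixture(n_components=k, random_state=0).fit(v).bic(v)
            for k in range(1, max_modes + 1)]
    return int(np.argmin(bics)) + 1

unimodal = rng.normal(10.0, 1.0, 64)  # 64 ensemble members at one location
bimodal = np.concatenate([rng.normal(5, 0.5, 32), rng.normal(12, 0.5, 32)])
print(modality(unimodal), modality(bimodal))  # typically 1 and 2
```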
  176. Modality-Driven Classification and Visualization of Ensemble Variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers, as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that is paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.

  177. Training set extension for SVM ensemble in P300-speller with familiar face paradigm

    PubMed

    Li, Qi; Shi, Kaiyang; Gao, Ning; Li, Jian; Bai, Ou

    2018-03-27

    P300-spellers are brain-computer interface (BCI)-based character input systems. Support vector machine (SVM) ensembles are trained with large-scale training sets and used as classifiers in these systems. However, the required large-scale training data necessitate a prolonged collection time for each subject, which results in data collected toward the end of the period being contaminated by the subject's fatigue. This study aimed to develop a method for acquiring more training data based on a collected small training set. A new method was developed in which two corresponding training datasets in two sequences are superposed and averaged to extend the training set, as sketched below. The proposed method was tested offline on a P300-speller with the familiar face paradigm. The SVM ensemble with the extended training set achieved 85% classification accuracy for the averaged results of four sequences, and 100% for 11 sequences in the P300-speller. In contrast, the conventional SVM ensemble with the non-extended training set achieved only 65% accuracy for four sequences, and 92% for 11 sequences. The SVM ensemble with the extended training set thus achieves higher classification accuracies than the conventional SVM ensemble, which verifies that the proposed method effectively improves the classification performance of BCI P300-spellers, enhancing their practicality.
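    The extension step itself can be sketched directly: corresponding epochs from two stimulation sequences are superposed and averaged to synthesize additional training examples. The array shapes and names below are assumptions for illustration, not the authors' code.

        # Hedged sketch of training-set extension by superposition averaging.
        import numpy as np

        def extend_training_set(seq_a, seq_b):
            """seq_a, seq_b: (n_epochs, n_channels, n_samples) arrays with
            matching stimulus labels; returns originals plus averaged epochs."""
            averaged = 0.5 * (seq_a + seq_b)
            return np.concatenate([seq_a, seq_b, averaged], axis=0)

        rng = np.random.default_rng(2)
        a = rng.normal(size=(60, 14, 128))    # toy EEG epochs, sequence 1
        b = rng.normal(size=(60, 14, 128))    # toy EEG epochs, sequence 2
        print(extend_training_set(a, b).shape)   # (180, 14, 128)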
  178. Comparison of projection skills of deterministic ensemble methods using pseudo-simulation data generated from multivariate Gaussian distribution

    NASA Astrophysics Data System (ADS)

    Oh, Seok-Geun; Suh, Myoung-Seok

    2017-07-01

    The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equally weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best single member for each category. However, their projection skills were significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than the non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, WEA_Tay and WEA_RAC showed relatively superior skills in both accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
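    A hedged sketch of skill-weighted ensemble averaging in the spirit of WEA_RAC: member weights that grow with correlation against the reference and shrink with RMSE. The exact weighting rule below is an assumption made for illustration; the abstract does not spell out the formula.

        # Illustrative weighted ensemble average (assumed weighting rule).
        import numpy as np

        def wea_rac(members, obs):
            """members: (n_members, n_times); obs: (n_times,)."""
            rmse = np.sqrt(np.mean((members - obs) ** 2, axis=1))
            corr = np.array([np.corrcoef(m, obs)[0, 1] for m in members])
            raw = np.clip(corr, 0, None) / rmse     # assumed: corr up, RMSE down
            w = raw / raw.sum()
            return w @ members

        rng = np.random.default_rng(3)
        truth = np.sin(np.linspace(0, 6, 100))
        ens = truth + rng.normal(0, [[0.2], [0.5], [1.0]], size=(3, 100))
        print(np.sqrt(np.mean((wea_rac(ens, truth) - truth) ** 2)))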
  179. Self-averaging and weak ergodicity breaking of diffusion in heterogeneous media

    NASA Astrophysics Data System (ADS)

    Russian, Anna; Dentz, Marco; Gouze, Philippe

    2017-08-01

    Diffusion in natural and engineered media is quantified in terms of stochastic models for the heterogeneity-induced fluctuations of particle motion. However, fundamental properties such as ergodicity and self-averaging, and their dependence on the disorder distribution, are often not known. Here, we investigate these questions for diffusion in quenched disordered media characterized by spatially varying retardation properties, which account for particle retention due to physical or chemical interactions with the medium. We link self-averaging and ergodicity to the disorder sampling efficiency Rn, which quantifies the number of disorder realizations a noise ensemble may sample in a single disorder realization. Diffusion for disorder scenarios characterized by a finite mean transition time is ergodic and self-averaging in any dimension. The strength of the sample-to-sample fluctuations decreases with increasing spatial dimension. For an infinite mean transition time, particle motion is weakly ergodicity breaking in any dimension, because single particles cannot sample the heterogeneity spectrum in finite time. However, even though the noise ensemble is not representative of the single-particle time statistics, subdiffusive motion in q ≥ 2 dimensions is self-averaging, which means that the noise ensemble in a single realization samples a representative part of the heterogeneity spectrum.

  180. Model Independence in Downscaled Climate Projections: a Case Study in the Southeast United States

    NASA Astrophysics Data System (ADS)

    Gray, G. M. E.; Boyles, R.

    2016-12-01

    Downscaled climate projections are used to deduce how the climate will change in future decades at local and regional scales. It is important to use multiple models to characterize part of the future uncertainty, given the impact on adaptation decision making. This is traditionally done through an equally weighted ensemble of multiple GCMs downscaled using one technique. Newer practices include several downscaling techniques in an effort to increase the ensemble's representation of future uncertainty. However, this practice may add statistically dependent models to the ensemble. Previous research has shown a dependence problem in the GCM ensemble across multiple generations, but this has not been shown for the downscaled ensemble. In this case study, seven downscaled climate projections on the daily time scale are considered: CLAREnCE10, SERAP, BCCA (CMIP5 and CMIP3 versions), Hostetler, CCR, and MACA-LIVNEH. These data represent 83 ensemble members, 44 GCMs, and two generations of GCMs. Baseline periods are compared against the University of Idaho's METDATA gridded observation dataset. Hierarchical agglomerative clustering is applied to the correlated errors to determine dependent clusters, as sketched below. Redundant GCMs across different downscaling techniques show the most dependence, while smaller dependence signals are detected within downscaling datasets and across generations of GCMs. These results indicate that using additional downscaled projections to increase the ensemble size must be done with care to avoid redundant GCMs, and that the process of downscaling may increase the dependence of the downscaled GCMs. The two climate model generations do not appear dissimilar enough to be treated as separate statistical populations for ensemble building at the local and regional scales.
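    The dependence analysis above can be sketched as follows: correlate the members' error series, turn correlation into a distance, and cut an average-linkage dendrogram into clusters. The data, member count, and distance threshold below are illustrative, not the study's settings.

        # Hedged sketch: clustering ensemble members by error correlation.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(4)
        errors = rng.normal(size=(6, 365))                    # 6 members' daily errors
        errors[1] = errors[0] + 0.1 * rng.normal(size=365)    # a redundant pair

        corr = np.corrcoef(errors)
        dist = 1.0 - corr                 # correlated errors -> small distance
        np.fill_diagonal(dist, 0.0)
        Z = linkage(squareform(dist, checks=False), method="average")
        print(fcluster(Z, t=0.5, criterion="distance"))  # cluster labels per member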
rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Divsalar, Dariush</p> <p>2006-01-01</p> <p>Recently LDPC codes with projected graph, or protograph structures have been proposed. In this paper, finite length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes which have minimum distance that grows linearly with block size. As with irregular ensembles, linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. In this paper the derived results on ensemble weight enumerators show that linear minimum distance condition on degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20099852','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20099852"><span>Car-Parrinello molecular dynamics study of the intramolecular vibrational mode-sensitive double proton-transfer mechanisms in porphycene.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Walewski, Łukasz; Waluk, Jacek; Lesyng, Bogdan</p> <p>2010-02-18</p> <p>Car-Parrinello molecular dynamics simulations were carried out to help interpret proton-transfer processes observed experimentally in porphycene under thermodynamic equilibrium conditions (NVT ensemble) as well as during selective, nonequilibrium vibrational excitations of the molecular scaffold (NVE ensemble). In the NVT ensemble, the population of the trans form in the gas phase at 300 K is 96.5%, and of the cis-1 form is 3.5%, in agreement with experimental data. Approximately 70% of the proton-transfer events are asynchronous double proton transfers. According to the high resolution simulation data they consist of two single transfer events that rapidly take place one after the other. The average time-period between the two consecutive jumps is 220 fs. The gas phase reaction rate estimate at 300 K is 3.6 ps, which is comparable to experimentally determined rates. The NVE ensemble nonequilibrium ab initio MD simulations, which correspond to selective vibrational excitations of the molecular scaffold generated with high resolution laser spectroscopy techniques, exhibit an enhancing property of the 182 cm(-1) vibrational mode and an inhibiting property of the 114 cm(-1) one. Both of them influence the proton-transfer rate, in qualitative agreement with experimental findings. Our ab initio simulations provide new predictions regarding the influence of double-mode vibrational excitations on proton-transfer processes. 
  183. Improving medium-range ensemble streamflow forecasts through statistical post-processing

    NASA Astrophysics Data System (ADS)

    Mendoza, Pablo; Wood, Andy; Clark, Elizabeth; Nijssen, Bart; Clark, Martyn; Ramos, Maria-Helena; Nowak, Kenneth; Arnold, Jeffrey

    2017-04-01

    Probabilistic hydrologic forecasts are a powerful source of information for decision-making in water resources operations. A common approach is the hydrologic model-based generation of streamflow forecast ensembles, which can be implemented to account for different sources of uncertainty, e.g., from initial hydrologic conditions (IHCs), weather forecasts, and hydrologic model structure and parameters. In practice, hydrologic ensemble forecasts typically have biases and spread errors stemming from errors in the aforementioned elements, resulting in a degradation of probabilistic properties. In this work, we compare several statistical post-processing techniques applied to medium-range ensemble streamflow forecasts obtained with the System for Hydromet Applications, Research and Prediction (SHARP). SHARP is a fully automated prediction system for the assessment and demonstration of short-term to seasonal streamflow forecasting applications, developed by the National Center for Atmospheric Research, the University of Washington, the U.S. Army Corps of Engineers, and the U.S. Bureau of Reclamation. The suite of post-processing techniques includes linear blending, quantile mapping, extended logistic regression, quantile regression, ensemble analogs, and the generalized linear model post-processor (GLMPP). We assess and compare these techniques using multi-year hindcasts in several river basins in the western US. This presentation discusses preliminary findings about the effectiveness of the techniques for improving probabilistic skill, reliability, discrimination, sharpness, and resolution.
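    Of the post-processors listed above, empirical quantile mapping is the most compact to illustrate: each forecast value is mapped through the forecast climatology's CDF onto the observed climatology. A minimal sketch with invented toy flows:

        # Hedged sketch of empirical quantile mapping (toy data).
        import numpy as np

        def quantile_map(train_fcst, train_obs, new_fcst):
            # non-exceedance probability of each new forecast under the
            # forecast climatology
            p = np.searchsorted(np.sort(train_fcst), new_fcst) / len(train_fcst)
            p = np.clip(p, 0.0, 1.0)
            # read the same probabilities off the observed climatology
            return np.quantile(train_obs, p)

        rng = np.random.default_rng(5)
        obs = rng.gamma(2.0, 50.0, size=1000)          # toy daily flows
        fcst = 1.3 * rng.gamma(2.0, 50.0, size=1000)   # biased model climate
        print(quantile_map(fcst, obs, np.array([80.0, 150.0, 400.0])).round(1))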
  184. Effect of land model ensemble versus coupled model ensemble on the simulation of precipitation climatology and variability

    NASA Astrophysics Data System (ADS)

    Wei, Jiangfeng; Dirmeyer, Paul A.; Yang, Zong-Liang; Chen, Haishan

    2017-10-01

    Through a series of model simulations with an atmospheric general circulation model coupled to three different land surface models, this study investigates the impacts of land model ensembles and a coupled model ensemble on precipitation simulation. It is found that coupling an ensemble of land models to an atmospheric model has only a minor impact on the simulated precipitation climatology and variability, but a simple ensemble average of the precipitation from three individually coupled land-atmosphere models produces better results, especially for precipitation variability. The generally weak influence of land processes on precipitation is likely the main reason that the land model ensembles do not improve the precipitation simulation. However, if there are large biases in the land surface model or the land surface dataset, correcting them could improve the simulated climate, especially for well-constrained regional climate simulations.

  185. Neutral Kaon Mixing from Lattice QCD

    NASA Astrophysics Data System (ADS)

    Bai, Ziyuan

    In this work, we report the lattice calculation of two important quantities which emerge from second-order K0-K̄0 mixing: DeltaMK and epsilonK. The RBC-UKQCD collaboration has performed the first calculation of DeltaMK with unphysical kinematics [1]. We now extend this calculation to near-physical and physical ensembles. In these physical or near-physical calculations, the two-pion energies are below the kaon threshold, and we have to examine the contribution of two-pion intermediate states to DeltaMK, as well as the enhanced finite-volume corrections arising from these intermediate states. We also report the first lattice calculation of the long-distance contribution to the indirect CP-violation parameter epsilonK. This calculation involves the treatment of a short-distance ultraviolet divergence that is absent in the calculation of DeltaMK, and we report our techniques for correcting this divergence on the lattice. In this calculation, we used unphysical quark masses on the same ensemble that we used in [1]. Therefore, rather than providing a physical result, this calculation demonstrates the technique for calculating epsilonK and provides an approximate understanding of the size of the long-distance contributions. Various new techniques are employed in this work, such as the use of All-Mode-Averaging (AMA), All-to-All (A2A) propagators, and the super-jackknife method for analyzing the data.

  186. Thermodynamics of hydrogen-helium mixtures at high pressure and finite temperature

    NASA Technical Reports Server (NTRS)

    Hubbard, W. B.

    1972-01-01

    A technique is reviewed for calculating thermodynamic quantities for mixtures of light elements at high pressure, in the metallic state. Ensemble averages are calculated with Monte Carlo techniques and periodic boundary conditions. Interparticle potentials are assumed to be coulombic, screened by the electrons in dielectric function theory. This method is quantitatively accurate for alloys at pressures above about 10 Mbar. An alloy of equal parts hydrogen and helium by mass appears to remain liquid and mixed for temperatures above about 3000 K, at pressures of about 15 Mbar. The additive volume law is satisfied to within about 10%, but the Grüneisen equation of state gives poor results. A calculation at 1300 K shows evidence of a hydrogen-helium phase separation.
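    The ensemble-averaging machinery described in the preceding abstract can be sketched generically: Metropolis Monte Carlo with periodic boundaries and a screened-Coulomb (Yukawa-type) pair potential standing in for the dielectric-function screening. Everything below is in reduced units with invented parameters; it is not Hubbard's actual model.

        # Generic Metropolis Monte Carlo sketch with a screened pair potential.
        import numpy as np

        rng = np.random.default_rng(6)
        N, L, beta, kappa, steps = 32, 6.0, 1.0, 1.0, 20000
        pos = rng.uniform(0, L, size=(N, 3))

        def pair_energy(r):
            return np.exp(-kappa * r) / r        # screened Coulomb, reduced units

        def energy_of(i, p):
            d = pos - p
            d -= L * np.round(d / L)             # minimum-image convention
            r = np.delete(np.linalg.norm(d, axis=1), i)   # exclude self
            return pair_energy(r).sum()

        E_samples = []
        for step in range(steps):
            i = rng.integers(N)
            trial = (pos[i] + rng.normal(0, 0.1, 3)) % L
            dE = energy_of(i, trial) - energy_of(i, pos[i])
            if dE < 0 or rng.random() < np.exp(-beta * dE):
                pos[i] = trial                   # Metropolis accept
            if step % 100 == 0:                  # sample the ensemble average
                E_samples.append(sum(energy_of(j, pos[j]) for j in range(N)) / 2)
        print("ensemble-average energy per particle:", np.mean(E_samples) / N)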
  187. Deductive multiscale simulation of macromolecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortoleva, Peter J.

    Illustrative embodiments of systems and methods for the deductive multiscale simulation of macromolecules are disclosed. In one illustrative embodiment, a deductive multiscale simulation method may include (i) constructing a set of order parameters that model one or more structural characteristics of a macromolecule, (ii) simulating an ensemble of atomistic configurations for the macromolecule using instantaneous values of the set of order parameters, (iii) simulating thermal-average forces and diffusivities for the ensemble of atomistic configurations, and (iv) evolving the set of order parameters via Langevin dynamics using the thermal-average forces and diffusivities.
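    Step (iv) above, in its simplest overdamped form, looks roughly as follows. The quadratic free-energy surface here is a stand-in for forces that would in practice be computed from the atomistic ensemble; all names and parameters are illustrative.

        # Hedged sketch: overdamped Langevin evolution of order parameters.
        import numpy as np

        rng = np.random.default_rng(7)
        kT, D, dt, n_steps = 1.0, 0.5, 1e-3, 5000
        phi = np.array([2.0, -1.0])              # toy order parameters

        def thermal_average_force(phi):
            return -phi                          # stand-in: F = -grad(phi**2 / 2)

        traj = np.empty((n_steps, phi.size))
        for n in range(n_steps):
            noise = rng.normal(size=phi.size)
            phi = phi + (D / kT) * thermal_average_force(phi) * dt \
                      + np.sqrt(2 * D * dt) * noise
            traj[n] = phi
        # for this potential the stationary variance should approach kT
        print("long-time variance (expect ~1.0):", traj[2000:].var(axis=0))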
  188. Coherent Spin Control at the Quantum Level in an Ensemble-Based Optical Memory

    PubMed

    Jobez, Pierre; Laplane, Cyril; Timoney, Nuala; Gisin, Nicolas; Ferrier, Alban; Goldner, Philippe; Afzelius, Mikael

    2015-06-12

    Long-lived quantum memories are essential components of the long-standing goal of remote distribution of entanglement in quantum networks. These can be realized by storing the quantum states of light as single-spin excitations in atomic ensembles. However, spin states are often subject to different dephasing processes that limit the storage time, which in principle could be overcome using spin-echo techniques. Theoretical studies suggest this to be challenging due to unavoidable spontaneous-emission noise in ensemble-based quantum memories. Here, we demonstrate spin-echo manipulation of a mean spin excitation of 1 in a large solid-state ensemble, generated through storage of a weak optical pulse. After a storage time of about 1 ms, we optically read out the spin excitation with a high signal-to-noise ratio. Our results pave the way for long-duration optical quantum storage using spin-echo techniques for any ensemble-based memory.

  189. Investigation of short-term effective radiative forcing of fire aerosols over North America using nudged hindcast ensembles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yawen; Zhang, Kai; Qian, Yun

    Aerosols from fire emissions can potentially have a large impact on clouds and radiation. However, fire aerosol sources are often intermittent, and their effect on weather and climate is difficult to quantify. Here we investigated the short-term effective radiative forcing of fire aerosols using the global aerosol-climate model Community Atmosphere Model version 5 (CAM5). Differently from previous studies, we used nudged hindcast ensembles to quantify the forcing uncertainty due to the chaotic response to small perturbations in the atmospheric state. Daily mean emissions from three fire inventories were used to account for the uncertainty in emission strength and injection heights. The simulated aerosol optical depth (AOD) and mass concentrations were evaluated against in situ measurements and reanalysis data. Overall, the results show the model has reasonably good predictive skill. Short (10-day) nudged ensemble simulations were then performed with and without fire emissions to estimate the effective radiative forcing. Results show fire aerosols have large effects on both liquid and ice clouds over the two selected regions in April 2009. Ensemble-mean results show a strong negative shortwave cloud radiative effect (SCRE) over almost the entirety of southern Mexico, with a 10-day regional mean value of -3.0 W m⁻². Over the central US, the SCRE is positive in the north and negative in the south, and the regional mean SCRE is small (-0.56 W m⁻²). For the 10-day average, we found a large ensemble spread of the regional mean shortwave cloud radiative effect over southern Mexico (15.6% of the corresponding ensemble mean) and the central US (64.3%), despite the regional mean AOD time series being almost indistinguishable during the 10-day period. Moreover, the ensemble spread is much larger when using daily averages instead of 10-day averages. In conclusion, this demonstrates the importance of using a large ensemble of simulations to estimate the short-term aerosol effective radiative forcing.
  190. Performance analysis of a Principal Component Analysis ensemble classifier for Emotiv headset P300 spellers

    PubMed

    Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M

    2014-01-01

    The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to that of traditional feature extraction and classification methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3%, significantly higher than that achieved using traditional methods. Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method.

  191. Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM) III: Scenario analysis

    USGS Publications Warehouse

    Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.

    2009-01-01

    An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and the 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was obvious. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method, based on a trimmed mean, resulted in a single, somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.
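    The deterministic trimmed-mean combination mentioned above is nearly a one-liner with SciPy; the member predictions below are invented for illustration.

        # Hedged sketch: trimmed-mean ensemble of scenario predictions.
        import numpy as np
        from scipy.stats import trim_mean

        # toy annual-discharge predictions (mm) from 10 hypothetical models
        members = np.array([412., 430., 388., 405., 402.,
                            397., 410., 520., 301., 415.])

        print("plain mean   :", members.mean().round(1))
        # drop the top and bottom 10% of members before averaging,
        # damping the influence of outlier models
        print("trimmed mean :", trim_mean(members, proportiontocut=0.1).round(1))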
  192. Correlating single-molecule and ensemble-average measurements of peptide adsorption onto different inorganic materials

    PubMed

    Kim, Seong-Oh; Jackman, Joshua A; Mochizuki, Masahito; Yoon, Bo Kyeong; Hayashi, Tomohiro; Cho, Nam-Joon

    2016-06-07

    The coating of solid-binding peptides (SBPs) on inorganic material surfaces holds significant potential for improved surface functionalization at nano-bio interfaces. In most related studies, the goal has been to engineer peptides with selective and high binding affinity for a target material. The role of the material substrate itself in modulating the adsorption behavior of a peptide molecule remains less explored, and there are few studies that compare the interaction of one peptide with different inorganic substrates. Herein, using a combination of two experimental techniques, we investigated the adsorption of a 16-amino-acid-long random-coil peptide on various inorganic substrates: gold, silicon oxide, titanium oxide, and aluminum oxide. Quartz crystal microbalance-dissipation (QCM-D) experiments were performed in order to measure the peptide binding affinity for inorganic solid supports at the ensemble-average level, and atomic force microscopy (AFM) experiments were conducted in order to determine the adhesion force of a single peptide molecule. A positive trend was observed between the total mass uptake of attached peptide and the single-molecule adhesion force on each substrate. Peptide affinity for gold was appreciably greater than for the oxide substrates. Collectively, the results obtained in this study offer insight into the ways in which inorganic materials can differentially influence and modulate the adhesion of SBPs.

  193. Effects of Periodic Unsteady Wake Flow and Pressure Gradient on Boundary Layer Transition Along the Concave Surface of a Curved Plate. Part 3

    NASA Technical Reports Server (NTRS)

    Schobeiri, M. T.; Radke, R. E.

    1996-01-01

    Boundary layer transition and development on a turbomachinery blade are subject to highly periodic unsteady turbulent flow, pressure gradients in the longitudinal as well as the lateral direction, and surface curvature. To study the effects of periodic unsteady wakes on the concave surface of a turbine blade, a curved plate was utilized. On the concave surface of this plate, detailed experimental investigations were carried out under zero and negative pressure gradients. The measurements were performed in an unsteady-flow research facility using a rotating cascade of rods positioned upstream of the curved plate. Boundary layer measurements using a hot-wire probe were analyzed by the ensemble-averaging technique. The results, presented in the temporal-spatial domain, display the transition and further development of the boundary layer, specifically the ensemble-averaged velocity and turbulence intensity. As the results show, the turbulent patches generated by the wakes have different leading- and trailing-edge velocities and merge with the boundary layer, resulting in a strong deformation and the generation of a high-turbulence-intensity core. After the turbulent patch has fully penetrated the boundary layer, pronounced becalmed regions form behind it, extending far beyond the point at which they would occur in the corresponding undisturbed steady boundary layer.
  194. DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.

    2008-12-01

    A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output of unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest, thus building up a combination of time- and ensemble-averaged sampling data. The particle data are regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase-space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated, and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results of the interaction of a Mach 4 shock with a square cavity and of the interaction of a Mach 12 shock with a wedge in a channel.
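    A deliberately simplified illustration of the averaging idea behind DREAM (not a DSMC simulation): the scatter of an instantaneous sample shrinks when a short backward time window is combined with an ensemble of re-runs, roughly as 1/sqrt(n_runs * n_window). All names and numbers are invented.

        # Toy time-plus-ensemble averaging of a noisy unsteady quantity.
        import numpy as np

        rng = np.random.default_rng(8)

        def true_signal(t):
            return 1.0 + 0.2 * np.sin(t)

        def noisy_run(t_out, n_window, sigma=0.3):
            """One re-run, sampled at n_window instants just before t_out."""
            t = t_out - 0.01 * np.arange(n_window)
            return true_signal(t) + rng.normal(0.0, sigma, n_window)

        single = noisy_run(5.0, 1)[0]                       # one raw sample
        combined = np.mean([noisy_run(5.0, 10) for _ in range(20)])
        print("single-sample error :", abs(single - true_signal(5.0)))
        print("time+ensemble error :", abs(combined - true_signal(5.0)))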
  195. Spatio-temporal behaviour of medium-range ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Kipling, Zak; Primo, Cristina; Charlton-Perez, Andrew

    2010-05-01

    Using the recently developed mean-variance of logarithms (MVL) diagram, together with the TIGGE archive of medium-range ensemble forecasts from nine different centres, we present an analysis of the spatio-temporal dynamics of their perturbations, and show how the differences between models and perturbation techniques can explain the shape of their characteristic MVL curves. We also consider the use of the MVL diagram to compare the growth of perturbations within the ensemble with the growth of the forecast error, showing that there is a much closer correspondence for some models than for others. We conclude by looking at how the MVL technique might assist in selecting models for inclusion in a multi-model ensemble, and suggest an experiment to test its potential in this context.

  196. Stochastic Forcing for High-Resolution Regional and Global Ocean and Atmosphere-Ocean Coupled Ensemble Forecast System

    NASA Astrophysics Data System (ADS)

    Rowley, C. D.; Hogan, P. J.; Martin, P.; Thoppil, P.; Wei, M.

    2017-12-01

    An extended-range ensemble forecast system is being developed in the US Navy Earth System Prediction Capability (ESPC), and a global ocean ensemble generation capability to represent uncertainty in the ocean initial conditions has been developed. At extended forecast times, the uncertainty due to model error overtakes the initial conditions as the primary source of forecast uncertainty. Recently, stochastic parameterization or stochastic forcing techniques have been applied to represent the model error in research and operational atmospheric, ocean, and coupled ensemble forecasts. A simple stochastic forcing technique has been developed for application to US Navy high-resolution regional and global ocean models, for use in ocean-only and coupled atmosphere-ocean-ice-wave ensemble forecast systems. Perturbation forcing is added to the tendency equations for the state variables, with the forcing defined by random 3- or 4-dimensional fields with horizontal, vertical, and temporal correlations specified to characterize different possible kinds of error. Here, we demonstrate the stochastic forcing in regional and global ensemble forecasts with varying perturbation amplitudes and length and time scales, and assess the change in ensemble skill as measured by a range of deterministic and probabilistic metrics.

  197. Efficient Strategies for Estimating the Spatial Coherence of Backscatter

    PubMed Central

    Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.

    2017-01-01

    The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved computational throughput and improved axial resolution. Downsampling the receive aperture was found to have a negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods, throughput was improved 105-fold in simulation with a downsample factor of 4, and 20-fold in vivo with a downsample factor of 2. PMID:27913342
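    A hedged sketch contrasting the two estimators named above: averaging per-kernel correlation coefficients versus a single ensemble correlation computed from pooled covariances across all kernels. The toy signals are invented; only the estimator algebra is the point.

        # Average-of-correlations vs. ensemble correlation coefficient.
        import numpy as np

        rng = np.random.default_rng(9)
        n_kernels, n = 200, 4                    # many short kernels
        a = rng.normal(size=(n_kernels, n))
        b = 0.7 * a + np.sqrt(1 - 0.7**2) * rng.normal(size=(n_kernels, n))

        # estimator 1: mean of per-kernel correlation coefficients
        avg_corr = np.mean([np.corrcoef(x, y)[0, 1] for x, y in zip(a, b)])

        # estimator 2: ensemble correlation from pooled (co)variances
        a0 = a - a.mean(axis=1, keepdims=True)
        b0 = b - b.mean(axis=1, keepdims=True)
        ens_corr = (a0 * b0).sum() / np.sqrt((a0**2).sum() * (b0**2).sum())

        print(f"average correlation : {avg_corr:.3f}")
        print(f"ensemble correlation: {ens_corr:.3f}")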
  198. Diffusion in random networks

    DOE PAGES

    Zhang, Duan Z.; Padrino, Juan C.

    2017-06-01

    The ensemble-averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of pockets connected by tortuous channels. Inside a channel, fluid transport is assumed to be governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pocket mass density. The so-called dual-porosity model is found to be equivalent to the leading-order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider one-dimensional mass diffusion in a semi-infinite domain. Because of the time required to establish the linear concentration profile inside a channel, for early times the similarity variable is xt^(-1/4) rather than xt^(-1/2) as in the traditional theory. We found this early-time similarity can be explained by random walk theory through the network.

  199. A simple plug-in bagging ensemble based on threshold-moving for classifying binary and multiclass imbalanced data

    PubMed

    Collell, Guillem; Prelec, Drazen; Patil, Kaustubh R

    2018-01-31

    Class imbalance presents a major hurdle in the application of classification methods. A commonly taken approach is to learn ensembles of classifiers using rebalanced data. Examples include bootstrap averaging (bagging) combined with either undersampling or oversampling of the minority-class examples. However, rebalancing methods entail asymmetric changes to the examples of different classes, which in turn can introduce their own biases. Furthermore, these methods often require specifying the performance measure of interest a priori, i.e., before learning. An alternative is to employ the threshold-moving technique, which applies a threshold to the continuous output of a model, offering the possibility to adapt to a performance measure a posteriori, i.e., a plug-in method. Surprisingly, little attention has been paid to this combination of a bagging ensemble and threshold-moving. In this paper, we study this combination and demonstrate its competitiveness. Contrary to other resampling methods, we preserve the natural class distribution of the data, resulting in well-calibrated posterior probabilities. Additionally, we extend the proposed method to handle multiclass data. We validated our method on binary and multiclass benchmark datasets using both decision trees and neural networks as base classifiers, and we perform analyses that provide insights into the proposed method.
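    The plug-in recipe above is easy to sketch with scikit-learn: bag trees on the natural class distribution, then move the decision threshold a posteriori to suit the chosen measure. For brevity the threshold is tuned on the test split here; in practice one would use held-out validation data.

        # Hedged sketch: bagging plus a-posteriori threshold-moving.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import BaggingClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import f1_score
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=2000, weights=[0.9, 0.1],
                                   random_state=0)           # imbalanced toy data
        Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

        bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                                random_state=0).fit(Xtr, ytr)
        proba = bag.predict_proba(Xte)[:, 1]     # posterior probability estimates

        thresholds = np.linspace(0.05, 0.95, 19)
        scores = [f1_score(yte, proba >= t) for t in thresholds]
        best = thresholds[int(np.argmax(scores))]
        print(f"best threshold {best:.2f}: F1 {max(scores):.3f} "
              f"(vs {f1_score(yte, proba >= 0.5):.3f} at the default 0.5)")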
  200. μ-PIV measurements of the ensemble flow fields surrounding a migrating semi-infinite bubble

    PubMed

    Yamaguchi, Eiichiro; Smith, Bradford J; Gaver, Donald P

    2009-08-01

    Microscale particle image velocimetry (μ-PIV) measurements of the ensemble flow fields surrounding a steadily migrating semi-infinite bubble were obtained through the novel adaptation of a computer-controlled linear motor flow control system. The system was programmed to generate a square-wave velocity input in order to produce accurate constant bubble propagation repeatedly and effectively through a fused glass capillary tube. We present a novel technique for re-positioning the coordinate axis to the bubble-tip frame of reference in each instantaneous field through analysis of the sudden change in the standard deviation of centerline velocity profiles across the bubble interface. Ensemble averages were then computed in this bubble-tip frame of reference. Combined fluid systems of water/air, glycerol/air, and glycerol/Si-oil were used to investigate flows comparable to the computational simulations described in Smith and Gaver (2008) and to past experimental observations of interfacial shape. Fluorescent particle images were also analyzed to measure the residual film thickness trailing behind the bubble. The flow fields and film thickness agree very well with the computational simulations as well as with existing experimental and analytical results. Particle accumulation and migration associated with the flow patterns near the bubble tip after long experimental durations are discussed as potential sources of error in the experimental method.
  201. Classifier ensemble construction with rotation forest to improve medical diagnosis performance of machine learning algorithms

    PubMed

    Ozcift, Akin; Gulten, Arif

    2011-12-01

    Improving the accuracy of machine learning algorithms is vital in designing high-performance computer-aided diagnosis (CADx) systems. Research has shown that a base classifier's performance might be enhanced by ensemble classification strategies. In this study, we construct rotation forest (RF) ensemble classifiers of 30 machine learning algorithms to evaluate their classification performance on Parkinson's, diabetes, and heart disease datasets from the literature. First, the feature dimensionality of the three datasets is reduced using the correlation-based feature selection (CFS) algorithm. Second, the classification performance of the 30 machine learning algorithms is calculated for the three datasets. Third, 30 classifier ensembles are constructed based on the RF algorithm to assess the performance of the respective classifiers with the same disease data. All experiments are carried out with a leave-one-out validation strategy, and the performance of the 60 algorithms is evaluated using three metrics: classification accuracy (ACC), kappa error (KE), and area under the receiver operating characteristic (ROC) curve (AUC). The base classifiers achieved average accuracies of 72.15%, 77.52%, and 84.43% for the diabetes, heart, and Parkinson's datasets, respectively, while the RF classifier ensembles produced average accuracies of 74.47%, 80.49%, and 87.13% for the respective diseases. RF, a newly proposed classifier ensemble algorithm, might be used to improve the accuracy of miscellaneous machine learning algorithms in the design of advanced CADx systems. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
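    A rough sketch of the rotation-forest idea, simplified relative to the published algorithm (which also bootstraps classes and samples per feature subset): each tree sees the data through a block rotation matrix assembled from PCA on random disjoint feature subsets. Class names and parameters are invented for illustration.

        # Simplified rotation-forest sketch for binary 0/1 labels.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.tree import DecisionTreeClassifier

        class TinyRotationForest:
            def __init__(self, n_trees=10, n_subsets=3, seed=0):
                self.n_trees, self.n_subsets = n_trees, n_subsets
                self.rng = np.random.default_rng(seed)
                self.models = []

            def _rotation(self, X):
                d = X.shape[1]
                R = np.zeros((d, d))
                perm = self.rng.permutation(d)
                for block in np.array_split(perm, self.n_subsets):
                    pca = PCA().fit(X[:, block])          # PCA per feature subset
                    R[np.ix_(block, block)] = pca.components_.T
                return R

            def fit(self, X, y):
                for _ in range(self.n_trees):
                    R = self._rotation(X)
                    tree = DecisionTreeClassifier().fit(X @ R, y)
                    self.models.append((R, tree))
                return self

            def predict(self, X):
                votes = np.stack([t.predict(X @ R) for R, t in self.models])
                return (votes.mean(axis=0) >= 0.5).astype(int)  # majority vote

        X = np.random.default_rng(1).normal(size=(200, 12))
        y = (X[:, 0] + X[:, 3] > 0).astype(int)
        print(TinyRotationForest().fit(X, y).predict(X[:5]))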
  202. Building a Strong Ensemble of Teaching Artists: Characteristics, Contexts, and Strategies for Success and Sustainability

    ERIC Educational Resources Information Center

    Mages, Wendy K.

    2013-01-01

    This research analyzes the techniques, strategies, and philosophical foundations that contributed to the quality and maintenance of a strong theatre-in-education ensemble. This study details how the company selected ensemble members and describes the work environment the company developed to promote collaboration and encourage actor-teacher…

  203. Single-Molecule and Superresolution Imaging in Live Bacteria Cells

    PubMed Central

    Biteen, Julie S.; Moerner, W.E.

    2010-01-01

    Single-molecule imaging enables biophysical measurements devoid of ensemble averaging, gives enhanced spatial resolution beyond the diffraction limit, and permits superresolution reconstructions. Here, single-molecule and superresolution imaging are applied to the study of proteins in live Caulobacter crescentus cells to illustrate the power of these methods in bacterial imaging. Based on these techniques, the diffusion coefficient and dynamics of the histidine protein kinase PleC, the localization behavior of the polar protein PopZ, and the treadmilling behavior and protein superstructure of the structural protein MreB are investigated with sub-40-nm spatial resolution, all in live cells. PMID:20300204

  204. Ensemble analyses improve signatures of tumour hypoxia and reveal inter-platform differences

    PubMed Central

    2014-01-01

    Background: The reproducibility of transcriptomic biomarkers across datasets remains poor, limiting clinical application. We and others have suggested that this is in part caused by differential error structure between datasets and its incomplete removal by pre-processing algorithms. Methods: To test this hypothesis, we systematically assessed the effects of pre-processing on biomarker classification using 24 different pre-processing methods and 15 distinct signatures of tumour hypoxia in 10 datasets (2,143 patients). Results: We confirm strong pre-processing effects for all datasets and signatures, and find that these differ between microarray versions. Importantly, exploiting different pre-processing techniques in an ensemble technique improved classification for a majority of signatures. Conclusions: Assessing biomarkers using an ensemble of pre-processing techniques shows clear value across multiple diseases, datasets and biomarkers. Importantly, ensemble classification improves biomarkers with initially good results but does not result in spuriously improved performance for poor biomarkers. While further research is required, this approach has the potential to become a standard for transcriptomic biomarkers. PMID:24902696
  205. Image Change Detection via Ensemble Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Benjamin W; Vatsavai, Raju

    2013-01-01

    The concept of geographic change detection is relevant in many areas. Changes in geography can reveal much information about a particular location. For example, analysis of changes in geography can identify regions of population growth, changes in land use, and potential environmental disturbance. A common way to perform change detection is to use a simple method such as differencing to detect regions of change. Though these techniques are simple, their application is often very limited. Recently, the use of machine learning methods such as neural networks for change detection has been explored with great success. In this work, we explore the use of ensemble learning methodologies for detecting changes in bitemporal synthetic aperture radar (SAR) images. Ensemble learning uses a collection of weak machine learning classifiers to create a stronger classifier which has higher accuracy than the individual classifiers in the ensemble. The strength of the ensemble lies in the fact that the individual classifiers create a mixture of experts, in which the final classification made by the ensemble classifier is calculated from the outputs of the individual classifiers. Our methodology leverages this aspect of ensemble learning by training collections of weak decision-tree-based classifiers to identify regions of change in SAR images collected of a region in the Staten Island, New York area during Hurricane Sandy. Preliminary studies show that the ensemble method has approximately 11.5% higher change detection accuracy than an individual classifier.
ASCAT soil moisture data assimilation through the Ensemble Kalman Filter for improving streamflow simulation in Mediterranean catchments

NASA Astrophysics Data System (ADS)

Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel

2016-04-01

Assimilation of Surface Soil Moisture (SSM) observations obtained from remote sensing techniques has been shown to improve streamflow prediction at different time scales of hydrological modeling. Different sensors and methods have been tested for their application in SSM estimation, especially in the microwave region of the electromagnetic spectrum. The available observation devices include passive microwave sensors such as the Advanced Microwave Scanning Radiometer - Earth Observation System (AMSR-E) onboard the Aqua satellite and the Soil Moisture and Ocean Salinity (SMOS) mission. Active microwave systems include the Scatterometers (SCAT) onboard the European Remote Sensing satellites (ERS-1/2) and the Advanced Scatterometer (ASCAT) onboard the MetOp-A satellite. Data assimilation (DA) comprises different techniques that have been applied in hydrology and other fields for decades, including, among others, Kalman filtering (KF), variational assimilation and particle filtering. From the initial KF method, different techniques were developed to suit its application to different systems. The Ensemble Kalman Filter (EnKF), extensively applied in hydrological modeling, has as its main advantage the capability to deal with nonlinear model dynamics without linearizing the model equations. The objective of this study was to investigate whether data assimilation of ASCAT SSM observations, through the EnKF method, could improve streamflow simulation of Mediterranean catchments with the complex hydrological model TOPLATS. The DA technique was programmed in FORTRAN and applied to hourly simulations of the TOPLATS catchment model. TOPLATS (TOPMODEL-based Land-Atmosphere Transfer Scheme) was applied in its lumped version to two Mediterranean catchments of similar size, located in northern Spain (Arga, 741 km2) and central Italy (Nestore, 720 km2). The model performs a separate computation of energy and water balances. In those balances, the soil is divided into two layers: the upper Surface Zone (SZ) and the deeper Transmission Zone (TZ). In this study, the SZ depth was fixed at 5 cm for adequate assimilation of the observed data. Available data were distributed as follows: first, the model was calibrated for the 2001-2007 period; then the 2007-2010 period was used for satellite data rescaling purposes. Finally, data assimilation was applied during the validation (2010-2013) period. Application of the EnKF required the following steps: 1) rescaling of satellite data, 2) transformation of rescaled data into a Soil Water Index (SWI) through a moving average filter, where a calibrated value of T = 9 was applied, 3) generation of a 50-member ensemble through perturbation of inputs (rainfall and temperature) and three selected parameters, 4) validation of the ensemble through the compliance of two criteria based on the ensemble's spread, mean square error and skill, and 5) Kalman gain calculation. In this work, a comparison of three satellite data rescaling techniques was also performed: 1) cumulative distribution function (CDF) matching, 2) variance matching and 3) linear least squares regression. Results obtained in this study showed slight improvements of the hourly Nash-Sutcliffe Efficiency (NSE) in both catchments with the different rescaling methods evaluated. Larger improvements were found in terms of seasonal simulated volume error reduction.
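Two ingredients named in this record, CDF matching of satellite retrievals to the model climatology and the stochastic EnKF update itself, can be sketched compactly. All numbers below are invented, and the scalar, directly observed state is a deliberate simplification of the real catchment model.

    # Sketch: CDF matching plus a stochastic EnKF update for a scalar SSM state.
    import numpy as np
    rng = np.random.default_rng(2)

    def cdf_match(obs, model_clim, obs_clim):
        """Map an observation to the model value at the same climatological quantile."""
        q = np.searchsorted(np.sort(obs_clim), obs) / len(obs_clim)
        return np.quantile(model_clim, np.clip(q, 0.0, 1.0))

    def enkf_update(ensemble, obs, obs_var):
        """Stochastic (perturbed-observation) EnKF for a directly observed scalar."""
        perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), ensemble.size)
        var = np.var(ensemble, ddof=1)
        gain = var / (var + obs_var)                 # scalar Kalman gain
        return ensemble + gain * (perturbed - ensemble)

    members = rng.normal(0.25, 0.05, 50)             # 50-member SSM ensemble
    sat = cdf_match(0.31, rng.normal(0.25, 0.05, 1000), rng.normal(0.35, 0.08, 1000))
    analysis = enkf_update(members, sat, obs_var=0.02**2)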
Ensemble Deep Learning for Biomedical Time Series Classification

PubMed Central

2016-01-01

Ensemble learning has been proven to improve the generalization ability effectively in both theory and practice. In this paper, we first briefly outline the current status of research on ensemble learning. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost. PMID:27725828
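The Simple Average fusion rule named in this record is just an unweighted mean of the per-network class probabilities. A toy sketch with invented prediction arrays (no real ECG data or networks):

    # Sketch: "Simple Average" fusion of per-network class probabilities.
    import numpy as np
    # Predictions of 3 hypothetical subview networks for 4 samples, 2 classes.
    p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.7, 0.3], [0.2, 0.8]])
    p2 = np.array([[0.8, 0.2], [0.6, 0.4], [0.6, 0.4], [0.3, 0.7]])
    p3 = np.array([[0.7, 0.3], [0.3, 0.7], [0.8, 0.2], [0.1, 0.9]])
    fused = (p1 + p2 + p3) / 3.0       # ensemble probability per class
    labels = fused.argmax(axis=1)      # final classification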
On estimating attenuation from the amplitude of the spectrally whitened ambient seismic field

NASA Astrophysics Data System (ADS)

Weemstra, Cornelis; Westra, Willem; Snieder, Roel; Boschi, Lapo

2014-06-01

Measuring attenuation on the basis of interferometric, receiver-receiver surface waves is a non-trivial task: the amplitude, more than the phase, of ensemble-averaged cross-correlations is strongly affected by non-uniformities in the ambient wavefield. In addition, ambient noise data are typically pre-processed in ways that affect the amplitude itself. Some authors have recently attempted to measure attenuation in receiver-receiver cross-correlations obtained after the usual pre-processing of seismic ambient-noise records, including, most notably, spectral whitening. Spectral whitening replaces the cross-spectrum with a unit amplitude spectrum. It is generally assumed that cross-terms have cancelled each other prior to spectral whitening. Cross-terms are peaks in the cross-correlation due to simultaneously acting noise sources, that is, spurious traveltime delays due to constructive interference of signal coming from different sources. Cancellation of these cross-terms is a requirement for the successful retrieval of interferometric receiver-receiver signal and results from ensemble averaging. In practice, ensemble averaging is replaced by integrating over sufficiently long time or averaging over several cross-correlation windows. Contrary to the general assumption, we show in this study that cross-terms are not required to cancel each other prior to spectral whitening, but may also cancel each other after the whitening procedure. Specifically, we derive an analytic approximation for the amplitude difference associated with the reversed order of cancellation and normalization. Our approximation shows that an amplitude decrease results from the reversed order. This decrease is predominantly non-linear at small receiver-receiver distances: at distances smaller than approximately two wavelengths, whitening prior to ensemble averaging causes a significantly stronger decay of the cross-spectrum.
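The ordering effect described in this record can be checked numerically in a few lines: whitening each cross-correlation window to unit amplitude before averaging yields a smaller ensemble amplitude than whitening the ensemble average. The synthetic cross-spectra below are illustrative stand-ins, not seismic data.

    # Toy check: whiten-then-average vs. average-then-whiten cross-spectra.
    import numpy as np
    rng = np.random.default_rng(3)
    n_windows = 500
    coherent = 0.3 * np.exp(1j * 0.8)                 # common interferometric signal
    cross_terms = rng.normal(0, 1, n_windows) * np.exp(
        1j * rng.uniform(0, 2 * np.pi, n_windows))    # random-phase cross-terms
    cross = coherent + cross_terms                    # per-window cross-spectrum

    avg_then_whiten = np.mean(cross) / np.abs(np.mean(cross))   # amplitude = 1
    whiten_then_avg = np.mean(cross / np.abs(cross))            # amplitude < 1
    print(np.abs(avg_then_whiten), np.abs(whiten_then_avg))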
Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.

PubMed

Lee, Soojeong; Chang, Joon-Hyuk

2017-11-01

This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease uncertainty for oscillometric blood pressure (BP) measurements using the bootstrap and Monte-Carlo approach. While the former is used to estimate SBP and DBP, the latter attempts to determine confidence intervals (CIs) for SBP and DBP based on oscillometric BP measurements. This work originally employs deep belief networks (DBN)-deep neural networks (DNN) to effectively estimate BPs based on oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and worthy information might be omitted when selecting one DBN-DNN estimator and discarding the others. Additionally, our input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (based on combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. Afterward, the AdaBoost approach is employed for the second-stage SBP and DBP estimation. We then use the bootstrap and Monte-Carlo techniques in order to determine the CIs based on the target BP estimated using the DBN-DNN ensemble regression estimator with the asymptotic technique in the third stage. The proposed method can mitigate estimation uncertainty such as a large standard deviation of error (SDE): comparing the proposed DBN-DNN ensemble regression estimator with the single DBN-DNN regression estimator, we find that the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively. These results indicate that the proposed method enhances the performance by 9.18% and 10.88% compared with the single DBN-DNN estimator. The proposed methodology improves the accuracy of BP estimation and reduces the uncertainty for BP estimation. Copyright © 2017 Elsevier B.V. All rights reserved.
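The bootstrap-aggregation and confidence-interval machinery in this record can be sketched with a generic regressor standing in for the DBN-DNN stages. The features, target, network size, and resample count below are all invented.

    # Sketch: bootstrap aggregation and a percentile confidence interval
    # for a blood-pressure-like regression target (toy data).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    X = rng.normal(size=(60, 5))                               # oscillometric features (toy)
    y = 120 + X @ rng.normal(size=5) + rng.normal(0, 2, 60)    # SBP-like target

    boot_preds = []
    for _ in range(30):                                        # bootstrap resamples
        idx = rng.integers(0, len(X), len(X))
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0).fit(X[idx], y[idx])
        boot_preds.append(model.predict(X[:1])[0])             # predict one subject
    sbp_hat = np.mean(boot_preds)                              # aggregated estimate
    ci = np.percentile(boot_preds, [2.5, 97.5])                # 95% confidence interval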
Equipartition terms in transition path ensemble: Insights from molecular dynamics simulations of alanine dipeptide.

PubMed

Li, Wenjin

2018-02-28

The transition path ensemble consists of reactive trajectories and possesses all the information necessary for understanding the mechanism and dynamics of important condensed-phase processes. However, a quantitative description of the properties of the transition path ensemble is far from established. Here, with numerical calculations on a model system, the equipartition terms defined in thermal equilibrium were estimated for the first time in the transition path ensemble. It was not surprising to observe that the energy was not equally distributed among all the coordinates. However, the energies distributed on a pair of conjugate coordinates remained equal. Higher energies were observed on several coordinates that are highly coupled to the reaction coordinate, while the rest were almost equally distributed. In addition, the ensemble-averaged energy on each coordinate as a function of time was also quantified. These quantitative analyses of energy distributions provide new insights into the transition path ensemble.
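The equipartition diagnostic described above reduces to comparing per-coordinate ensemble-averaged kinetic energies against kT/2. A minimal numpy sketch with synthetic velocities (masses and values invented):

    # Sketch: per-coordinate ensemble-averaged kinetic energy vs. kT/2.
    import numpy as np
    rng = np.random.default_rng(5)
    kT = 1.0
    masses = np.array([1.0, 2.0, 12.0])
    # Velocities for an ensemble of 10000 "reactive trajectories" at one time slice;
    # equilibrium sampling implies variance kT/m per coordinate.
    v = rng.normal(0.0, np.sqrt(kT / masses), size=(10000, 3))
    kinetic = 0.5 * masses * np.mean(v**2, axis=0)   # each entry should be ~ kT/2
    print(kinetic)  # deviations from kT/2 would flag coupling to the reaction coordinate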
Perception of ensemble statistics requires attention.

PubMed

Jackson-Nielsen, Molly; Cohen, Michael A; Pitts, Michael A

2017-02-01

To overcome inherent limitations in perceptual bandwidth, many aspects of the visual world are represented as summary statistics (e.g., average size, orientation, or density of objects). Here, we investigated the relationship between summary (ensemble) statistics and visual attention. Recently, it was claimed that one ensemble statistic in particular, color diversity, can be perceived without focal attention. However, a broader debate exists over the attentional requirements of conscious perception, and it is possible that some form of attention is necessary for ensemble perception. To test this idea, we employed a modified inattentional blindness paradigm and found that multiple types of summary statistics (color and size) often go unnoticed without attention. In addition, we found attentional costs in dual-task situations, further implicating a role for attention in statistical perception. Overall, we conclude that while visual ensembles may be processed efficiently, some amount of attention is necessary for conscious perception of ensemble statistics. Copyright © 2016 Elsevier Inc. All rights reserved.

Upper Limit of Weights in TAI Computation

NASA Technical Reports Server (NTRS)

Thomas, Claudine; Azoubib, Jacques

1996-01-01

The international reference time scale International Atomic Time (TAI), computed by the Bureau International des Poids et Mesures (BIPM), relies on a weighted average of data from a large number of atomic clocks. In it, the weight attributed to a given clock depends on its long-term stability. In this paper the TAI algorithm is used as the basis for a discussion of how to implement an upper limit of weight for clocks contributing to the ensemble time. This problem is approached through the comparison of two different techniques. In one case, a maximum relative weight is fixed: no individual clock can contribute more than a given fraction to the resulting time scale. The weight of each clock is then adjusted according to the qualities of the whole set of contributing elements. In the other case, a parameter characteristic of frequency stability is chosen: no individual clock can appear more stable than the stated limit. This is equivalent to choosing an absolute limit of weight and attributing it to the most stable clocks independently of the other elements of the ensemble. The first technique is more robust than the second and automatically optimizes the stability of the resulting time scale, but leads to a more complicated computation. The second technique has been used in the TAI algorithm since the very beginning. Careful analysis of tests on real clock data shows that improving the stability of the time scale requires revision from time to time of the fixed value chosen for the upper limit of absolute weight. In particular, we present results which confirm the decision of the CCDS Working Group on TAI to increase the absolute upper limit by a factor of 2.5. We also show that the use of an upper relative contribution further helps to improve the stability and may be a useful step towards better use of the massive ensemble of HP 5071A clocks now contributing to TAI.
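The first technique in the TAI record, a cap on each clock's relative weight with the excess redistributed over the rest, can be sketched directly. The stability figures and the 25% cap below are invented for illustration.

    # Sketch: weighted clock ensemble with an upper limit of relative weight.
    import numpy as np
    stability = np.array([5., 8., 10., 50., 60.])   # per-clock instability; smaller = better
    w = 1.0 / stability**2                          # raw inverse-variance weights
    w /= w.sum()
    cap = 0.25                                      # no clock may exceed 25% of the scale
    for _ in range(10):                             # redistribute excess weight iteratively
        over = w > cap
        if not over.any():
            break
        w[over] = cap
        w[~over] *= (1.0 - cap * over.sum()) / w[~over].sum()
    # ensemble time = sum_i w[i] * reading of clock i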
Ensemble and Bias-Correction Techniques for Air-Quality Model Forecasts of Surface O3 and PM2.5 during the TEXAQS-II Experiment of 2006

EPA Science Inventory

Several air quality forecasting ensembles were created from seven models, running in real-time during the 2006 Texas Air Quality (TEXAQS-II) experiment. These multi-model ensembles incorporated a diverse set of meteorological models, chemical mechanisms, and emission inventories…

Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL with CONED Model

NASA Technical Reports Server (NTRS)

Emmons, D.; Acebal, A.; Pulkkinen, A.; Taktakishvili, A.; MacNeice, P.; Odstricil, D.

2013-01-01

The combination of the Wang-Sheeley-Arge (WSA) coronal model, ENLIL heliospherical model version 2.7, and CONED Model version 1.3 (WSA-ENLIL with CONED Model) was employed to form ensemble forecasts for 15 halo coronal mass ejections (halo CMEs). The input parameter distributions were formed from 100 sets of CME cone parameters derived from the CONED Model. The CONED Model used image processing along with the bootstrap approach to automatically calculate cone parameter distributions from SOHO/LASCO imagery based on techniques described by Pulkkinen et al. (2010). The input parameter distributions were used as input to WSA-ENLIL to calculate the temporal evolution of the CMEs, which were analyzed to determine the propagation times to the L1 Lagrangian point and the maximum Kp indices due to the impact of the CMEs on the Earth's magnetosphere. The Newell et al. (2007) Kp index formula was employed to calculate the maximum Kp indices based on the predicted solar wind parameters near Earth, assuming two magnetic field orientations: a completely southward magnetic field and a uniformly distributed clock angle. The forecasts for 5 of the 15 events had accuracy such that the actual propagation time was within the ensemble average plus or minus one standard deviation. Using the completely southward magnetic field assumption, 10 of the 15 events contained the actual maximum Kp index within the range of the ensemble forecast, compared to 9 of the 15 events when using a uniformly distributed clock angle.
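The verification used in the CME record reduces to two simple checks per event: whether the observed transit time falls inside the ensemble mean plus or minus one standard deviation, and whether the observed maximum Kp falls inside the ensemble range. A sketch with invented numbers:

    # Sketch: ensemble forecast verification for one hypothetical CME event.
    import numpy as np
    rng = np.random.default_rng(6)
    arrival_ensemble = rng.normal(52.0, 6.0, 100)   # transit times (h) of 100 cone members
    observed_arrival = 47.5                         # hypothetical observed transit time (h)
    mu = arrival_ensemble.mean()
    sd = arrival_ensemble.std(ddof=1)
    arrival_hit = (mu - sd) <= observed_arrival <= (mu + sd)

    kp_ensemble = np.clip(rng.normal(6.0, 1.0, 100), 0, 9)   # toy Kp forecasts
    kp_hit = kp_ensemble.min() <= 6.3 <= kp_ensemble.max()   # observed Kp in range?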
Fidelity decay of the two-level bosonic embedded ensembles of random matrices

NASA Astrophysics Data System (ADS)

Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.

2010-12-01

We study the fidelity decay of the k-body embedded ensembles of random matrices for bosons distributed over two single-particle states. Fidelity is defined in terms of a reference Hamiltonian, which is a purely diagonal matrix consisting of a fixed one-body term and includes the diagonal of the perturbing k-body embedded ensemble matrix, and the perturbed Hamiltonian which includes the residual off-diagonal elements of the k-body interaction. This choice mimics the typical mean-field basis used in many calculations. We study separately the cases k = 2 and 3. We compute the ensemble-averaged fidelity decay as well as the fidelity of typical members with respect to an initial random state. Average fidelity displays a revival at the Heisenberg time, t = tH = 1, and a freeze in the fidelity decay, during which periodic revivals of period tH are observed. We obtain the relevant scaling properties with respect to the number of bosons and the strength of the perturbation. For certain members of the ensemble, we find that the period of the revivals during the freeze of fidelity occurs at fractional times of tH. These fractional periodic revivals are related to the dominance of specific k-body terms in the perturbation.

Detection of small-amplitude periodic surface pressure fluctuation by pressure-sensitive paint measurements using frequency-domain methods

NASA Astrophysics Data System (ADS)

Noda, Takahiro; Nakakita, Kazuyki; Wakahara, Masaki; Kameda, Masaharu

2018-06-01

Image measurement using pressure-sensitive paint (PSP) is an effective tool for analyzing the unsteady pressure field on the surface of a body in a low-speed air flow, which is associated with wind noise. In this study, the surface pressure fluctuation due to the tonal trailing edge (TE) noise for a two-dimensional NACA 0012 airfoil was quantitatively detected using a porous anodized aluminum PSP (AA-PSP). The emission from the PSP upon illumination by a blue laser diode was captured using a 12-bit high-speed complementary metal-oxide-semiconductor (CMOS) camera. The intensities of the captured images were converted to pressures using a standard intensity-based method. Three image-processing methods based on the fast Fourier transform (FFT) were tested to determine their efficiency in improving the signal-to-noise ratio (SNR) of the unsteady PSP data. In addition to two fundamental FFT techniques (the full-data and ensemble-averaging FFTs), a technique using the coherent output power (COP), which involves the cross-correlation between the PSP data and the signal measured using a pointwise sound-level meter, was tested. Preliminary tests indicated that random photon shot noise dominates the intensity fluctuations in the captured PSP emissions above 200 Hz. Pressure fluctuations associated with the TE noise, whose dominant frequency is approximately 940 Hz, were successfully measured by analyzing 40,960 sequential PSP images recorded at 10 kfps. Quantitative validation using the power spectrum indicates that the COP technique is the most effective method for identifying the pressure fluctuation directly related to the TE noise. It is possible to distinguish power differences with a resolution of 10 Pa^2 (4 Pa in amplitude) when the COP technique is employed, without the use of additional wind-off data. This resolution cannot be achieved by the ensemble-averaging FFT because of insufficient elimination of the background noise.
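The ensemble-averaging FFT and the COP technique from the PSP record can both be expressed with standard spectral estimators: Welch's method is an ensemble average over windowed FFTs, and the coherent output power is the coherence with the reference signal times the output power spectrum. The tone amplitude, noise level, and record length below are invented stand-ins for the experiment.

    # Sketch: ensemble-averaged spectra and coherent output power (COP)
    # between a noisy "PSP pixel" and a clean reference microphone signal.
    import numpy as np
    from scipy import signal

    fs, f0 = 10_000, 940.0                         # sample rate (Hz), TE-noise tone (Hz)
    t = np.arange(409_600) / fs
    mic = np.sin(2 * np.pi * f0 * t)               # reference tone
    pixel = 0.05 * mic + np.random.default_rng(7).normal(0, 1, t.size)  # shot noise

    f, Gyy = signal.welch(pixel, fs, nperseg=4096)       # ensemble-averaged FFT
    f, Cxy = signal.coherence(mic, pixel, fs, nperseg=4096)
    cop = Cxy * Gyy    # the part of the pixel power coherent with the microphone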
Canonical-ensemble state-averaged complete active space self-consistent field (SA-CASSCF) strategy for problems with more diabatic than adiabatic states: Charge-bond resonance in monomethine cyanines

DOE Office of Scientific and Technical Information (OSTI.GOV)

Olsen, Seth

2015-01-28

This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed ("microcanonical") SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in the current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with "more diabatic than adiabatic" states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse "temperature," unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint. The method is illustrated with a complete active space valence-bond (CASVB) analysis of the charge/bond resonance electronic structure of a monomethine cyanine: Michler's hydrol blue. The diabatic CASVB representation is shown to vary weakly for "temperatures" corresponding to visible photon energies. Canonical-ensemble SA-CASSCF enables the resolution of energies and couplings for all covalent and ionic CASVB structures contributing to the SA-CASSCF ensemble. The CASVB solution describes resonance of charge- and bond-localized electronic structures interacting via bridge resonance superexchange. The resonance couplings can be separated into channels associated with either covalent charge delocalization or chemical bonding interactions, with the latter significantly stronger than the former.
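The contrast drawn above between microcanonical (uniform) and canonical (Boltzmann) state weights is easy to write down. A sketch with invented state energies and an arbitrary inverse "temperature":

    # Sketch: microcanonical vs. canonical state-averaging weights.
    import numpy as np
    energies = np.array([0.0, 2.1, 2.3, 4.0])       # hypothetical state energies (eV)
    uniform = np.full(energies.size, 1.0 / energies.size)   # microcanonical weights
    beta = 1.0 / 2.5                 # inverse "temperature" ~ a visible photon energy
    boltz = np.exp(-beta * energies)
    boltz /= boltz.sum()             # canonical (Boltzmann) weights
    # Low-lying states dominate 'boltz'; 'uniform' spreads weight over all states.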
Internal variability of a dynamically downscaled climate over North America

DOE Office of Scientific and Technical Information (OSTI.GOV)

Wang, Jiali; Bessac, Julie; Kotamarthi, Rao

This study investigates the internal variability (IV) of a regional climate model, and considers the impacts of horizontal resolution and spectral nudging on the IV. A 16-member simulation ensemble was conducted using the Weather Research Forecasting model for three model configurations. Ensemble members included simulations at spatial resolutions of 50 and 12 km without spectral nudging and simulations at a spatial resolution of 12 km with spectral nudging. All the simulations were generated over the same domain, which covered much of North America. The degree of IV was measured as the spread between the individual members of the ensemble during the integration period. The IV of the 12 km simulation with spectral nudging was also compared with a future climate change simulation projected by the same model configuration. The variables investigated focus on precipitation and near-surface air temperature. While the IVs show a clear annual cycle with larger values in summer and smaller values in winter, the seasonal IV is smaller for a 50-km spatial resolution than for a 12-km resolution when nudging is not applied. Applying a nudging technique to the 12-km simulation reduces the IV by a factor of two, and produces smaller IV than the simulation at 50 km without nudging. Applying a nudging technique also changes the geographic distributions of IV in all examined variables. The IV is much smaller than the inter-annual variability at seasonal scales for regionally averaged temperature and precipitation. The IV is also smaller than the projected changes in air temperature for the mid- and late twenty-first century. However, the IV is larger than the projected changes in precipitation for the mid- and late twenty-first century.
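Internal variability measured as member-to-member spread, as in the record above, is a one-line statistic once the ensemble is arranged as an array. A toy sketch with a synthetic 16-member temperature series:

    # Sketch: internal variability as ensemble spread per time step.
    import numpy as np
    rng = np.random.default_rng(8)
    members, days = 16, 365
    seasonal = 15 + 10 * np.sin(2 * np.pi * np.arange(days) / 365)
    T = seasonal + rng.normal(0, 1.5, (members, days))   # 16-member daily temperature
    iv = T.std(axis=0, ddof=1)                           # spread across members, per day
    coarse_iv = iv.reshape(5, 73).mean(axis=1)           # coarse within-year averages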
Robustness of the far-field response of nonlocal plasmonic ensembles.

PubMed

Tserkezis, Christos; Maack, Johan R; Liu, Zhaowei; Wubs, Martijn; Mortensen, N Asger

2016-06-22

Contrary to classical predictions, the optical response of few-nm plasmonic particles depends on particle size due to effects such as nonlocality and electron spill-out. Ensembles of such nanoparticles are therefore expected to exhibit a nonclassical inhomogeneous spectral broadening due to size distribution. For a normal distribution of free-electron nanoparticles, and within the simple nonlocal hydrodynamic Drude model, both the nonlocal blueshift and the plasmon linewidth are shown to be considerably affected by ensemble averaging. Size-variance effects tend however to conceal nonlocality to a lesser extent when the homogeneous size-dependent broadening of individual nanoparticles is taken into account, either through a local size-dependent damping model or through the Generalized Nonlocal Optical Response theory. The role of ensemble averaging is further explored in realistic distributions of isolated or weakly-interacting noble-metal nanoparticles, as encountered in experiments, while an analytical expression to evaluate the importance of inhomogeneous broadening through measurable quantities is developed. Our findings are independent of the specific nonclassical theory used, thus providing important insight into a large range of experiments on nanoscale and quantum plasmonics.
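The inhomogeneous broadening discussed in this record can be illustrated by averaging Lorentzian lineshapes whose resonance position and width shift with inverse particle size over a normal size distribution. All parameters below are schematic, not those of a real metal or of the hydrodynamic model.

    # Sketch: ensemble averaging of size-dependent Lorentzian resonances.
    import numpy as np
    rng = np.random.default_rng(9)
    w = np.linspace(2.5, 4.0, 800)                 # photon energy grid (eV)
    R = rng.normal(3.0, 0.4, 2000)                 # particle radii (nm), normal distribution
    w0 = 3.1 + 0.45 / R                            # 1/R "nonlocal" blueshift (schematic)
    gamma = 0.08 + 0.25 / R                        # size-dependent damping (schematic)
    spectra = gamma[:, None] / ((w - w0[:, None])**2 + gamma[:, None]**2)
    ensemble_spectrum = spectra.mean(axis=0)       # inhomogeneously broadened line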
A model ensemble for projecting multi-decadal coastal cliff retreat during the 21st century

USGS Publications Warehouse

Limber, Patrick; Barnard, Patrick; Vitousek, Sean; Erikson, Li

2018-01-01

Sea cliff retreat rates are expected to accelerate with rising sea levels during the 21st century. Here we develop an approach for a multi-model ensemble that efficiently projects time-averaged sea cliff retreat over multi-decadal time scales and large (>50 km) spatial scales. The ensemble consists of five simple 1-D models adapted from the literature that relate sea cliff retreat to wave impacts, sea level rise (SLR), historical cliff behavior, and cross-shore profile geometry. Ensemble predictions are based on Monte Carlo simulations of each individual model, which account for the uncertainty of model parameters. The consensus of the individual models also weights uncertainty, such that uncertainty is greater when predictions from different models do not agree. A calibrated, but unvalidated, ensemble was applied to the 475 km-long coastline of Southern California (USA), with 4 SLR scenarios of 0.5, 0.93, 1.5, and 2 m by 2100. Results suggest that future retreat rates could increase relative to mean historical rates by more than two-fold for the higher SLR scenarios, causing an average total land loss of 19-41 m by 2100. However, model uncertainty ranges from +/- 5-15 m, reflecting the inherent difficulties of projecting cliff retreat over multiple decades. To enhance ensemble performance, future work could include weighting each model by its skill in matching observations in different morphological settings.
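The Monte Carlo multi-model structure in this record, sampling each simple model's parameters and pooling the draws, can be sketched in a few lines. The two toy "models", their parameter ranges, and the scenario values below are invented, not the five models of the study.

    # Sketch: Monte Carlo multi-model ensemble of total cliff retreat by 2100.
    import numpy as np
    rng = np.random.default_rng(10)
    slr = 0.93          # sea-level-rise scenario (m by 2100)
    baseline = 0.2 * 80 # historical rate (m/yr) times ~80 years

    def model_a(n):     # retreat grows with the square root of SLR (toy)
        return baseline + rng.uniform(10, 30, n) * np.sqrt(slr)

    def model_b(n):     # retreat grows linearly with SLR (toy)
        return baseline + rng.uniform(5, 20, n) * slr

    draws = np.concatenate([model_a(5000), model_b(5000)])   # pooled ensemble
    best_estimate, spread = draws.mean(), draws.std(ddof=1)
    # Model consensus term: disagreement between the models' central estimates
    disagreement = abs(model_a(5000).mean() - model_b(5000).mean())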
An ensemble forecast of the South China Sea monsoon

NASA Astrophysics Data System (ADS)

Krishnamurti, T. N.; Tewari, Mukul; Bensman, Ed; Han, Wei; Zhang, Zhan; Lau, William K. M.

1999-05-01

This paper presents a generalized ensemble forecast procedure for the tropical latitudes. We propose an empirical orthogonal function-based procedure for the definition of a seven-member ensemble. The wind and temperature fields are perturbed over the global tropics. Although the forecasts are made over the global belt with a high-resolution model, the emphasis of this study is on the South China Sea monsoon. The South China Sea domain includes the passage of Tropical Storm Gary, which moved eastwards north of the Philippines. The ensemble forecast handled the precipitation of this storm reasonably well. A global model at a resolution of Triangular Truncation 126 waves is used to carry out these seven forecasts. The evaluation of the ensemble of forecasts is carried out via standard root mean square errors of the precipitation and wind fields. The ensemble average is shown to have higher skill than a control experiment based on a first analysis from operational data sets, over both the global tropical and the South China Sea domains. All of these experiments were subjected to physical initialization, which provides a spin-up of the model rain close to that obtained from satellite and gauge-based estimates. The results furthermore show that inherently much higher skill resides in the forecast precipitation fields if they are averaged over area elements of the order of 4° latitude by 4° longitude squares.

A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

NASA Astrophysics Data System (ADS)

Keller, J. D.; Bach, L.; Hense, A.

2012-12-01

The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors and breeding of growing modes (now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited-area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large-scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research, we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited-area models. The so-called self-breeding method is a development based on the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To avoid all ensemble perturbations converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the subspace spanned by the ensemble. By choosing different norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
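The breeding cycle described in the self-breeding record, integrate a perturbation forward, rescale it to its initial size, re-add it, repeat, can be prototyped on a toy chaotic system. The sketch below uses Lorenz-63 with a forward-Euler step as a stand-in for the mesoscale model; all cycle lengths and amplitudes are illustrative.

    # Sketch: a breeding cycle on the Lorenz-63 system.
    import numpy as np

    def lorenz_step(x, dt=0.01, s=10., r=28., b=8. / 3.):
        dx = np.array([s * (x[1] - x[0]),
                       x[0] * (r - x[2]) - x[1],
                       x[0] * x[1] - b * x[2]])
        return x + dt * dx

    rng = np.random.default_rng(11)
    base = np.array([1.0, 1.0, 20.0])
    pert = rng.normal(0, 1e-3, 3)
    size = np.linalg.norm(pert)                  # fixed perturbation norm
    for _ in range(200):                         # breeding cycles
        xp = base + pert
        for _ in range(50):                      # short forward integration
            base = lorenz_step(base)
            xp = lorenz_step(xp)
        diff = xp - base
        pert = diff * (size / np.linalg.norm(diff))  # rescale and re-add
    # 'pert' now approximates the leading local Lyapunov vector at 'base'.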
Quantitative Connection Between Ensemble Thermodynamics and Single-Molecule Kinetics: A Case Study Using Cryo-EM and smFRET Investigations of the Ribosome

PubMed Central

Frank, Joachim; Gonzalez, Ruben L.

2015-01-01

At equilibrium, thermodynamic and kinetic information can be extracted from biomolecular energy landscapes by many techniques. However, while static, ensemble techniques yield thermodynamic data, often only dynamic, single-molecule techniques can yield the kinetic data that describe transition-state energy barriers. Here we present a generalized framework based upon dwell-time distributions that can be used to connect such static, ensemble techniques with dynamic, single-molecule techniques, and thus characterize energy landscapes to greater resolutions. We demonstrate the utility of this framework by applying it to cryogenic electron microscopy (cryo-EM) and single-molecule fluorescence resonance energy transfer (smFRET) studies of the bacterial ribosomal pretranslocation complex. Among other benefits, application of this framework to these data explains why two transient, intermediate conformations of the pretranslocation complex, which are observed in a cryo-EM study, may not be observed in several smFRET studies. PMID:25785884
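The dwell-time analysis that this framework generalizes starts from a simple fact: for a single exponential barrier crossing, the dwell times are exponentially distributed and the rate is the reciprocal of their mean. A sketch with synthetic dwell times (the rate and sample size are invented):

    # Sketch: rate and survival curve from synthetic single-molecule dwell times.
    import numpy as np
    rng = np.random.default_rng(12)
    k_true = 2.5                                   # hypothetical rate (1/s)
    dwells = rng.exponential(1.0 / k_true, 500)    # smFRET-like dwell times
    k_hat = 1.0 / dwells.mean()                    # maximum-likelihood rate estimate
    # For one barrier, the survival curve should follow S(t) = exp(-k t);
    # deviations suggest hidden intermediate states.
    t = np.sort(dwells)
    survival = 1.0 - np.arange(1, t.size + 1) / t.size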
The total probabilities from high-resolution ensemble forecasting of floods

NASA Astrophysics Data System (ADS)

Skøien, Jon Olav; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

2015-04-01

Ensemble forecasting has long been used in meteorological modelling to give an indication of the uncertainty of the forecasts. As meteorological ensemble forecasts often show some bias and dispersion errors, there is a need for calibration and post-processing of the ensembles. Typical methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). To make optimal predictions of floods along the stream network in hydrology, we can easily use the ensemble members as input to the hydrological models. However, some of the post-processing methods will need modifications when regionalizing the forecasts outside the calibration locations, as done by Hemri et al. (2013). We present a method for spatial regionalization of the post-processed forecasts based on EMOS and top-kriging (Skøien et al., 2006). We will also look into different methods for handling the non-normality of runoff and the effect on forecast skill in general and for floods in particular.

Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007.
Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005.
Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013.
Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.
Skøien, J. O., Merz, R. and Blöschl, G.: Top-kriging - Geostatistics on stream networks, Hydrol. Earth Syst. Sci., 10(2), 277-287, 2006.
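EMOS, as cited in this record, fits a Gaussian predictive distribution whose mean is an affine function of the ensemble (mean) and whose variance grows with the ensemble spread. The sketch below uses a simple moment fit rather than the CRPS minimization of Gneiting et al. (2005), and all forecast data are synthetic.

    # Sketch: EMOS-style post-processing, N(a + b*xbar, c + d*s2), moment fit.
    import numpy as np
    rng = np.random.default_rng(13)
    n, m = 400, 20
    truth = rng.normal(10, 3, n)                             # verifying observations
    ens = truth[:, None] + 1.5 + rng.normal(0, 2, (n, m))    # biased, dispersed ensemble
    xbar, s2 = ens.mean(1), ens.var(1, ddof=1)

    A = np.column_stack([np.ones(n), xbar])
    a, b = np.linalg.lstsq(A, truth, rcond=None)[0]          # mean coefficients
    resid = truth - (a + b * xbar)
    C = np.column_stack([np.ones(n), s2])
    c, d = np.linalg.lstsq(C, resid**2, rcond=None)[0]       # variance coefficients
    # Predictive law for a new forecast with mean xbar_new and spread s2_new:
    # N(a + b*xbar_new, max(c + d*s2_new, small_positive))  -- clip variance > 0.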
Efficient and Unbiased Sampling of Biomolecular Systems in the Canonical Ensemble: A Review of Self-Guided Langevin Dynamics

PubMed Central

Wu, Xiongwu; Damjanovic, Ana; Brooks, Bernard R.

2013-01-01

This review provides a comprehensive description of the self-guided Langevin dynamics (SGLD) and the self-guided molecular dynamics (SGMD) methods and their applications. Example systems are included to provide guidance on optimal application of these methods in simulation studies. SGMD/SGLD has an enhanced ability to overcome energy barriers and accelerate rare events to affordable time scales. It has been demonstrated that with moderate parameters, SGLD can routinely cross energy barriers of 20 kT at the rate at which molecular dynamics (MD) or Langevin dynamics (LD) cross 10 kT barriers. The core of these methods is the use of local averages of forces and momenta in a direct manner that can preserve the canonical ensemble. The use of such local averages results in methods where low frequency motion "borrows" energy from high frequency degrees of freedom when a barrier is approached and then returns that excess energy after a barrier is crossed. This self-guiding effect also results in an accelerated diffusion that enhances conformational sampling efficiency. The resulting ensemble with SGLD deviates in a small way from the canonical ensemble, and that deviation can be corrected with either an on-the-fly or a post-processing reweighting procedure that provides an excellent canonical ensemble for systems with a limited number of accelerated degrees of freedom. Since reweighting procedures are generally not size extensive, a newer method, SGLDfp, uses local averages of both momenta and forces to preserve the ensemble without reweighting. The SGLDfp approach is size extensive and can be used to accelerate low frequency motion in large systems, or in systems with explicit solvent where solvent diffusion is also to be enhanced. Since these methods are direct and straightforward, they can be used in conjunction with many other sampling methods or free energy methods by simply replacing the integration of degrees of freedom that are normally sampled by MD or LD. PMID:23913991
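The self-guiding idea, adding a force built from a local average of recent forces, can be caricatured in one dimension. The sketch below is a heavily simplified toy, not the full SGLD/SGLDfp algorithm: it uses an exponential moving average as the "local average" and makes no attempt at the reweighting that restores the canonical ensemble.

    # Toy sketch: Langevin dynamics on a double well with a self-guiding force.
    import numpy as np
    rng = np.random.default_rng(14)
    dt, gamma, kT, lam = 1e-3, 1.0, 1.0, 1.0    # lam = guiding strength (0 -> plain LD)
    force = lambda x: -4 * x * (x**2 - 1)       # potential V(x) = (x^2 - 1)^2

    x, v, favg = -1.0, 0.0, 0.0
    crossings = 0
    for _ in range(50_000):
        f = force(x)
        favg = 0.999 * favg + 0.001 * f         # local (moving) average of forces
        noise = np.sqrt(2 * gamma * kT / dt) * rng.normal()
        v += dt * (f + lam * favg - gamma * v + noise)   # guided Langevin step
        x_old, x = x, x + dt * v
        crossings += (x_old < 0) <= (x >= 0) and np.sign(x_old) != np.sign(x)
    # With lam > 0 the walker crosses the barrier at x = 0 more often than plain LD.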
231. Improving wave forecasting by integrating ensemble modelling and machine learning

    NASA Astrophysics Data System (ADS)

    O'Donncha, F.; Zhang, Y.; James, S. C.

    2017-12-01

    Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors, with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions and thereby impedes integration with the distribution grid. In this study, we integrate physics-based models with statistical learning-aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating WAves Nearshore (SWAN) physics-based model is used to compute wind- and current-augmented waves in the Monterey Bay area. Ensembles are developed from multiple simulations that perturb the model's input data (the wave characteristics supplied at the model boundaries, and winds). A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms any individual ensemble member at forecasting wave conditions.
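One simple learning-aggregation rule of the kind described above weights each model by its inverse error on past data. The sketch below uses inverse-RMSE weights as a generic illustration, not the specific aggregation algorithm of the study; the data and names are invented.

    # Aggregate several models' forecasts with weights learned from past performance:
    # w_i proportional to 1 / RMSE_i on a training window, normalized to sum to 1.
    import numpy as np

    def inverse_rmse_weights(past_forecasts, past_obs):
        """past_forecasts: (n_models, n_times); past_obs: (n_times,)."""
        rmse = np.sqrt(((past_forecasts - past_obs) ** 2).mean(axis=1))
        w = 1.0 / np.maximum(rmse, 1e-9)
        return w / w.sum()

    def aggregate(new_forecasts, weights):
        """new_forecasts: (n_models,) forecasts for one time -> weighted best estimate."""
        return float(np.dot(weights, new_forecasts))

    rng = np.random.default_rng(3)
    truth = rng.uniform(0.5, 3.0, size=200)                          # past wave heights (m)
    models = truth + rng.normal(0, [[0.2], [0.4], [0.6]], (3, 200))  # three models, differing skill
    w = inverse_rmse_weights(models, truth)
    print("weights:", w)                       # the most skilful model gets the largest weight
    print("aggregated forecast:", aggregate(np.array([1.8, 2.1, 2.4]), w))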
232. Investigation of the tip clearance flow inside and at the exit of a compressor rotor passage

    NASA Technical Reports Server (NTRS)

    Pandya, A.; Lakshminarayana, B.

    1982-01-01

    The nature of the tip clearance flow in a moderately loaded compressor rotor is studied. The measurements were taken inside the clearance between the annulus wall (casing) and the rotor blade tip, using a stationary two-sensor hot-wire probe in combination with an ensemble-averaging technique. The flow field was surveyed at various radial locations and at ten axial locations, four of which were inside the blade passage in the clearance region and the remaining six outside the passage. Variations of the mean flow properties in the tangential and radial directions at various axial locations were derived from the data. The variation of the leakage velocity at different axial stations and the annulus-wall boundary layer profiles were also estimated from passage-averaged mean velocities.
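For a stationary probe and a rotating blade row, the ensemble (phase-locked) average is formed by cutting the probe signal into once-per-revolution segments and averaging them sample by sample, so the blade-periodic flow emerges from the turbulence. A minimal sketch with synthetic data (all signal parameters invented):

    # Phase-locked ensemble averaging of a hot-wire signal over many rotor revolutions.
    import numpy as np

    samples_per_rev = 256          # samples between one-per-rev trigger pulses (illustrative)
    n_revs = 400

    rng = np.random.default_rng(7)
    phase = np.linspace(0.0, 2.0 * np.pi, samples_per_rev, endpoint=False)
    blade_periodic = 1.0 + 0.3 * np.sin(8 * phase)        # 8-per-rev blade-passing signature
    signal = np.tile(blade_periodic, n_revs) + 0.5 * rng.standard_normal(samples_per_rev * n_revs)

    # Ensemble average: reshape into (revolutions, samples) and average across revolutions.
    ensemble_avg = signal.reshape(n_revs, samples_per_rev).mean(axis=0)

    # The random (turbulent) part decays as 1/sqrt(n_revs); the periodic part remains.
    print("max error vs true periodic signal:", np.abs(ensemble_avg - blade_periodic).max())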
233. Summary statistics in the attentional blink

    PubMed

    McNair, Nicolas A; Goodbourn, Patrick T; Shone, Lauren T; Harris, Irina M

    2017-01-01

    We used the attentional blink (AB) paradigm to investigate the processing stage at which the extraction of summary statistics from visual stimuli ("ensemble coding") occurs. Experiment 1 examined whether ensemble coding requires attentional engagement with the items in the ensemble. Participants performed two sequential tasks on each trial: gender discrimination of a single face (T1), and estimating the average emotional expression of an ensemble of four faces (or of a single face, as a control condition) as T2. Ensemble coding was affected by the AB when the tasks were separated by a short temporal lag. In Experiment 2, the order of the tasks was reversed to test whether ensemble coding requires more working-memory resources, and therefore induces a larger AB, than estimating the expression of a single face. Each condition produced an AB of similar magnitude in the subsequent gender-discrimination T2 task. Experiment 3 additionally investigated whether the previous results were due to participants adopting a subsampling strategy during the ensemble-coding task. Contrary to this explanation, we found different patterns of performance in the ensemble-coding condition and in a condition in which participants were instructed to focus on only a single face within an ensemble. Taken together, these findings suggest that ensemble coding emerges automatically as a result of the deployment of attentional resources across the ensemble of stimuli, prior to information being consolidated in working memory.

234. Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics

    NASA Technical Reports Server (NTRS)

    Zhu, Yanqui; Cohn, Stephen E.; Todling, Ricardo

    1999-01-01

    The Kalman filter is the optimal filter in the presence of known Gaussian error statistics and linear dynamics. Extending the filter to nonlinear dynamics is nontrivial in the sense of appropriately representing the high-order moments of the statistics. Monte Carlo, ensemble-based methods have been advocated as the methodology for representing high-order moments without any questionable closure assumptions. Investigations along these lines have been conducted for highly idealized dynamics, such as the strongly nonlinear Lorenz model, as well as for more realistic models of the ocean and atmosphere. Relevant issues in this context include the number of ensemble members needed to properly represent the error statistics, and the modifications needed in the usual filter equations to allow for a correct update of the ensemble members. The ensemble technique has also been applied to the problem of smoothing, for which similar questions apply. Ensemble smoother examples, however, seem to be quite puzzling in that the resulting state estimates are worse than those of their filter analogues. In this study, we use concepts from probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use the Lorenz model to test and compare the behavior of a variety of implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of the feasibility of these techniques for large data assimilation problems will be given at the time of the conference.
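To make the ensemble filtering discussed above concrete, here is a minimal stochastic ensemble Kalman filter (the perturbed-observations form) applied to the Lorenz-63 model, observing only the x component. This is a generic textbook implementation for illustration, not one of the variants tested in the study; the step sizes and noise levels are invented.

    # Minimal stochastic EnKF (perturbed observations) on the Lorenz-63 model.
    import numpy as np

    rng = np.random.default_rng(42)

    def lorenz63_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step (small dt keeps this adequate for a demo)."""
        x, y, z = state
        return state + dt * np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

    n_ens, obs_err, n_steps = 50, 1.0, 4000
    H = np.array([[1.0, 0.0, 0.0]])                     # observe x only

    truth = np.array([1.0, 1.0, 25.0])
    ens = truth + rng.normal(0, 2.0, size=(n_ens, 3))   # initial ensemble around the truth

    errors = []
    for step in range(n_steps):
        truth = lorenz63_step(truth)
        ens = np.apply_along_axis(lorenz63_step, 1, ens)
        if step % 50 == 0:                              # assimilate every 50 steps
            y_obs = H @ truth + rng.normal(0, obs_err)
            X = ens - ens.mean(axis=0)                  # ensemble anomalies, (n_ens, 3)
            P = X.T @ X / (n_ens - 1)                   # sample forecast covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_err**2 * np.eye(1))
            # perturbed observations: each member assimilates its own noisy copy
            y_pert = y_obs + rng.normal(0, obs_err, size=(n_ens, 1))
            ens = ens + (y_pert - ens @ H.T) @ K.T
        errors.append(np.linalg.norm(ens.mean(axis=0) - truth))
    print("mean analysis error:", np.mean(errors[len(errors) // 2:]))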
235. Analog ensemble and Bayesian regression techniques to improve the wind speed prediction during extreme storms in the NE U.S.

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Delle Monache, L.; Alessandrini, S.

    2016-12-01

    The accuracy of weather forecasts in the Northeast U.S. has become very important in recent years, given the serious and devastating effects of extreme weather events. Despite the use of evolved forecasting tools and techniques, strengthened by increased supercomputing resources, weather forecasting systems still have limitations in predicting extreme events. In this study, we examine the combination of analog ensemble and Bayesian regression techniques to improve the prediction of storms that have impacted the NE U.S., defined mostly by the occurrence of high wind speeds (i.e., blizzards, winter storms, hurricanes and thunderstorms). The wind speed, wind direction and temperature predicted by two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) are combined using the aforementioned techniques, exploring the various ways those variables influence the minimization of the systematic and random prediction errors. The study focuses on retrospective simulations of 146 storms that affected the NE U.S. in the period 2005-2016. To evaluate the techniques, a leave-one-out cross-validation procedure was implemented, with the remaining 145 storms serving as the training dataset. The analog ensemble method selects the past observations that correspond to the best analogs of the numerical weather prediction and provides a set of ensemble members drawn from the selected observation dataset; the ensemble members can then be used in a deterministic or probabilistic way. In the Bayesian regression framework, optimal variances are estimated for the training partition by minimizing the root mean square error and are applied to the out-of-sample storm. Preliminary results indicate a significant improvement in the statistical metrics of 10-m wind speed for the 146 storms using both techniques (20-30% bias and error reduction in all observation-model pairs). In this presentation, we discuss the various combinations of atmospheric predictors and techniques, and illustrate how the long record of predicted storms is valuable for improving wind speed prediction.
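The analog step above can be sketched generically: for a new forecast, find the k most similar past forecasts and return their verifying observations as the ensemble. The distance metric, predictors, and data below are illustrative placeholders, not those of the study.

    # Analog ensemble: the k past forecasts most similar to today's forecast are found,
    # and their verifying observations form today's ensemble.
    import numpy as np

    def analog_ensemble(train_fcst, train_obs, new_fcst, k=10):
        """train_fcst: (n, p) past predictors; train_obs: (n,); new_fcst: (p,)."""
        # Standardize so wind speed, direction, and temperature are comparable.
        mu, sd = train_fcst.mean(axis=0), train_fcst.std(axis=0)
        d = np.linalg.norm((train_fcst - new_fcst) / sd, axis=1)
        best = np.argsort(d)[:k]
        return train_obs[best]            # ensemble of observed outcomes

    rng = np.random.default_rng(5)
    fcst = rng.normal(size=(1000, 3))                  # (wind speed, direction, temperature)
    obs = fcst[:, 0] * 2.0 + rng.normal(0, 0.3, 1000)  # verifying 10-m wind speed
    members = analog_ensemble(fcst, obs, new_fcst=np.array([1.2, -0.4, 0.7]), k=15)
    print("deterministic forecast (ensemble mean):", members.mean())
    print("probabilistic spread (std):", members.std())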
236. Long-time Dynamics of Stochastic Wave Breaking

    NASA Astrophysics Data System (ADS)

    Restrepo, J. M.; Ramirez, J. M.; Deike, L.; Melville, K.

    2017-12-01

    A stochastic parametrization is proposed for the dynamics of wave breaking of progressive water waves. The model is shown to agree with transport estimates derived from the Lagrangian paths of fluid parcels. These trajectories are obtained numerically and are shown to agree well with theory in the non-breaking regime. Of special interest is the impact of wave breaking on transport, momentum exchanges and energy dissipation, as well as on the dispersion of trajectories. The proposed model, ensemble-averaged to larger time scales, is compared to ensemble averages of the numerically generated parcel dynamics, and is then used to capture energy dissipation and path dispersion.

237. Regional climate models downscaling in the Alpine area with Multimodel SuperEnsemble

    NASA Astrophysics Data System (ADS)

    Cane, D.; Barbarino, S.; Renier, L. A.; Ronchi, C.

    2012-08-01

    The climatic scenarios show a strong warming signal in the Alpine area already by the mid twenty-first century. The climate simulations, however, even when obtained with Regional Climate Models (RCMs), are affected by strong errors when compared with observations, owing to the models' difficulties in representing the complex orography of the Alps and to limitations in their physical parametrizations. The aim of this work is therefore to reduce these model biases with a statistical post-processing technique, to obtain more reliable projections of climate-change scenarios in the Alpine area. We use a selection of RCM runs from the ENSEMBLES project, carefully chosen to maximise the variety of leading global climate models and of the RCMs themselves, calculated on the SRES scenario A1B. The reference observations for the Greater Alpine Area are extracted from the European dataset E-OBS, produced by the ENSEMBLES project, with an available resolution of 25 km. For the study area of Piedmont, daily temperature and precipitation observations (1957-present) were carefully gridded on a 14-km grid over the Piedmont Region with an optimal interpolation technique. We applied the Multimodel SuperEnsemble technique to the temperature fields, reducing the large biases of the RCM temperature fields relative to observations in the control period. We also propose the first application to RCMs of a new probabilistic Multimodel SuperEnsemble Dressing technique to estimate precipitation fields, already applied successfully to weather forecast models, with a careful description of precipitation probability density functions conditioned on the model outputs. This technique reduces the strong overestimation of precipitation by RCMs over the Alpine chain and reproduces well the monthly behaviour of precipitation in the control period.
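The deterministic Multimodel SuperEnsemble combines models with regression weights trained against observations: S(t) = obar + sum_i w_i (F_i(t) - fbar_i), with the w_i obtained by least squares over the training (control) period. A minimal sketch with synthetic data follows; the probabilistic dressing variant used for precipitation is more involved and is not shown.

    # Multimodel SuperEnsemble: regression-weighted combination of model anomalies,
    # trained against observations over a control period.
    import numpy as np

    def fit_superensemble(train_models, train_obs):
        """train_models: (n_models, n_times); train_obs: (n_times,)."""
        fbar = train_models.mean(axis=1, keepdims=True)
        obar = train_obs.mean()
        A = (train_models - fbar).T                   # (n_times, n_models) anomaly matrix
        w, *_ = np.linalg.lstsq(A, train_obs - obar, rcond=None)
        return w, fbar.ravel(), obar

    def superensemble(models, w, fbar, obar):
        """models: (n_models,) forecasts at one time -> combined forecast."""
        return obar + np.dot(w, models - fbar)

    rng = np.random.default_rng(11)
    truth = 10 + 5 * np.sin(np.linspace(0, 20, 600))
    models = np.stack([truth + b + rng.normal(0, s, 600)
                       for b, s in [(2.0, 1.0), (-3.0, 2.0), (0.5, 0.5)]])  # biased model series
    w, fbar, obar = fit_superensemble(models[:, :400], truth[:400])
    pred = np.array([superensemble(models[:, t], w, fbar, obar) for t in range(400, 600)])
    print("superensemble RMSE:", np.sqrt(((pred - truth[400:]) ** 2).mean()))
    print("best single model RMSE:", np.sqrt(((models[2, 400:] - truth[400:]) ** 2).mean()))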
238. Online breakage detection of multitooth tools using classifier ensembles for imbalanced data

    NASA Astrophysics Data System (ADS)

    Bustillo, Andrés; Rodríguez, Juan J.

    2014-12-01

    Cutting-tool breakage detection is an important task, due to its economic impact on mass-production lines in the automobile industry. The task has one central limitation: real data sets are extremely imbalanced, because breakage occurs in very few cases compared with normal operation of the cutting process. In this paper, we present an analysis of different data-mining techniques applied to the detection of insert breakage in multitooth tools. The analysis uses only one experimental variable: the electrical power consumption of the tool drive. This restriction reflects real industrial conditions more accurately than other physical variables, such as acoustic or vibration signals, which are not so easily measured. Many efforts have been made to design a method able to identify breakages with a high degree of reliability within a short period of time. The solution is based on classifier ensembles for imbalanced data sets. Classifier ensembles are combinations of classifiers, which in many situations are more accurate than individual classifiers. Six different base classifiers are tested: decision trees, rules, naïve Bayes, nearest neighbour, multilayer perceptrons and logistic regression. Three balancing strategies are tested with each of the classifier ensembles and compared with performance on the original data set: the Synthetic Minority Over-sampling Technique (SMOTE), undersampling, and a combination of SMOTE and undersampling. To identify the most suitable data-mining solution, Receiver Operating Characteristic (ROC) and recall-precision graphs are generated and discussed. Logistic-regression ensembles trained on the data set balanced with the combination of SMOTE and undersampling turned out to be the most suitable technique. Finally, a comparison using industrial performance measures is presented, which concludes that this technique is also better suited to this industrial problem than the other techniques reported in the literature.
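The winning combination above, an ensemble of logistic regressions trained on data rebalanced with SMOTE plus undersampling, can be sketched with scikit-learn and the imbalanced-learn package, assuming both are installed (API as of scikit-learn >= 1.2 and imbalanced-learn 0.12); the data set and parameters are illustrative, not those of the paper.

    # Bagged logistic regression on a data set rebalanced with SMOTE + random undersampling.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from imblearn.over_sampling import SMOTE
    from imblearn.under_sampling import RandomUnderSampler

    # Highly imbalanced toy data: ~2% "breakage" events, as in tool monitoring.
    X, y = make_classification(n_samples=5000, n_features=10, weights=[0.98], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # SMOTE brings the minority class up, then undersampling trims the majority class.
    X_bal, y_bal = SMOTE(sampling_strategy=0.5, random_state=0).fit_resample(X_tr, y_tr)
    X_bal, y_bal = RandomUnderSampler(sampling_strategy=1.0, random_state=0).fit_resample(X_bal, y_bal)

    ens = BaggingClassifier(estimator=LogisticRegression(max_iter=1000),
                            n_estimators=25, random_state=0)
    ens.fit(X_bal, y_bal)
    print(classification_report(y_te, ens.predict(X_te), digits=3))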
239. Using a Guided Machine Learning Ensemble Model to Predict Discharge Disposition following Meningioma Resection

    PubMed

    Muhlestein, Whitney E; Akagi, Dallin S; Kallos, Justiss A; Morone, Peter J; Weaver, Kyle D; Thompson, Reid C; Chambless, Lola B

    2018-04-01

    Objective: Machine learning (ML) algorithms are powerful tools for predicting patient outcomes. This study pilots a novel approach to algorithm selection and model creation, using the prediction of discharge disposition following meningioma resection as a proof of concept. Materials and Methods: A diverse set of ML algorithms was trained on a single-institution database of meningioma patients to predict discharge disposition. The algorithms were ranked by predictive power, and the top performers were combined to create an ensemble model. The final ensemble was internally validated on never-before-seen data to demonstrate generalizability, and its predictive power was compared with that of a logistic regression. Further analyses were performed to identify how important variables impact the ensemble. Results: Our ensemble model predicted disposition significantly better than a logistic regression (area under the curve of 0.78 versus 0.71, p = 0.01). Tumor size, presentation at the emergency department, body mass index, convexity location, and preoperative motor deficit most strongly influence the model, though the independent impact of individual variables is nuanced. Conclusion: Using a novel ML technique, we built a guided ML ensemble model that predicts discharge destination following meningioma resection with greater predictive power than a logistic regression, and that provides greater clinical insight than a univariate analysis. These techniques can be extended to predict many other patient outcomes of interest.

240. Analyses and forecasts of a tornadic supercell outbreak using a 3DVAR system ensemble

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhaorong; Yussouf, Nusrat; Gao, Jidong

    2016-05-01

    As part of NOAA's "Warn-On-Forecast" initiative, a convective-scale data assimilation and prediction system was developed using the WRF-ARW model and the ARPS 3DVAR data assimilation technique. The system was then evaluated using retrospective short-range ensemble analyses and probabilistic forecasts of the tornadic supercell outbreak that occurred on 24 May 2011 in Oklahoma, USA. A 36-member multi-physics ensemble provided the initial and boundary conditions for a 3-km convective-scale ensemble system, into which radial velocity and reflectivity observations from four WSR-88D radars were assimilated using the ARPS 3DVAR technique. Five data assimilation and forecast experiments were conducted to evaluate the sensitivity of the system to the data assimilation frequency, the in-cloud temperature adjustment scheme, and fixed- versus mixed-microphysics ensembles. The results indicate that the experiment with 5-min assimilation frequency built up the storm quickly and produced a more accurate analysis than the 10-min assimilation frequency experiment. The predicted vertical vorticity from the moist-adiabatic in-cloud temperature adjustment scheme was larger in magnitude than that from the latent-heat scheme. Cycled data assimilation yielded good forecasts, with the ensemble probability of high vertical vorticity matching the observed tornado damage path reasonably well. Overall, the results of the study suggest that the 3DVAR analysis and forecast system can provide reasonable forecasts of tornadic supercell storms.
241. SVM and SVM Ensembles in Breast Cancer Prediction

    PubMed

    Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong

    2017-01-01

    Breast cancer is an all too common disease in women, making effective prediction an active research problem. A number of statistical and machine learning techniques have been employed to develop breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct an SVM classifier, it is first necessary to choose the kernel function, and different kernel functions can yield different prediction performance. However, very few studies have examined the prediction performance of SVM with different kernel functions. Moreover, it is unknown whether SVM classifier ensembles, which have been proposed to improve the performance of single classifiers, can outperform single SVM classifiers in breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small- and large-scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear-kernel SVM ensembles based on bagging and RBF-kernel SVM ensembles based on boosting can be the better choices for a small-scale dataset, where feature selection should be performed in the data pre-processing stage. For a large-scale dataset, RBF-kernel SVM ensembles based on boosting perform better than the other classifiers.

242. SVM and SVM Ensembles in Breast Cancer Prediction

    PubMed Central

    Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong

    2017-01-01

    Breast cancer is an all too common disease in women, making effective prediction an active research problem. A number of statistical and machine learning techniques have been employed to develop breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct an SVM classifier, it is first necessary to choose the kernel function, and different kernel functions can yield different prediction performance. However, very few studies have examined the prediction performance of SVM with different kernel functions. Moreover, it is unknown whether SVM classifier ensembles, which have been proposed to improve the performance of single classifiers, can outperform single SVM classifiers in breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small- and large-scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear-kernel SVM ensembles based on bagging and RBF-kernel SVM ensembles based on boosting can be the better choices for a small-scale dataset, where feature selection should be performed in the data pre-processing stage. For a large-scale dataset, RBF-kernel SVM ensembles based on boosting perform better than the other classifiers. PMID:28060807
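The two winning configurations named in the preceding records can be reproduced schematically with scikit-learn (>= 1.2 for the `estimator` keyword); the dataset and hyperparameters here are placeholders, not those of the study.

    # Bagging of linear-kernel SVMs vs. boosting of RBF-kernel SVMs.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)

    bagged_linear = BaggingClassifier(estimator=SVC(kernel="linear", C=1.0),
                                      n_estimators=10, random_state=0)
    # SAMME boosting works with SVC because SVC.fit accepts sample_weight.
    boosted_rbf = AdaBoostClassifier(estimator=SVC(kernel="rbf", C=1.0, gamma="scale"),
                                     algorithm="SAMME", n_estimators=10, random_state=0)

    for name, clf in [("bagged linear SVM", bagged_linear), ("boosted RBF SVM", boosted_rbf)]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: {acc:.3f} cross-validated accuracy")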
243. Prediction of drug synergy in cancer using ensemble-based machine learning techniques

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder

    2018-04-01

    Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents, and it can be developed as a pre-processing tool for therapeutic successes. Different drug-drug interactions can be examined through a drug synergy score, which calls for efficient regression-based machine learning approaches that minimize the prediction errors. Numerous machine learning techniques, such as neural networks, support vector machines, random forests, LASSO and Elastic Nets, have been used in the past for this task; individually, however, these techniques do not provide significant accuracy in the drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques were implemented on the drug synergy data, and, based on the accuracy of each model, the four most accurate techniques were selected to build the ensemble-based machine learning model: random forest, fuzzy rules using genetic cooperative-competitive learning (GFS.GCCL), the adaptive-network-based fuzzy inference system (ANFIS) and the dynamic evolving neural-fuzzy inference system (DENFIS). Ensembling is achieved by evaluating a biased weighted aggregation (i.e., adding more weight to the model with a higher prediction score) of the data predicted by the selected models. The proposed and existing machine learning techniques were evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms the others in terms of accuracy, root mean square error and coefficient of correlation.
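The "biased weighted aggregation" step above amounts to a score-proportional weighted average of the member regressors' outputs. A generic sketch (weights taken from validation scores; all values invented):

    # Biased weighted aggregation: each regressor's prediction is weighted in
    # proportion to its validation score, so better models contribute more.
    import numpy as np

    def aggregate_predictions(preds, scores):
        """preds: (n_models, n_samples) member predictions; scores: (n_models,)."""
        w = np.maximum(np.asarray(scores, dtype=float), 0.0)  # ignore models worse than baseline
        w = w / w.sum()
        return w @ preds

    # Toy example: three regressors with validation R^2 of 0.9, 0.8 and 0.5.
    preds = np.array([[1.0, 2.0, 3.0],
                      [1.2, 2.1, 2.8],
                      [0.5, 2.5, 3.5]])
    print(aggregate_predictions(preds, scores=[0.9, 0.8, 0.5]))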
244. Intelligent Ensemble Forecasting System of Stock Market Fluctuations Based on Symmetric and Asymmetric Wavelet Functions

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim; Boukadoum, Mounir

    2015-08-01

    We present a new ensemble system for stock market returns prediction in which the continuous wavelet transform (CWT) is used to analyze return series, and backpropagation neural networks (BPNNs) are used to process the CWT-based coefficients, determine the optimal ensemble weights, and provide the final forecasts. Particle swarm optimization (PSO) is used to find the optimal weights and biases for each BPNN. To capture symmetry and asymmetry in the underlying data, three wavelet functions with different shapes are adopted. The proposed ensemble system was tested on three Asian stock markets: the Hang Seng, the KOSPI, and Taiwan stock market data. Three statistical metrics were used to evaluate the forecasting accuracy: the mean absolute error (MAE), the root mean squared error (RMSE), and the mean absolute deviation (MAD). Experimental results show that the proposed ensemble system outperformed both the individual CWT-ANN models, each with a different wavelet function, and the conventional autoregressive moving average process. The proposed ensemble system is thus suitable for capturing symmetry and asymmetry in financial data fluctuations for better prediction accuracy.

245. The structure of liquid metals probed by XAS

    NASA Astrophysics Data System (ADS)

    Filipponi, Adriano; Di Cicco, Andrea; Iesari, Fabio; Trapananti, Angela

    2017-08-01

    X-ray absorption spectroscopy (XAS) is a powerful technique for investigating the short-range order around selected atomic species in condensed matter. The theoretical framework and previous applications to undercooled elemental liquid metals are briefly reviewed. Specific results on undercooled liquid Ni, obtained using a peak-fitting approach validated on the spectra of solid Ni, are presented. This method provides clear evidence that a signature of close-packed triangular configurations of nearest neighbours survives in the liquid state and is clearly detectable below k ≈ 5 Å⁻¹, stimulating the improvement of data-analysis methods that properly account for the ensemble average, such as Reverse Monte Carlo.

246. Large Scale Crop Classification in Ukraine using Multi-temporal Landsat-8 Images with Missing Data

    NASA Astrophysics Data System (ADS)

    Kussul, N.; Skakun, S.; Shelestov, A.; Lavreniuk, M. S.

    2014-12-01

    At present, no globally available Earth observation (EO) derived crop-map products exist. This issue is being addressed within the Sentinel-2 for Agriculture initiative, in which a number of test sites (including JECAM sites) participate to provide coherent protocols and best practices for various global agricultural systems, and subsequently crop maps from Sentinel-2. One of the problems in dealing with optical images for large territories (more than 10,000 sq km) is the presence of clouds and shadows, which leads to missing values in the data sets. In this abstract, a new approach to the classification of multi-temporal optical satellite imagery with missing data due to clouds and shadows is proposed. First, self-organizing Kohonen maps (SOMs) are used to restore missing pixel values in a time series of satellite imagery. SOMs are trained for each spectral band separately, using the non-missing values. Missing values are then restored through a special procedure that substitutes an input sample's missing components with the corresponding neuron's weight coefficients. After missing-data restoration, a supervised classification is performed on the multi-temporal satellite images using an ensemble of neural networks, in particular multilayer perceptrons (MLPs). The ensembling of the neural networks is done with the average-committee technique: the class probabilities are averaged over the classifiers, and the class with the highest average posterior probability is selected for the given input sample. The proposed approach is applied to large-scale crop classification using multi-temporal Landsat-8 images for the JECAM test site in Ukraine [1-2]. It is shown that the ensemble of MLPs provides better performance than a single neural network in terms of overall classification accuracy and kappa coefficient. The obtained classification map is also validated through estimated crop and forest areas and comparison with official statistics.

    1. A.Yu. Shelestov et al., "Geospatial information system for agricultural monitoring," Cybernetics Syst. Anal., vol. 49, no. 1, pp. 124-132, 2013.
    2. J. Gallego et al., "Efficiency Assessment of Different Approaches to Crop Classification Based on Satellite and Ground Observations," J. Autom. Inform. Scie., vol. 44, no. 5, pp. 67-80, 2012.
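The average-committee rule used above is a one-liner over the members' class-probability outputs; a self-contained sketch (shapes and numbers invented):

    # Average committee: average the class-probability outputs of several classifiers
    # and pick the class with the highest mean posterior probability.
    import numpy as np

    def average_committee(prob_stack):
        """prob_stack: (n_classifiers, n_samples, n_classes) predicted probabilities."""
        mean_prob = prob_stack.mean(axis=0)
        return mean_prob.argmax(axis=1)

    # Three MLP-like members voting on two samples over three crop classes.
    probs = np.array([[[0.7, 0.2, 0.1], [0.1, 0.5, 0.4]],
                      [[0.6, 0.3, 0.1], [0.2, 0.3, 0.5]],
                      [[0.5, 0.4, 0.1], [0.1, 0.4, 0.5]]])
    print(average_committee(probs))   # -> [0 2]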
247. Operating Spin Echo in the Quantum Regime for an Atomic-Ensemble Quantum Memory

    NASA Astrophysics Data System (ADS)

    Rui, Jun; Jiang, Yan; Yang, Sheng-Jun; Zhao, Bo; Bao, Xiao-Hui; Pan, Jian-Wei

    2015-09-01

    Spin echo is a powerful technique for extending atomic or nuclear coherence times by overcoming the dephasing due to inhomogeneous broadening. However, the feasibility of applying this technique to an ensemble-based quantum memory at the single-quanta level has been disputed. In this experimental study, we find that the noise due to imperfections of the rephasing pulses has both an intense superradiant part and a weak isotropic part. By properly arranging the beam directions and optimizing the pulse fidelities, we successfully operate the spin echo technique in the quantum regime, observing nonclassical photon-photon correlations as well as quantum behavior of the retrieved photons. Our work demonstrates for the first time the feasibility of harnessing the spin echo method to extend the lifetime of ensemble-based quantum memories at the single-quanta level.

248. Real-Time Fourier Synthesis of Ensembles with Timbral Interpolation

    NASA Astrophysics Data System (ADS)

    Haken, Lippold

    1990-01-01

    In Fourier synthesis, natural musical sounds are produced by summing time-varying sinusoids. Sounds are analyzed to find the amplitude and frequency characteristics of their sinusoids; interpolation between the characteristics of several sounds is used to produce intermediate timbres. An ensemble can be synthesized by summing all the sinusoids for several sounds, but in practice it is difficult to perform such computations in real time. To solve this problem on inexpensive hardware, it is useful to take advantage of the masking effects of the auditory system. By avoiding the computations for perceptually unimportant sinusoids, and by employing other computation-reduction techniques, a large ensemble may be synthesized in real time on the Platypus signal processor. Unlike existing computation-reduction techniques, the techniques described in this thesis do not sacrifice independent fine control over the amplitude and frequency characteristics of each sinusoid.
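The summation of time-varying sinusoids described above is straightforward to express in code. The sketch below synthesizes one tone with linearly interpolated amplitude and frequency envelopes; the envelope values and partial structure are invented for illustration.

    # Additive (Fourier) synthesis: a sound as a sum of sinusoids whose amplitude and
    # frequency vary over time; frequency envelopes are integrated to obtain phase.
    import numpy as np

    def additive_synth(partials, duration=1.0, sr=44100):
        """partials: list of (amp_envelope, freq_envelope) breakpoint arrays."""
        n = int(duration * sr)
        t = np.arange(n) / sr
        out = np.zeros(n)
        for amps, freqs in partials:
            a = np.interp(t, np.linspace(0, duration, len(amps)), amps)
            f = np.interp(t, np.linspace(0, duration, len(freqs)), freqs)
            phase = 2.0 * np.pi * np.cumsum(f) / sr      # integrate frequency -> phase
            out += a * np.sin(phase)
        return out

    # A tone of three partials that glides from 220 Hz to 230 Hz while decaying.
    tone = additive_synth([(np.array([0.8, 0.0]), np.array([220.0, 230.0])),
                           (np.array([0.4, 0.0]), np.array([440.0, 460.0])),
                           (np.array([0.2, 0.0]), np.array([660.0, 690.0]))])
    print(tone.shape, float(np.abs(tone).max()))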
249. LETTER TO THE EDITOR: Constant-time solution to the global optimization problem using Brüschweiler's ensemble search algorithm

    NASA Astrophysics Data System (ADS)

    Protopopescu, V.; D'Helon, C.; Barhen, J.

    2003-06-01

    A constant-time solution of the continuous global optimization problem (GOP) is obtained by using an ensemble algorithm. We show that, under certain assumptions, the solution can be guaranteed by mapping the GOP onto a discrete unsorted search problem, whereupon Brüschweiler's ensemble search algorithm is applied. For adequate sensitivities of the measurement technique, the query complexity of the ensemble search algorithm depends linearly on the size of the function's domain. Advantages and limitations of an eventual NMR implementation are discussed.

250. Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry

    PubMed

    Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G

    2017-09-01

    The aims were to investigate whether ensemble learning algorithms improve physical activity recognition accuracy compared with single-classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k-nearest neighbour, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision-fusion methods: weighted majority vote, naïve Bayes combination, and behavior-knowledge-space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, the ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
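Weighted majority voting, the best-performing fusion rule above, sums each classifier's weight into the class it predicts and returns the heaviest class. A minimal sketch; the weights would normally come from validation accuracy, and all numbers here are invented.

    # Weighted majority vote: each classifier contributes its weight to the class it
    # predicts; the class with the largest total weight wins.
    import numpy as np

    def weighted_majority_vote(predictions, weights, n_classes):
        """predictions: (n_classifiers, n_samples) integer class labels."""
        n_samples = predictions.shape[1]
        totals = np.zeros((n_samples, n_classes))
        for pred, w in zip(predictions, weights):
            totals[np.arange(n_samples), pred] += w
        return totals.argmax(axis=1)

    # Four members (tree, kNN, SVM, NN) classifying three windows into 4 activities.
    preds = np.array([[0, 2, 1],
                      [0, 2, 2],
                      [1, 2, 1],
                      [0, 3, 1]])
    print(weighted_majority_vote(preds, weights=[0.9, 0.6, 0.8, 0.7], n_classes=4))  # [0 2 1]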
251. Skill of Global Raw and Postprocessed Ensemble Predictions of Rainfall over Northern Tropical Africa

    NASA Astrophysics Data System (ADS)

    Vogel, Peter; Knippertz, Peter; Fink, Andreas H.; Schlueter, Andreas; Gneiting, Tilmann

    2018-04-01

    Accumulated precipitation forecasts are of high socioeconomic importance for agriculturally dominated societies in northern tropical Africa. In this study, we analyze the performance of nine operational global ensemble prediction systems (EPSs) relative to climatology-based forecasts for 1- to 5-day accumulated precipitation, based on the monsoon seasons 2007-2014, for three regions within northern tropical Africa. To assess the full potential of the raw ensemble forecasts across spatial scales, we apply state-of-the-art statistical postprocessing in the form of Bayesian model averaging (BMA) and ensemble model output statistics (EMOS), and verify against station observations and spatially aggregated, satellite-based gridded observations. The raw ensemble forecasts are uncalibrated and unreliable, and they underperform relative to climatology, independently of region, accumulation time, monsoon season, and ensemble. The differences between raw ensemble and climatological forecasts are large and partly stem from poor predictions of low precipitation amounts. BMA and EMOS postprocessed forecasts are calibrated and reliable and strongly improve on the raw ensembles but, somewhat disappointingly, typically do not outperform climatology. Most EPSs exhibit slight improvements over the period 2007-2014 but overall have little added value compared with climatology. We suspect that the parametrization of convection is a potential cause of this sobering lack of ensemble forecast skill in a region dominated by mesoscale convective systems.
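Once the member weights, bias corrections and spread parameter have been estimated (normally by EM, which is omitted here), a BMA forecast is simply a weighted mixture of per-member predictive densities. A schematic for Gaussian components follows; rainfall BMA in practice uses skewed kernels with a point mass at zero, and every number below is invented.

    # Evaluating a fitted BMA predictive distribution: a weighted mixture of Gaussian
    # kernels, one per (bias-corrected) ensemble member. Fitting via EM is omitted.
    import numpy as np
    from scipy.stats import norm

    def bma_pdf(y, member_fcsts, weights, a, b, sigma):
        """Mixture density at y: sum_k w_k * N(y; a_k + b_k * f_k, sigma)."""
        mu = a + b * member_fcsts
        return np.sum(weights * norm.pdf(y, loc=mu, scale=sigma))

    def bma_sample(member_fcsts, weights, a, b, sigma, n=1000, rng=None):
        rng = rng or np.random.default_rng(0)
        k = rng.choice(len(weights), size=n, p=weights)        # pick a member by weight
        return rng.normal((a + b * member_fcsts)[k], sigma)    # draw from its kernel

    fcsts = np.array([12.0, 15.0, 9.0])          # three members' rainfall forecasts (mm)
    w = np.array([0.5, 0.3, 0.2])                # EM-estimated weights (invented)
    a, b, sigma = np.zeros(3), np.ones(3), 3.0   # bias corrections and spread (invented)
    draws = bma_sample(fcsts, w, a, b, sigma)
    print("P(rain > 10 mm) ~", float((draws > 10).mean()))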
252. Similarity Measures for Protein Ensembles

    PubMed Central

    Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper

    2009-01-01

    Analyses of the similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarity of different protein conformations. However, instead of examining individual conformations, it is in many cases more relevant to analyse ensembles of conformations obtained either through experiments or from methods such as molecular dynamics simulations. We here present three approaches that can be used to compare conformational ensembles in the same way that the root mean square deviation is used to compare individual pairs of structures. The methods are based on estimating the probability distributions underlying the ensembles and subsequently comparing these distributions. We first validate the methods using a synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during the structure determination of proteins, and find that an ensemble refinement method recovers the correct distribution of conformations better than standard single-molecule refinement. PMID:19145244

253. Relation between native ensembles and experimental structures of proteins

    PubMed Central

    Best, Robert B.; Lindorff-Larsen, Kresten; DePristo, Mark A.; Vendruscolo, Michele

    2006-01-01

    Different experimental structures of the same protein, or of proteins with high sequence similarity, contain many small variations. Here we construct ensembles of "high-sequence-similarity Protein Data Bank" (HSP) structures and consider the extent to which such ensembles represent the structural heterogeneity of the native state in solution. We find that different NMR measurements probing the structure and dynamics of given proteins in solution, including order parameters, scalar couplings, and residual dipolar couplings, are remarkably well reproduced by their respective HSP ensembles; moreover, we show that the effects of uncertainties in structure determination are insufficient to explain these results. The results highlight the importance of accounting for native-state protein dynamics when making comparisons with ensemble-averaged experimental data, and they suggest that even a modest number of structures of a protein, determined under different conditions or with small variations in sequence, captures a representative subset of the true native-state ensemble. PMID:16829580
254. Wind power application research on the fusion of the determination and ensemble prediction

    NASA Astrophysics Data System (ADS)

    Lan, Shi; Lina, Xu; Yuzhu, Hao

    2017-07-01

    A fused wind-speed product for wind farms is designed from the wind speed products of the ensemble prediction of the European Centre for Medium-Range Weather Forecasts (ECMWF) and professional numerical wind-power model products based on the Mesoscale Model 5 (MM5) and the Beijing Rapid Update Cycle (BJ-RUC), which are suitable for short-term wind power forecasting and electric dispatch. A single-valued forecast is formed by calculating the different ensemble statistics of the Bayesian probabilistic forecast that represents the uncertainty of the ECMWF ensemble prediction. An autoregressive integrated moving average (ARIMA) model is used to improve the time resolution of the single-valued forecast, and, based on Bayesian model averaging (BMA) and the deterministic numerical model prediction, the optimal wind speed forecasting curve and its confidence interval are provided. The results show that the fused forecast clearly improves on the accuracy of the existing numerical forecasting products. Compared with the existing 0-24 h deterministic forecast in the validation period, the mean absolute error (MAE) is decreased by 24.3% and the correlation coefficient (R) is increased by 12.5%. In comparison with the ECMWF ensemble forecast, the MAE is reduced by 11.7% and R is increased by 14.5%. Additionally, the MAE did not increase with forecast lead time.
255. Estimating predictive hydrological uncertainty by dressing deterministic and ensemble forecasts; a comparison, with application to Meuse and Rhine

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.

    2017-12-01

    Two statistical post-processing approaches for estimating predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty, and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and the hydrological uncertainties; they differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped together by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts; these ensemble members are routed through a hydrological model, and a realization of the probability distribution of the hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. The resulting forecasts are assessed for reliability and sharpness, and compared in terms of multiple verification scores, including the relative mean error, the Brier skill score, the mean continuous ranked probability skill score, the relative operating characteristic score, and the relative economic value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, the two show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested, notably statistical post-processing of the meteorological forecasts in order to increase their reliability, and thus the reliability of the streamflow forecasts produced with ensemble meteorological forcings.

256. Rapid sampling of local minima in protein energy surface and effective reduction through a multi-objective filter

    PubMed Central

    2013-01-01

    Background: Many problems in protein modeling require obtaining a discrete representation of the protein conformational space as an ensemble of conformations. In ab initio structure prediction in particular, where the goal is to predict the native structure of a protein chain given its amino-acid sequence, the ensemble needs to satisfy energetic constraints. Given the thermodynamic hypothesis, an effective ensemble contains low-energy conformations that are similar to the native structure. The high dimensionality of the conformational space and the ruggedness of the underlying energy surface currently make it very difficult to obtain such an ensemble. Recent studies have proposed Basin Hopping as a promising probabilistic search framework for obtaining a discrete representation of the protein energy surface in terms of local minima. Basin Hopping performs a series of structural perturbations followed by energy minimizations, with the goal of hopping between nearby energy minima. This approach has been shown to be effective at obtaining conformations near the native structure for small systems. Our recent work has extended this framework to larger systems through the molecular fragment replacement technique, resulting in rapid sampling of large ensembles. Methods: This paper investigates the algorithmic components of Basin Hopping, to both understand and control their effect on the sampling of near-native minima. Realizing that such an ensemble is reduced before further refinement in full ab initio protocols, we take the additional step of analyzing the quality of the ensemble retained by ensemble reduction techniques, and we propose a novel multi-objective technique based on the Pareto front to filter the ensemble of sampled local minima. Results and conclusions: We show that controlling the magnitude of the perturbation allows direct control over the distance between consecutively sampled local minima and, in turn, steering of the exploration towards conformations near the native structure. For the minimization step, we show that the addition of Metropolis Monte Carlo-based minimization is no more effective than a simple greedy search. Finally, we show that the ensemble of sampled local minima can be effectively and efficiently reduced by a multi-objective filter, yielding a simpler representation of the probed energy surface. PMID:24564970
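A multi-objective (Pareto) filter of the kind described keeps only the conformations that are not dominated in both objectives at once, for example energy and distance to a reference structure. A generic sketch (objectives and data invented):

    # Pareto filter: keep the non-dominated points when minimizing two objectives
    # simultaneously (here: energy and a distance-to-reference score).
    import numpy as np

    def pareto_front(points):
        """points: (n, 2) objective values to minimize. Returns mask of non-dominated rows."""
        n = len(points)
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            # j dominates i if j is <= in both objectives and < in at least one
            dominated = np.all(points <= points[i], axis=1) & np.any(points < points[i], axis=1)
            if dominated.any():
                keep[i] = False
        return keep

    rng = np.random.default_rng(2)
    minima = rng.uniform(0, 1, size=(200, 2))     # (energy, rmsd) per sampled local minimum
    front = minima[pareto_front(minima)]
    print(f"kept {len(front)} of {len(minima)} local minima on the Pareto front")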
257. Crack detection in oak flooring lamellae using ultrasound-excited thermography

    NASA Astrophysics Data System (ADS)

    Pahlberg, Tobias; Thurley, Matthew; Popovic, Djordje; Hagman, Olle

    2018-01-01

    Today, a large number of people manually grade and detect defects in wooden lamellae in the parquet flooring industry. This paper investigates the possibility of using the ensemble methods random forests and boosting to automatically detect cracks, using ultrasound-excited thermography and a variety of predictor variables. When friction occurs in thin cracks, they become warm and thus visible to a thermographic camera. Several image processing techniques were used to suppress noise and enhance probable cracks in the images. The most successful predictor variables captured the upper part of the heat distribution, such as the maximum temperature, the kurtosis, and percentiles 92-100 of the edge pixels. The texture in the images was captured by completed local binary pattern histograms, and cracks were also segmented by background suppression and thresholding. The classification accuracy was significantly improved over previous research through the added image processing, the introduction of more predictors, and the use of automated machine learning. The best ensemble methods reach an average classification accuracy of 0.8, which is very close to the authors' own manual attempt at separating the images (0.83).

258. Single-Molecule Methods for Nucleotide Excision Repair: Building a System to Watch Repair in Real Time

    PubMed

    Kong, Muwen; Beckwitt, Emily C; Springall, Luke; Kad, Neil M; Van Houten, Bennett

    2017-01-01

    Single-molecule approaches to solving biophysical problems are powerful tools that allow static and dynamic real-time observations of specific molecular interactions of interest in the absence of ensemble-averaging effects. Here, we provide detailed protocols for building an experimental system that employs atomic force microscopy and a single-molecule DNA tightrope assay based on oblique-angle-illumination fluorescence microscopy. Together with approaches for engineering site-specific lesions into DNA substrates, these complementary biophysical techniques are well suited for investigating protein-DNA interactions that involve target-specific DNA-binding proteins, such as those engaged in a variety of DNA repair pathways. In this chapter, we demonstrate the utility of the platform by applying these techniques to studies of proteins participating in nucleotide excision repair. © 2017 Elsevier Inc. All rights reserved.
259. Temporal correlation functions of concentration fluctuations: an anomalous case

    PubMed

    Lubelski, Ariel; Klafter, Joseph

    2008-10-09

    We calculate, within the framework of the continuous-time random walk (CTRW) model, multiparticle temporal correlation functions of concentration fluctuations (CCF) in systems that display anomalous subdiffusion. The subdiffusion stems from the nonstationary nature of the CTRW waiting times, which also leads to aging and ergodicity breaking. Due to aging, a system of diffusing particles tends to slow down as time progresses, and the temporal correlation functions therefore depend strongly on the initial time of measurement. As a consequence, time averages of the CCF differ from ensemble averages, thereby displaying ergodicity breaking. We provide a simple example that demonstrates the difference between these two averages, a difference that might be amenable to experimental tests. We focus on the case of ensemble averaging and assume that the preparation time of the system coincides with the starting time of the measurement. Our analytical calculations are supported by computer simulations based on the CTRW model.
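The ensemble-versus-time-average discrepancy is easy to reproduce numerically: simulate CTRW trajectories with power-law waiting times (alpha < 1, infinite mean) and compare the ensemble-averaged squared displacement at fixed times with the time-averaged squared displacement of single trajectories. This is a qualitative sketch only, with invented parameters.

    # CTRW with Pareto waiting times (alpha < 1): the ensemble-averaged MSD grows as
    # t^alpha, while single-trajectory time averages scatter widely (ergodicity breaking).
    import numpy as np

    rng = np.random.default_rng(8)
    alpha, t_max, n_traj = 0.6, 1e4, 500

    def ctrw_position(times):
        """Position of a +/-1 random walk observed at the given (sorted) times."""
        t, x, pos = 0.0, 0, []
        times = iter(times)
        t_obs = next(times)
        while True:
            t += rng.pareto(alpha) + 1.0          # heavy-tailed waiting time before next jump
            while t_obs is not None and t_obs < t:
                pos.append(x)                     # walker sits at x until the jump at time t
                t_obs = next(times, None)
            if t_obs is None:
                return np.array(pos)
            x += rng.choice((-1, 1))

    obs_times = np.linspace(1, t_max, 100)
    ens = np.array([ctrw_position(obs_times) for _ in range(n_traj)])
    ens_msd = (ens.astype(float) ** 2).mean(axis=0)

    # Time-averaged MSD of single trajectories at a lag of 10 observation steps:
    lag = 10
    tamsd = ((ens[:, lag:] - ens[:, :-lag]).astype(float) ** 2).mean(axis=1)
    print("ensemble MSD at t_max:", ens_msd[-1])
    print("time-averaged MSD (lag 10) ranges from", tamsd.min(), "to", tamsd.max())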
260. Regional Climate Models Downscaling in the Alpine Area with Multimodel SuperEnsemble

NASA Astrophysics Data System (ADS)

Cane, D.; Barbarino, S.; Renier, L.; Ronchi, C.

2012-04-01

The climatic scenarios show a strong signal of warming in the Alpine area already for the mid XXI century. Climate simulations, however, even when obtained with Regional Climate Models (RCMs), are affected by strong errors when compared with observations in the control period, due to their difficulties in representing the complex orography of the Alps and to limitations in their physical parametrizations. In this work we use a selection of RCM runs from the ENSEMBLES project, carefully chosen to maximise the variety of leading Global Climate Models and of the RCMs themselves, calculated on the SRES scenario A1B. The reference observations for the Greater Alpine Area are extracted from the European dataset E-OBS produced by the project ENSEMBLES, with an available resolution of 25 km. For the study area of Piemonte, daily temperature and precipitation observations (1957-present) were carefully gridded on a 14-km grid over the Piemonte Region with an Optimal Interpolation technique. We applied the Multimodel SuperEnsemble technique to temperature fields, reducing the high biases of the RCM temperature fields compared to observations in the control period. We also propose the first application to RCMs of a new probabilistic Multimodel SuperEnsemble Dressing technique to estimate precipitation fields, already applied successfully to weather forecast models, with a careful description of the precipitation probability density functions conditioned on the model outputs. This technique reduces the strong precipitation overestimation by RCMs over the Alpine chain and reproduces the monthly behaviour of observed precipitation in the control period far better than the direct model outputs.

261. Evaluating uncertainties in multi-layer soil moisture estimation with support vector machines and ensemble Kalman filtering

NASA Astrophysics Data System (ADS)

Liu, Di; Mishra, Ashok K.; Yu, Zhongbo

2016-07-01

This paper examines the combination of support vector machines (SVM) and the dual ensemble Kalman filter (EnKF) technique to estimate root zone soil moisture at different soil layers up to 100 cm depth. Multiple experiments are conducted in a data-rich environment to construct and validate the SVM model and to explore the effectiveness and robustness of the EnKF technique. It was observed that the performance of the SVM relies more on the initial length of the training set than on other factors (e.g., cost function, regularization parameter, and kernel parameters). The dual EnKF technique proved to be efficient at improving the SVM with observed data, either at each time step or at flexible time steps. The EnKF technique reaches its maximum efficiency when the updating ensemble size approaches a certain threshold. It was also observed that the SVM model performance for multi-layer soil moisture estimation can be influenced by the rainfall magnitude (e.g., dry and wet spells).
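A generic stochastic EnKF analysis step of the kind coupled to the SVM forecast above; this is a sketch, not the authors' implementation. The perturbed-observation form follows Burgers et al. (1998), and the soil-moisture numbers are placeholders:

```python
# Minimal stochastic EnKF update: each member assimilates an independently
# perturbed observation, using the ensemble sample covariance.
import numpy as np

def enkf_update(ensemble, obs, H, obs_var, rng):
    """ensemble: (n_members, n_state); obs: (n_obs,); H: (n_obs, n_state)."""
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)          # anomalies
    P = X.T @ X / (n - 1)                         # sample covariance
    R = obs_var * np.eye(len(obs))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    perturbed = obs + rng.normal(0.0, obs_var ** 0.5, (n, len(obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(2)
ens = rng.normal(0.3, 0.05, size=(50, 4))   # e.g. soil moisture in 4 layers
H = np.eye(1, 4)                            # observe only the top layer
updated = enkf_update(ens, np.array([0.35]), H, obs_var=1e-4, rng=rng)
print("updated layer means:", updated.mean(axis=0))
```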
262. The Upper and Lower Bounds of the Prediction Accuracies of Ensemble Methods for Binary Classification

PubMed Central

Wang, Xueyi; Davidson, Nicholas J.

2011-01-01

Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracies of ensemble methods for binary classification that have been missed or misinterpreted in previous literature. First we show the upper and lower bounds of the prediction accuracies (i.e., the best and worst possible prediction accuracies) of ensemble methods. Next we show that an ensemble method can achieve greater than 0.5 prediction accuracy even when the individual classifiers have less than 0.5 prediction accuracy. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify the results and show that it is hard to reach the upper and lower bound accuracies with random individual classifiers, and that better algorithms need to be developed. PMID:21853162
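The claim that an ensemble can exceed 0.5 accuracy while every member stays below 0.5 can be checked with a hand-built example (three classifiers with complementary errors; illustrative, not taken from the paper):

```python
# Three classifiers, each with 0.4 individual accuracy, whose majority vote
# reaches 0.6 because their errors fall on different samples.
import numpy as np

y = np.ones(10, dtype=int)                    # true labels (all +1)

def preds(correct_idx):
    p = -np.ones(10, dtype=int)               # wrong everywhere ...
    p[list(correct_idx)] = 1                  # ... except on these samples
    return p

A, B, C = preds({0, 1, 2, 3}), preds({0, 1, 4, 5}), preds({2, 3, 4, 5})
vote = np.sign(A + B + C)                     # majority vote
for name, p in [("A", A), ("B", B), ("C", C), ("vote", vote)]:
    print(name, "accuracy:", np.mean(p == y))
# A, B, C: 0.4 each; vote: 0.6 -- an ensemble above 0.5 from members below 0.5
```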
263. Quantitative precipitation forecasts in the Alps - an assessment from the Forecast Demonstration Project MAP D-PHASE

NASA Astrophysics Data System (ADS)

Ament, F.; Weusthoff, T.; Arpagaus, M.; Rotach, M.

2009-04-01

The main aim of the WWRP Forecast Demonstration Project MAP D-PHASE is to demonstrate the ability of today's models to forecast heavy precipitation and flood events in the Alpine region. To this end, an end-to-end, real-time forecasting system was installed and operated during the D-PHASE Operations Period from June to November 2007. Part of this system are 30 numerical weather prediction models (deterministic as well as ensemble systems) operated by weather services and research institutes, which issue alerts if predicted precipitation accumulations exceed critical thresholds. In addition to the real-time alerts, all relevant model fields of these simulations are stored in a central data archive. This comprehensive data set allows a detailed assessment of today's quantitative precipitation forecast (QPF) performance in the Alpine region. We will present results of QPF verification against Swiss radar and rain gauge data, both from a qualitative point of view, in terms of alerts, and from a quantitative perspective, in terms of precipitation rate. Various influencing factors such as lead time, accumulation time, selection of warning thresholds, and bias corrections will be discussed. In addition to traditional verification of area-average precipitation amounts, the ability of the models to predict the correct precipitation statistics without requiring a point-to-point match will be described using modern fuzzy verification techniques. Both analyses reveal significant advantages of deep-convection-resolving models compared to coarser models with parameterized convection. An intercomparison of the model forecasts themselves reveals a remarkably high variability between different models, which makes it worthwhile to evaluate the potential of a multi-model ensemble. Various multi-model ensemble strategies will be tested by combining D-PHASE models into virtual ensemble systems.

264. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

NASA Astrophysics Data System (ADS)

Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

2017-12-01

Extreme rainfall events pose a serious threat of leading to severe floods in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (THe Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models: the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centre for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from the 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, though with a bias in spatial distribution and intensity. Statistical parameters such as the mean error (ME, or bias), root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts are under-predicting. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution from the displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24 h forecast to the 48 h forecast in all three models.
265. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

NASA Astrophysics Data System (ADS)

Liu, Z.; Merwade, V.

2017-12-01

Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to reliance on the prediction from one model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage predictions and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which is in turn superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.

266. Girsanov reweighting for path ensembles and Markov state models

NASA Astrophysics Data System (ADS)

Donati, L.; Hartmann, C.; Keller, B. G.

2017-06-01

The sensitivity of molecular dynamics to changes in the potential energy function plays an important role in understanding the dynamics and function of complex molecules. We present a method to obtain path ensemble averages of a perturbed dynamics from a set of paths generated by a reference dynamics. It is based on the concept of path probability measure and the Girsanov theorem, a result from stochastic analysis for estimating a change of measure of a path ensemble. Since Markov state models (MSMs) of the molecular dynamics can be formulated as a combined phase-space and path ensemble average, the method can be extended to reweight MSMs by combining it with a reweighting of the Boltzmann distribution. We demonstrate how to efficiently implement the Girsanov reweighting in a molecular dynamics simulation program by calculating parts of the reweighting factor "on the fly" during the simulation, and we benchmark the method on test systems ranging from a two-dimensional diffusion process and an artificial many-body system to alanine dipeptide and valine dipeptide in implicit and explicit water. The method can be used to study the sensitivity of molecular dynamics to external perturbations as well as to reweight trajectories generated by enhanced sampling schemes to the original dynamics.
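A strongly simplified sketch of the path-reweighting idea (a 1D overdamped Langevin process rather than an MSM): trajectories generated under a reference potential V(x) = x²/2 are reweighted, via the accumulated Girsanov log-weight, to the perturbed potential V(x) + εx without re-simulation. The reweighted mean should move toward -ε, the perturbed equilibrium mean:

```python
# Girsanov reweighting sketch for Euler-Maruyama paths of an overdamped
# Langevin equation dx = -V'(x) dt + sigma dW. The same noise that advances
# the path also feeds the log-weight for the perturbation force f = -eps.
import numpy as np

rng = np.random.default_rng(3)
dt, n_steps, n_paths, sigma, eps = 1e-3, 5_000, 2_000, np.sqrt(2.0), 0.5
grad_V = lambda x: x                     # reference force is -grad_V(x)
f = lambda x: -eps                       # perturbation force, -d(eps*x)/dx

x = np.zeros(n_paths)
logw = np.zeros(n_paths)
for _ in range(n_steps):
    eta = rng.standard_normal(n_paths)
    # accumulate the Girsanov log-weight along each path
    logw += f(x) * np.sqrt(dt) * eta / sigma - f(x) ** 2 * dt / (2 * sigma**2)
    x += -grad_V(x) * dt + sigma * np.sqrt(dt) * eta

w = np.exp(logw - logw.max())
w /= w.sum()
print("reference <x>:", x.mean())        # ~0 under V
print("reweighted <x>:", np.sum(w * x))  # ~ -eps under V + eps*x
```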
267. Ensemble transcript interaction networks: a case study on Alzheimer's disease

PubMed

Armañanzas, Rubén; Larrañaga, Pedro; Bielza, Concha

2012-10-01

Systems biology techniques are a topic of recent interest within the neurological field. Computational intelligence (CI) addresses this holistic perspective by means of consensus or ensemble techniques ultimately capable of uncovering new and relevant findings. In this paper, we propose the application of a CI approach based on ensemble Bayesian network classifiers and multivariate feature subset selection to induce probabilistic dependences that could match or unveil biological relationships. The research focuses on the analysis of high-throughput Alzheimer's disease (AD) transcript profiling. The analysis is conducted from two perspectives. First, we compare the expression profiles of hippocampus subregion entorhinal cortex (EC) samples of AD patients and controls. Second, we use the ensemble approach to study four types of samples: EC and dentate gyrus (DG) samples from both patients and controls. Results disclose transcript interaction networks with remarkable structures and genes not directly related to AD by previous studies. The ensemble is able to identify a variety of transcripts that play key roles in other neurological pathologies. Classical statistical assessment by means of non-parametric tests confirms the relevance of the majority of the transcripts. The ensemble approach pinpoints key metabolic mechanisms that could lead to new findings in the pathogenesis and development of AD. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

268. The Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics

NASA Technical Reports Server (NTRS)

Zhu, Yanqiu; Cohn, Stephen E.; Todling, Ricardo

1999-01-01

The Kalman filter is the optimal filter in the presence of known Gaussian error statistics and linear dynamics. Filter extension to nonlinear dynamics is nontrivial in the sense of appropriately representing high-order moments of the statistics. Monte Carlo, ensemble-based methods have been advocated as the methodology for representing high-order moments without any questionable closure assumptions (e.g., Miller 1994). Investigation along these lines has been conducted for highly idealized dynamics such as the strongly nonlinear Lorenz (1963) model as well as more realistic models of the oceans (Evensen and van Leeuwen 1996) and atmosphere (Houtekamer and Mitchell 1998). A few relevant issues in this context are the number of ensemble members necessary to properly represent the error statistics, and the modifications of the usual filter equations needed to allow for a correct update of the ensemble members (Burgers 1998). The ensemble technique has also been applied to the problem of smoothing, for which similar questions apply. Ensemble smoother examples, however, seem quite puzzling in that their state estimates are worse than those of their filter analogues (Evensen 1997). In this study, we use concepts from probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use the Lorenz (1963) model to test and compare the behavior of a variety of implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of the feasibility of these techniques for large data assimilation problems will be given at the time of the conference.
269. Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles

PubMed Central

2016-01-01

Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource-intense method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form the AR and CDK2 ensembles. Multiple available crystal structures were used to form the PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
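A hedged sketch of the O(N^2)-style greedy approximation described above: grow the ensemble one receptor conformation at a time, keeping the addition that most improves the AUC. Docking scores and activity labels are random placeholders, and a ligand's ensemble score is taken as its best (minimum) score over the members:

```python
# Greedy AUC-optimizing ensemble selection (an approximation, not the
# paper's exact algorithm). Requires scikit-learn for roc_auc_score.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_conf, n_ligands = 12, 300
labels = rng.integers(0, 2, n_ligands)            # 1 = active
# docking score matrix: rows = conformations, columns = ligands
scores = rng.normal(size=(n_conf, n_ligands)) - 0.8 * labels

def ensemble_auc(members):
    best = scores[list(members)].min(axis=0)      # best score per ligand
    return roc_auc_score(labels, -best)           # lower score = more active

chosen, remaining = [], set(range(n_conf))
while remaining:
    best_c = max(remaining, key=lambda c: ensemble_auc(chosen + [c]))
    if chosen and ensemble_auc(chosen + [best_c]) <= ensemble_auc(chosen):
        break                                     # no further improvement
    chosen.append(best_c)
    remaining.remove(best_c)
print("selected conformations:", chosen, "AUC:", round(ensemble_auc(chosen), 3))
```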
270. A variational ensemble scheme for noisy image data assimilation

NASA Astrophysics Data System (ADS)

Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne

2014-05-01

Data assimilation techniques aim at recovering a system's state-variable trajectory, denoted X, along time from partially observed noisy measurements of the system, denoted Y. These procedures, which couple dynamics and noisy measurements of the system, fulfill a twofold objective. On the one hand, they provide a denoising - or reconstruction - procedure for the data through a given model framework, and on the other hand, they provide estimation procedures for unknown parameters of the dynamics. A standard variational data assimilation problem can be formulated as the minimization of the following objective function with respect to the initial discrepancy, η, from the background initial guess:

J(η(x)) = (1/2) ||X_b(x) - X(t_0, x)||²_B + (1/2) ∫ from t_0 to t_f of ||H(X(t, x)) - Y(t, x)||²_R dt,   (1)

where the observation operator H links the state variable and the measurements. The cost function can be interpreted as the log-likelihood function associated with the a posteriori distribution of the state given the past history of measurements and the background. In this work, we aim at studying ensemble-based optimal control strategies for data assimilation. Such a formulation nicely combines the ingredients of ensemble Kalman filters and variational data assimilation (4DVar). It is also formulated as the minimization of the objective function (1), but, similarly to ensemble filters, it introduces in its objective function an empirical ensemble-based background-error covariance defined as:

B ≡ <(X_b - <X_b>)(X_b - <X_b>)^T>.   (2)

Thus, it works in an off-line smoothing mode rather than on the fly like sequential filters. The resulting ensemble variational data assimilation technique corresponds to a relatively new family of methods [1,2,3]. It presents two main advantages: first, it does not require the construction of the adjoint of the dynamics tangent linear operator, which is a considerable advantage with respect to the method's implementation; and second, it enables the handling of a flow-dependent background-error covariance matrix that can be consistently adjusted to the background error. These advantages come, however, at the cost of a reduced-rank modeling of the solution space. The B matrix is at most of rank N - 1 (N is the size of the ensemble), which is considerably lower than the dimension of the state space. This rank deficiency may introduce spurious correlation errors, which particularly impact the quality of results associated with a high-resolution computing grid. The common strategy to suppress these distant correlations in ensemble Kalman techniques is through localization procedures.
In this paper we present key theoretical properties associated with different choices of methods involved in this setup and experimentally compare the performance of several variations of an ensemble technique of interest against an incremental 4DVar method. The comparisons have been carried out on the basis of a shallow-water model, both with synthetic data and with real observations. We particularly address the potential pitfalls and advantages of the different methods. The results indicate an advantage in favor of the ensemble technique, both in quality and in computational cost, when dealing with incomplete observations. We highlight, as a premise of using ensemble variational assimilation, that the initial perturbation used to build the initial ensemble has to fit the physics of the observed phenomenon. We also apply the method to a stochastic shallow-water model which incorporates an uncertainty expression of the subgrid stress tensor related to the ensemble spread.

References: [1] A. C. Lorenc, The potential of the ensemble Kalman filter for NWP - a comparison with 4D-Var, Quart. J. Roy. Meteor. Soc., Vol. 129, pp. 3183-3203, 2003. [2] C. Liu, Q. Xiao, and B. Wang, An Ensemble-Based Four-Dimensional Variational Data Assimilation Scheme. Part I: Technical Formulation and Preliminary Test, Mon. Wea. Rev., Vol. 136(9), pp. 3363-3373, 2008. [3] M. Buehner, Ensemble-derived stationary and flow-dependent background-error covariances: Evaluation in a quasi-operational NWP setting, Quart. J. Roy. Meteor. Soc., Vol. 131(607), pp. 1013-1043, April 2005.

271. Model Averaging for Predicting the Exposure to Aflatoxin B1 Using DNA Methylation in White Blood Cells of Infants

NASA Astrophysics Data System (ADS)

Rahardiantoro, S.; Sartono, B.; Kurnia, A.

2017-03-01

In recent years, DNA methylation has become a key subject in revealing the patterns of many human diseases, and huge amounts of data are an inescapable feature of this field. Some researchers are also interested in making predictions from these huge data sets, especially using regression analysis, where classical approaches fail. Model averaging, by Ando and Li [1], is an alternative approach to this problem. This research applied model averaging to obtain the best predictions in a high-dimensional data setting. In practice, model averaging was applied to the case study of Vargas et al. [3]: data on exposure to aflatoxin B1 (AFB1) and DNA methylation in white blood cells of infants in The Gambia. The best ensemble model was selected based on the minimum MAPE, MAE, and MSE of the predictions. The result is an ensemble model, obtained by model averaging, with 15 predictors in each candidate model.
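The selection rule described in this record reduces to comparing candidate ensembles by three error metrics; a toy sketch with synthetic stand-ins (not the Gambia methylation data):

```python
# Compare candidate model-averaged predictors by MAPE, MAE, and MSE on
# held-out data and keep the one minimizing the error. Candidates here are
# hypothetical ensembles built with k predictors per candidate model.
import numpy as np

def mape(y, p): return np.mean(np.abs((y - p) / y)) * 100
def mae(y, p):  return np.mean(np.abs(y - p))
def mse(y, p):  return np.mean((y - p) ** 2)

rng = np.random.default_rng(5)
y = rng.uniform(1.0, 2.0, 50)                      # held-out responses
candidates = {k: y + rng.normal(0, 0.02 * k, 50)   # synthetic predictions
              for k in (5, 10, 15, 20)}
for k, pred in candidates.items():
    print(f"k={k:2d}  MAPE={mape(y, pred):5.2f}%  "
          f"MAE={mae(y, pred):.3f}  MSE={mse(y, pred):.4f}")
best = min(candidates, key=lambda k: mse(y, candidates[k]))
print("selected ensemble:", best)
```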
272. Aging underdamped scaled Brownian motion: Ensemble- and time-averaged particle displacements, nonergodicity, and the failure of the overdamping approximation

PubMed

Safdari, Hadiseh; Cherstvy, Andrey G; Chechkin, Aleksei V; Bodrova, Anna; Metzler, Ralf

2017-01-01

We investigate both analytically and by computer simulations the ensemble- and time-averaged, nonergodic, and aging properties of massive particles diffusing in a medium with a time-dependent diffusivity. We call this stochastic diffusion process the (aging) underdamped scaled Brownian motion (UDSBM). We demonstrate how the mean squared displacement (MSD) and the time-averaged MSD of UDSBM are affected by the inertial term in the Langevin equation, at short, intermediate, and even long diffusion times. In particular, we quantify the ballistic regime for the MSD and the time-averaged MSD as well as the spread of individual time-averaged MSD trajectories. One of the main effects we observe is that, both for the MSD and the time-averaged MSD, the ballistic regime is much shorter for superdiffusive UDSBM than for ordinary Brownian motion. In contrast, for subdiffusive UDSBM, the ballistic region extends to much longer diffusion times. Therefore, particular care needs to be taken regarding the conditions under which the overdamped limit indeed provides a correct description, even in the long-time limit. We also analyze to what extent ergodicity in the Boltzmann-Khinchin sense is broken in this nonstationary system, both for subdiffusive and superdiffusive UDSBM. Finally, the limiting case of ultraslow UDSBM is considered, with a mixed logarithmic and power-law dependence of the ensemble- and time-averaged MSDs of the particles. In the limit of strong aging, remarkably, the ordinary UDSBM and the ultraslow UDSBM behave similarly in the short-time ballistic limit. The approaches developed here open ways for considering other stochastic processes under physically important conditions when a finite particle mass and aging in the system cannot be neglected.

273. Aging underdamped scaled Brownian motion: Ensemble- and time-averaged particle displacements, nonergodicity, and the failure of the overdamping approximation

NASA Astrophysics Data System (ADS)

Safdari, Hadiseh; Cherstvy, Andrey G.; Chechkin, Aleksei V.; Bodrova, Anna; Metzler, Ralf

2017-01-01

We investigate both analytically and by computer simulations the ensemble- and time-averaged, nonergodic, and aging properties of massive particles diffusing in a medium with a time-dependent diffusivity. We call this stochastic diffusion process the (aging) underdamped scaled Brownian motion (UDSBM). We demonstrate how the mean squared displacement (MSD) and the time-averaged MSD of UDSBM are affected by the inertial term in the Langevin equation, at short, intermediate, and even long diffusion times. In particular, we quantify the ballistic regime for the MSD and the time-averaged MSD as well as the spread of individual time-averaged MSD trajectories. One of the main effects we observe is that, both for the MSD and the time-averaged MSD, the ballistic regime is much shorter for superdiffusive UDSBM than for ordinary Brownian motion. In contrast, for subdiffusive UDSBM, the ballistic region extends to much longer diffusion times. Therefore, particular care needs to be taken regarding the conditions under which the overdamped limit indeed provides a correct description, even in the long-time limit. We also analyze to what extent ergodicity in the Boltzmann-Khinchin sense is broken in this nonstationary system, both for subdiffusive and superdiffusive UDSBM. Finally, the limiting case of ultraslow UDSBM is considered, with a mixed logarithmic and power-law dependence of the ensemble- and time-averaged MSDs of the particles. In the limit of strong aging, remarkably, the ordinary UDSBM and the ultraslow UDSBM behave similarly in the short-time ballistic limit. The approaches developed here open ways for considering other stochastic processes under physically important conditions when a finite particle mass and aging in the system cannot be neglected.
274. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language

PubMed

Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

2017-01-25

Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles: the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as on the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for the conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems, without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
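The Pareto machinery underlying JuPOETs is compact to state. JuPOETs itself is written in Julia; for consistency with the other examples here, the sketch below is Python and shows only the non-dominated filtering step, on random placeholder errors:

```python
# Keep the non-dominated candidates: the current estimate of the Pareto
# front over competing training objectives.
import numpy as np

def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on one."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(errors):
    """errors: (n_candidates, n_objectives) array of training errors."""
    return [i for i, e in enumerate(errors)
            if not any(dominates(o, e) for j, o in enumerate(errors) if j != i)]

rng = np.random.default_rng(6)
errors = rng.uniform(size=(30, 4))        # 30 parameter sets, 4 objectives
print("non-dominated parameter sets:", pareto_front(errors))
```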
275. Schur polynomials and biorthogonal random matrix ensembles

DOE Office of Scientific and Technical Information (OSTI.GOV)

Tierz, Miguel

The study of the average of Schur polynomials over a Stieltjes-Wigert ensemble has been carried out by Dolivet and Tierz [J. Math. Phys. 48, 023507 (2007); e-print arXiv:hep-th/0609167], where it was shown that it is equal to quantum dimensions. Using the same approach, we extend the result to the biorthogonal case. We also study, using the Littlewood-Richardson rule, some particular cases of the quantum dimension result. Finally, we show that the notion of Giambelli compatibility of Schur averages, introduced by Borodin et al. [Adv. Appl. Math. 37, 209 (2006); e-print arXiv:math-ph/0505021], also holds in the biorthogonal setting.

276. Aging scaled Brownian motion

NASA Astrophysics Data System (ADS)

Safdari, Hadiseh; Chechkin, Aleksei V.; Jafari, Gholamreza R.; Metzler, Ralf

2015-04-01

Scaled Brownian motion (SBM) is widely used to model anomalous diffusion of passive tracers in complex and biological systems. It is a highly nonstationary process governed by the Langevin equation for Brownian motion, however, with a power-law time dependence of the noise strength. Here we study the aging properties of SBM for both unconfined and confined motion. Specifically, we derive the ensemble- and time-averaged mean squared displacements and analyze their behavior in the regimes of weak, intermediate, and strong aging. A very rich behavior is revealed for confined aging SBM, depending on the aging time and on whether the process is sub- or superdiffusive. We demonstrate that the information on the aging factorizes with respect to the lag time and exhibits a functional form that is identical to the aging behavior of scale-free continuous time random walk processes. While SBM exhibits a disparity between ensemble- and time-averaged observables and is thus weakly nonergodic, strong aging is shown to effect a convergence of the ensemble- and time-averaged mean squared displacement. Finally, we derive the density of first passage times in the semi-infinite domain, which features a crossover defined by the aging time.
277. Aging scaled Brownian motion

PubMed

Safdari, Hadiseh; Chechkin, Aleksei V; Jafari, Gholamreza R; Metzler, Ralf

2015-04-01

Scaled Brownian motion (SBM) is widely used to model anomalous diffusion of passive tracers in complex and biological systems. It is a highly nonstationary process governed by the Langevin equation for Brownian motion, however, with a power-law time dependence of the noise strength. Here we study the aging properties of SBM for both unconfined and confined motion. Specifically, we derive the ensemble- and time-averaged mean squared displacements and analyze their behavior in the regimes of weak, intermediate, and strong aging. A very rich behavior is revealed for confined aging SBM, depending on the aging time and on whether the process is sub- or superdiffusive. We demonstrate that the information on the aging factorizes with respect to the lag time and exhibits a functional form that is identical to the aging behavior of scale-free continuous time random walk processes. While SBM exhibits a disparity between ensemble- and time-averaged observables and is thus weakly nonergodic, strong aging is shown to effect a convergence of the ensemble- and time-averaged mean squared displacement. Finally, we derive the density of first passage times in the semi-infinite domain, which features a crossover defined by the aging time.

278. Creating ensembles of decision trees through sampling

DOEpatents

Kamath, Chandrika; Cantu-Paz, Erick

2005-08-30

A system for decision tree ensembles that includes a module to read the data, a module to sort the data, a module to evaluate a potential split of the data according to some criterion using a random sample of the data, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method is based on statistical sampling techniques and includes the steps of reading the data; sorting the data; evaluating a potential split according to some criterion using a random sample of the data; splitting the data; and combining multiple decision trees in ensembles.
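A rough approximation of the patented scheme using off-the-shelf tools: here each tree is trained on a random subsample (bagging via scikit-learn), which approximates, but is not identical to, the patent's step of evaluating each candidate split on a random sample of the data:

```python
# Ensemble of decision trees where each tree sees only a random sample,
# combined by voting. A stand-in illustration, not the patented system.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

ensemble = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=25,
    max_samples=0.3,      # each tree is fit on a 30% random sample
    random_state=0,
).fit(X, y)
print("training accuracy:", ensemble.score(X, y))
```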
279. Ensemble hydro-meteorological forecasting for early warning of floods and scheduling of hydropower production

NASA Astrophysics Data System (ADS)

Solvang Johansen, Stian; Steinsland, Ingelin; Engeland, Kolbjørn

2016-04-01

Running hydrological models with precipitation and temperature ensemble forcing to generate ensembles of streamflow is a commonly used method in operational hydrology. Evaluations of streamflow ensembles have, however, revealed that the ensembles are biased with respect to both mean and spread. Thus, postprocessing of the ensembles is needed in order to improve forecast skill. The aims of this study are (i) to evaluate how postprocessing of streamflow ensembles works for Norwegian catchments within different hydrological regimes and (ii) to demonstrate how postprocessed streamflow ensembles are used operationally by a hydropower producer. These aims were achieved by postprocessing forecasted daily discharge for 10 lead times for 20 catchments in Norway, using EPS forcing from ECMWF applied to the semi-distributed HBV model, with each catchment divided into 10 elevation zones. Statkraft Energi uses forecasts from these catchments for scheduling hydropower production. The catchments represent different hydrological regimes. Some catchments have stable winter conditions with winter low flow and a major flood event during spring or early summer caused by snow melt. Others have a more mixed snow-rain regime, often with a secondary flood season during autumn; in the coastal areas, the streamflow is dominated by rain, and the main flood season is autumn and winter. For postprocessing, a Bayesian model averaging (BMA) model close to that of Kleiber et al. (2011) is used. The model creates a predictive PDF that is a weighted average of PDFs centered on the individual bias-corrected forecasts. The weights are here equal, since all ensemble members come from the same model and thus have the same probability. For modeling streamflow, the gamma distribution is chosen as the predictive PDF. The bias-correction parameters and the PDF parameters are estimated using a 30-day sliding-window training period. Preliminary results show that the improvement varies between catchments, depending on where they are situated and on the hydrological regime. There is an improvement in CRPS for all catchments compared to the raw EPS ensembles, up to lead times of 5-7 days. The postprocessing also improves the MAE of the median of the predictive PDF compared to the median of the raw EPS, though less than for the CRPS, often only up to lead times of 2-3 days. The streamflow ensembles are to some extent used operationally by Statkraft Energi (a hydropower company in Norway) for early warning, risk assessment and decision making. Presently, all forecasts used operationally for short-term scheduling are deterministic, but ensembles are used visually for expert assessment of risk in difficult situations where, e.g., there is a chance of overflow in a reservoir. However, there are plans to incorporate ensembles in the daily scheduling of hydropower production.
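The scores used in this evaluation are straightforward to compute; below is the standard ensemble estimator of the CRPS, plus the MAE of the ensemble median, applied to synthetic forecasts (a generic sketch, not the verification code used in the study):

```python
# CRPS for an ensemble forecast via the standard estimator
# CRPS = mean|x_i - y| - 0.5 * mean|x_i - x_j|; lower is better.
import numpy as np

def crps_ensemble(members, obs):
    m = np.asarray(members, dtype=float)
    return np.mean(np.abs(m - obs)) - 0.5 * np.mean(
        np.abs(m[:, None] - m[None, :]))

rng = np.random.default_rng(8)
obs = 120.0                                   # observed discharge, m3/s
raw = rng.normal(100.0, 10.0, 51)             # biased, under-dispersive EPS
post = rng.normal(118.0, 14.0, 51)            # bias-corrected, re-spread
for name, ens in [("raw", raw), ("postprocessed", post)]:
    print(f"{name:14s} CRPS={crps_ensemble(ens, obs):6.2f} "
          f"MAE(median)={abs(np.median(ens) - obs):6.2f}")
```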
280. Sequential updating of multimodal hydrogeologic parameter fields using localization and clustering techniques

NASA Astrophysics Data System (ADS)

Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta

2009-07-01

Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary-facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
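A minimal illustration of the localization ingredient: the rank-deficient sample covariance of a small ensemble is Schur-multiplied by a distance-dependent taper, damping long-range entries. A simple Gaussian taper stands in here for the usual Gaspari-Cohn function:

```python
# Grid-based covariance localization: element-wise product of the ensemble
# sample covariance with a taper that decays with grid distance.
import numpy as np

rng = np.random.default_rng(9)
n_grid, n_members, L = 100, 20, 10.0      # grid cells, ensemble size, length scale
ensemble = np.cumsum(rng.normal(size=(n_members, n_grid)), axis=1)

X = ensemble - ensemble.mean(axis=0)
P = X.T @ X / (n_members - 1)             # rank <= n_members - 1

dist = np.abs(np.subtract.outer(np.arange(n_grid), np.arange(n_grid)))
taper = np.exp(-(dist / L) ** 2)
P_loc = taper * P                         # localized covariance

far = dist > 3 * L
print("mean |cov| at long range, raw vs localized:",
      np.abs(P[far]).mean().round(3), np.abs(P_loc[far]).mean().round(3))
```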
281. Statistical uncertainty of extreme wind storms over Europe derived from a probabilistic clustering technique

NASA Astrophysics Data System (ADS)

Walz, Michael; Leckebusch, Gregor C.

2016-04-01

Extratropical wind storms pose one of the most dangerous and loss-intensive natural hazards for Europe. However, with only 50 years of high-quality observational data, it is difficult to assess the statistical uncertainty of these sparse events based on observations alone. Over the last decade, seasonal ensemble forecasts have become indispensable in quantifying the uncertainty of weather prediction on seasonal timescales. In this study, seasonal forecasts are used in a climatological context: by making use of the up to 51 ensemble members, a broad and physically consistent statistical base can be created. This base can then be used to assess the statistical uncertainty of extreme wind storm occurrence more accurately. In order to determine the statistical uncertainty of storms with different paths of progression, a probabilistic clustering approach using regression mixture models is used to objectively assign storm tracks (based either on core pressure or on extreme wind speeds) to different clusters. The advantage of this technique is that the entire lifetime of a storm is considered by the clustering algorithm. Quadratic curves are found to describe the storm tracks most accurately. Three main clusters (diagonal, horizontal or vertical progression of the storm track) can be identified, each of which has its own particular features. Basic storm features like average velocity and duration are calculated and compared for each cluster. The main benefit of this clustering technique, however, is to evaluate whether the clusters show different degrees of uncertainty, e.g. more (less) spread for tracks approaching Europe horizontally (diagonally). This statistical uncertainty is compared for different seasonal forecast products.
282. Unlocking the climate riddle in forested ecosystems

Treesearch

Greg C. Liknes; Christopher W. Woodall; Brian F. Walters; Sara A. Goeking

2012-01-01

Climate information is often used as a predictor in ecological studies, where temporal averages are typically based on climate normals (30-year means) or seasonal averages. While ensemble projections of future climate forecast a higher global average annual temperature, they also predict increased climate variability. It remains to be seen whether forest ecosystems...

283. Global carbon assimilation system using a local ensemble Kalman filter with multiple ecosystem models

NASA Astrophysics Data System (ADS)

Zhang, Shupeng; Yi, Xue; Zheng, Xiaogu; Chen, Zhuoqi; Dan, Bo; Zhang, Xuanze

2014-11-01

In this paper, a global carbon assimilation system (GCAS) is developed for optimizing the global land surface carbon flux at 1° resolution using multiple ecosystem models. In GCAS, three ecosystem models (the Boreal Ecosystem Productivity Simulator, the Carnegie-Ames-Stanford Approach, and the Community Atmosphere Biosphere Land Exchange) produce the prior fluxes, and an atmospheric transport model, the Model for OZone And Related chemical Tracers, is used to calculate the atmospheric CO2 concentrations resulting from these prior fluxes. A local ensemble Kalman filter is developed to assimilate atmospheric CO2 data observed at 92 stations in order to optimize the carbon flux for six land regions, and the Bayesian model averaging method is implemented in GCAS to calculate the weighted average of the optimized fluxes based on the individual ecosystem models. The weights for the models are found according to the closeness of their forecasted CO2 concentrations to observations. Results of this study show that the model weights vary in time and space, allowing for an optimal utilization of the different strengths of the different ecosystem models. It is also demonstrated that spatial localization is an effective technique for avoiding spurious optimization results in regions that are not well constrained by the atmospheric data. Based on the multi-model optimized flux from GCAS, we find that the average global terrestrial carbon sink over the 2002-2008 period is 2.97 ± 1.1 PgC yr⁻¹, and the sinks are 0.88 ± 0.52, 0.27 ± 0.33, 0.67 ± 0.39, 0.90 ± 0.68, 0.21 ± 0.31, and 0.04 ± 0.08 PgC yr⁻¹ for North America, South America, Africa, Eurasia, Tropical Asia, and Australia, respectively. This multi-model GCAS can be used to improve global carbon cycle estimation.
284. Data assimilation of citizen collected information for real-time flood hazard mapping

NASA Astrophysics Data System (ADS)

Sayama, T.; Takara, K. T.

2017-12-01

Many studies of data assimilation in hydrology have focused on the integration of satellite remote sensing and in-situ monitoring data into hydrologic or land surface models. For flood prediction as well, recent studies have demonstrated the assimilation of remotely sensed inundation information with flood inundation models. In actual flood disaster situations, citizen-collected information, including local reports by residents and rescue teams and, more recently, tweets via social media, also contains valuable information. The main interest of this study is how to effectively use such citizen-collected information for real-time flood hazard mapping. Here we propose a new data assimilation technique based on pre-conducted ensemble inundation simulations, which updates inundation depth distributions sequentially as local data become available. The proposed method is composed of the following two steps. The first step is based on a weighted average of the preliminary ensemble simulations, whose weights are updated by a Bayesian approach. The second step is based on an optimal interpolation, where the covariance matrix is calculated from the ensemble simulations. The proposed method was applied to case studies including an actual flood event. Two situations are considered: a more idealized one, in which continuous flood inundation depth information is assumed to be available at multiple locations, and a more realistic one for such a severe flood disaster, in which only uncertain and non-continuous information is available for assimilation. The results show that in the first, idealized situation, the large-scale inundation during the flood was estimated reasonably well, with an RMSE < 0.4 m on average. In the second, more realistic situation, the error becomes larger (RMSE of about 0.5 m), and the impact of the optimal interpolation becomes comparatively less effective. Nevertheless, the applications of the proposed data assimilation method demonstrate its high potential for assimilating citizen-collected information for real-time flood hazard mapping in the future.
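A generic sketch of the first step of the proposed method (the Bayesian weighting of pre-computed ensemble simulations); the Gaussian report-error model and all numbers are illustrative assumptions, not values from the study:

```python
# Re-weight pre-conducted ensemble inundation maps each time a local depth
# report arrives, assuming Gaussian error on the reported depth.
import numpy as np

rng = np.random.default_rng(10)
n_members, n_cells = 30, 1000
# pre-conducted ensemble of inundation depth maps (m)
depth_maps = np.clip(rng.normal(1.0, 0.5, (n_members, n_cells)), 0, None)
weights = np.full(n_members, 1.0 / n_members)     # prior: equal weights

def assimilate(cell, reported_depth, sigma=0.3):
    """Bayes update of member weights from one report at a grid cell."""
    global weights
    lik = np.exp(-0.5 * ((depth_maps[:, cell] - reported_depth) / sigma) ** 2)
    weights = weights * lik
    weights /= weights.sum()

assimilate(cell=417, reported_depth=1.8)
assimilate(cell=52, reported_depth=0.4)
estimate = weights @ depth_maps                   # weighted-average depth map
print("effective ensemble size:", 1.0 / np.sum(weights ** 2))
```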
285. Determination of longitudinal aerodynamic derivatives using flight data from an icing research aircraft

NASA Technical Reports Server (NTRS)

Ranaudo, R. J.; Batterson, J. G.; Reehorst, A. L.; Bond, T. H.; Omara, T. M.

1989-01-01

A flight test was performed with the NASA Lewis Research Center's DH-6 icing research aircraft. The purpose was to employ a flight test procedure and data analysis method to determine the accuracy with which the effects of ice on aircraft stability and control could be measured. For simplicity, flight testing was restricted to the short-period longitudinal mode. Two flights were flown in a clean (baseline) configuration, and two flights were flown with simulated horizontal tail ice. Forty-five repeat doublet maneuvers were performed in each of four test configurations, at a given trim speed, to determine the ensemble variation of the estimated stability and control derivatives. Additional maneuvers were also performed in each configuration to determine the variation in the longitudinal derivative estimates over a wide range of trim speeds. Stability and control derivatives were estimated by a Modified Stepwise Regression (MSR) technique. A measure of the confidence in the derivative estimates was obtained by comparing the standard error for the ensemble of repeat maneuvers to the average of the estimated standard errors predicted by the MSR program. A multiplicative relationship was determined between the ensemble standard error and the averaged program standard errors. In addition, a 95 percent confidence interval analysis was performed for the elevator effectiveness estimates, C_m_δe. This analysis identified the speed range where changes in C_m_δe could be attributed to icing effects. The magnitude of the icing effects on the derivative estimates was strongly dependent on flight speed and aircraft wing flap configuration. With wing flaps up, the estimated derivatives were degraded most at the lower speeds corresponding to that configuration. With wing flaps extended to 10 degrees, the estimated derivatives were degraded most at the higher corresponding speeds. The effects of icing on the changes in longitudinal stability and control derivatives were adequately determined by the flight test procedure and the MSR analysis method discussed herein.
Novel forecasting approaches using combination of machine learning and statistical models for flood susceptibility mapping

PubMed

Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah

2018-07-01

In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. The individual models included artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy, and the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the highest Area Under the Receiver Operating Characteristic curve (AUROC) belonged to boosted regression trees (0.975), and the lowest was recorded for the generalized linear model (0.642). On the other hand, the proposed EMmedian resulted in the highest accuracy (0.976) among all models. In spite of the outstanding performance of some models, variability among the predictions of the individual models was considerable. Therefore, to reduce uncertainty and obtain more generalizable, more stable, and less sensitive models, ensemble forecasting approaches, and in particular the EMmedian, are recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.
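The EMmedian combiner is the simplest of the seven to state precisely: the ensemble susceptibility at each cell is the cell-wise median of the member predictions. A minimal sketch, with array shapes assumed for illustration:

```python
import numpy as np

def em_median(preds):
    """EMmedian: preds is (n_models, n_cells) susceptibility in [0, 1]."""
    return np.median(preds, axis=0)      # cell-wise median across members

def em_ca(preds):
    """EMca-style committee averaging: the plain cell-wise mean."""
    return np.mean(preds, axis=0)
```

The median is less sensitive than the mean to a single badly calibrated member, which is consistent with the robustness argument made above.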
Using Bayes Model Averaging for Wind Power Forecasts

NASA Astrophysics Data System (ADS)

Preede Revheim, Pål; Beyer, Hans Georg

2014-05-01

For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might, however, be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It is then of interest to give these sites a greater influence on the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period (see the sketch below). In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power, it resulted either in problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or in severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data do not contain information, but it has the disadvantage of nearly doubling the number of model parameters to be estimated. Second, the BMA procedure is run with group mean wind power as the response variable instead of group mean wind speed. This also solves the problem with longer consecutive periods without information in the input data, but it leaves the power curve to also be estimated from the data.

[1] Raftery, A. E., et al. (2005). Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Monthly Weather Review, 133, 1155-1174.
[2] Revheim, P. P. and H. G. Beyer (2013). Using Bayesian Model Averaging for wind farm group forecasts. EWEA Wind Power Forecasting Technology Workshop, Rotterdam, 4-5 December 2013.
[3] Sloughter, J. M., T. Gneiting and A. E. Raftery (2010). Probabilistic Wind Speed Forecasting Using Ensembles and Bayesian Model Averaging. Journal of the American Statistical Association, Vol. 105, No. 489, 25-35.
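The weighted-mixture structure of the BMA predictive PDF can be sketched directly. Gaussian member PDFs are used here purely for illustration; Sloughter et al. [3] actually fit a gamma model for wind speed, and in practice the weights and spread come from training on a historical window.

```python
import numpy as np
from scipy.stats import norm

def bma_pdf(y, forecasts, weights, sigma):
    """BMA predictive density at the points y.

    forecasts: (K,) bias-corrected member forecasts
    weights:   (K,) BMA weights (posterior model probabilities, sum to 1)
    sigma:     member spread; Gaussian kernels are an illustrative choice
    """
    y = np.atleast_1d(y)
    comps = [w * norm.pdf(y, loc=f, scale=sigma)
             for w, f in zip(weights, forecasts)]
    return np.sum(comps, axis=0)
```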
Laser ektacytometry and evaluation of statistical characteristics of inhomogeneous ensembles of red blood cells

NASA Astrophysics Data System (ADS)

Nikitin, S. Yu.; Priezzhev, A. V.; Lugovtsov, A. E.; Ustinov, V. D.; Razgulin, A. V.

2014-10-01

The paper is devoted to the development of the laser ektacytometry technique for evaluation of the statistical characteristics of inhomogeneous ensembles of red blood cells (RBCs). We have theoretically analyzed laser beam scattering by inhomogeneous ensembles of elliptical discs modeling red blood cells in the ektacytometer. The analysis shows that the laser ektacytometry technique allows for quantitative evaluation of such population characteristics of RBCs as the mean cell shape, the variance of cell deformability, and the asymmetry of the deformability distribution. Moreover, we show that the deformability distribution itself can be retrieved by solving a specific Fredholm integral equation of the first kind. At this stage we do not take into account the scatter in the RBC sizes.
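First-kind Fredholm inversions of this kind are ill-posed, so some regularization is needed in practice. The paper's own inversion scheme is not given in the abstract; the following is a generic Tikhonov sketch on a discretized kernel, with the quadrature rule and the regularization strength as assumptions.

```python
import numpy as np

def solve_fredholm(K, g, dt, alpha=1e-3):
    """Recover f in g(s) = ∫ K(s, t) f(t) dt from sampled data.

    K: (m, n) kernel samples K(s_i, t_j); g: (m,) measured signal;
    dt: quadrature step; alpha: Tikhonov strength (tune to noise level).
    """
    A = K * dt                                  # quadrature: g ≈ A @ f
    n = A.shape[1]
    f = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g)
    return np.clip(f, 0.0, None)                # a distribution is non-negative
```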
The interplay between cooperativity and diversity in model threshold ensembles

PubMed Central

Cervera, Javier; Manzanares, José A.; Mafe, Salvador

2014-01-01

The interplay between cooperativity and diversity is crucial for biological ensembles, because single-molecule experiments show a significant degree of heterogeneity, and also for artificial nanostructures, because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules, and the development of biologically inspired engineering designs with individually different building blocks. PMID:25142516

Sampling-based ensemble segmentation against inter-operator variability

NASA Astrophysics Data System (ADS)

Huo, Jing; Okada, Kazunori; Pope, Whitney; Brown, Matthew

2011-03-01

Inconsistency and a lack of reproducibility are commonly associated with semi-automated segmentation methods. In this study, we developed an ensemble approach to improve reproducibility and applied it to glioblastoma multiforme (GBM) brain tumor segmentation on T1-weighted contrast-enhanced MR volumes. The proposed approach combines sampling-based simulations and ensemble segmentation into a single framework; it generates a set of segmentations by perturbing user initialization and user-specified internal parameters, then fuses the set of segmentations into a single consensus result. Three combination algorithms were applied: majority voting, averaging, and expectation-maximization (EM); a sketch of the first follows below. The reproducibility of the proposed framework was evaluated by a controlled experiment on 16 tumor cases from a multicenter drug trial. The ensemble framework had significantly better reproducibility than the individual base Otsu thresholding method (p < .001).
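Of the three combination rules, majority voting is the easiest to make concrete. A minimal sketch, assuming binary masks of identical shape from the perturbed runs:

```python
import numpy as np

def majority_vote(segmentations):
    """segmentations: (n_runs, ny, nx) binary masks -> consensus mask."""
    votes = segmentations.sum(axis=0)
    return (2 * votes > segmentations.shape[0]).astype(np.uint8)

def soft_average(segmentations):
    """Averaging rule: per-voxel agreement fraction, thresholdable at 0.5."""
    return segmentations.mean(axis=0)
```

The EM rule goes one step further by estimating a reliability for each run and weighting its vote accordingly, in the spirit of STAPLE-style label fusion.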
Using Meteorological Analogues for Reordering Postprocessed Precipitation Ensembles in Hydrological Forecasting

NASA Astrophysics Data System (ADS)

Bellier, Joseph; Bontron, Guillaume; Zin, Isabella

2017-12-01

Meteorological ensemble forecasts are nowadays widely used as input of hydrological models for probabilistic streamflow forecasting. These forcings are frequently biased and have to be statistically postprocessed, most of the time using univariate techniques that apply independently to individual locations, lead times and weather variables. Postprocessed ensemble forecasts therefore need to be reordered so as to reconstruct suitable multivariate dependence structures. The Schaake shuffle and ensemble copula coupling are the two most popular methods for this purpose. This paper proposes two adaptations of them that make use of meteorological analogues for reconstructing spatiotemporal dependence structures of precipitation forecasts. Performances of the original and adapted techniques are compared through a multistep verification experiment using real forecasts from the European Centre for Medium-Range Weather Forecasts. This experiment evaluates not only multivariate precipitation forecasts but also the corresponding streamflow forecasts that derive from hydrological modeling. Results show that the relative performances of the different reordering methods vary depending on the verification step. In particular, the standard Schaake shuffle is found to perform poorly when evaluated on streamflow. This emphasizes the crucial role of the precipitation spatiotemporal dependence structure in hydrological ensemble forecasting.
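For reference, the standard Schaake shuffle that the adaptations build on can be stated in a few lines: postprocessed ensemble values are reordered so that their ranks reproduce the ranks of a template trajectory set. A one-dimensional sketch (one site, one lead time; real use applies this jointly across sites and lead times):

```python
import numpy as np

def schaake_shuffle(ensemble, template):
    """Reorder `ensemble` so its ranks match those of `template`.

    ensemble, template: (n_members,) arrays; the template values are
    typically historical observations (or, in the adapted methods
    above, meteorological analogues) for the same date set.
    """
    ranks = np.argsort(np.argsort(template))   # rank of each template value
    return np.sort(ensemble)[ranks]
```

Applied column by column with a common template, the shuffled members inherit the template's rank-correlation structure across columns, which is exactly the multivariate dependence being reconstructed.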
iACP-GAEnsC: Evolutionary genetic algorithm based ensemble classification of anticancer peptides by utilizing hybrid feature space

PubMed

Akbar, Shahid; Hayat, Maqsood; Iqbal, Muhammad; Jan, Mian Ahmad

2017-06-01

Cancer is a fatal disease, responsible for one-quarter of all deaths in developed countries. Traditional anticancer therapies, such as chemotherapy and radiation, are highly expensive, susceptible to errors and often ineffective, and they induce severe side effects on human cells. Due to the perilous impact of cancer, the development of an accurate and highly efficient intelligent computational model is desirable for the identification of anticancer peptides. In this paper, an evolutionary genetic algorithm-based ensemble model, 'iACP-GAEnsC', is proposed for the identification of anticancer peptides. In this model, the protein sequences are formulated using three different discrete feature representation methods, i.e., amphiphilic pseudo amino acid composition, g-gap dipeptide composition, and reduced amino acid alphabet composition. The extracted feature spaces are investigated separately and then merged to exhibit the significance of hybridization. In addition, the predicted results of the individual classifiers are combined using a genetic-algorithm-optimized weighting and a simple majority-voting technique, in order to enhance the true classification rate. It is observed that the genetic algorithm-based ensemble classification outperforms both the individual classifiers and the simple majority-voting ensemble. The performance of the genetic algorithm-based ensemble classification is highest on the hybrid feature space, with an accuracy of 96.45%. In comparison to the existing techniques, the 'iACP-GAEnsC' model has achieved remarkable improvement in terms of various performance metrics. Based on the simulation results, the 'iACP-GAEnsC' model might be a leading tool in the field of drug design and proteomics for researchers. Copyright © 2017 Elsevier B.V. All rights reserved.

Cell-cell bioelectrical interactions and local heterogeneities in genetic networks: a model for the stabilization of single-cell states and multicellular oscillations

PubMed

Cervera, Javier; Manzanares, José A; Mafe, Salvador

2018-04-04

Genetic networks operate in the presence of local heterogeneities in single-cell transcription and translation rates. Bioelectrical networks and spatio-temporal maps of cell electric potentials can influence multicellular ensembles. Could cell-cell bioelectrical interactions mediated by intercellular gap junctions contribute to the stabilization of multicellular states against local genetic heterogeneities? We theoretically analyze this question on the basis of two well-established experimental facts: (i) the membrane potential is a reliable read-out of the single-cell electrical state and (ii) when the cells are coupled together, their individual cell potentials can be influenced by ensemble-averaged electric potentials. We propose a minimal biophysical model for the coupling between genetic and bioelectrical networks that associates the local changes occurring in the transcription and translation rates of an ion channel protein with abnormally low (depolarized) cell potentials. We then analyze the conditions under which the depolarization of a small region (patch) in a multicellular ensemble can be reverted by its bioelectrical coupling with the (normally polarized) neighboring cells. We show also that the coupling between genetic and bioelectric networks of non-excitable cells, modulated by average electric potentials at the multicellular ensemble level, can produce oscillatory phenomena. The simulations show the importance of the single-cell potentials characteristic of the polarized and depolarized states, the relative sizes of the abnormally polarized patch and the rest of the normally polarized ensemble, and the intercellular coupling.
"Intelligent Ensemble" Projections of Precipitation and Surface Radiation in Support of Agricultural Climate Change Adaptation

NASA Technical Reports Server (NTRS)

Taylor, Patrick C.; Baker, Noel C.

2015-01-01

Earth's climate is changing and will continue to change into the foreseeable future. Expected changes in the climatological distribution of precipitation, surface temperature, and surface solar radiation will significantly impact agriculture. Adaptation strategies are, therefore, required to reduce the agricultural impacts of climate change. Climate change projections of precipitation, surface temperature, and surface solar radiation distributions are necessary input for adaptation planning studies. These projections are conventionally constructed from an ensemble of climate model simulations (e.g., the Coupled Model Intercomparison Project 5 (CMIP5)) as an equally weighted average: one model, one vote. Each climate model, however, represents the array of climate-relevant physical processes with varying degrees of fidelity, influencing the projection of individual climate variables differently. Presented here is a new approach, termed the "Intelligent Ensemble", that constructs climate variable projections by weighting each model according to its ability to represent key physical processes, e.g., the precipitation probability distribution. This approach provides added value over the equal-weighted average method. Physical process metrics applied in the "Intelligent Ensemble" method are created using a combination of NASA and NOAA satellite and surface-based cloud, radiation, temperature, and precipitation data sets. The "Intelligent Ensemble" method is applied to the RCP4.5 and RCP8.5 anthropogenic climate forcing simulations within the CMIP5 archive to develop a set of climate change scenarios for precipitation, temperature, and surface solar radiation in each USDA Farm Resource Region for use in climate change adaptation studies.
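The abstract does not give the weighting formula, so the sketch below only illustrates the general shape of such a performance-weighted ensemble: each model receives a weight that decays with its process-metric error against observations, and the projection is the weighted average. The exponential mapping and the scale tau are assumptions, not the paper's method.

```python
import numpy as np

def intelligent_ensemble(projections, metric_errors, tau=1.0):
    """Performance-weighted ensemble projection (illustrative form).

    projections:   (n_models, ...) projected fields, one per model
    metric_errors: (n_models,) non-negative process-metric errors, e.g.
                   mismatch of each model's precipitation distribution
                   against satellite observations
    tau:           assumed error scale controlling weight contrast
    """
    w = np.exp(-np.asarray(metric_errors) / tau)
    w = w / w.sum()                      # weights replace "one model, one vote"
    return np.tensordot(w, projections, axes=1)
```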
Fidelity decay in interacting two-level boson systems: Freezing and revivals

NASA Astrophysics Data System (ADS)

Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.

2011-05-01

We study the fidelity decay in the k-body embedded ensembles of random matrices for bosons distributed in two single-particle states, considering the reference or unperturbed Hamiltonian as the one-body terms and the diagonal part of the k-body embedded ensemble of random matrices, and the perturbation as the residual off-diagonal part of the interaction. We calculate the ensemble-averaged fidelity with respect to an initial random state within linear response theory to second order in the perturbation strength and demonstrate that it displays a freeze of the fidelity. During the freeze, the average fidelity exhibits periodic revivals at integer values of the Heisenberg time t_H. By selecting specific k-body terms of the residual interaction, we find that the periodicity of the revivals during the freeze is an integer fraction of t_H, thus relating the period of the revivals with the range k of the perturbing terms. Numerical calculations confirm the analytical results.

High northern latitude temperature extremes, 1400-1999

NASA Astrophysics Data System (ADS)

Tingley, M. P.; Huybers, P.; Hughen, K. A.

2009-12-01

There is often an interest in determining which interval features the most extreme value of a reconstructed climate field, such as the warmest year or decade in a temperature reconstruction. Previous approaches to this type of question have not fully accounted for the spatial and temporal covariance in the climate field when assessing the significance of extreme values. Here we present results from applying BARSAT, a new Bayesian approach to reconstructing climate fields, to a 600-year multiproxy temperature data set that covers land areas between 45N and 85N. The end result of the analysis is an ensemble of spatially and temporally complete realizations of the temperature field, each of which is consistent with the observations and the estimated values of the parameters that define the assumed spatial and temporal covariance functions. In terms of the spatial average temperature, 1990-1999 was the warmest decade in the 1400-1999 interval in each of 2000 ensemble members, while 1995 was the warmest year in 98% of the ensemble members (the tally below sketches this computation). A similar analysis at each node of a regular 5-degree grid gives insight into the spatial distribution of warm temperatures, and reveals that 1995 was anomalously warm in Eurasia, whereas 1998 featured extreme warmth in North America. In 70% of the ensemble members, 1601 featured the coldest spatial average, indicating that the eruption of Huaynaputina in Peru in 1600 (with a volcanic explosivity index of 6) had a major cooling impact on the high northern latitudes. Repeating this analysis at each node reveals the varying impacts of major volcanic eruptions on the distribution of extreme cooling. Finally, we use the ensemble to investigate extremes in the time evolution of centennial temperature trends, and find that in more than half the ensemble members, the greatest rate of change in the spatial mean time series was a cooling centered at 1600. The largest rate of centennial-scale warming, however, occurred in the 20th century in more than 98% of the ensemble members.
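Probabilities of this kind drop out of the posterior ensemble by simple counting: the probability that a year is the warmest is the fraction of realizations in which it attains the maximum. A minimal sketch with assumed array shapes:

```python
import numpy as np

def warmest_year_probability(realizations, years):
    """realizations: (n_draws, n_years) spatial-mean temperatures, one
    row per posterior realization; returns {year: P(year is warmest)}."""
    winners = np.argmax(realizations, axis=1)        # warmest year per draw
    counts = np.bincount(winners, minlength=len(years))
    return dict(zip(years, counts / realizations.shape[0]))
```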
Single particle characterization using a light scattering module coupled to a time-of-flight aerosol mass spectrometer

NASA Astrophysics Data System (ADS)

Cross, E. S.; Onasch, T. B.; Canagaratna, M.; Jayne, J. T.; Kimmel, J.; Yu, X.-Y.; Alexander, M. L.; Worsnop, D. R.; Davidovits, P.

2008-12-01

We present the first single-particle results obtained using an Aerodyne time-of-flight aerosol mass spectrometer coupled with a light scattering module (LS-ToF-AMS). The instrument was deployed at the T1 ground site approximately 40 km northeast of the Mexico City Metropolitan Area (MCMA) as part of the MILAGRO field study in March of 2006. The instrument was operated as a standard AMS from 12-30 March, acquiring average chemical composition and size distributions for the ambient aerosol, and in single-particle mode from 27-30 March. Over a 75-h sampling period, 12,853 single-particle mass spectra were optically triggered, saved, and analyzed. The correlated optical and chemical detection allowed detailed examination of single-particle collection and quantification within the LS-ToF-AMS. The single-particle data enabled the mixing states of the ambient aerosol to be characterized within the context of the size-resolved ensemble chemical information. The particulate mixing states were examined as a function of sampling time, and most of the particles were found to be internal mixtures containing many of the organic and inorganic species identified in the ensemble analysis. The single-particle mass spectra were deconvolved, using techniques developed for ensemble AMS data analysis, into HOA, OOA, NH4NO3, (NH4)2SO4, and NH4Cl fractions. Average single-particle mass and chemistry measurements are shown to be in agreement with ensemble MS and PTOF measurements. While a significant fraction of ambient particles were internal mixtures of varying degrees, single-particle measurements of chemical composition allowed the identification of time periods during which the ambient ensemble was externally mixed. In some cases the chemical composition of the particles suggested a likely source. Throughout the full sampling period, the ambient ensemble was an external mixture of combustion-generated HOA particles from local sources (e.g. traffic), with number concentrations peaking during morning rush hour (04:00-08:00 LT) each day, and more processed particles of mixed composition from nonspecific sources. From 09:00 to 12:00 LT all particles within the ambient ensemble, including the locally produced HOA particles, became coated with NH4NO3 due to photochemical production of HNO3. The number concentration of externally mixed HOA particles remained low during daylight hours. Throughout the afternoon the OOA component dominated the organic fraction of the single particles, likely due to secondary organic aerosol formation and condensation. Single-particle mass fractions of (NH4)2SO4 were lowest during the day and highest during the night. In one instance, gas-to-particle condensation of (NH4)2SO4 was observed on all measured particles within a strong SO2 plume arriving at T1 from the northwest. Particles with high NH4Cl mass fractions were identified during early morning periods. A limited number of particles (~5% of the total number) with mass spectral features characteristic of biomass burning were also identified.

IntelliHealth: A medical decision support application using a novel weighted multi-layer classifier ensemble framework

PubMed

Bashir, Saba; Qamar, Usman; Khan, Farhan Hassan

2016-02-01

Accuracy plays a vital role in the medical field as it concerns the life of an individual. Extensive research has been conducted on disease classification and prediction using machine learning techniques. However, there is no agreement on which classifier produces the best results: a specific classifier may be better than others for a specific dataset, but another classifier could perform better for some other dataset. Ensembles of classifiers have proved to be an effective way to improve classification accuracy. In this research we present an ensemble framework with multi-layer classification using enhanced bagging and optimized weighting. The proposed model, called "HM-BagMoov", overcomes conventional performance bottlenecks by utilizing an ensemble of seven heterogeneous classifiers. The framework is evaluated on five different heart disease datasets, four breast cancer datasets, two diabetes datasets, two liver disease datasets and one hepatitis dataset obtained from public repositories. The analysis of the results shows that the ensemble framework achieved the highest accuracy, sensitivity and F-measure when compared with individual classifiers for all the diseases. In addition to this, the ensemble framework also achieved the highest accuracy when compared with state-of-the-art techniques. An application named "IntelliHealth" was also developed based on the proposed model that may be used by hospitals/doctors for diagnostic advice. Copyright © 2015 Elsevier Inc. All rights reserved.

An ensemble pulsar time

NASA Technical Reports Server (NTRS)

Petit, Gerard; Thomas, Claudine; Tavella, Patrizia

1993-01-01

Millisecond pulsars are galactic objects that exhibit a very stable spinning period. Several tens of these celestial clocks have now been discovered, which opens the possibility that an average time scale may be deduced through a long-term stability algorithm. Such an ensemble average makes it possible to reduce the level of the instabilities originating from the pulsars or from other sources of noise, which are unknown but independent. The basis for such an algorithm is presented and applied to real pulsar data. It is shown that pulsar time could soon become more stable than the present atomic time for averaging times of a few years. Pulsar time can also be used as a flywheel to maintain the accuracy of atomic time in case of temporary failure of the primary standards, or to transfer the improved accuracy of future standards back to the present.
Hierarchical encoding makes individuals in a group seem more attractive

PubMed

Walker, Drew; Vul, Edward

2014-01-01

In the research reported here, we found evidence of the cheerleader effect: people seem more attractive in a group than in isolation. We propose that this effect arises via an interplay of three cognitive phenomena: (a) the visual system automatically computes ensemble representations of faces presented in a group, (b) individual members of the group are biased toward this ensemble average, and (c) average faces are attractive. Taken together, these phenomena suggest that individual faces will seem more attractive when presented in a group because they will appear more similar to the average group face, which is more attractive than group members' individual faces. We tested this hypothesis in five experiments in which subjects rated the attractiveness of faces presented either alone or in a same-gender group. Our results were consistent with the cheerleader effect.
Ensembl regulation resources

PubMed Central

Zerbino, Daniel R.; Johnson, Nathan; Juetteman, Thomas; Sheppard, Dan; Wilder, Steven P.; Lavidas, Ilias; Nuhn, Michael; Perry, Emily; Raffaillac-Desfosses, Quentin; Sobral, Daniel; Keefe, Damian; Gräf, Stefan; Ahmed, Ikhlak; Kinsella, Rhoda; Pritchard, Bethan; Brent, Simon; Amode, Ridwan; Parker, Anne; Trevanion, Steven; Birney, Ewan; Dunham, Ian; Flicek, Paul

2016-01-01

New experimental techniques in epigenomics allow researchers to assay a diversity of highly dynamic features such as histone marks, DNA modifications or chromatin structure. The study of their fluctuations should provide insights into gene expression regulation, cell differentiation and disease. The Ensembl project collects and maintains the Ensembl regulation data resources on epigenetic marks, transcription factor binding and DNA methylation for human and mouse, as well as microarray probe mappings and annotations for a variety of chordate genomes. From these data, we produce a functional annotation of the regulatory elements along the human and mouse genomes, with plans to expand to other species as data become available. Starting from well-studied cell lines, we will progressively expand our library of measurements to a greater variety of samples. Ensembl's regulation resources provide a central and easy-to-query repository for reference epigenomes. As with all Ensembl data, it is freely available at http://www.ensembl.org, from the Perl and REST APIs and from the public Ensembl MySQL database server at ensembldb.ensembl.org. Database URL: http://www.ensembl.org PMID:26888907

Calculation of relative free energies for ligand-protein binding, solvation, and conformational transitions using the GROMOS software

PubMed

Riniker, Sereina; Christ, Clara D; Hansen, Halvor S; Hünenberger, Philippe H; Oostenbrink, Chris; Steiner, Denise; van Gunsteren, Wilfred F

2011-11-24

The calculation of the relative free energies of ligand-protein binding, of solvation for different compounds, and of different conformational states of a polypeptide is of considerable interest in the design or selection of potential enzyme inhibitors. Since such processes in aqueous solution generally comprise energetic and entropic contributions from many molecular configurations, adequate sampling of the relevant parts of configurational space is required and can be achieved through molecular dynamics simulations. Various techniques to obtain converged ensemble averages and their implementation in the GROMOS software for biomolecular simulation are discussed, and examples of their application to biomolecules in aqueous solution are given. © 2011 American Chemical Society
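As an illustration of the kind of converged ensemble average such free-energy techniques rest on (the abstract names none explicitly), the standard free-energy perturbation identity expresses the relative free energy of two states A and B as an exponential average over configurations sampled in state A:

\Delta F_{A \to B} = -k_\mathrm{B} T \, \ln \Bigl\langle \exp\!\bigl[ -\bigl( H_B(\mathbf{r}) - H_A(\mathbf{r}) \bigr) / k_\mathrm{B} T \bigr] \Bigr\rangle_A

Convergence hinges on how well the ensemble sampled in state A overlaps with the important configurations of state B, which is why the sampling techniques discussed above matter.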
Intracellular applications of fluorescence correlation spectroscopy: prospects for neuroscience

PubMed

Kim, Sally A; Schwille, Petra

2003-10-01

Based on the time-averaged fluctuation analysis of small fluorescent molecular ensembles in equilibrium, fluorescence correlation spectroscopy has recently been applied to investigate processes in the intracellular milieu. The exquisite sensitivity of fluorescence correlation spectroscopy provides access to a multitude of measurement parameters (rates of diffusion, local concentration, states of aggregation and molecular interactions) in real time with fast temporal and high spatial resolution. The introduction of dual-color cross-correlation, imaging, two-photon excitation, and coincidence analysis coupled with fluorescence correlation spectroscopy has expanded the utility of the technique to encompass a wide range of promising applications in living cells that may provide unprecedented insight into the molecular mechanisms of intracellular neurobiological processes.

Evaluating Alignment of Shapes by Ensemble Visualization

PubMed Central

Raj, Mukund; Mirzargar, Mahsa; Preston, J. Samuel; Kirby, Robert M.; Whitaker, Ross T.

2016-01-01

The visualization of variability in surfaces embedded in 3D, which is a type of ensemble uncertainty visualization, provides a means of understanding the underlying distribution of a collection or ensemble of surfaces. Although ensemble visualization for isosurfaces has been described in the literature, we conduct an expert-based evaluation of various ensemble visualization techniques in a particular medical imaging application: the construction of atlases or templates from a population of images. In this work, we extend the contour boxplot to 3D, allowing us to evaluate it against an enumeration-style visualization of the ensemble members and other conventional visualizations used by atlas builders, namely examining the atlas image and the corresponding images/data provided as part of the construction process. We present feedback from domain experts on the efficacy of the contour boxplot compared to other modalities when used as part of the atlas construction and analysis stages of their work. PMID:26186768
A study of fuzzy logic ensemble system performance on face recognition problem

NASA Astrophysics Data System (ADS)

Polyakova, A.; Lipinskiy, L.

2017-02-01

Some problems are difficult to solve with a single intelligent information technology (IIT). An ensemble of various data mining (DM) techniques is a set of models, each of which is able to solve the problem by itself, but whose combination increases the efficiency of the system as a whole. Using IIT ensembles can improve the reliability and efficiency of the final decision, since it draws on the diversity of its components. A new method for designing ensembles of intelligent information technologies is considered in this paper. It is based on fuzzy logic and is designed to solve classification and regression problems. The ensemble consists of several data mining algorithms: artificial neural network, support vector machine and decision trees. These algorithms and their ensemble have been tested on the face recognition problem. Principal component analysis (PCA) is used for feature selection.

Entanglement distillation for quantum communication network with atomic-ensemble memories

PubMed

Li, Tao; Yang, Guo-Jian; Deng, Fu-Guo

2014-10-06

Atomic ensembles are effective memory nodes for quantum communication networks, due to their long coherence time and the collective enhancement effect for the nonlinear interaction between an ensemble and a photon. Here we investigate the possibility of achieving entanglement distillation for nonlocal atomic ensembles by the input-output process of a single photon, a result of cavity quantum electrodynamics. We give an optimal entanglement concentration protocol (ECP) for two-atomic-ensemble systems in a partially entangled pure state with known parameters, and an efficient ECP for systems in an unknown partially entangled pure state with a nondestructive parity-check detector (PCD). For systems in a mixed entangled state, we introduce an entanglement purification protocol with PCDs. These entanglement distillation protocols have high fidelity and efficiency with current experimental techniques, and they are useful for quantum communication networks with atomic-ensemble memories.
Diagnosis and Quantification of Climatic Sensitivity of Carbon Fluxes in Ensemble Global Ecosystem Models

NASA Astrophysics Data System (ADS)

Wang, W.; Hashimoto, H.; Milesi, C.; Nemani, R. R.; Myneni, R.

2011-12-01

Terrestrial ecosystem models are primary scientific tools for extrapolating our understanding of ecosystem functioning from point observations to global scales, as well as from past climatic conditions into the future. However, no model is perfect, and considerable structural uncertainties often exist between different models. Ensemble model experiments have thus become a mainstream approach for evaluating the current status of the global carbon cycle and predicting its future changes. A key task in such applications is to quantify the sensitivity of the simulated carbon fluxes to climate variations and changes. Here we develop a systematic framework to address this question solely by analyzing the inputs and the outputs of the models. The principle of our approach is to treat the long-term (~30 years) average of the inputs/outputs as a quasi-equilibrium of the climate-vegetation system, while treating the anomalies of carbon fluxes as responses to climatic disturbances. In this way, the corresponding relationships can be largely linearized and analyzed using conventional time-series techniques. This method is used to characterize three major aspects of the vegetation models that are most important to the global carbon cycle, namely the primary production, the biomass dynamics, and the ecosystem respiration. We apply this analytical framework to quantify the climatic sensitivity of an ensemble of models including CASA, Biome-BGC, LPJ as well as several other DGVMs from previous studies, all driven by the CRU-NCEP climate dataset. The detailed analysis results are reported in this study.

Ensembles of physical states and random quantum circuits on graphs

NASA Astrophysics Data System (ADS)

Hamma, Alioscia; Santra, Siddhartha; Zanardi, Paolo

2012-11-01

In this paper we continue and extend the investigations of the ensembles of random physical states introduced in Hamma et al. [Phys. Rev. Lett. 109, 040502 (2012)]. These ensembles are constructed by finite-length random quantum circuits (RQC) acting on the (hyper)edges of an underlying (hyper)graph structure. The latter encodes the locality structure associated with finite-time quantum evolutions generated by physical, i.e., local, Hamiltonians. Our goal is to analyze physical properties of typical states in these ensembles; in particular, here we focus on proxies of quantum entanglement such as purity and α-Rényi entropies. The problem is formulated in terms of matrix elements of superoperators which depend on the graph structure, the choice of probability measure over the local unitaries, and the circuit length. In the α=2 case these superoperators act on a restricted multiqubit space generated by permutation operators associated with the subsets of vertices of the graph. For permutationally invariant interactions the dynamics can be further restricted to an exponentially smaller subspace. We consider different families of RQCs and study their typical entanglement properties for finite time as well as their asymptotic behavior. We find that the area law holds on average and that the volume law is a typical property of physical states (that is, it holds on average, and the fluctuations around the average vanish for large systems). The area law arises when the evolution time is O(1) with respect to the size L of the system, while the volume law arises, as is typical, when the evolution time scales like O(L).
The Dropout Learning Algorithm

PubMed Central

Baldi, Pierre; Sadowski, Peter

2014-01-01

Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble-averaging properties of dropout in linear networks, which is useful for understanding the non-linear case. The ensemble-averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
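The ensemble-averaging property is easy to probe numerically for a single logistic unit: the average output over random dropout sub-networks is approximated by one deterministic pass with the inputs scaled by the keep probability. A toy check, with all values arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = rng.normal(size=8)          # weights of one logistic unit
x = rng.normal(size=8)          # one input vector
p = 0.5                         # keep probability of each input

masks = rng.random((100_000, 8)) < p           # Bernoulli(p) gating variables
dropout_avg = sigmoid((masks * x) @ w).mean()  # true ensemble average
scaled_pass = sigmoid(p * (x @ w))             # deterministic approximation
print(dropout_avg, scaled_pass)                # the two values are close
```

The closeness of the two numbers is the practical content of the normalized-geometric-mean approximation analyzed in the paper.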
RNA unrestrained molecular dynamics ensemble improves agreement with experimental NMR data compared to single static structure: a test case

NASA Astrophysics Data System (ADS)

Beckman, Robert A.; Moreland, David; Louise-May, Shirley; Humblet, Christine

2006-05-01

Nuclear magnetic resonance (NMR) provides structural and dynamic information reflecting an average, often non-linear, over multiple solution-state conformations. Therefore, a single optimized structure derived from NMR refinement may be misleading if the NMR data actually result from averaging of distinct conformers. It is hypothesized that a conformational ensemble generated by a valid molecular dynamics (MD) simulation should be able to improve agreement with the NMR data set compared with the single optimized starting structure. Using a model system consisting of two sequence-related self-complementary ribonucleotide octamers for which NMR data were available, 0.3 ns particle mesh Ewald MD simulations were performed in the AMBER force field in the presence of explicit water and counterions. Agreement of the averaged properties of the molecular dynamics ensembles with NMR data, such as homonuclear proton nuclear Overhauser effect (NOE)-based distance constraints, homonuclear proton and heteronuclear 1H-31P coupling constant (J) data, and qualitative NMR information on hydrogen bond occupancy, was systematically assessed. Despite the short length of the simulation, the ensemble generated from it agreed with the NMR experimental constraints more completely than the single optimized NMR structure. This suggests that short unrestrained MD simulations may be of utility in interpreting NMR results. As expected, a 0.5 ns simulation utilizing a distance-dependent dielectric did not improve agreement with the NMR data, consistent with its inferior exploration of conformational space as assessed by 2-D RMSD plots. Thus, the ability to rapidly improve agreement with NMR constraints may be a sensitive diagnostic of the MD methods themselves.

Tracking single mRNA molecules in live cells

NASA Astrophysics Data System (ADS)

Moon, Hyungseok C.; Lee, Byung Hun; Lim, Kiseong; Son, Jae Seok; Song, Minho S.; Park, Hye Yoon

2016-06-01

mRNAs inside cells interact with numerous RNA-binding proteins, microRNAs, and ribosomes that together compose a highly heterogeneous population of messenger ribonucleoprotein (mRNP) particles. Perhaps one of the best ways to investigate the complex regulation of mRNA is to observe individual molecules. Single-molecule imaging allows the collection of quantitative and statistical data on subpopulations and transient states that are otherwise obscured by ensemble averaging. In addition, single-particle tracking reveals the sequence of events that occur in the formation and remodeling of mRNPs in real time. Here, we review the current state-of-the-art techniques in tagging, delivery, and imaging to track single mRNAs in live cells. We also discuss how these techniques are applied to extract dynamic information on the transcription, transport, localization, and translation of mRNAs. These studies demonstrate how single-molecule tracking is transforming the understanding of mRNA regulation in live cells.
Coordinate space translation technique for simulation of electronic process in the ion-atom collision

PubMed

Wang, Feng; Hong, Xuhai; Wang, Jian; Kim, Kwang S

2011-04-21

Recently we developed a theoretical model of ion-atom collisions, based on a time-dependent density functional theory description of the electron dynamics and a classical treatment of the heavy-particle motion. Taking advantage of the real-space grid method, we introduce a "coordinate space translation" technique that allows one to focus on a certain region of interest, such as the region around the projectile or the target. Benchmark calculations are given for collisions between a proton and oxygen over a wide range of impact energies. To extract the probability of charge transfer, the formulation of Lüdde and Dreizler [J. Phys. B 16, 3973 (1983)] has been generalized to ensemble-averaging application in the particular case of O(3P). Charge transfer total cross sections are calculated, showing fairly good agreement between experimental data and the present theoretical results.

The NASA Reanalysis Ensemble Service - Advanced Capabilities for Integrated Reanalysis Access and Intercomparison

NASA Astrophysics Data System (ADS)

Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.

2017-12-01

NASA's efforts to advance climate analytics-as-a-service are making new capabilities available to the research community: (1) A full-featured Reanalysis Ensemble Service (RES) comprising monthly means data from multiple reanalysis data sets, accessible through an enhanced set of extraction, analytic, arithmetic, and intercomparison operations. The operations are made accessible through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib; (2) A cloud-based, high-performance Virtual Real-Time Analytics Testbed supporting a select set of climate variables. This near-real-time capability enables advanced technologies like Spark and Hadoop-based MapReduce analytics over native NetCDF files; and (3) A WPS-compliant Web service interface to our climate data analytics service that will enable greater interoperability with next-generation systems such as ESGF. The Reanalysis Ensemble Service includes the following:

- A new API that supports full temporal, spatial, and grid-based resolution services, with sample queries
- A Docker-ready RES application to deploy across platforms
- Extended capabilities that enable single and multiple reanalysis area averages, vertical averages, re-gridding, standard deviations, and ensemble averages
- Convenient, one-stop shopping for commonly used data products from multiple reanalyses, including basic sub-setting and arithmetic operations (e.g., avg, sum, max, min, var, count, anomaly)
- Full support for the MERRA-2 reanalysis dataset in addition to ECMWF ERA-Interim, NCEP CFSR, JMA JRA-55 and NOAA/ESRL 20CR…
- A Jupyter notebook-based distribution mechanism designed for client use cases that combines CDSlib documentation with interactive scenarios and personalized project management
- Supporting analytic services for NASA GMAO Forward Processing datasets
- Basic uncertainty quantification services that combine heterogeneous ensemble products with comparative observational products (e.g., reanalysis, observational, visualization)
- The ability to compute and visualize multiple reanalyses for ease of intercomparison
- Automated tools to retrieve and prepare data collections for analytic processing
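CDSlib's actual call signatures are not given in the abstract, so the sketch below does not use them; it only illustrates, in plain NumPy, the kind of cross-reanalysis ensemble arithmetic the service exposes (an area-weighted average per reanalysis, then an ensemble mean and spread across reanalyses on a common grid). All names are hypothetical.

```python
import numpy as np

def area_average(field, lats):
    """Cosine-latitude-weighted mean of one (nlat, nlon) field."""
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(field)
    return float((field * w).sum() / w.sum())

def ensemble_mean_and_spread(fields):
    """fields: (n_reanalyses, nlat, nlon) regridded to a common grid."""
    return fields.mean(axis=0), fields.std(axis=0)
```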
The Effect of Conductor Expressivity on Ensemble Performance Evaluation

ERIC Educational Resources Information Center

Morrison, Steven J.; Price, Harry E.; Geiger, Carla G.; Cornacchio, Rachel A.

2009-01-01

In this study, the authors examined whether a conductor's use of high-expressivity or low-expressivity techniques affected evaluations of ensemble performances that were identical across conducting conditions. Two conductors each conducted two 1-minute parallel excerpts from Percy Grainger's "Walking Tune." Each directed one excerpt…

Air flow measurement techniques applied to noise reduction of a centrifugal blower

NASA Astrophysics Data System (ADS)

Laage, John W.; Armstrong, Ashli J.; Eilers, Daniel J.; Olsen, Michael G.; Mann, J. Adin

2005-09-01

The air flow in a centrifugal blower was studied using a variety of flow and sound measurement techniques. The flow measurement techniques employed included particle image velocimetry (PIV), pitot tubes, and a five-hole spherical probe. PIV was used to measure instantaneous and ensemble-averaged velocity fields over a large area of the outlet duct as a function of fan position, allowing for the visualization of the flow as it leaves the fan blades and progresses downstream. The results from the flow measurements were reviewed alongside the results of the sound measurements with the goal of identifying sources of noise and inefficiencies in flow performance. The radiated sound power was divided into broadband and tone noise and related to measures of the flow: the changes in the tone and broadband sound were compared to changes in flow quantities such as the turbulent kinetic energy and Reynolds stress. Results for each method are presented to demonstrate the strengths of each flow measurement technique as well as their limitations. Finally, the role that each played in identifying noise sources is described.
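The ensemble-averaged PIV quantities named above follow directly from a stack of instantaneous vector fields acquired at a fixed fan position. A minimal sketch, with array shapes assumed:

```python
import numpy as np

def piv_statistics(u, v):
    """u, v: (N, ny, nx) instantaneous in-plane velocity components.

    Returns the ensemble-averaged field, a two-component turbulent
    kinetic energy estimate, and the in-plane Reynolds shear stress
    term (per unit density).
    """
    U, V = u.mean(axis=0), v.mean(axis=0)        # ensemble average
    up, vp = u - U, v - V                        # fluctuating parts
    tke = 0.5 * (up**2 + vp**2).mean(axis=0)     # in-plane TKE estimate
    uv = (up * vp).mean(axis=0)                  # <u'v'> stress term
    return U, V, tke, uv
```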
The Effect of Conductor Expressivity on Ensemble Performance Evaluation

ERIC Educational Resources Information Center

Morrison, Steven J.; Price, Harry E.; Geiger, Carla G.; Cornacchio, Rachel A.

2009-01-01

In this study, the authors examined whether a conductor's use of high-expressivity or low-expressivity techniques affected evaluations of ensemble performances that were identical across conducting conditions. Two conductors each conducted two 1-minute parallel excerpts from Percy Grainger's "Walking Tune." Each directed one excerpt…

Air flow measurement techniques applied to noise reduction of a centrifugal blower

NASA Astrophysics Data System (ADS)

Laage, John W.; Armstrong, Ashli J.; Eilers, Daniel J.; Olsen, Michael G.; Mann, J. Adin

2005-09-01

The air flow in a centrifugal blower was studied using a variety of flow and sound measurement techniques. The flow measurement techniques employed included particle image velocimetry (PIV), pitot tubes, and a five-hole spherical probe. PIV was used to measure instantaneous and ensemble-averaged velocity fields over a large area of the outlet duct as a function of fan position, allowing for visualization of the flow as it left the fan blades and progressed downstream. The results from the flow measurements were reviewed alongside the results of the sound measurements with the goal of identifying sources of noise and inefficiencies in flow performance. The radiated sound power was divided into broadband and tone noise and compared against measures of the flow: changes in the tone and broadband sound were related to changes in flow quantities such as the turbulent kinetic energy and Reynolds stress. Results for each method will be presented to demonstrate the strengths of each flow measurement technique as well as their limitations. Finally, the role that each played in identifying noise sources is described.
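Editor's note: as a rough, self-contained illustration (not the authors' code) of the ensemble-averaged quantities the blower study works with, the sketch below computes a mean velocity field, a 2D surrogate of turbulent kinetic energy, and a Reynolds shear stress from a stack of synthetic, phase-locked PIV snapshots:

```python
# Hypothetical sketch: ensemble statistics from PIV velocity snapshots.
import numpy as np

# Assumed input: u, v with shape (n_snapshots, ny, nx), all snapshots
# phase-locked to the same fan position so ensemble averaging is meaningful.
rng = np.random.default_rng(0)
u = rng.normal(10.0, 1.5, size=(200, 64, 64))   # stand-in streamwise velocity
v = rng.normal(0.0, 1.0, size=(200, 64, 64))    # stand-in transverse velocity

u_mean = u.mean(axis=0)                  # ensemble-averaged velocity field
v_mean = v.mean(axis=0)
u_f, v_f = u - u_mean, v - v_mean        # fluctuating components

# In-plane surrogate of turbulent kinetic energy and Reynolds shear stress
tke = 0.5 * (u_f**2 + v_f**2).mean(axis=0)
reynolds_uv = (u_f * v_f).mean(axis=0)
```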
Using simulation to interpret experimental data in terms of protein conformational ensembles.

PubMed

Allison, Jane R

2017-04-01

In their biological environment, proteins are dynamic molecules, necessitating an ensemble structural description. Molecular dynamics simulations and solution-state experiments provide complementary information in the form of atomically detailed coordinates and averages or distributions of structural properties or related quantities. Recently, increases in the temporal and spatial scale of conformational sampling, and comparison of the more diverse conformational ensembles thus generated, have revealed the importance of sampling rare events. Excitingly, new methods based on maximum entropy and Bayesian inference promise to provide a statistically sound mechanism for combining experimental data with molecular dynamics simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.

Training in cortical control of neuroprosthetic devices improves signal extraction from small neuronal ensembles.

PubMed

Helms Tillery, S I; Taylor, D M; Schwartz, A B

2003-01-01

We have recently developed a closed-loop environment in which we can test the ability of primates to control the motion of a virtual device using ensembles of simultaneously recorded neurons [29]. Here we use a maximum likelihood method to assess the information about task performance contained in the neuronal ensemble. We trained two animals to control the motion of a computer cursor in three dimensions. Initially the animals controlled cursor motion using arm movements, but eventually they learned to drive the cursor directly from cortical activity. Using a population vector (PV) based upon the relation between cortical activity and arm motion, the animals were able to control the cursor directly from the brain in a closed-loop environment, but with difficulty. We added a supervised learning method that modified the parameters of the PV according to task performance (adaptive PV), and found that the animals were able to exert much finer control over the cursor motion from brain signals. Here we describe a maximum likelihood (ML) method to assess the information about the target contained in neuronal ensemble activity. Using this method, we compared the information about the target contained in the ensemble during arm control, during brain control early in the adaptive PV, and during brain control after the adaptive PV had settled and the animal could drive the cursor reliably and with fine gradations. During the arm-control task, the ML method was able to determine the target of the movement in as few as 10% of the trials and as many as 75% of the trials, with an average of 65%. This average dropped when the animals used a population vector to control motion of the cursor: on average we could determine the target in around 35% of the trials. This low percentage was also reflected in poor control of the cursor, so that the animal was unable to reach the target in a large percentage of trials. Supervised adjustment of the population vector parameters produced new weighting coefficients and directional tuning parameters for many neurons. This yielded much better performance of the brain-controlled cursor motion, and it was also reflected in the maximum likelihood measure of cell activity, which produced the correct target based only on neuronal activity in over 80% of the trials on average. The changes in maximum likelihood estimates of target location based on ensemble firing show that an animal's ability to regulate the motion of a cortically controlled device is not crucially dependent on the experimenter's ability to estimate intention from neuronal activity.
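Editor's note: a toy version of this kind of maximum likelihood target decoding, assuming (purely for illustration, not as the authors' model) independent Poisson firing with target-specific mean rates estimated from training trials:

```python
# Hypothetical sketch: ML decoding of reach target from spike counts,
# assuming independent Poisson firing per neuron and per target.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
n_targets, n_neurons = 8, 40
true_rates = rng.uniform(2, 20, size=(n_targets, n_neurons))

# "Training": estimate mean counts per target from simulated trials
train_rates = np.stack(
    [rng.poisson(true_rates[t], size=(50, n_neurons)).mean(axis=0)
     for t in range(n_targets)])

def decode(counts):
    # Log-likelihood of the observed ensemble response under each target
    loglik = poisson.logpmf(counts, train_rates).sum(axis=1)
    return int(np.argmax(loglik))

test_trial = rng.poisson(true_rates[3])
print(decode(test_trial))  # ideally recovers target 3
```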
Upgrades to the REA method for producing probabilistic climate change projections

NASA Astrophysics Data System (ADS)

Xu, Ying; Gao, Xuejie; Giorgi, Filippo

2010-05-01

We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure and the use of different combinations of performance metrics, shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for the wide range of model quality. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
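Editor's note: a heavily simplified sketch of performance-based ensemble weighting in the spirit of REA. The actual method defines reliability factors more carefully; here a weight is just the inverse absolute bias against observations, which is an assumption of this illustration, and all numbers are invented:

```python
# Hypothetical sketch: performance-weighted ensemble average (REA-like).
import numpy as np

obs_mean = 1.8                                     # observed regional mean (stand-in)
model_hist = np.array([1.2, 1.7, 2.4, 1.9, 2.1])   # historical simulations
model_proj = np.array([2.8, 3.1, 4.0, 3.3, 3.6])   # projected changes

bias = np.abs(model_hist - obs_mean)
weights = 1.0 / np.maximum(bias, 1e-6)   # better-performing models weigh more
weights /= weights.sum()

rea_mean = np.sum(weights * model_proj)  # performance-weighted ensemble average
plain_mean = model_proj.mean()           # equally weighted average, for contrast
print(rea_mean, plain_mean)
```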
A new strategy for snow-cover mapping using remote sensing data and ensemble based systems techniques

NASA Astrophysics Data System (ADS)

Roberge, S.; Chokmani, K.; De Sève, D.

2012-04-01

The snow cover plays an important role in the hydrological cycle of Quebec (Eastern Canada). Consequently, evaluating its spatial extent interests the authorities responsible for the management of water resources, especially hydropower companies. The main objective of this study is the development of a snow-cover mapping strategy using remote sensing data and ensemble-based systems techniques. Planned to be tested in a near real-time operational mode, this snow-cover mapping strategy has the advantage of providing the probability of a pixel being snow covered, together with its uncertainty. Ensemble systems are made of two key components. First, a method is needed to build an ensemble of classifiers that is as diverse as possible. Second, an approach is required to combine the outputs of the individual classifiers that make up the ensemble in such a way that correct decisions are amplified and incorrect ones are cancelled out. In this study, we demonstrate the potential of ensemble systems for snow-cover mapping using remote sensing data. The chosen classifier is a sequential-thresholds algorithm using NOAA-AVHRR data adapted to conditions over Eastern Canada. Its special feature is the use of a combination of six sequential thresholds varying according to the day in the winter season. Two versions of the snow-cover mapping algorithm have been developed: one specific to autumn (from October 1st to December 31st) and the other to spring (from March 16th to May 31st). In order to build the ensemble-based system, different versions of the algorithm are created by randomly varying its parameters; one hundred versions are included in the ensemble. The probability of a pixel being snow, no-snow or cloud covered corresponds to the fraction of classifiers that vote for that class. The overall performance of the ensemble-based mapping is compared to the overall performance of the base classifier, and also with ground observations at meteorological stations.
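Editor's note: a compact sketch of the voting scheme just described: an ensemble of classifier variants is generated by randomly perturbing the base algorithm's parameters, and per-pixel class probabilities are vote fractions. The single-threshold rule below is a deliberately crude stand-in for the sequential-thresholds algorithm:

```python
# Hypothetical sketch: per-pixel snow probability as an ensemble vote fraction.
import numpy as np

rng = np.random.default_rng(2)
reflectance = rng.uniform(0, 1, size=(100, 100))   # stand-in AVHRR band

def classify(image, threshold):
    # Toy stand-in for the sequential-thresholds snow classifier
    return image > threshold

# Build 100 classifier variants by randomly perturbing the threshold
thresholds = rng.normal(0.55, 0.05, size=100)
votes = np.stack([classify(reflectance, t) for t in thresholds])

p_snow = votes.mean(axis=0)            # probability of "snow" per pixel
uncertainty = p_snow * (1 - p_snow)    # largest where the ensemble disagrees
```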
Observing the conformation of individual SNARE proteins inside live cells

NASA Astrophysics Data System (ADS)

Weninger, Keith

2010-10-01

Protein conformational dynamics are directly linked to function in many instances. Within living cells, protein dynamics are rarely synchronized, so observing ensemble-averaged behaviors can hide details of signaling pathways. Here we present an approach using single molecule fluorescence resonance energy transfer (FRET) to observe the conformation of individual SNARE proteins as they fold to enter the SNARE complex in living cells. Proteins were recombinantly expressed, labeled with small-molecule fluorescent dyes and microinjected for in vivo imaging and tracking using total internal reflection microscopy. Observing single molecules avoids the difficulties of averaging over unsynchronized ensembles. Our approach is easily generalized to a wide variety of proteins in many cellular signaling pathways.

Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.

PubMed

Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S

2017-01-05

The magnitude and complexity of the structural and functional data available on nanomaterials requires data analytics, statistical analysis and information technology to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles. This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data and significantly reduce the workload needed to simulate experimentally relevant virtual samples.

Ergodicity of financial indices

NASA Astrophysics Data System (ADS)

Kolesnikov, A. V.; Rühl, T.

2010-05-01

We introduce the concept of ensemble averaging for financial markets. We address the question of the equality of ensemble and time averaging and investigate whether these averagings are equivalent for a large set of equity indices and branches. We start with a model of Gaussian-distributed returns, equal-weighted stocks in each index and an absence of correlations within a single day, and show that even this oversimplified model already captures the run of the corresponding index reasonably well due to its self-averaging properties. We introduce the concept of instant cross-sectional volatility and discuss its relation to the ordinary time-resolved counterpart. The role of the cross-sectional volatility in the description of the corresponding index, the role of correlations between the single stocks and the role of the non-Gaussianity of stock distributions are briefly discussed. Our model reveals quickly and efficiently anomalies or bubbles in a particular financial market, and gives an estimate of how large these effects can be and how quickly they disappear.

CarcinoPred-EL: Novel models for predicting the carcinogenicity of chemicals using molecular fingerprints and ensemble learning methods.

PubMed

Zhang, Li; Ai, Haixin; Chen, Wen; Yin, Zimo; Hu, Huan; Zhu, Junfeng; Zhao, Jian; Zhao, Qi; Liu, Hongsheng

2017-05-18

Carcinogenicity refers to a highly toxic end point of certain chemicals, and has become an important issue in the drug development process.
In this study, three novel ensemble classification models, namely Ensemble SVM, Ensemble RF, and Ensemble XGBoost, were developed to predict the carcinogenicity of chemicals using seven types of molecular fingerprints and three machine learning methods, based on a dataset containing 1003 diverse compounds with rat carcinogenicity data. Among these three models, Ensemble XGBoost was found to be the best, giving an average accuracy of 70.1 ± 2.9%, sensitivity of 67.0 ± 5.0%, and specificity of 73.1 ± 4.4% in five-fold cross-validation, and an accuracy of 70.0%, sensitivity of 65.2%, and specificity of 76.5% in external validation. In comparison with some recent methods, the ensemble models outperform some machine learning-based approaches and yield equal accuracy and higher specificity, but lower sensitivity, than rule-based expert systems. It was also found that the ensemble models could be further improved if more data were available. As an application, the ensemble models were employed to discover potential carcinogens in the DrugBank database. The results indicate that the proposed models are helpful in predicting the carcinogenicity of chemicals. A web server called CarcinoPred-EL has been built for these models ( http://ccsipb.lnu.edu.cn/toxicity/CarcinoPred-EL/ ).
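Editor's note: a minimal sketch of the general pattern behind models like these: train one classifier per fingerprint type and average the predicted probabilities. The "fingerprints" here are random stand-ins, not real molecular descriptors, and the ensemble is a generic soft vote rather than the published method:

```python
# Hypothetical sketch: soft-voting ensemble over several feature sets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1003                                   # compound count echoes the record
y = rng.integers(0, 2, size=n)             # carcinogen / non-carcinogen labels
fingerprints = [rng.random((n, 128)) for _ in range(7)]  # 7 stand-in FP types

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3,
                                       random_state=0)

probas = []
for X in fingerprints:
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[idx_train], y[idx_train])
    probas.append(clf.predict_proba(X[idx_test])[:, 1])

ensemble_proba = np.mean(probas, axis=0)   # average over fingerprint models
y_pred = (ensemble_proba >= 0.5).astype(int)
```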
The interplay between cooperativity and diversity in model threshold ensembles.

PubMed

Cervera, Javier; Manzanares, José A; Mafe, Salvador

2014-10-06

The interplay between cooperativity and diversity is crucial for biological ensembles, because single molecule experiments show a significant degree of heterogeneity, and also for artificial nanostructures, because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (here, the individual threshold potentials) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. © 2014 The Author(s). Published by the Royal Society. All rights reserved.

Confidence-based ensemble for GBM brain tumor segmentation

NASA Astrophysics Data System (ADS)

Huo, Jing; van Rikxoort, Eva M.; Okada, Kazunori; Kim, Hyun J.; Pope, Whitney; Goldin, Jonathan; Brown, Matthew

2011-03-01

It is a challenging task to automatically segment glioblastoma multiforme (GBM) brain tumors on T1w post-contrast isotropic MR images. A semi-automated system using fuzzy connectedness has recently been developed for computing the tumor volume that reduces the cost of manual annotation. In this study, we propose an ensemble method that combines multiple segmentation results into a final ensemble segmentation. The method is evaluated on a dataset of 20 cases from a multi-center pharmaceutical drug trial and compared to the fuzzy connectedness method. Three individual methods were used in the framework: fuzzy connectedness, GrowCut, and voxel classification. The combination method is a confidence map averaging (CMA) method. The CMA method shows an improved ROC curve compared to the fuzzy connectedness method (p < 0.001), and the CMA ensemble result is more robust than the three individual methods.
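Editor's note: a small sketch of confidence map averaging as described in the GBM record above: each segmentation method outputs a per-voxel confidence in [0, 1], the maps are averaged, and the result is thresholded. The three input maps below are synthetic stand-ins for the three methods:

```python
# Hypothetical sketch: confidence map averaging (CMA) over three segmenters.
import numpy as np

rng = np.random.default_rng(4)
shape = (64, 64, 32)                      # toy image volume

# Stand-ins for fuzzy connectedness, GrowCut, and voxel classification maps
conf_maps = [np.clip(rng.normal(0.5, 0.2, shape), 0, 1) for _ in range(3)]

cma = np.mean(conf_maps, axis=0)          # average confidence per voxel
segmentation = cma > 0.5                  # final ensemble segmentation

# Sweeping the threshold instead of fixing 0.5 traces out an ROC curve
# against a reference annotation.
```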
On the incidence of meteorological and hydrological processors: Effect of resolution, sharpness and reliability of hydrological ensemble forecasts

NASA Astrophysics Data System (ADS)

Abaza, Mabrouk; Anctil, François; Fortin, Vincent; Perreault, Luc

2017-12-01

Meteorological and hydrological ensemble prediction systems are imperfect. Their outputs could often be improved through the use of a statistical processor, which opens up the question of the necessity of using both processors (meteorological and hydrological), only one of them, or none. This experiment compares the predictive distributions from four hydrological ensemble prediction systems (H-EPS) utilising the ensemble Kalman filter (EnKF) probabilistic sequential data assimilation scheme. They differ in the inclusion or not of the Distribution Based Scaling (DBS) method for post-processing meteorological forecasts, and of the ensemble Bayesian Model Averaging (ensemble BMA) method for hydrological forecast post-processing. The experiment is implemented on three large watersheds and relies on the combination of two meteorological reforecast products: the 4-member Canadian reforecasts from the Canadian Centre for Meteorological and Environmental Prediction (CCMEP) and the 10-member American reforecasts from the National Oceanic and Atmospheric Administration (NOAA), leading to 14 members at each time step. Results show that all four tested H-EPS lead to resolution and sharpness values that are quite similar, with an advantage to DBS + EnKF. The ensemble BMA is unable to compensate for any bias left in the precipitation ensemble forecasts. On the other hand, it succeeds in calibrating ensemble members that are otherwise under-dispersed. If reliability is preferred over resolution and sharpness, DBS + EnKF + ensemble BMA performs best, making use of both processors in the H-EPS system. Conversely, for enhanced resolution and sharpness, DBS is the preferred method.

Ensemble Downscaling of Winter Seasonal Forecasts: The MRED Project

NASA Astrophysics Data System (ADS)

Arritt, R. W.; MRED Team

2010-12-01

The Multi-Regional climate model Ensemble Downscaling (MRED) project is a multi-institutional project that is producing large ensembles of downscaled winter seasonal forecasts from coupled atmosphere-ocean seasonal prediction models. Eight regional climate models each downscale 15-member ensembles from the National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) and the new NASA seasonal forecast system based on the GEOS5 atmospheric model coupled with the MOM4 ocean model. This produces 240-member ensembles (8 regional models x 15 global ensemble members x 2 global models) for each winter season (December-April) of 1982-2003. Results to date show that the combined global-regional downscaled forecasts have the greatest skill for seasonal precipitation anomalies during strong El Niño events such as 1982-83 and 1997-98. Ensemble means of area-averaged seasonal precipitation for the regional models generally track the corresponding results for the global model, though there is considerable inter-model variability amongst the regional models. For seasons and regions where the area-mean precipitation is accurately simulated, the regional models bring added value by extracting greater spatial detail from the global forecasts, mainly due to better resolution of terrain in the regional models. Our results also emphasize that an ensemble approach is essential to realizing the added value of the combined global-regional modeling system.

Insights into the deterministic skill of air quality ensembles

EPA Pesticide Factsheets

Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10).
Apart from the unconditional ensemble average, the approach behind the other three methods relies on adding optimum weights to members or constraining the ensemble to those members that meet certain conditions in the time or frequency domain. The two different datasets were created for the first and second phases of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground-level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate O3 better than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill compared to each stati…

Post-processing ECMWF precipitation and temperature ensemble reforecasts for operational hydrologic forecasting at various spatial scales

NASA Astrophysics Data System (ADS)

Verkade, J. S.; Brown, J. D.; Reggiani, P.; Weerts, A. H.

2013-09-01

The ECMWF temperature and precipitation ensemble reforecasts are evaluated for biases in the mean, spread and forecast probabilities, and for how these biases propagate to streamflow ensemble forecasts. The forcing ensembles are subsequently post-processed to reduce bias and increase skill, and to investigate whether this leads to improved streamflow ensemble forecasts. Multiple post-processing techniques are used: quantile-to-quantile transform, linear regression with an assumption of bivariate normality, and logistic regression. Both the raw and post-processed ensembles are run through a hydrologic model of the river Rhine to create streamflow ensembles. The results are compared using multiple verification metrics and skill scores: relative mean error, Brier skill score and its decompositions, mean continuous ranked probability skill score and its decomposition, and the ROC score. Verification of the streamflow ensembles is performed at multiple spatial scales: relatively small headwater basins, large tributaries and the Rhine outlet at Lobith. The streamflow ensembles are verified against simulated streamflow, in order to isolate the effects of biases in the forcing ensembles and any improvements therein. The results indicate that the forcing ensembles contain significant biases, and that these cascade to the streamflow ensembles. Some of the bias in the forcing ensembles is unconditional in nature; this was resolved by a simple quantile-to-quantile transform. Improvements in conditional bias and skill of the forcing ensembles vary with forecast lead time, amount, and spatial scale, but are generally moderate.
The translation to streamflow forecast skill is further muted, and several explanations are considered, including limitations in the modelling of the space-time covariability of the forcing ensembles and the presence of storages.
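Editor's note: an illustrative empirical quantile-to-quantile transform of the kind used above to remove unconditional bias: each forecast value is mapped through the forecast climatology's empirical CDF onto the observed climatology's quantiles. Both training samples below are synthetic:

```python
# Hypothetical sketch: empirical quantile-to-quantile (quantile mapping) transform.
import numpy as np

rng = np.random.default_rng(5)
obs_train = rng.gamma(2.0, 2.0, size=5000)          # observed climatology
fcst_train = rng.gamma(2.0, 2.5, size=5000) + 1.0   # biased forecast climatology

def q2q(fcst_values, fcst_train, obs_train):
    # Non-exceedance probability of each forecast value under the
    # forecast climatology ...
    p = np.searchsorted(np.sort(fcst_train), fcst_values) / len(fcst_train)
    p = np.clip(p, 0.0, 1.0)
    # ... mapped onto the observed climatology's quantiles.
    return np.quantile(obs_train, p)

corrected = q2q(np.array([3.0, 8.0, 15.0]), fcst_train, obs_train)
```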
Assessing the impact of land use change on hydrology by ensemble modelling (LUCHEM) II: Ensemble combinations and predictions

USGS Publications Warehouse

Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.

2009-01-01

This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of models is shown to contribute useful information to the ensembles it is part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows), generally yield little improvement over the weighted mean ensemble. However, a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in multi-model ensembles. The reasons behind these observations may relate to the effects of the weighting schemes, non-stationarity of the climate series and possible cross-correlations between models. Crown Copyright © 2008.

Assimilation of lightning data by nudging tropospheric water vapor and applications to numerical forecasts of convective events

NASA Astrophysics Data System (ADS)

Dixon, Kenneth

A lightning data assimilation technique is developed for use with observations from the World Wide Lightning Location Network (WWLLN). The technique nudges the water vapor mixing ratio toward saturation within 10 km of a lightning observation. This technique is applied to deterministic forecasts of convective events on 29 June 2012, 17 November 2013, and 19 April 2011, as well as an ensemble forecast of the 29 June 2012 event, using the Weather Research and Forecasting (WRF) model. Lightning data are assimilated over the first 3 hours of the forecasts, and the subsequent impact on forecast quality is evaluated. The nudged deterministic simulations for all events produce composite reflectivity fields that are closer to observations. For the ensemble forecasts of the 29 June 2012 event, the improvement in forecast quality from lightning assimilation is more subtle than for the deterministic forecasts, suggesting that lightning assimilation may improve ensemble convective forecasts where conventional observations (e.g., aircraft, surface, radiosonde, satellite) are less dense or unavailable.

An approach for the assessment of the statistical aspects of the SEA coupling loss factors and the vibrational energy transmission in complex aircraft structures: Experimental investigation and methods benchmark

NASA Astrophysics Data System (ADS)

Bouhaj, M.; von Estorff, O.; Peiffer, A.

2017-09-01

In the application of Statistical Energy Analysis (SEA) to complex assembled structures, a purely predictive model often exhibits errors. These errors are mainly due to a lack of accurate modelling of the power transmission mechanism described through the coupling loss factors (CLF). Experimental SEA (ESEA) is used in practice by the automotive and aerospace industries to verify and update the model, or to derive the CLFs for use in an SEA predictive model when analytical estimates cannot be made. This work is particularly motivated by the lack of procedures for estimating the variance and confidence intervals of the statistical quantities obtained with the ESEA technique. The aim of this paper is to introduce procedures enabling a statistical description of the measured power input, the vibration energies and the derived SEA parameters. Particular emphasis is placed on the identification of the structural CLFs of complex built-up structures, comparing different methods. By adopting a Stochastic Energy Model (SEM), the ensemble average in ESEA is also addressed.
For this purpose, expressions are obtained to randomly perturb the energy matrix elements and generate individual samples for the Monte Carlo (MC) technique applied to derive the ensemble-averaged CLF. Results of ESEA tests conducted on an aircraft fuselage section show that the SEM approach yields better estimates of the CLFs than classical matrix inversion methods. The expected range of CLF values and the synthesized energy are used as quality criteria for the matrix inversion, allowing critical SEA subsystems to be identified, which might require a more refined statistical description of the excitation and the response fields. Moreover, the impact of the variance of the normalized vibration energy on the uncertainty of the derived CLFs is outlined.
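Editor's note: a schematic of the Monte Carlo step just described, under strong simplifying assumptions (two subsystems, unit injected powers, a naive power-balance inversion, and an arbitrary multiplicative perturbation model); it is not the paper's SEM formulation, only the perturb-invert-average pattern:

```python
# Hypothetical sketch: ensemble-averaged CLF from perturbed ESEA energy matrices.
import numpy as np

rng = np.random.default_rng(6)
omega = 2 * np.pi * 1000.0            # band centre frequency [rad/s]
P = np.diag([1.0, 1.0])               # unit power injected into each subsystem
E_meas = np.array([[2.0e-3, 4.0e-4],  # E[i, j]: energy of subsystem i
                   [3.0e-4, 1.5e-3]]) # when subsystem j is excited

clf_samples = []
for _ in range(2000):
    E = E_meas * rng.normal(1.0, 0.05, size=E_meas.shape)  # perturb energies
    L = P @ np.linalg.inv(E) / omega  # loss-factor matrix from power balance
    clf_samples.append(-L[0, 1])      # off-diagonal terms carry the CLFs
clf_samples = np.array(clf_samples)
print(clf_samples.mean(), clf_samples.std())  # ensemble average and spread
```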
Texture Descriptors Ensembles Enable Image-Based Classification of Maturation of Human Stem Cell-Derived Retinal Pigmented Epithelium

PubMed Central

Caetano dos Santos, Florentino Luciano; Skottman, Heli; Juuti-Uusitalo, Kati; Hyttinen, Jari

2016-01-01

Aims: A fast, non-invasive and observer-independent method to analyze the homogeneity and maturity of human pluripotent stem cell (hPSC) derived retinal pigment epithelial (RPE) cells is warranted to assess the suitability of hPSC-RPE cells for implantation or in vitro use. The aim of this work was to develop and validate methods to create ensembles of state-of-the-art texture descriptors and to provide a robust classification tool to separate three different maturation stages of RPE cells using phase contrast microscopy images. The same methods were also validated on a wide variety of biological image classification problems, such as histological or virus image classification.

Methods: For image classification we used different texture descriptors, descriptor ensembles and preprocessing techniques. Three new methods were also tested. The first approach was an ensemble of preprocessing methods, used to create an additional set of images. The second was a region-based approach, where saliency detection and wavelet decomposition divide each image into two regions, from which features were extracted through different descriptors. The third method was an ensemble of Binarized Statistical Image Features based on different sizes and thresholds. A Support Vector Machine (SVM) was trained for each descriptor histogram and the set of SVMs was combined by sum rule. The accuracy of the computer vision tool was verified in classifying the hPSC-RPE cell maturation level.

Dataset and Results: The RPE dataset contains 1862 subwindows from 195 phase contrast images. The final descriptor ensemble outperformed the most recent stand-alone texture descriptors, obtaining, for the RPE dataset, an area under the ROC curve (AUC) of 86.49% with 10-fold cross validation and 91.98% with the leave-one-image-out protocol. The generality of the three proposed approaches was ascertained with 10 more biological image datasets, obtaining an average AUC greater than 97%.

Conclusions: Here we showed that the developed ensembles of texture descriptors are able to classify the RPE cell maturation stage. Moreover, we proved that preprocessing and region-based decomposition improve many descriptors' accuracy in biological dataset classification. Finally, we built the first public dataset of stem cell-derived RPE cells, which is available to the scientific community for classification studies. The proposed tool is available at https://www.dei.unipd.it/node/2357 and the RPE dataset at http://www.biomeditech.fi/data/RPE_dataset/. Both are available at https://figshare.com/s/d6fb591f1beb4f8efa6f. PMID:26895509
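Editor's note: the sum rule used above is simply the addition of per-descriptor classifier scores. A compact sketch with two stand-in feature sets (the texture descriptors themselves are not reproduced here, and the data are synthetic):

```python
# Hypothetical sketch: sum-rule fusion of per-descriptor SVM scores.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Two stand-in "descriptor histograms" describing the same samples
X1, y = make_classification(n_samples=400, n_features=60, random_state=0)
X2 = X1[:, ::-1] + np.random.default_rng(7).normal(0, 0.1, X1.shape)

train, test = slice(0, 300), slice(300, 400)
scores = np.zeros((100, 2))
for X in (X1, X2):
    svm = SVC(probability=True, random_state=0).fit(X[train], y[train])
    scores += svm.predict_proba(X[test])   # sum rule: add class probabilities

y_pred = scores.argmax(axis=1)
print((y_pred == y[test]).mean())
```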
Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds. Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value. This algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers, namely: OMEGA, NMRCLUST, RMS filtering and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes in the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining the biological relevance of the ensemble. It was observed that NMRCLUST (containing on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying the bioactive conformations with excellent and good matches (0.5 < RMSD < 1.0 Å). Furthermore, we propose thresholds for OMEGA root-mean-square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 for structures with low (1-4), medium (5-9) and high (10-15) numbers of rotatable bonds, respectively. The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and to alleviate the complexity of downstream data processing in virtual screening experiments.

Near-optimal protocols in complex nonequilibrium transformations

DOE PAGES

Gingrich, Todd R.; Rotskoff, Grant M.; Crooks, Gavin E.; ...

2016-08-29

The development of sophisticated experimental means to control nanoscale systems has motivated efforts to design driving protocols that minimize the energy dissipated to the environment. Computational models are a crucial tool in this practical challenge. In this paper, we describe a general method for sampling an ensemble of finite-time, nonequilibrium protocols biased toward a low average dissipation. In addition, we show that this scheme can be carried out very efficiently in several limiting cases. As an application, we sample the ensemble of low-dissipation protocols that invert the magnetization of a 2D Ising model and explore how the diversity of the protocols varies in response to constraints on the average dissipation. In this example, we find that there is a large set of protocols with average dissipation close to the optimal value, which we argue is a general phenomenon.

Automated Delineation of Lung Tumors from CT Images Using a Single Click Ensemble Segmentation Approach

PubMed Central

Gu, Yuhua; Kumar, Virendra; Hall, Lawrence O; Goldgof, Dmitry B; Li, Ching-Yen; Korn, René; Bendtsen, Claus; Velazquez, Emmanuel Rios; Dekker, Andre; Aerts, Hugo; Lambin, Philippe; Li, Xiuli; Tian, Jie; Gatenby, Robert A; Gillies, Robert J

2012-01-01

A single click ensemble segmentation (SCES) approach based on an existing "Click&Grow" algorithm is presented. The SCES approach requires only one operator-selected seed point, as compared with the multiple operator inputs that are typically needed. This facilitates the processing of large numbers of cases. The approach was evaluated on a set of 129 CT lung tumor images using a similarity index (SI). The average SI was above 93% using 20 different start seeds, showing stability. The average SI for two different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm and the skeleton graph cut algorithm, obtaining average SIs of 78.29%, 77.72%, 63.77% and 63.76%, respectively. We can conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate and automated.
PMID:23459617

Real time detection of farm-level swine mycobacteriosis outbreak using time series modeling of the number of condemned intestines in abattoirs.

PubMed

Adachi, Yasumoto; Makita, Kohei

2015-09-01

Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer for corrective actions when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis would therefore be a useful threshold for detecting an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at two abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent two years with the actual data between 2011 and 2012. For the modeling, periodicities were first checked using the fast Fourier transform, and ensemble average profiles for the weekly periodicity were calculated. An autoregressive integrated moving average (ARIMA) model was fitted to the residual of the ensemble average on the basis of the minimum Akaike information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of wholly or partially condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from the three producers with the highest rates of condemnation due to mycobacteriosis.
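Editor's note: a rough sketch of that decomposition: subtract a weekly ensemble-average profile, fit an ARIMA model to the residual, and flag counts outside the prediction interval. The count series and the profile are simulated, and the ARIMA order is fixed here rather than selected by AIC as in the study:

```python
# Hypothetical sketch: weekly profile + ARIMA expected value for outbreak detection.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
weeks, days = 416, 5                             # ~8 years of 5 working days
profile = np.array([4.0, 3.0, 3.5, 3.0, 5.0])    # weekly ensemble-average profile
counts = np.maximum(0, np.round(
    np.tile(profile, weeks) + rng.normal(0, 1.0, weeks * days)))

residual = counts - np.tile(profile, weeks)      # remove the weekly periodicity
fit = ARIMA(residual, order=(1, 0, 1)).fit()     # order chosen by AIC in practice

forecast = fit.get_forecast(steps=days)
expected = forecast.predicted_mean + profile     # add the weekly profile back
ci = forecast.conf_int(alpha=0.05)               # 95% interval on the residual
upper = ci[:, 1] + profile
# A new week's counts above `upper` would flag a possible outbreak.
```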
Performance assessment of individual and ensemble data-mining techniques for gully erosion modeling.

PubMed

Pourghasemi, Hamid Reza; Yousefi, Saleh; Kornejady, Aiding; Cerdà, Artemi

2017-12-31

Gully erosion is identified as an important sediment source in a range of environments and plays a conclusive role in the redistribution of eroded soils on a slope. Hence, addressing the spatial occurrence pattern of this phenomenon is very important. Different ensemble models and their single counterparts, mostly data mining methods, have been used for gully erosion susceptibility mapping; however, their calibration and validation procedures need to be thoroughly addressed. The current study presents a series of individual and ensemble data mining methods, including artificial neural network (ANN), support vector machine (SVM), maximum entropy (ME), ANN-SVM, ANN-ME, and SVM-ME, to map gully erosion susceptibility in the Aghemam watershed, Iran. To this aim, a gully inventory map along with sixteen gully conditioning factors was used. Randomly partitioned 70:30 training/test sets were used to assess the goodness-of-fit and prediction power of the models. The robustness, defined as the stability of the models' performance in response to changes in the dataset, was assessed through three training/test replicates. Preliminary statistical tests showed that ANN has the highest concordance and spatial differentiation, with a chi-square value of 36,656 at the 95% confidence level, while ME appeared to have the lowest concordance (1772). The ME model gave an impractical result in which 45% of the study area was labelled as highly susceptible to gullying; in contrast, ANN-SVM gave a practical result, focusing on only 34% of the study area. Across all three replicates, the ANN-SVM ensemble showed the highest goodness-of-fit and predictive power, with respective average values of 0.897 (area under the success rate curve) and 0.879 (area under the prediction rate curve), and correspondingly the highest robustness. This attests to the important role of ensemble modeling in building accurate and generalized models, and emphasizes the need to examine different model integrations. The results of this study can serve as an outline for further biophysical designs on the gullies scattered in the study area. Copyright © 2017 Elsevier B.V. All rights reserved.

Glyph-based analysis of multimodal directional distributions in vector field ensembles

NASA Astrophysics Data System (ADS)

Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger

2015-04-01

Ensemble simulations are increasingly often performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in the case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing the direction, spread and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique.
We demonstrate our method in the context of ensemble weather simulations.

Measuring Lensing Magnification of Quasars by Large Scale Structure Using the Variability-Luminosity Relation

DOE Office of Scientific and Technical Information (OSTI.GOV)

Bauer, Anne H.; Seitz, Stella; Jerke, Jonathan

2011-05-10

We introduce a technique to measure gravitational lensing magnification using the variability of type I quasars. Quasars' variability amplitudes and luminosities are tightly correlated, on average. Magnification due to gravitational lensing increases the quasars' apparent luminosity while leaving the variability amplitude unchanged. Therefore, the mean magnification of an ensemble of quasars can be measured through the mean shift in the variability-luminosity relation. As a proof of principle, we use this technique to measure the magnification of quasars spectroscopically identified in the Sloan Digital Sky Survey (SDSS), due to gravitational lensing by galaxy clusters in the SDSS MaxBCG catalog. The Palomar-QUEST Variability Survey, reduced using the DeepSky pipeline, provides variability data for the sources. We measure the average quasar magnification as a function of scaled distance (r/R200) from the nearest cluster; our measurements are consistent with expectations assuming Navarro-Frenk-White cluster profiles, particularly after accounting for the known uncertainty in the clusters' centers. Variability-based lensing measurements are a valuable complement to shape-based techniques, because their systematic errors are very different and because the variability measurements are amenable to photometric errors of a few percent and to depths seen in current wide-field surveys.
Given the volume of data expected from current and upcoming surveys, this new technique has the potential to be competitive with weak lensing shear measurements of large-scale structure.

Automatic Estimation of Osteoporotic Fracture Cases by Using Ensemble Learning Approaches.

PubMed

Kilic, Niyazi; Hosgormez, Erkan

2016-03-01

Ensemble learning methods are among the most powerful tools for pattern classification problems. In this paper, the effects of ensemble learning methods and of some physical bone densitometry parameters on osteoporotic fracture detection were investigated. Six feature set models were constructed, including different physical parameters, and fed into the ensemble classifiers as input features. As ensemble learning techniques, bagging, gradient boosting and random subspace (RSM) were used. Instance-based learning (IBk) and random forest (RF) classifiers were applied to the six feature set models. The patients were classified into three groups, namely osteoporosis, osteopenia and control (healthy), using the ensemble classifiers. Total classification accuracy and F-measure were used to evaluate the diagnostic performance of the proposed ensemble classification system. The classification accuracy reached 98.85% with the combination of model 6 (five BMD + five T-score values) using the RSM-RF classifier. The findings of this paper suggest that patients could be warned before a bone fracture occurs, by examining just a few physical parameters that can easily be measured without invasive operations.

A hybrid variational ensemble data assimilation for the HIgh Resolution Limited Area Model (HIRLAM)

NASA Astrophysics Data System (ADS)

Gustafsson, N.; Bojarova, J.; Vignes, O.

2014-02-01

A hybrid variational ensemble data assimilation has been developed on top of the HIRLAM variational data assimilation. It provides the possibility of applying a flow-dependent background error covariance model during the data assimilation, while preserving the full-rank characteristics of the variational data assimilation. The hybrid formulation is based on an augmentation of the assimilation control variable with localised weights to be assigned to a set of ensemble member perturbations (deviations from the ensemble mean). The flow-dependency of the hybrid assimilation is demonstrated in single simulated observation impact studies, and the improved performance of the hybrid assimilation in comparison with pure 3-dimensional variational as well as pure ensemble assimilation is also demonstrated in real observation assimilation experiments. The performance of the hybrid assimilation is comparable to that of the 4-dimensional variational data assimilation.
343. A hybrid variational ensemble data assimilation for the HIgh Resolution Limited Area Model (HIRLAM)

    NASA Astrophysics Data System (ADS)

    Gustafsson, N.; Bojarova, J.; Vignes, O.

    2014-02-01

    A hybrid variational ensemble data assimilation has been developed on top of the HIRLAM variational data assimilation. It provides the possibility of applying a flow-dependent background error covariance model during the data assimilation, while the full-rank characteristics of the variational data assimilation are preserved. The hybrid formulation is based on an augmentation of the assimilation control variable with localised weights to be assigned to a set of ensemble member perturbations (deviations from the ensemble mean). The flow-dependency of the hybrid assimilation is demonstrated in single simulated observation impact studies, and the improved performance of the hybrid assimilation in comparison with pure 3-dimensional variational as well as pure ensemble assimilation is also shown in real observation assimilation experiments. The performance of the hybrid assimilation is comparable to that of the 4-dimensional variational data assimilation. The sensitivity to various parameters of the hybrid assimilation scheme and the sensitivity to the applied ensemble generation techniques are also examined. In particular, the inclusion of ensemble perturbations with a lagged validity time has been examined, with encouraging results.

344. Multivariate postprocessing techniques for probabilistic hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2016-04-01

    Hydrologic ensemble forecasts driven by atmospheric ensemble prediction systems need statistical postprocessing in order to account for systematic errors in terms of both mean and spread. Runoff is an inherently multivariate process, with typical events lasting from hours in the case of floods to weeks or even months in the case of droughts. This calls for multivariate postprocessing techniques that yield well-calibrated forecasts in univariate terms and at the same time ensure a realistic temporal dependence structure. To this end, the univariate ensemble model output statistics (EMOS; Gneiting et al., 2005) postprocessing method is combined with two different copula approaches that ensure multivariate calibration throughout the entire forecast horizon. These approaches comprise ensemble copula coupling (ECC; Schefzik et al., 2013), which preserves the dependence structure of the raw ensemble, and a Gaussian copula approach (GCA; Pinson and Girard, 2012), which estimates the temporal correlations from training observations. Both methods are tested in a case study covering three subcatchments of the river Rhine that represent different sizes and hydrological regimes: the Upper Rhine up to the gauge Maxau, the river Moselle up to the gauge Trier, and the river Lahn up to the gauge Kalkofen. The results indicate that both ECC and GCA are suitable for modelling the temporal dependences of probabilistic hydrologic forecasts (Hemri et al., 2015).

    References: Gneiting, T., A. E. Raftery, A. H. Westveld, and T. Goldman (2005), Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation, Monthly Weather Review, 133(5), 1098-1118, DOI: 10.1175/MWR2904.1. Hemri, S., D. Lisniak, and B. Klein (2015), Multivariate postprocessing techniques for probabilistic hydrological forecasting, Water Resources Research, 51(9), 7436-7451, DOI: 10.1002/2014WR016473. Pinson, P., and R. Girard (2012), Evaluating the quality of scenarios of short-term wind power generation, Applied Energy, 96, 12-20, DOI: 10.1016/j.apenergy.2011.11.004. Schefzik, R., T. L. Thorarinsdottir, and T. Gneiting (2013), Uncertainty quantification in complex simulation models using ensemble copula coupling, Statistical Science, 28, 616-640, DOI: 10.1214/13-STS443.
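Of the two copula approaches above, ECC is especially compact to state: at each lead time, the calibrated samples are reordered to follow the rank order of the raw ensemble, so member trajectories inherit the raw temporal dependence. A minimal NumPy sketch, assuming calibrated samples are already available per lead time (e.g. from EMOS); the toy inputs are invented:

```python
import numpy as np

def ecc_reorder(raw, calibrated):
    """Ensemble copula coupling: impose the raw ensemble's empirical copula
    (its rank order at each lead time) on independently calibrated samples.

    raw, calibrated: arrays of shape (m, T) -- m members, T lead times.
    """
    out = np.empty_like(calibrated)
    for t in range(raw.shape[1]):
        ranks = np.argsort(np.argsort(raw[:, t]))      # rank of each raw member
        out[:, t] = np.sort(calibrated[:, t])[ranks]   # calibrated value of same rank
    return out

rng = np.random.default_rng(1)
raw = np.cumsum(rng.normal(size=(10, 24)), axis=1)          # toy raw trajectories
calibrated = rng.normal(loc=raw.mean(axis=0), scale=1.5, size=(10, 24))
coupled = ecc_reorder(raw, calibrated)
# Each lead time of `coupled` keeps the calibrated margins, while member
# trajectories inherit the temporal dependence structure of the raw ensemble.
```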
345. Peer-Teaching in the Secondary Music Ensemble

    ERIC Educational Resources Information Center

    Johnson, Erik

    2015-01-01

    Peer-teaching is an instructional technique that has been used by teachers world-wide to successfully engage, exercise and deepen student learning. Yet, in some instances, teachers find the application of peer-teaching in large music ensembles at the secondary level to be daunting. This article is meant to be a practical resource for secondary…

346. Programming Practices of Atlantic Coast Conference Wind Ensembles

    ERIC Educational Resources Information Center

    Wiltshire, Eric S.; Paul, Timothy A.; Paul, Phyllis M.; Rudnicki, Erika

    2010-01-01

    This study examined the programming trends of the elite wind bands/ensembles of the Atlantic Coast Conference universities. Using survey techniques previously employed by Powell (2009) and Paul (2010; in press), we contacted the directors of the Atlantic Coast Conference band programs and requested concert programs from their top groups for the…

347. Clustering-Based Ensemble Learning for Activity Recognition in Smart Homes

    PubMed Central

    Jurek, Anna; Nugent, Chris; Bi, Yaxin; Wu, Shengli

    2014-01-01

    Application of sensor-based technology within activity monitoring systems is becoming a popular technique within the smart environment paradigm. Nevertheless, the use of such an approach generates complex constructs of data, which subsequently requires the use of intricate activity recognition techniques to automatically infer the underlying activity. This paper explores a cluster-based ensemble method as a new solution for the purposes of activity recognition within smart environments. With this approach, activities are modelled as collections of clusters built on different subsets of features. A classification process is performed by assigning a new instance to its closest cluster from each collection. Two different sensor data representations have been investigated, namely numeric and binary. Following the evaluation of the proposed methodology, it has been demonstrated that the cluster-based ensemble method can be successfully applied as a viable option for activity recognition. Results following exposure to data collected from a range of activities indicated that the ensemble method had the ability to perform with accuracies of 94.2% and 97.5% for numeric and binary data, respectively. These results outperformed a range of single classifiers considered as benchmarks. PMID:25014095
348. Inferring properties of disordered chains from FRET transfer efficiencies

    NASA Astrophysics Data System (ADS)

    Zheng, Wenwei; Zerze, Gül H.; Borgia, Alessandro; Mittal, Jeetain; Schuler, Benjamin; Best, Robert B.

    2018-03-01

    Förster resonance energy transfer (FRET) is a powerful tool for elucidating both structural and dynamic properties of unfolded or disordered biomolecules, especially in single-molecule experiments. However, the key observables, namely the mean transfer efficiency and the fluorescence lifetimes of the donor and acceptor chromophores, are averaged over a broad distribution of donor-acceptor distances. The inferred average properties of the ensemble therefore depend on the form of the model distribution chosen to describe the distance, as has been widely recognized. In addition, while the distribution for one type of polymer model may be appropriate for a chain under a given set of physico-chemical conditions, it may not be suitable for the same chain in a different environment, so that even an apparently consistent application of the same model over all conditions may distort the apparent changes in chain dimensions with variation of temperature or solution composition. Here, we present an alternative and straightforward approach to determining ensemble properties from FRET data, in which the polymer scaling exponent is allowed to vary with solution conditions. In its simplest form, it requires either the mean FRET efficiency or fluorescence lifetime information. In order to test the accuracy of the method, we have utilized both synthetic FRET data from implicit and explicit solvent simulations for 30 different protein sequences, and experimental single-molecule FRET data for an intrinsically disordered and a denatured protein. In all cases, we find that the inferred radii of gyration are within 10% of the true values, thus providing higher accuracy than simpler polymer models. In addition, the scaling exponents obtained by our procedure are in good agreement with those determined directly from the molecular ensemble. Our approach can in principle be generalized to treating other ensemble-averaged functions of intramolecular distances from experimental data.
349. Simulation of tropical cyclone activity over the western North Pacific based on CMIP5 models

    NASA Astrophysics Data System (ADS)

    Shen, Haibo; Zhou, Weican; Zhao, Haikun

    2017-09-01

    Based on the Coupled Model Intercomparison Project 5 (CMIP5) models, tropical cyclone (TC) activity in the summers of 1965-2005 over the western North Pacific (WNP) is simulated by a TC dynamical downscaling system. In consideration of the diversity among climate models, Bayesian model averaging (BMA) and equal-weighted model averaging (EMA) methods are applied to produce the ensemble large-scale environmental factors from the CMIP5 model outputs. The environmental factors generated by the BMA and EMA methods are compared, as well as the corresponding TC simulations produced by the downscaling system. Results indicate that the BMA method shows a significant advantage over the EMA. In addition, the impact of model selection on the BMA method is examined. For each factor, the ten models with the best performance are selected from the 30 CMIP5 models and the BMA is conducted on this subset. The resulting ensemble environmental factors and simulated TC activity are similar to the results from the 30-model BMA, which verifies that the BMA method assigns each model in the ensemble a weight according to its predictive skill. The presence of poorly performing models therefore does not particularly degrade the effectiveness of the BMA, and the ensemble outcomes are improved. Finally, based upon the BMA method and the downscaling system, we analyze the sensitivity of TC activity to three important environmental factors, i.e., sea surface temperature (SST), large-scale steering flow, and vertical wind shear. Among the three factors, SST and large-scale steering flow greatly affect TC tracks, while the average intensity distribution is sensitive to all three environmental factors. Moreover, SST and vertical wind shear jointly play a critical role in the inter-annual variability of TC lifetime maximum intensity and the frequency of intense TCs.
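The contrast between BMA and EMA reduces to how member weights are chosen. The sketch below is a deliberately simplified stand-in: weights are derived from each toy model's Gaussian predictive log-likelihood over a training period, whereas operational BMA estimates weights and variances jointly (typically by EM). All models, biases, and spreads here are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = rng.normal(0.0, 1.0, 200)                     # training observations
# Three toy "models": unbiased, mildly biased, strongly biased forecasts
fcst = np.stack([obs + rng.normal(0.0, 0.5, 200),
                 obs + rng.normal(0.3, 0.7, 200),
                 obs + rng.normal(1.0, 1.2, 200)])

# BMA-style weights from each model's Gaussian predictive log-likelihood over
# the training period (operational BMA fits weights and variances jointly by
# EM; this shortcut just illustrates skill-proportional weighting).
sigma = fcst.std(axis=1, keepdims=True)
loglik = (-0.5 * ((fcst - obs) / sigma) ** 2 - np.log(sigma)).sum(axis=1)
w = np.exp(loglik - loglik.max())
w /= w.sum()

bma = (w[:, None] * fcst).sum(axis=0)               # skill-weighted ensemble
ema = fcst.mean(axis=0)                             # equal-weighted ensemble
rmse = lambda f: np.sqrt(((f - obs) ** 2).mean())
print("weights:", np.round(w, 3))
print(f"in-sample RMSE  BMA: {rmse(bma):.3f}  EMA: {rmse(ema):.3f}")
```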
350. Synchronization Experiments With A Global Coupled Model of Intermediate Complexity

    NASA Astrophysics Data System (ADS)

    Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin

    2013-04-01

    In the super modeling approach, an ensemble of imperfect models is connected through nudging terms that nudge the solution of each model to the solutions of all other models in the ensemble. The goal is to obtain a synchronized state, through a proper choice of connection strengths, that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water and momentum; the ocean is integrated using weighted averaged surface fluxes. In particular, we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble mean solution. The results are interpreted using a low-order atmosphere-ocean toy model.

351. Bayesian Ensemble Trees (BET) for Clustering and Prediction in Heterogeneous Data

    PubMed Central

    Duan, Leo L.; Clancy, John P.; Szczesniak, Rhonda D.

    2016-01-01

    We propose a novel "tree-averaging" model that utilizes the ensemble of classification and regression trees (CART). Each constituent tree is estimated with a subset of similar data. We treat this grouping of subsets as Bayesian Ensemble Trees (BET) and model them as a Dirichlet process. We show that BET determines the optimal number of trees by adapting to the data heterogeneity. Compared with other ensemble methods, BET requires far fewer trees and shows equivalent prediction accuracy using weighted averaging. Moreover, each tree in BET provides variable selection criteria and interpretation for each subset. We developed an efficient estimating procedure with improved estimation strategies in both CART and mixture models. We demonstrate these advantages of BET with simulations and illustrate the approach with a real-world data example involving regression of lung function measurements obtained from patients with cystic fibrosis. Supplemental materials are available online. PMID:27524872
352. Controllable quantum dynamics of inhomogeneous nitrogen-vacancy center ensembles coupled to superconducting resonators

    PubMed Central

    Song, Wan-lu; Yang, Wan-li; Yin, Zhang-qi; Chen, Chang-yong; Feng, Mang

    2016-01-01

    We explore the controllable quantum dynamics of a hybrid system consisting of an array of mutually coupled superconducting resonators (SRs), each containing a nitrogen-vacancy center spin ensemble (NVE), in the presence of inhomogeneous broadening. We focus on a three-site model, which, compared with the two-site case, shows more complicated and richer dynamical behavior and displays a series of damped oscillations under various experimental situations, reflecting the intricate balance and competition between the NVE-SR collective coupling and the adjacent-site photon hopping. In particular, we find that the inhomogeneous broadening of the spin ensemble can suppress the population transfer between the SR and the local NVE. In this context, although the inhomogeneous broadening of the spin ensemble diminishes entanglement among the NVEs, optimal entanglement, characterized by averaging the lower bound of concurrence, could be achieved through accurately adjusting the tunable parameters. PMID:27627994

353. Ensemble Perception of Dynamic Emotional Groups

    PubMed

    Elias, Elric; Dyer, Michael; Sweeny, Timothy D

    2017-02-01

    Crowds of emotional faces are ubiquitous, so much so that the visual system utilizes a specialized mechanism known as ensemble coding to see them. In addition to being proximally close, members of emotional crowds, such as a laughing audience or an angry mob, often behave together. The manner in which crowd members behave - in sync or out of sync - may be critical for understanding their collective affect. Are ensemble mechanisms sensitive to these dynamic properties of groups? Here, observers estimated the average emotion of a crowd of dynamic faces. The members of some crowds changed their expressions synchronously, whereas individuals in other crowds acted asynchronously. Observers perceived the emotion of a synchronous group more precisely than the emotion of an asynchronous crowd or even a single dynamic face. These results demonstrate that ensemble representation is particularly sensitive to coordinated behavior, and they suggest that shared behavior is critical for understanding emotion in groups.
354. Ensemble-marginalized Kalman filter for linear time-dependent PDEs with noisy boundary conditions: application to heat transfer in building walls

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco; Sawlan, Zaid; Scavino, Marco; Tempone, Raúl; Wood, Christopher

    2018-07-01

    In this work, we present the ensemble-marginalized Kalman filter (EnMKF), a sequential algorithm analogous to our previously proposed approach (Ruggeri et al 2017 Bayesian Anal. 12 407-33; Iglesias et al 2018 Int. J. Heat Mass Transfer 116 417-31), for estimating the state and parameters of linear parabolic partial differential equations in initial-boundary value problems when the boundary data are noisy. We apply EnMKF to infer the thermal properties of building walls and to estimate the corresponding heat flux from real and synthetic data. Compared with a modified ensemble Kalman filter (EnKF) that is not marginalized, EnMKF reduces the bias error, avoids the collapse of the ensemble without needing to add inflation, and converges to the mean field posterior using a fraction of the ensemble size required by EnKF. According to our results, the marginalization technique in EnMKF is key to improving performance with smaller ensembles at any fixed time.
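The marginalization step that distinguishes EnMKF is beyond a short example, but the stochastic (perturbed-observation) EnKF analysis update it builds on can be sketched in a few lines of NumPy. Dimensions, operators, and noise levels below are arbitrary toy choices, not the paper's setup.

```python
import numpy as np

def enkf_update(ens, y, H, R, rng):
    """One stochastic (perturbed-observation) EnKF analysis step.

    ens: (n, m) ensemble of n-dimensional states, m members
    y: (p,) observations; H: (p, n) observation operator; R: (p, p) obs-noise cov
    """
    n, m = ens.shape
    X = ens - ens.mean(axis=1, keepdims=True)
    Pf = X @ X.T / (m - 1)                          # sample forecast covariance
    S = H @ Pf @ H.T + R
    K = Pf @ H.T @ np.linalg.inv(S)                 # Kalman gain
    # Perturbed observations keep the analysis spread statistically consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return ens + K @ (Y - H @ ens)

rng = np.random.default_rng(3)
truth = np.array([1.0, -0.5])
H = np.array([[1.0, 0.0]])                          # observe the first state only
R = np.array([[0.1]])
ens = rng.normal(0.0, 1.0, size=(2, 50))            # 50-member prior ensemble
y = H @ truth + rng.multivariate_normal([0.0], R)
post = enkf_update(ens, y, H, R, rng)
print("prior mean:", ens.mean(axis=1).round(3), " posterior mean:", post.mean(axis=1).round(3))
```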
355. Repeatability and Accuracy of Exoplanet Eclipse Depths Measured with Post-cryogenic Spitzer

    NASA Astrophysics Data System (ADS)

    Ingalls, James G.; Krick, J. E.; Carey, S. J.; Stauffer, John R.; Lowrance, Patrick J.; Grillmair, Carl J.; Buzasi, Derek; Deming, Drake; Diamond-Lowe, Hannah; Evans, Thomas M.; Morello, G.; Stevenson, Kevin B.; Wong, Ian; Capak, Peter; Glaccum, William; Laine, Seppo; Surace, Jason; Storrie-Lombardi, Lisa

    2016-08-01

    We examine the repeatability, reliability, and accuracy of differential exoplanet eclipse depth measurements made using the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope during the post-cryogenic mission. We have re-analyzed an existing 4.5 μm data set, consisting of 10 observations of the XO-3b system during secondary eclipse, using seven different techniques for removing correlated noise. We find that, on average, for a given technique the eclipse depth estimate is repeatable from epoch to epoch to within 156 parts per million (ppm). Most techniques derive eclipse depths that do not vary by more than a factor of 3 of the photon noise limit. All methods but one accurately assess their own errors: for these methods, the individual measurement uncertainties are comparable to the scatter in eclipse depths over the 10-epoch sample. To assess the accuracy of the techniques, as well as to clarify the difference between instrumental and other sources of measurement error, we have also analyzed a simulated data set of 10 visits to XO-3b, for which the eclipse depth is known. We find that three of the methods (BLISS mapping, Pixel Level Decorrelation, and Independent Component Analysis) obtain results that are within three times the photon limit of the true eclipse depth. When averaged over the 10-epoch ensemble, 5 out of 7 techniques come within 60 ppm of the true value. Spitzer exoplanet data, if obtained following current best practices and reduced using methods such as those described here, can yield repeatable and accurate single eclipse depths, with close to photon-limited results.

356. A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr

    2017-06-07

    We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows a considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.
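The trade-off this record analyzes, one long ergodic run versus many short de-correlated realizations, can be illustrated with a toy correlated signal in place of an actual LES. In the sketch below an AR(1) process stands in for a turbulence statistic; the spin-up length, member count, and correlation parameter are arbitrary.

```python
import numpy as np

def ar1(n, phi=0.98, seed=0):
    """AR(1) series as a cheap stand-in for a correlated turbulence statistic."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.normal()
    for i in range(1, n):
        x[i] = phi * x[i - 1] + np.sqrt(1.0 - phi**2) * rng.normal()
    return x

burn = 200  # samples discarded while each realization de-correlates / spins up

# Ergodic estimate: a single long run, statistics accumulated in time.
est_time = ar1(32_000, seed=1)[burn:].mean()

# Ensemble estimate: 16 short runs from perturbed initial seeds at the same
# total cost; the 16 runs could execute concurrently, cutting time to solution.
est_ens = np.mean([ar1(2_000, seed=100 + k)[burn:].mean() for k in range(16)])

print(f"time average: {est_time:+.4f}   ensemble average: {est_ens:+.4f} (exact mean is 0)")
```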
357. A strictly Markovian expansion for plasma turbulence theory

    NASA Technical Reports Server (NTRS)

    Jones, F. C.

    1976-01-01

    The collision operator that appears in the equation of motion for a particle distribution function averaged over an ensemble of random Hamiltonians is non-Markovian, in that it involves a propagated integral over the past history of the ensemble-averaged distribution function. All formal expansions of this nonlinear collision operator to date preserve this non-Markovian character term by term, yielding an integro-differential equation that must be converted to a diffusion equation by an additional approximation. An expansion is derived for the collision operator that is strictly Markovian to any finite order and yields a diffusion equation at the lowest nontrivial order. The validity of this expansion is seen to be the same as that of the standard quasilinear expansion.

358. Molecular dynamics simulations: advances and applications

    PubMed Central

    Hospital, Adam; Goñi, Josep Ramon; Orozco, Modesto; Gelpí, Josep L

    2015-01-01

    Molecular dynamics simulations have evolved into a mature technique that can be used effectively to understand macromolecular structure-to-function relationships. Present simulation times are close to biologically relevant ones. The information gathered about the dynamic properties of macromolecules is rich enough to shift the usual paradigm of structural bioinformatics from studying single structures to analyzing conformational ensembles. Here, we describe the foundations of molecular dynamics and the improvements made in the direction of obtaining such ensembles. Specific application of the technique to three main issues (allosteric regulation, docking, and structure refinement) is discussed. PMID:26604800
359. Effects of ensemble and summary displays on interpretations of geospatial uncertainty data

    PubMed

    Padilla, Lace M; Ruginski, Ian T; Creem-Regehr, Sarah H

    2017-01-01

    Ensemble and summary displays are two widely used methods to represent visual-spatial uncertainty; however, there is disagreement about which is the most effective technique to communicate uncertainty to the general public. Visualization scientists create ensemble displays by plotting multiple data points on the same Cartesian coordinate plane. Despite their use in scientific practice, it is more common in public presentations to use visualizations of summary displays, which scientists create by plotting statistical parameters of the ensemble members. While prior work has demonstrated that viewers make different decisions when viewing summary and ensemble displays, it is unclear what components of the displays lead to diverging judgments. This study aims to compare the salience of visual features - or visual elements that attract bottom-up attention - as one possible source of diverging judgments made with ensemble and summary displays in the context of hurricane track forecasts. We report that salient visual features of both ensemble and summary displays influence participant judgment. Specifically, we find that salient features of summary displays of geospatial uncertainty can be misunderstood as displaying size information. Further, salient features of ensemble displays evoke judgments that are indicative of accurate interpretations of the underlying probability distribution of the ensemble data. However, when participants use ensemble displays to make point-based judgments, they may overweight individual ensemble members in their decision-making process. We propose that ensemble displays are a promising alternative to summary displays in a geospatial context, but that decisions about visualization methods should be informed by the viewer's task.
360. Probabilistic precipitation nowcasting based on an extrapolation of radar reflectivity and an ensemble approach

    NASA Astrophysics Data System (ADS)

    Sokol, Zbyněk; Mejsnar, Jan; Pop, Lukáš; Bližňák, Vojtěch

    2017-09-01

    A new method for the probabilistic nowcasting of instantaneous rain rates (ENS), based on the ensemble technique and extrapolation along Lagrangian trajectories of the current radar reflectivity, is presented. Assuming inaccurate forecasts of the trajectories, an ensemble of precipitation forecasts is calculated and used to estimate the probability that rain rates will exceed a given threshold at a given grid point. Although the extrapolation neglects the growth and decay of precipitation, their impact on the probability forecast is taken into account by calibrating the forecasts using the reliability component of the Brier score (BS). ENS forecasts the probability that rain rates will exceed thresholds of 0.1, 1.0 and 3.0 mm/h in squares of 3 km by 3 km. The lead times were up to 60 min, and the forecast accuracy was measured by the BS. The ENS forecasts were compared with two other methods: a combined method (COM) and a neighbourhood method (NEI). NEI considered the extrapolated values in the square neighbourhood of 5 by 5 grid points of the point of interest as ensemble members, and the COM ensemble comprised the united ensemble members of ENS and NEI. The results showed that the calibration technique significantly reduces the bias of the probability forecasts by accounting for the additional uncertainties that correspond to the processes neglected during the extrapolation. The calibration can also be used to find the maximum lead times for which the forecasting method remains useful. We found that ENS is useful for lead times up to 60 min for thresholds of 0.1 and 1 mm/h and approximately 30 to 40 min for a threshold of 3 mm/h. We also found that a reasonable ensemble size is 100 members, which provided better scores than ensembles with 10, 25 and 50 members. In terms of the BS, the best results were obtained by ENS and COM, which are comparable. However, ENS is better calibrated and thus preferable.
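The calibration in ENS is driven by the Brier score. Below is a minimal sketch of the score and of a simple reliability-style recalibration of exceedance probabilities (binned observed frequencies, applied in-sample); this is an illustration of the general idea, not the paper's exact procedure, and all numbers are invented.

```python
import numpy as np

def brier_score(p, o):
    """Brier score for probability forecasts p of a binary event o (0 or 1)."""
    return np.mean((p - o) ** 2)

rng = np.random.default_rng(5)
# Toy setup: exceedance probability estimated as the fraction of 100 ensemble
# members above the rain-rate threshold, with the raw ensemble biased high.
n_cases, n_members = 2000, 100
true_p = rng.beta(0.4, 2.0, n_cases)                 # per-case event probability
obs = (rng.random(n_cases) < true_p).astype(float)
members = rng.random((n_cases, n_members)) < np.minimum(1.15 * true_p, 1.0)[:, None]
p_raw = members.mean(axis=1)

# Simple reliability-style calibration: replace each forecast probability by
# the observed event frequency in its probability bin (in-sample, to illustrate).
bins = np.linspace(0.0, 1.0, 11)
idx = np.minimum(np.digitize(p_raw, bins) - 1, 9)
p_cal = p_raw.copy()
for b in range(10):
    sel = idx == b
    if sel.any():
        p_cal[sel] = obs[sel].mean()

print(f"Brier score  raw: {brier_score(p_raw, obs):.4f}  calibrated: {brier_score(p_cal, obs):.4f}")
```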
361. Pauci ex tanto numero: reducing redundancy in multi-model ensembles

    NASA Astrophysics Data System (ADS)

    Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.

    2013-02-01

    We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community, despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from a lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Shared biases among models will produce a biased ensemble, making it essential that the errors of the ensemble members be independent so that biases can cancel out. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble, with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (this depends upon the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skill need to be considered disjointly.
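A toy version of the redundancy diagnosis is easy to set up: members that share a common error component add little information, and a small, weakly correlated subset can match or beat the full ensemble mean. The construction below (a shared error term plus a greedy least-correlated selection) is illustrative only and does not reproduce the paper's selection methods.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 500
obs = np.sin(np.linspace(0.0, 20.0, T))                 # "truth" to be modelled
shared = rng.normal(0.0, 0.4, T)                        # error shared by a model family
errs = [shared + rng.normal(0.0, 0.2, T) for _ in range(6)]   # six redundant members
errs += [rng.normal(0.1, 0.45, T) for _ in range(2)]          # two quasi-independent members
models = np.array([obs + e for e in errs])
E = models - obs                                        # member error series

# Greedy selection: seed with the lowest-variance member, then repeatedly add
# the candidate whose errors are least correlated with those already chosen.
chosen = [int(np.argmin(E.std(axis=1)))]
while len(chosen) < 3:
    score = [np.mean([abs(np.corrcoef(E[c], E[s])[0, 1]) for s in chosen])
             if c not in chosen else np.inf
             for c in range(len(models))]
    chosen.append(int(np.argmin(score)))

rmse = lambda m: np.sqrt(((m.mean(axis=0) - obs) ** 2).mean())
print(f"full 8-member RMSE: {rmse(models):.3f}   3-member subset RMSE: {rmse(models[chosen]):.3f}")
```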
362. Generalized Green's function molecular dynamics for canonical ensemble simulations

    NASA Astrophysics Data System (ADS)

    Coluci, V. R.; Dantas, S. O.; Tewary, V. K.

    2018-05-01

    The need for small integration time steps (~1 fs) in conventional molecular dynamics simulations is an important issue that inhibits the study of physical, chemical, and biological systems on realistic timescales. Additionally, to simulate those systems in contact with a thermal bath, thermostating techniques are usually applied. In this work, we generalize the Green's function molecular dynamics technique to allow simulations within the canonical ensemble. By applying this technique to one-dimensional systems, we were able to correctly describe important thermodynamic properties such as the temperature fluctuations, the temperature distribution, and the velocity autocorrelation function. We show that the proposed technique also allows the use of time steps one order of magnitude larger than those typically used in conventional molecular dynamics simulations. We expect that this technique can be used in long-timescale molecular dynamics simulations.

363. Electric Fields at the Active Site of an Enzyme: Direct Comparison of Experiment with Theory

    NASA Astrophysics Data System (ADS)

    Suydam, Ian T.; Snow, Christopher D.; Pande, Vijay S.; Boxer, Steven G.

    2006-07-01

    The electric fields produced in folded proteins influence nearly every aspect of protein function. We present a vibrational spectroscopy technique that measures changes in the electric field at a specific site of a protein as shifts in frequency (Stark shifts) of a calibrated nitrile vibration. A nitrile-containing inhibitor is used to deliver a unique probe vibration to the active site of human aldose reductase, and the response of the nitrile stretch frequency is measured for a series of mutations in the enzyme active site. These shifts yield quantitative information on electric fields that can be directly compared with electrostatics calculations. We show that extensive molecular dynamics simulations and ensemble averaging are required to reproduce the observed changes in field.

364. Modelling dynamics in protein crystal structures by ensemble refinement

    PubMed Central

    Burnley, B Tom; Afonine, Pavel V; Adams, Paul D; Gros, Piet

    2012-01-01

    Single-structure models derived from X-ray data do not adequately account for the inherent, functionally important dynamics of protein molecules. We generated ensembles of structures by time-averaged refinement, where local molecular vibrations were sampled by molecular-dynamics (MD) simulation whilst global disorder was partitioned into an underlying overall translation-libration-screw (TLS) model. Modeling of 20 protein datasets at 1.1-3.1 Å resolution reduced cross-validated Rfree values by 0.3-4.9%, indicating that ensemble models fit the X-ray data better than single structures. The ensembles revealed that, while most proteins display a well-ordered core, some proteins exhibit a 'molten core' likely supporting functionally important dynamics in ligand binding, enzyme activity and protomer assembly. Order-disorder changes in HIV protease indicate a mechanism of entropy compensation for ordering the catalytic residues upon ligand binding by disordering specific core residues. Thus, ensemble refinement extracts dynamical details from the X-ray data that allow a more comprehensive understanding of structure-dynamics-function relationships. DOI: http://dx.doi.org/10.7554/eLife.00311.001 PMID:23251785
365. Selecting a Classification Ensemble and Detecting Process Drift in an Evolving Data Stream

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heredia-Langner, Alejandro; Rodriguez, Luke R.; Lin, Andy

    2015-09-30

    We characterize the commercial behavior of a group of companies in a common line of business using a small ensemble of classifiers on a stream of records containing commercial activity information. This approach is able to effectively find a subset of classifiers that can be used to predict company labels with reasonable accuracy. The performance of the ensemble, its error rate under stable conditions, can be characterized using an exponentially weighted moving average (EWMA) statistic. The behavior of the EWMA statistic can be used to monitor a record stream from the commercial network and determine when significant changes have occurred. Results indicate that larger classification ensembles may not necessarily be optimal, pointing to the need to search the combinatorial classifier space in a systematic way. Results also show that current and past performance of an ensemble can be used to detect when statistically significant changes in the activity of the network have occurred. The dataset used in this work contains tens of thousands of high-level commercial activity records with continuous and categorical variables and hundreds of labels, making classification challenging.
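The EWMA monitoring scheme itself takes only a few lines. The sketch below tracks a 0/1 misclassification stream and raises an alarm when the smoothed error rate leaves its control limits; the baseline window, smoothing weight, limit width, and drift point are all invented for the example rather than taken from the report.

```python
import numpy as np

def ewma_monitor(errors, lam=0.1, L=3.0, baseline=200):
    """Flag drift when the EWMA of a 0/1 misclassification stream leaves its limits.

    lam: EWMA smoothing weight; L: control-limit width in asymptotic sigmas.
    """
    p0 = errors[:baseline].mean()                     # in-control error rate
    sigma = np.sqrt(p0 * (1 - p0) * lam / (2 - lam))  # asymptotic EWMA std dev
    z, alarms = p0, []
    for t, e in enumerate(errors):
        z = lam * e + (1 - lam) * z
        if t >= baseline and abs(z - p0) > L * sigma:
            alarms.append(t)
    return alarms

rng = np.random.default_rng(7)
stream = (rng.random(1000) < 0.10).astype(float)         # stable: 10% error rate
stream[600:] = (rng.random(400) < 0.35).astype(float)    # activity shift at t=600
alarms = ewma_monitor(stream)
print("first drift alarm at record:", alarms[0] if alarms else "none")
```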
366. A benchmark for reaction coordinates in the transition path ensemble

    PubMed Central

    2016-01-01

    The molecular mechanism of a reaction is embedded in its transition path ensemble, the complete collection of reactive trajectories. Utilizing the information in the transition path ensemble alone, we developed a novel metric, which we termed the emergent potential energy, for distinguishing reaction coordinates from bath modes. The emergent potential energy can be understood as the average energy cost of making a displacement of a coordinate in the transition path ensemble. Whereas displacing a bath mode invokes essentially no cost, it costs significantly to move the reaction coordinate. Based on some general assumptions about the behavior of reaction and bath coordinates in the transition path ensemble, we proved theoretically, using statistical mechanics, that the emergent potential energy can serve as a benchmark for reaction coordinates, and we demonstrated its effectiveness by applying it to a prototypical system of biomolecular dynamics. Using the emergent potential energy as guidance, we developed a committor-free and intuition-independent method for identifying reaction coordinates in complex systems. We expect this method to be applicable to a wide range of reaction processes in complex biomolecular systems. PMID:27059559

367. Predicting Real-Valued Protein Residue Fluctuation Using FlexPred

    PubMed

    Peterson, Lenna; Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke

    2017-01-01

    The conventional view of a protein structure as static provides only a limited picture. There is increasing evidence that protein dynamics are often vital to protein function, including interaction with partners such as other proteins, nucleic acids, and small molecules. Considering flexibility is also important in applications such as computational protein docking and protein design. While residue flexibility is partially indicated by experimental measures such as the B-factor from X-ray crystallography and ensemble fluctuation from nuclear magnetic resonance (NMR) spectroscopy, as well as by computational molecular dynamics (MD) simulation, these techniques are resource-intensive. In this chapter, we describe the web server and stand-alone version of FlexPred, which rapidly predicts absolute per-residue fluctuation from a three-dimensional protein structure. On a set of 592 nonredundant structures, comparing the fluctuations predicted by FlexPred to the observed fluctuations in MD simulations showed an average correlation coefficient of 0.669 and an average root mean square error of 1.07 Å. FlexPred is available at http://kiharalab.org/flexPred/ .

368. Improvement of Disease Prediction and Modeling through the Use of Meteorological Ensembles: Human Plague in Uganda

    PubMed Central

    Moore, Sean M.; Monaghan, Andrew; Griffith, Kevin S.; Apangu, Titus; Mead, Paul S.; Eisen, Rebecca J.

    2012-01-01

    Climate and weather influence the occurrence, distribution, and incidence of infectious diseases, particularly those caused by vector-borne or zoonotic pathogens. Thus, models based on meteorological data have helped predict when and where human cases are most likely to occur. Such knowledge aids in targeting limited prevention and control resources and may ultimately reduce the burden of diseases. Paradoxically, localities where such models could yield the greatest benefits, such as tropical regions where morbidity and mortality caused by vector-borne diseases is greatest, often lack high-quality in situ local meteorological data. Satellite- and model-based gridded climate datasets can be used to approximate local meteorological conditions in data-sparse regions; however, their accuracy varies. Here we investigate how the selection of a particular dataset can influence the outcomes of disease forecasting models. Our model system focuses on plague (Yersinia pestis infection) in the West Nile region of Uganda. The majority of recent human cases have been reported from East Africa and Madagascar, where meteorological observations are sparse and topography yields complex weather patterns. Using an ensemble of meteorological datasets and model-averaging techniques, we find that the number of suspected cases in the West Nile region was negatively associated with dry-season rainfall (December-February) and positively associated with rainfall prior to the plague season. We demonstrate that ensembles of available meteorological datasets can be used to quantify climatic uncertainty and minimize its impacts on infectious disease models. These methods are particularly valuable in regions with sparse observational networks and high morbidity and mortality from vector-borne diseases. PMID:23024750
369. Enhancing coronary Wave Intensity Analysis robustness by high order central finite differences

    PubMed

    Rivolo, Simone; Asrress, Kaleab N; Chiribiri, Amedeo; Sammut, Eva; Wesolowski, Roman; Bloch, Lars Ø; Grøndal, Anne K; Hønge, Jesper L; Kim, Won Y; Marber, Michael; Redwood, Simon; Nagel, Eike; Smith, Nicolas P; Lee, Jack

    2014-09-01

    Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. Studies have identified WIA-derived indices that are closely correlated with several disease processes and predictive of functional recovery following myocardial infarction. Clinical application of cWIA has, however, been limited by technical challenges, including a lack of standardization across different studies and the sensitivity of the derived indices to the processing parameters. Specifically, a critical step in WIA is noise removal for the evaluation of derivatives of the acquired signals, typically performed by applying a Savitzky-Golay filter to reduce the high-frequency acquisition noise. The impact of the filter parameter selection on the cWIA output, and on the derived clinical metrics (integral areas and peaks of the major waves), is first analysed. The sensitivity analysis is performed either by using the filter as a differentiator to calculate the signals' time derivative or by applying the filter to smooth the ensemble-averaged waveforms. Furthermore, the power spectrum of the ensemble-averaged waveforms contains few high-frequency components, which motivated us to propose an alternative and straightforward approach to computing the time derivatives of the acquired waveforms, using a central finite difference scheme. The cWIA output, and consequently the derived clinical metrics, are significantly affected by the filter parameters, irrespective of its use as a smoothing filter or a differentiator. The proposed approach is parameter-free and, when applied to the 10 in-vivo human datasets and the 50 in-vivo animal datasets, enhances cWIA robustness by significantly reducing the outcome variability (by 60%).
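The proposed differentiator is straightforward to reproduce: a fourth-order central finite-difference stencil needs no tuning parameters, unlike a Savitzky-Golay differentiator whose window and polynomial order must be chosen. Below is a sketch on a smooth synthetic waveform (not an actual coronary trace); the Savitzky-Golay settings are arbitrary examples.

```python
import numpy as np
from scipy.signal import savgol_filter

def central_diff(y, dt):
    """Fourth-order central finite-difference derivative (parameter-free)."""
    d = np.empty_like(y)
    d[2:-2] = (y[:-4] - 8 * y[1:-3] + 8 * y[3:-1] - y[4:]) / (12 * dt)
    d[:2], d[-2:] = d[2], d[-3]          # crude one-sided fill at the edges
    return d

dt = 1e-3
t = np.arange(0.0, 0.8, dt)                   # one ensemble-averaged beat
w = 2 * np.pi * 2.5
y = np.sin(w * t) ** 2                        # smooth synthetic waveform
true = w * np.sin(2 * w * t)                  # exact derivative of sin^2(wt)

dy_cd = central_diff(y, dt)
dy_sg = savgol_filter(y, window_length=51, polyorder=3, deriv=1, delta=dt)

print(f"max |error|, central diff:   {np.abs(dy_cd - true)[2:-2].max():.2e}")
print(f"max |error|, Savitzky-Golay: {np.abs(dy_sg - true)[2:-2].max():.2e}")
```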
370. AUC-Maximizing Ensembles through Metalearning

    PubMed

    LeDell, Erin; van der Laan, Mark J; Petersen, Maya

    2016-05-01

    The Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms for maximizing the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree. PMID:27227721
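One way to realize an AUC-maximizing metalearner, as a sketch rather than the paper's implementation, is to optimize convex combination weights of cross-validated base-learner predictions with a derivative-free optimizer (AUC is piecewise constant in the weights, so gradient methods do not apply directly). The base learners and data below are arbitrary stand-ins.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Imbalanced binary problem (rare outcome), synthetic stand-in data
X, y = make_classification(n_samples=1000, weights=[0.9], random_state=0)

# Level-one data: cross-validated predictions from the base learners
Z = np.column_stack([
    cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5,
                      method="predict_proba")[:, 1],
    cross_val_predict(RandomForestClassifier(random_state=0), X, y, cv=5,
                      method="predict_proba")[:, 1],
])

def neg_auc(w):
    w = np.abs(w)
    w = w / (w.sum() + 1e-12)            # project onto the probability simplex
    return -roc_auc_score(y, Z @ w)

# AUC is piecewise constant in w, so use a derivative-free method
res = minimize(neg_auc, x0=np.array([0.5, 0.5]), method="Nelder-Mead")
w = np.abs(res.x) / (np.abs(res.x).sum() + 1e-12)
print("metalearner weights:", np.round(w, 3), " cross-validated ensemble AUC:", round(-res.fun, 4))
```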
371. A target recognition method for maritime surveillance radars based on hybrid ensemble selection

    NASA Astrophysics Data System (ADS)

    Fan, Xueman; Hu, Shengliang; He, Jingbo

    2017-11-01

    In order to improve the generalisation ability of maritime surveillance radars, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different trade-offs between classification error and diversity. During the dynamic selection phase, a meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance, based on three different aspects, namely, the feature space, the decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against those of nine other ensemble methods using a self-built fully polarimetric high resolution range profile data set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.

372. Quasi-most unstable modes: a window to 'À la carte' ensemble diversity?

    NASA Astrophysics Data System (ADS)

    Homar Santaner, Victor; Stensrud, David J.

    2010-05-01

    The atmospheric science community is nowadays facing the ambitious challenge of providing useful forecasts of atmospheric events that produce high societal impact. The low level of social resilience to false alarms creates tremendous pressure on forecasting offices to issue accurate, timely and reliable warnings. Currently, no operational numerical forecasting system is able to respond to the societal demand for high-resolution (in time and space) predictions in the 12-72 h time span. The main reasons for these deficiencies are the lack of adequate observations and the high non-linearity of the numerical models currently in use. The weather forecasting problem is intrinsically probabilistic, and current methods aim at coping with the various sources of uncertainty and the error propagation throughout the forecasting system. This probabilistic perspective is often realised by generating ensembles of deterministic predictions that are aimed at sampling the most important sources of uncertainty in the forecasting system. The ensemble generation/sampling strategy is a crucial aspect of performance, and various methods have been proposed.
Quasi-most unstable modes: a window to 'à la carte' ensemble diversity?

NASA Astrophysics Data System (ADS)

Homar Santaner, Victor; Stensrud, David J.

2010-05-01

The atmospheric scientific community faces the ambitious challenge of providing useful forecasts of atmospheric events that produce high societal impact. The low level of social resilience to false alarms creates tremendous pressure on forecasting offices to issue accurate, timely and reliable warnings. Currently, no operational numerical forecasting system is able to respond to the societal demand for high-resolution (in time and space) predictions in the 12-72 h time span. The main reasons for these deficiencies are the lack of adequate observations and the high non-linearity of the numerical models currently in use. The weather forecasting problem is intrinsically probabilistic, and current methods aim at coping with the various sources of uncertainty and the propagation of errors through the forecasting system. This probabilistic perspective is often realized by generating ensembles of deterministic predictions designed to sample the most important sources of uncertainty in the forecasting system. The ensemble generation/sampling strategy is a crucial aspect of performance, and various methods have been proposed. Although global forecasting offices have been using ensembles of perturbed initial conditions for medium-range operational forecasts since 1994, no consensus exists regarding the optimum sampling strategy for high-resolution short-range ensemble forecasts. Bred vectors, however, have been hypothesized to capture the growing modes in the highly nonlinear mesoscale dynamics of severe episodes better than singular vectors or observation perturbations. Yet even this technique does not produce enough diversity in the ensembles to accurately and routinely predict extreme phenomena such as severe weather. Thus, we propose a new method to generate ensembles of initial-condition perturbations that is based on the breeding technique. Given a standard bred mode, a set of customized perturbations is derived with specified amplitudes and horizontal scales. This allows the ensemble to excite growing modes across a wider range of scales. Results show that this approach produces significantly more spread in the ensemble prediction than standard bred modes alone. Several examples illustrating the benefits of this approach for severe weather forecasts are provided.
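A minimal breeding cycle can be sketched on a toy chaotic system. The Lorenz-63 stand-in, the rescaling amplitude, and the cycle lengths below are illustrative assumptions; the customized multi-scale rescaling proposed in the record above would replace the single fixed amplitude used here.

```python
# Sketch of a breeding cycle: a perturbed run is repeatedly rescaled
# toward the control run so that the difference aligns with the
# fastest-growing modes. Lorenz-63 stands in for a mesoscale model.
import numpy as np

def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    dx = np.array([s * (x[1] - x[0]),
                   x[0] * (r - x[2]) - x[1],
                   x[0] * x[1] - b * x[2]])
    return x + dt * dx

rng = np.random.default_rng(2)
control = np.array([1.0, 1.0, 1.0])
amplitude = 0.1                                 # fixed bred-mode amplitude
perturbed = control + amplitude * rng.normal(size=3)

for cycle in range(20):                         # breeding cycles
    for _ in range(50):                         # free growth between rescalings
        control = lorenz_step(control)
        perturbed = lorenz_step(perturbed)
    bred = perturbed - control
    bred *= amplitude / np.linalg.norm(bred)    # rescale to fixed amplitude
    perturbed = control + bred

print("bred vector:", np.round(bred, 4))
```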
Variety of Behavior of Equity Returns in Financial Markets

NASA Astrophysics Data System (ADS)

Bonanno, Giovanni; Lillo, Fabrizio; Mantegna, Rosario N.

2001-03-01

The price dynamics of a set of equities traded in an efficient market is complex. It consists of largely non-redundant time series that have (i) long-range correlated volatility and (ii) cross-correlation between each pair of equities. We perform a study of the statistical properties of an ensemble of equity returns that helps elucidate the nature and role of temporal and ensemble correlation. Specifically, we investigate a statistical ensemble of daily returns of n equities traded in United States financial markets. For each trading day in our database, we study the ensemble return distribution. We find that a typical ensemble return distribution exists on most trading days [1], with the exception of crash and rally days and the days following these extreme events [2]. We analyze each ensemble return distribution by extracting its first two central moments. We call the second moment of the ensemble return distribution the variety of the market; we choose this term because high variety implies widely varied behavior of the equity returns on the day considered. We observe that the mean return and the variety fluctuate in time and are stochastic processes themselves. The variety is a long-range correlated stochastic process. Customary time-averaged statistical properties of time series of stock returns are also considered. In general, time-averaged and portfolio-averaged returns have different statistical properties [1]. We infer from these differences information about the relative strength of correlation between equities and between different trading days. We also compare our empirical results with those predicted by the single-index model and conclude that this simple model is unable to explain the statistical properties of the second moment of the ensemble return distribution. Correlations between pairs of equities are continuously present in the dynamics of a stock portfolio, so it is relevant to investigate pair correlation in an efficient and original way. We propose to investigate these correlations on daily and intraday time horizons with a method based on concepts from random frustrated systems. Specifically, a hierarchical organization of the investigated equities is obtained by determining a metric distance between stocks and by investigating the properties of the subdominant ultrametric associated with it [3]. The high-frequency cross-correlations existing between pairs of equities are investigated in a set of 100 stocks traded in US equity markets. The decrease of the cross-correlation between equity returns observed for diminishing time horizons progressively changes the nature of the hierarchical structure associated with each time horizon [4]. The nature of the correlation present between pairs of time series of equity returns collected in a portfolio has a strong influence on the variety of the market. We finally discuss the relation between pair correlation and the variety of an ensemble return distribution. References: [1] Fabrizio Lillo and Rosario N. Mantegna, Variety and volatility in financial markets, Phys. Rev. E 62, 6126-6134 (2000). [2] Fabrizio Lillo and Rosario N. Mantegna, Symmetry alteration of ensemble return distribution in crash and rally days of financial market, Eur. Phys. J. B 15, 603-606 (2000). [3] Rosario N. Mantegna, Hierarchical structure in financial markets, Eur. Phys. J. B 11, 193-197 (1999). [4] Giovanni Bonanno, Fabrizio Lillo, and Rosario N. Mantegna, High-frequency cross-correlation in a set of stocks, Quantitative Finance (in press).
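The variety is simply the second cross-sectional moment of a day's returns, so it is directly computable from a returns panel. The sketch below uses synthetic Gaussian returns in place of the US-market data studied above.

```python
# Sketch: the "variety" of a market day is the cross-sectional standard
# deviation of that day's equity returns; the mean return is the first
# moment of the same daily ensemble. Synthetic returns panel assumed.
import numpy as np

rng = np.random.default_rng(3)
returns = rng.normal(0.0, 0.02, size=(250, 100))  # days x equities

mean_return = returns.mean(axis=1)  # first moment of each daily ensemble
variety = returns.std(axis=1)       # second moment: the daily variety

print("average variety: %.4f" % variety.mean())
```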
A band and a choral conductor were each videotaped conducting 1-min excerpts from Morten Lauridsen's "O Magnum Mysterium" while using a…

Verifying and Postprocessing the Ensemble Spread-Error Relationship

NASA Astrophysics Data System (ADS)

Hopson, Tom; Knievel, Jason; Liu, Yubao; Roux, Gregory; Wu, Wanli

2013-04-01

With the increased use of ensemble forecasts in weather and hydrologic applications, there is a need to verify their benefit over less expensive deterministic forecasts. One such potential benefit of ensemble systems is their capacity to forecast their own forecast error through the ensemble spread-error relationship. The paper begins by revisiting the limitations of the Pearson correlation alone in assessing this relationship. Next, we introduce two new metrics to consider in assessing the utility of an ensemble's varying dispersion. We argue there are two aspects of an ensemble's dispersion that should be assessed. First, and perhaps most fundamentally: is there enough variability in the ensemble's dispersion to justify the maintenance of an expensive ensemble prediction system (EPS), irrespective of whether the EPS is well calibrated or not? To diagnose this, the factor that controls the theoretical upper limit of the spread-error correlation can be useful. Second, does the variable dispersion of an ensemble relate to a variable expectation of forecast error? Representing the spread-error correlation in relation to its theoretical limit provides a simple diagnostic of this attribute. A context for these concepts is provided by assessing two operational ensembles: 30-member western US temperature forecasts for the U.S. Army Test and Evaluation Command, and 51-member Brahmaputra River flow forecasts of the Climate Forecast and Applications Project for Bangladesh. Both of these systems use a postprocessing technique based on quantile regression (QR) under a step-wise forward selection framework, leading to ensemble forecasts with both good reliability and sharpness. In addition, the methodology uses the ensemble's ability to self-diagnose forecast instability to produce calibrated forecasts with informative skill-spread relationships. We describe both ensemble systems briefly, review the steps used to calibrate the ensemble forecasts, and present verification statistics using error-spread metrics, along with figures from operational ensemble forecasts before and after calibration.
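The basic spread-error diagnostic can be sketched as a correlation between ensemble spread and the absolute error of the ensemble mean. The data below are synthetic and reliable by construction; even so, the correlation stays well below one, which is the theoretical-limit effect the record above sets out to quantify.

```python
# Sketch: correlate ensemble spread with |ensemble-mean error| for a
# synthetic, perfectly reliable ensemble with day-to-day varying
# dispersion. No operational forecast data are used.
import numpy as np

rng = np.random.default_rng(4)
n_days, n_members = 1000, 30
spread = rng.gamma(2.0, 1.0, n_days)                 # varying dispersion
ens = rng.normal(0.0, spread[:, None], (n_days, n_members))
truth = rng.normal(0.0, spread)                      # reliable by construction

error = np.abs(ens.mean(axis=1) - truth)
corr = np.corrcoef(ens.std(axis=1), error)[0, 1]
print("spread-|error| correlation: %.2f" % corr)
```

Because a single realization of the error is compared against a distributional property (the spread), even a perfect ensemble cannot reach a correlation of one, motivating diagnostics referenced to the theoretical limit rather than to unity.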
Negative correlation learning for customer churn prediction: a comparison study

PubMed

Rodan, Ali; Fayyoumi, Ayham; Faris, Hossam; Alsakran, Jamal; Al-Kadi, Omar

2015-01-01

Recently, telecommunication companies have been paying more attention to the problem of identifying customer churn behavior. In business, it is well known among service providers that attracting new customers is much more expensive than retaining existing ones. Therefore, adopting accurate models that are able to predict customer churn can effectively help in customer retention campaigns and in maximizing profit. In this paper we utilize an ensemble of multilayer perceptrons (MLPs) trained with negative correlation learning (NCL) to predict customer churn in a telecommunication company. Experimental results confirm that the NCL-based MLP ensemble achieves better generalization performance (a higher churn-detection rate) compared with an ensemble of MLPs trained without NCL (a flat ensemble) and other common data mining techniques used for churn analysis.
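The NCL idea can be made concrete with a toy regression ensemble: each member minimizes its own squared error minus a term proportional to its squared deviation from the ensemble mean, which rewards disagreement among members. Linear members, synthetic data, and the usual simplified gradient (treating the ensemble mean as constant) are assumed here; the paper itself trains MLPs.

```python
# Sketch of the negative correlation learning (NCL) penalty on a toy
# ensemble of linear regressors trained by gradient descent.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

M, lam, lr = 4, 0.5, 0.01
W = rng.normal(size=(M, 5))                  # one weight row per member

for _ in range(500):
    F = X @ W.T                              # member predictions (n x M)
    fbar = F.mean(axis=1, keepdims=True)     # ensemble mean
    # Per-member loss: (f_i - y)^2 - lam * (f_i - fbar)^2.
    # Simplified gradient w.r.t. f_i (fbar treated as constant):
    G = (F - y[:, None]) - lam * (F - fbar)
    W -= lr * (G.T @ X) / len(X)

F = X @ W.T
print("ensemble MSE: %.4f" % ((F.mean(axis=1) - y) ** 2).mean())
```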
Application of Ensemble Detection and Analysis to Modeling Uncertainty in Non-Stationary Processes

NASA Technical Reports Server (NTRS)

Racette, Paul

2010-01-01

Characterization of non-stationary and nonlinear processes is a challenge in many engineering and scientific disciplines. Climate change modeling and projection, retrieving information from Doppler measurements of hydrometeors, and modeling calibration architectures and algorithms in microwave radiometers are example applications that can benefit from improvements in the modeling and analysis of non-stationary processes. Analyses of measured signals have traditionally been limited to a single measurement series. Ensemble Detection is a technique whereby mixing calibrated noise produces an ensemble measurement set. The collection of ensemble data sets enables new methods for analyzing random signals and offers powerful new approaches to studying and analyzing non-stationary processes. Derived information contained in the dynamic stochastic moments of a process will enable many novel applications.

Designing a deep brain stimulator to suppress pathological neuronal synchrony

PubMed

Montaseri, Ghazal; Yazdanpanah, Mohammad Javad; Bahrami, Fariba

2015-03-01

Some neuropathologies are believed to be related to abnormal synchronization of neurons. In the line of therapy, designing effective deep brain stimulators to suppress the pathological synchrony among neuronal ensembles is a challenge of high clinical relevance. The stimulation should be able to disrupt the synchrony in the presence of latencies due to imperfect knowledge about the parameters of a neuronal ensemble and the impact of stimulation on the ensemble. We propose an adaptive desynchronizing deep brain stimulator capable of dealing with these uncertainties. We analyze the collective behavior of the stimulated neuronal ensemble and show that, using the designed stimulator, the resulting asynchronous state is stable. Simulation results reveal the efficiency of the proposed technique.

GPU-Based Interactive Exploration and Online Probability Maps Calculation for Visualizing Assimilated Ocean Ensembles Data

NASA Astrophysics Data System (ADS)

Hoteit, I.; Hollt, T.; Hadwiger, M.; Knio, O. M.; Gopalakrishnan, G.; Zhan, P.

2016-02-01

Ocean reanalyses and forecasts are nowadays generated by combining ensemble simulations with data assimilation techniques. Most of these techniques resample the ensemble members after each assimilation cycle.
Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially with the number of assimilation cycles. In general, a single possible path is not of interest, but only the probabilities that any point in space might be reached by a particle at some point in time. We present an approach using probability-weighted piecewise particle trajectories to allow for interactive probability mapping. This is achieved by binning the domain and splitting the tracing process into the individual assimilation cycles, so that particles that fall into the same bin after a cycle can be treated as a single particle with a larger probability as input for the next cycle. As a result we lose the ability to track individual particles, but can create probability maps for any desired seed at interactive rates. The technique is integrated into an interactive visualization system that enables the visual analysis of the particle traces side by side with other forecast variables, such as the sea surface height, and their corresponding behavior over time. By harnessing the power of modern graphics processing units (GPUs) for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real time, view specific parameter settings or simulation models, and move between different spatial or temporal regions without delay. In addition, our system provides advanced visualizations to highlight the uncertainty, or to show the complete distribution of the simulations at user-defined positions over the complete time series of the domain.

Locally Weighted Ensemble Clustering

PubMed

Huang, Dong; Wang, Chang-Dong; Lai, Jian-Huang

2018-05-01

Due to its ability to combine multiple base clusterings into a probably better and more robust clustering, the ensemble clustering technique has been attracting increasing attention in recent years. Despite this significant success, one limitation of most existing ensemble clustering methods is that they generally treat all base clusterings equally regardless of their reliability, which makes them vulnerable to low-quality base clusterings. Although some efforts have been made to (globally) evaluate and weight the base clusterings, these methods tend to view each base clustering as an individual and neglect the local diversity of clusters inside the same base clustering. It remains an open problem how to evaluate the reliability of clusters and exploit the local diversity in the ensemble to enhance the consensus performance, especially in the case where there is no access to data features or specific assumptions on the data distribution. To address this, in this paper we propose a novel ensemble clustering approach based on ensemble-driven cluster uncertainty estimation and a local weighting strategy. In particular, the uncertainty of each cluster is estimated by considering the cluster labels in the entire ensemble via an entropic criterion.
A novel ensemble-driven cluster validity measure is introduced, and a locally weighted co-association matrix is presented to serve as a summary for the ensemble of diverse clusters. With the local diversity in ensembles exploited, two novel consensus functions are further proposed. Extensive experiments on a variety of real-world datasets demonstrate the superiority of the proposed approach over the state of the art.

An Enhanced Method to Estimate Heart Rate from Seismocardiography via Ensemble Averaging of Body Movements at Six Degrees of Freedom

PubMed

Lee, Hyunwoo; Lee, Hana; Whang, Mincheol

2018-01-15

Continuous cardiac monitoring has been developed to evaluate cardiac activity outside of clinical environments due to the advancement of novel instruments. Seismocardiography (SCG) is one of the vital components that could support such a monitoring system. Although SCG has exhibited lower accuracy, this novel cardiac indicator has been steadily proposed as an alternative to traditional methods such as electrocardiography (ECG). Thus, it is necessary to develop an enhanced method that combines the significant cardiac indicators. In this study, the six-axis signals of an accelerometer and gyroscope were measured and integrated by the L2-normalization and multi-dimensional kineticardiography (MKCG) approaches, respectively. The waveforms of the accelerometer and gyroscope were standardized and combined via ensemble averaging, and the heart rate was calculated from the dominant frequency. Thirty participants (15 female) were asked to stand or sit in relaxed and aroused conditions, and their SCG was measured during the task. As a result, the proposed method showed higher accuracy than traditional SCG methods in all measurement conditions. The three main contributions are as follows: (1) the ensemble averaging enhanced heart-rate estimation with the benefits of the six-axis signals; (2) the proposed method was compared with previous SCG methods that employ fewer axes; and (3) the method was tested in various measurement conditions for more practical application.
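The standardize-combine-average pipeline can be sketched with synthetic signals. A 1.2 Hz cardiac tone buried in noise on six channels stands in for real accelerometer and gyroscope data, and z-scored channel averaging stands in for the paper's exact L2/MKCG combination; segment lengths and band limits are illustrative.

```python
# Sketch: combine six standardized motion channels, ensemble-average
# fixed-length segments, and read heart rate off the dominant spectral
# peak. All signals are synthetic.
import numpy as np

fs, hr_hz = 250, 1.2                       # sampling rate (Hz), 72 bpm
t = np.arange(0, 60, 1.0 / fs)
rng = np.random.default_rng(6)
# Six channels sharing a weak cardiac component plus independent noise.
axes = np.stack([0.2 * np.sin(2 * np.pi * hr_hz * t)
                 + rng.normal(0.0, 1.0, t.size) for _ in range(6)])

# Standardize each axis, then combine across axes.
z = (axes - axes.mean(axis=1, keepdims=True)) / axes.std(axis=1, keepdims=True)
combined = z.mean(axis=0)

# Ensemble-average 10 s segments (an integer number of cardiac cycles).
seg = 10 * fs
avg = combined[: t.size // seg * seg].reshape(-1, seg).mean(axis=0)

spec = np.abs(np.fft.rfft(avg - avg.mean()))
freqs = np.fft.rfftfreq(seg, 1.0 / fs)
band = (freqs > 0.7) & (freqs < 3.0)       # plausible heart-rate band
print("estimated HR: %.0f bpm" % (60 * freqs[band][spec[band].argmax()]))
```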
Measurements of wind-waves under transient wind conditions

NASA Astrophysics Data System (ADS)

Shemer, Lev; Zavadsky, Andrey

2015-11-01

Wind forcing in nature is always unsteady, resulting in a complicated evolution pattern that involves numerous time and space scales. In the present work, wind waves in a laboratory wind-wave flume are studied under unsteady forcing. The variation of the surface elevation is measured by capacitance wave gauges, while the components of the instantaneous surface slope in the across-wind and along-wind directions are determined by a regular or scanning laser slope gauge. The locations of the wave gauge and of the laser slope gauge are separated by a few centimeters in the across-wind direction. The instantaneous wind velocity was recorded simultaneously using a Pitot tube. Measurements are performed at a number of fetches and for different patterns of wind velocity variation. For each case, at least 100 independent realizations were recorded for a given wind-velocity variation pattern. The accumulated data sets allow calculating ensemble-averaged values of the measured parameters. Significant differences between the evolution patterns of the surface elevation and of the slope components were found. Wavelet analysis was applied to determine the dominant wave frequency of the surface elevation and of the slope variation at each instant. The corresponding ensemble-averaged values acquired by the different sensors were computed and compared. Analysis of the measured ensemble-averaged quantities at different fetches makes it possible to identify different stages in wind-wave evolution and to estimate the appropriate time and length scales.

Improved estimation of anomalous diffusion exponents in single-particle tracking experiments

NASA Astrophysics Data System (ADS)

Kepten, Eldad; Bronshtein, Irena; Garini, Yuval

2013-05-01

The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent.
The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.

The Fukushima-137Cs deposition case study: properties of the multi-model ensemble

PubMed

Solazzo, E.; Galmarini, S.

2015-01-01

In this paper we analyse the properties of an eighteen-member ensemble generated by the combination of five atmospheric dispersion modelling systems and six meteorological data sets. The models have been applied to the total deposition of 137Cs following the nuclear accident at the Fukushima power plant in March 2011. The analysis is carried out with the scope of determining whether the ensemble is reliable and sufficiently diverse, and whether its accuracy and precision can be improved. Although ensemble practice is becoming more and more popular in many geophysical applications, good-practice guidelines are missing as to how models should be combined for the ensembles to offer an improvement over single-model realisations. We show that the models in the ensemble share large portions of bias and variance, and we make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble mean, with the advantage of being poorly correlated, allowing computational resources to be saved and noise to be reduced (thus improving accuracy). We further propose and discuss two methods for selecting subsets of skilful and diverse members, and show that, in the contingency of the present analysis, their mean outscores the full ensemble mean in terms of both accuracy (error) and precision (variance).
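One way to select skilful yet diverse members, in the spirit of the selection methods discussed above (without reproducing either of the paper's two methods exactly), is a greedy pick of the member whose errors correlate least with those already chosen, restricted to the more skilful half of the pool. Everything below is synthetic.

```python
# Sketch: greedy selection of a skilful, weakly correlated sub-ensemble
# from members that share a common error component.
import numpy as np

rng = np.random.default_rng(7)
n_members, n_obs = 18, 300
shared = rng.normal(0.0, 1.0, n_obs)          # bias/variance shared by all
errors = shared + rng.normal(0.0, 0.8, (n_members, n_obs))

rmse = np.sqrt((errors ** 2).mean(axis=1))
pool = list(np.argsort(rmse)[: n_members // 2])   # skilful half only
chosen = [pool.pop(0)]                            # start from the best member
while pool and len(chosen) < 5:
    mean_corr = [np.mean([abs(np.corrcoef(errors[c], errors[p])[0, 1])
                          for c in chosen]) for p in pool]
    chosen.append(pool.pop(int(np.argmin(mean_corr))))

subset_rmse = np.sqrt((errors[chosen].mean(axis=0) ** 2).mean())
full_rmse = np.sqrt((errors.mean(axis=0) ** 2).mean())
print("selected members:", chosen)
print("subset-mean RMSE: %.3f vs full-ensemble RMSE: %.3f"
      % (subset_rmse, full_rmse))
```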
Interactions between moist heating and dynamics in atmospheric predictability

DOE Office of Scientific and Technical Information (OSTI.GOV)

Straus, D.M.; Huntley, M.A.

1994-02-01

The predictability properties of a fixed-heating version of a GCM, in which the moist heating is specified beforehand, are studied in a series of identical-twin experiments. Comparison is made to an identical set of experiments using the control GCM, a five-level R30 version of the COLA GCM. The experiments each contain six ensembles, with a single ensemble consisting of six 30-day integrations starting from slightly perturbed Northern Hemisphere wintertime initial conditions. The moist heating from each integration within a single control ensemble was averaged over the ensemble. This averaged heating (a function of three spatial dimensions and time) was used as the prespecified heating in each member of the corresponding fixed-heating ensemble. The errors grow less rapidly in the fixed-heating case. The most rapidly growing scales at small times (global wavenumber 6) have doubling times of 3.2 days, compared to 2.4 days for the control experiments. The predictability times for the most energetic scales (global wavenumbers 9-12) are about two weeks for the fixed-heating experiments, compared to 9 days for the control. The ratio of error energy in the fixed-heating case to that in the control falls below 0.5 by day 8 and then gradually increases as the error growth slows in the control case. The growth of errors is described in terms of budgets of error kinetic energy (EKE) and error available potential energy (EAPE), developed in terms of global wavenumber n. The diabatic generation of EAPE (G_APE) is positive in the control case and is dominated by midlatitude heating errors after day 2. The fixed-heating G_APE is negative at all times due to longwave radiative cooling. 36 refs., 9 figs., 1 tab.

Simultaneous assimilation of AIRS Xco2 and meteorological observations in a carbon climate model with an ensemble Kalman filter

NASA Astrophysics Data System (ADS)

Liu, Junjie; Fung, Inez; Kalnay, Eugenia; Kang, Ji-Sun; Olsen, Edward T.; Chen, Luke

2012-03-01

This study is our first step toward the generation of 6-hourly 3-D CO2 fields that can be used to validate CO2 forecast models by combining CO2 observations from multiple sources using ensemble Kalman filtering. We discuss a procedure to assimilate Atmospheric Infrared Sounder (AIRS) column-averaged dry-air mole fraction of CO2 (Xco2) in conjunction with meteorological observations with the coupled Local Ensemble Transform Kalman Filter (LETKF)-Community Atmospheric Model version 3.5. We examine the impact of assimilating AIRS Xco2 observations on CO2 fields by comparing the results from the AIRS run, which assimilates both AIRS Xco2 and meteorological observations, to those from the meteorology-only run, which assimilates only meteorological observations. We find that assimilating AIRS Xco2 results in a surface CO2 seasonal cycle and north-south surface gradient closer to the observations. When the CO2 uncertainty estimation from the LETKF is taken into account, the CO2 analysis brackets the observed seasonal cycle. Verification against independent aircraft observations shows that assimilating AIRS Xco2 improves the accuracy of the CO2 vertical profiles by about 0.5-2 ppm, depending on location and altitude. The results show that the CO2 analysis ensemble spread in AIRS Xco2 space is between 0.5 and 2 ppm, and the CO2 analysis ensemble spread around the peak level of the averaging kernels is between 1 and 2 ppm.
This uncertainty estimation is consistent with the magnitude of the CO2 analysis error verified against AIRS Xco2 observations and the independent aircraft CO2 vertical profiles.

Transient aging in fractional Brownian and Langevin-equation motion

PubMed

Kursawe, Jochen; Schulz, Johannes; Metzler, Ralf

2013-12-01

Stochastic processes driven by stationary fractional Gaussian noise, that is, fractional Brownian motion and fractional Langevin-equation motion, are usually considered to be ergodic in the sense that, after an algebraic relaxation, time and ensemble averages of physical observables coincide. Recently it was demonstrated that fractional Brownian motion and fractional Langevin-equation motion under external confinement are transiently nonergodic (time and ensemble averages behave differently) from the moment the particle starts to sense the confinement. Here we show that these processes also exhibit transient aging; that is, physical observables such as the time-averaged mean-squared displacement depend on the time lag between the initiation of the system at time t = 0 and the start of the measurement at the aging time t_a. In particular, it turns out that for fractional Langevin-equation motion the aging dependence on t_a differs between the cases of free and confined motion. We obtain explicit analytical expressions for the aged moments of the particle position as well as the time-averaged mean-squared displacement, and we present a numerical analysis of this transient aging phenomenon.
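The aged time-averaged MSD is straightforward to compute once the aging time is made explicit. The sketch below uses ordinary Brownian motion, for which the aging dependence is flat, so it illustrates only the estimator, not the fractional-motion aging effects reported above.

```python
# Sketch: time-averaged MSD whose averaging window starts at the aging
# time t_a rather than at the initiation of the trajectory.
import numpy as np

rng = np.random.default_rng(8)
x = np.cumsum(rng.normal(size=100_000))      # plain 1-D Brownian trajectory

def ta_msd(traj, lag, t_a=0):
    seg = traj[t_a:]                         # measurement starts at t_a
    return np.mean((seg[lag:] - seg[:-lag]) ** 2)

for t_a in (0, 10_000, 50_000):
    print("t_a=%6d  TA-MSD(lag=100) = %.1f" % (t_a, ta_msd(x, 100, t_a)))
```

For Brownian motion the three values agree to within sampling noise; for aging processes such as confined fractional Langevin-equation motion they would not, which is the diagnostic the paper exploits.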
Stresses and elastic constants of crystalline sodium, from molecular dynamics

DOE Office of Scientific and Technical Information (OSTI.GOV)

Schiferl, S.K.

1985-02-01

The stresses and the elastic constants of bcc sodium are calculated by molecular dynamics (MD) for temperatures up to T = 340 K. The total adiabatic potential of a system of sodium atoms is represented by a pseudopotential model. The resulting expression has two terms: a large, strictly volume-dependent potential, plus a sum over ion pairs of a small, volume-dependent two-body potential. The stresses and the elastic constants are given as strain derivatives of the Helmholtz free energy. The resulting expressions involve canonical-ensemble averages (and fluctuation averages) of the position and volume derivatives of the potential. An ensemble correction relates the results to MD equilibrium averages. Evaluation of the potential and its derivatives requires the calculation of integrals with infinite upper limits of integration and integrand singularities. Methods for calculating these integrals and estimating the effects of integration errors are developed. A method is given for choosing initial conditions that relax quickly to a desired equilibrium state. Statistical methods developed earlier for MD data are extended to evaluate uncertainties in fluctuation averages and to test for symmetry. 45 refs., 10 figs., 4 tabs.

Nonlinear problems in data-assimilation: Can synchronization help?

NASA Astrophysics Data System (ADS)

Tribbia, J. J.; Duane, G. S.

2009-12-01

Over the past several years, operational weather centers have initiated ensemble prediction and assimilation techniques to estimate the error covariance of forecasts in the short and the medium range. The ensemble techniques used are based on linear methods. This technique has been shown to be a useful indicator of skill in the linear range, where forecast errors are small relative to climatological variance. While this advance has been impressive, there are still ad hoc aspects of its use in practice, such as the need for covariance inflation, which are troubling. Furthermore, to be of utility in the nonlinear range, an ensemble assimilation and prediction method must be capable of giving probabilistic information for the situation where a probability density forecast becomes multi-modal. A prototypical, simplest example of such a situation is the planetary-wave regime transition, where the pdf is bimodal. Our recent research shows how the inconsistencies and extensions of linear methodology can be treated consistently using the paradigm of synchronization, which views the problems of assimilation and forecasting as that of optimizing the forecast model state with respect to the future evolution of the atmosphere.

Impact of Damping Uncertainty on SEA Model Response Variance

NASA Technical Reports Server (NTRS)

Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

2010-01-01

Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties, such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article.
The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

Multiphysics superensemble forecast applied to Mediterranean heavy precipitation situations

NASA Astrophysics Data System (ADS)

Vich, M.; Romero, R.

2010-11-01

The high-impact precipitation events that regularly affect the western Mediterranean coastal regions are still difficult to predict with current prediction systems. Bearing this in mind, this paper focuses on the superensemble technique applied to the precipitation field. Encouraged by the skill shown by a previous multiphysics ensemble prediction system applied to western Mediterranean precipitation events, the superensemble is fed with this ensemble. The training phase of the superensemble contributes to the actual forecast through weights obtained by comparing the past performance of the ensemble members with the corresponding observed states. The non-hydrostatic MM5 mesoscale model is used to run the multiphysics ensemble. Simulations are performed with a 22.5 km resolution domain (Domain 1 in http://mm5forecasts.uib.es) nested in the ECMWF forecast fields. The period between September and December 2001 is used to train the superensemble, and a collection of 19 MEDEX cyclones is used to test it. The verification procedure involves testing the superensemble performance and comparing it with that of the poor-man and bias-corrected ensemble means and the multiphysics EPS control member. The results emphasize the need for a well-behaved training phase to obtain good results with the superensemble technique. A strategy to obtain this improved training phase is outlined.
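The training phase of a superensemble can be sketched as a regression of observations on member forecast anomalies over the training period, with the fitted weights then applied at forecast time. Synthetic biased members and a plain least-squares fit are assumed below; this is the classic superensemble recipe in outline, not necessarily the exact scheme of the study above.

```python
# Sketch: superensemble training (regression weights on member
# anomalies) followed by application to a held-out forecast period.
import numpy as np

rng = np.random.default_rng(9)
n_train, n_fcst, M = 120, 19, 8
truth = rng.normal(0.0, 1.0, n_train + n_fcst)
# Members: truth plus a member-specific bias plus noise.
fcsts = (truth + rng.normal(0.0, 0.5, (M, 1))
         + rng.normal(0.0, 0.7, (M, n_train + n_fcst)))

# Anomalies with respect to training-period climatologies.
anom = fcsts - fcsts[:, :n_train].mean(axis=1, keepdims=True)
obs_clim = truth[:n_train].mean()
w, *_ = np.linalg.lstsq(anom[:, :n_train].T,
                        truth[:n_train] - obs_clim, rcond=None)

super_fc = obs_clim + w @ anom[:, n_train:]       # superensemble forecast
ens_mean = fcsts[:, n_train:].mean(axis=0)        # equally weighted mean
rmse = lambda f: np.sqrt(((f - truth[n_train:]) ** 2).mean())
print("RMSE superensemble: %.3f  ensemble mean: %.3f"
      % (rmse(super_fc), rmse(ens_mean)))
```

Because the regression absorbs member biases during training, the superensemble typically outperforms the equally weighted mean when members are biased, which is why a well-behaved training phase matters so much.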
Pauci ex tanto numero: reduce redundancy in multi-model ensembles

NASA Astrophysics Data System (ADS)

Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.

2013-08-01

We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community, despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues deriving directly from a lack of independence, undermining the significance of a multi-model ensemble, and they are the subject of this study. Shared, dependent biases among models do not cancel out but will instead produce a biased ensemble. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We use several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble, with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (this depends on the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skill need to be considered disjointly.

High resolution statistical downscaling of the EUROSIP seasonal prediction. Application for southeastern Romania

NASA Astrophysics Data System (ADS)

Busuioc, Aristita; Dumitrescu, Alexandru; Dumitrache, Rodica; Iriza, Amalia

2017-04-01

Seasonal climate forecasts in Europe are currently issued at the European Centre for Medium-Range Weather Forecasts (ECMWF) in the form of multi-model ensemble predictions available within the EUROSIP system. Different statistical techniques to calibrate, downscale and combine the EUROSIP direct model output are used to optimize the quality of the final probabilistic forecasts. In this study, a statistical downscaling model (SDM) based on canonical correlation analysis (CCA) is used to downscale the EUROSIP seasonal forecast to a spatial resolution of 1 km x 1 km over the Movila farm in southeastern Romania. This application is carried out in the framework of the H2020 MOSES project (http://www.moses-project.eu). The combination of monthly standardized values of three climate variables (maximum/minimum temperature (Tmax/Tmin) and total precipitation (Prec)) is used as the predictand, while combinations of various large-scale predictors are tested in terms of their availability as outputs of the EUROSIP probabilistic forecasts (sea level pressure, temperature at 850 hPa and geopotential height at 500 hPa). The predictors are taken from the ECMWF system considering 15 members of the ensemble, for which hindcasts are available from 1991 to the present. The model was calibrated over the period 1991-2014, and predictions for the summers of 2015 and 2016 were produced. The calibration was made for the ensemble average as well as for each ensemble member. The model was developed for each lead time: one month in advance for June, two months for July and three months for August.
The main conclusions from these preliminary results are as follows: the best predictions (in terms of anomaly sign) were obtained for Tmax in both years (2015, 2016), at two months' anticipation (July) and three months' anticipation (August); for Tmin, good predictions were obtained only for August (three months' anticipation) in both years; for precipitation, good predictions were obtained for July (two months' anticipation) in 2015 and for August (three months' anticipation) in 2016; the predictions for June (one month's anticipation) failed for all parameters. To check whether the results for the 2015 and 2016 summers agree with the general performance of the ECMWF model in forecasting the three predictors used in the CCA SDM calibration, the mean bias and root mean square error (RMSE) were computed over the entire period in each grid point, for each ensemble member and for the ensemble average. These results confirm the pattern, showing the highest ECMWF performance in forecasting the three predictors at three months' anticipation (August) and the lowest at one month's anticipation (June). The added value of the CCA SDM in forecasting local Tmax/Tmin and total precipitation was compared to the ECMWF performance using the nearest-grid-point method. Comparisons were performed for the 1991-2014 period, taking into account the forecast made in May for July. An important improvement was found for the CCA SDM predictions in terms of the RMSE (computed against observations) for Tmax/Tmin, and a smaller one for precipitation. Tests are in progress for the other summer months (June, July).
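A CCA-based SDM calibration step can be sketched with scikit-learn's CCA: fit canonical pairs between large-scale predictors and local predictands over a calibration period, then predict the predictands for held-out months. All fields below are synthetic stand-ins for the EUROSIP predictors and the local Tmax/Tmin/precipitation predictands.

```python
# Sketch: CCA-based statistical downscaling with a shared latent signal
# linking large-scale predictors X to local predictands Y.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(10)
n_months, n_large, n_local = 72, 12, 3
latent = rng.normal(size=(n_months, 2))          # shared climate signal
X = latent @ rng.normal(size=(2, n_large)) \
    + 0.3 * rng.normal(size=(n_months, n_large))
Y = latent @ rng.normal(size=(2, n_local)) \
    + 0.3 * rng.normal(size=(n_months, n_local))

cca = CCA(n_components=2).fit(X[:-12], Y[:-12])  # calibration period
Y_hat = cca.predict(X[-12:])                     # downscale held-out months
print("anomaly-sign hit rate: %.2f"
      % (np.sign(Y_hat) == np.sign(Y[-12:])).mean())
```

Scoring the anomaly sign mirrors the verification used above, where prediction quality is judged by whether the sign of the local anomaly is captured at each lead time.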
Development and Validation of a Computational Model Ensemble for the Early Detection of BCRP/ABCG2 Substrates during the Drug Design Stage

PubMed

Gantner, Melisa E.; Peroni, Roxana N.; Morales, Juan F.; Villalba, María L.; Ruiz, María E.; Talevi, Alan

2017-08-28

Breast Cancer Resistance Protein (BCRP) is an ATP-dependent efflux transporter linked to the multidrug resistance phenomenon in many diseases, such as epilepsy and cancer, and a potential source of drug interactions. For these reasons, the early identification of substrates and nonsubstrates of this transporter during the drug discovery stage is of great interest. We have developed a computational nonlinear model ensemble based on conformation-independent molecular descriptors using a combined strategy of genetic algorithms, J48 decision-tree classifiers, and data fusion. The best model ensemble consists of averaging the rankings of the 12 decision trees that showed the best performance on the training set; it also demonstrated good performance on the test set. It was experimentally validated using the ex vivo everted rat intestinal sac model. Five anticonvulsant drugs classified as nonsubstrates of BCRP by the model ensemble were experimentally evaluated, and none proved to be a BCRP substrate under the experimental conditions used, thus confirming the predictive ability of the model ensemble. The model ensemble reported here is a potentially valuable tool to be used as an in silico ADME filter in computer-aided drug discovery campaigns intended to overcome BCRP-mediated multidrug resistance issues and to prevent drug-drug interactions.
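The rank-averaging data-fusion step of such a tree ensemble can be sketched as follows. Bootstrapped scikit-learn decision trees stand in for the J48 classifiers, and a synthetic classification set stands in for the molecular-descriptor data; only the average-the-rankings idea is taken from the record above.

```python
# Sketch: average the per-tree rankings of candidate compounds to get a
# consensus ranking, as in rank-averaging data fusion.
import numpy as np
from scipy.stats import rankdata
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rng = np.random.default_rng(11)

ranks = []
for seed in range(12):                        # 12 trees, as in the ensemble
    boot = rng.integers(0, len(X), len(X))    # bootstrap sample for diversity
    tree = DecisionTreeClassifier(max_depth=4, random_state=seed)
    tree.fit(X[boot], y[boot])
    # Rank all samples by each tree's predicted probability of class 1.
    ranks.append(rankdata(tree.predict_proba(X)[:, 1]))

consensus = np.mean(ranks, axis=0)            # averaged ranking
top = np.argsort(consensus)[-5:]
print("samples ranked most likely to be substrates:", top)
```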
From a structural average to the conformational ensemble of a DNA bulge

PubMed Central

Shi, Xuesong; Beauchamp, Kyle A.; Harbury, Pehr B.; Herschlag, Daniel

2014-01-01

Direct experimental measurements of conformational ensembles are critical for understanding macromolecular function, but traditional biophysical methods do not directly report the solution ensemble of a macromolecule. Small-angle X-ray scattering interferometry has the potential to overcome this limitation by providing the instantaneous distance distribution between pairs of gold-nanocrystal probes conjugated to a macromolecule in solution. Our X-ray interferometry experiments reveal an increasing bend angle of DNA duplexes with bulges of one, three, and five adenosine residues, consistent with previous FRET measurements, and further reveal an increasingly broad conformational ensemble with increasing bulge length. The distance distributions for the AAA-bulge duplex (3A-DNA) with six different Au-Au pairs provide strong evidence against a simple elastic model in which fluctuations occur about a single conformational state. Instead, the measured distance distributions suggest a 3A-DNA ensemble with multiple conformational states, predominantly across a region of conformational space with bend angles between 24 and 85 degrees and characteristic bend directions, helical twists and displacements. Additional X-ray interferometry experiments revealed perturbations to the ensemble from changes in ionic conditions and the bulge sequence, effects that can be understood in terms of electrostatic and stacking contributions to the ensemble and that demonstrate the sensitivity of X-ray interferometry. Combining X-ray interferometry ensemble data with molecular dynamics simulations gave atomic-level models of representative conformational states and of the molecular interactions that may shape the ensemble, and fluorescence measurements with 2-aminopurine-substituted 3A-DNA provided initial tests of these atomistic models. More generally, X-ray interferometry will provide powerful benchmarks for testing and developing computational methods.

PMID: 24706812

The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

NASA Technical Reports Server (NTRS)

Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

2008-01-01

We develop a remarkably tight upper bound on the performance of a parameterized family of bounded-angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble-average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.

A strictly Markovian expansion for plasma turbulence theory

NASA Technical Reports Server (NTRS)

Jones, F. C.

1978-01-01

The collision operator that appears in the equation of motion for a particle distribution function that has been averaged over an ensemble of random Hamiltonians is non-Markovian, in that it involves a propagated integral over the past history of the ensemble-averaged distribution function. All formal expansions of this nonlinear collision operator to date preserve this non-Markovian character term by term, yielding an integro-differential equation that must be converted to a diffusion equation by an additional approximation. In this note we derive an expansion of the collision operator that is strictly Markovian to any finite order and yields a diffusion equation as the lowest non-trivial order.
The validity of this expansion is seen to be the same as that of the standard quasi-linear expansion.

Self-consistent Maxwell-Bloch model of quantum-dot photonic-crystal-cavity lasers

NASA Astrophysics Data System (ADS)

Cartar, William; Mørk, Jesper; Hughes, Stephen

2017-08-01

We present a powerful computational approach to simulate the threshold behavior of photonic-crystal quantum-dot (QD) lasers. Using a finite-difference time-domain (FDTD) technique, Maxwell-Bloch equations representing a system of thousands of statistically independent and randomly positioned two-level emitters are solved numerically. Phenomenological pure dephasing and incoherent pumping are added to the optical Bloch equations to allow for a dynamical lasing regime, but the cavity-mediated radiative dynamics and gain coupling of each QD dipole (artificial atom) are contained self-consistently within the model. These Maxwell-Bloch equations are implemented by using Lumerical's flexible material plug-in tool, which allows a user to define additional equations of motion for the nonlinear polarization. We implement the gain ensemble within triangular-lattice photonic-crystal cavities of various length N (where N refers to the number of missing holes) and investigate the cavity mode characteristics and the threshold regime as a function of cavity length. We develop effective two-dimensional model simulations, derived after studying the full three-dimensional passive material structures by matching the cavity quality factors and resonance properties. We also demonstrate how to obtain the correct point-dipole radiative decay rate from Fermi's golden rule, which is captured naturally by the FDTD method. Our numerical simulations predict that the pump threshold plateaus around cavity lengths greater than N = 9, which we identify as a consequence of the complex spatial dynamics and gain coupling from the inhomogeneous QD ensemble.
  402. Boiling point determination using adiabatic Gibbs ensemble Monte Carlo simulations: Application to metals described by embedded-atom potentials

    NASA Astrophysics Data System (ADS)

    Gelb, Lev D.; Chakraborty, Somendra Nath

    2011-12-01

    The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton-Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique, as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. The results were validated using conventional NVT Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but they substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase.

  403. Conservative strategy-based ensemble surrogate model for optimal groundwater remediation design at DNAPLs-contaminated sites

    NASA Astrophysics Data System (ADS)

    Ouyang, Qi; Lu, Wenxi; Lin, Jin; Deng, Wenbing; Cheng, Weiguo

    2017-08-01

    Surrogate-based simulation-optimization techniques are frequently used for optimal groundwater remediation design. When this technique is used, surrogate errors caused by surrogate-modeling uncertainty may lead to the generation of infeasible designs. In this paper, a conservative strategy that pushes the optimal design into the feasible region was used to address surrogate-modeling uncertainty. In addition, chance-constrained programming (CCP) was adopted as a comparison to the conservative strategy in addressing this uncertainty. Three methods, multi-gene genetic programming (MGGP), Kriging (KRG), and support vector regression (SVR), were used to construct surrogate models for a time-consuming multi-phase flow model. To improve the performance of the surrogate model, ensemble surrogates were constructed from combinations of the different stand-alone surrogate models. The results show that (1) the surrogate-modeling uncertainty was successfully addressed by the conservative strategy, which means that this method is promising, and (2) the ensemble surrogate that combines MGGP with KRG showed the most favorable performance, indicating that such an ensemble can exploit both stand-alone surrogate models to improve overall performance.
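The ensemble-surrogate idea above (combine several cheap emulators of an expensive simulator) is easy to sketch. A minimal version, assuming a toy function in place of the multi-phase flow model and scikit-learn's KernelRidge/SVR as stand-ins for the paper's MGGP and Kriging surrogates, with weights taken inversely proportional to validation error:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Ensemble surrogate sketch: train two stand-alone surrogates of an
# "expensive" simulator (a toy function here) and combine their
# predictions with inverse-MSE weights estimated on a validation split.

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 2))
y = np.sin(6 * X[:, 0]) + X[:, 1] ** 2          # mock simulator output
Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)

models = [KernelRidge(kernel="rbf", alpha=1e-3, gamma=10.0),
          SVR(C=10.0, gamma=10.0)]
errors = []
for m in models:
    m.fit(Xtr, ytr)
    errors.append(mean_squared_error(yval, m.predict(Xval)))

w = 1 / np.array(errors); w /= w.sum()          # inverse-MSE ensemble weights
ens_pred = sum(wi * m.predict(Xval) for wi, m in zip(w, models))
print("stand-alone MSEs:", [f"{e:.4f}" for e in errors])
print("ensemble MSE:   ", f"{mean_squared_error(yval, ens_pred):.4f}")
```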
  404. Detection of chewing from piezoelectric film sensor signals using ensemble classifiers

    PubMed

    Farooq, Muhammad; Sazonov, Edward

    2016-08-01

    Selection and use of pattern recognition algorithms is application dependent. In this work, we explored the use of several ensembles of weak classifiers to classify signals captured from a wearable sensor system to detect food intake based on chewing. Three sensor signals (piezoelectric sensor, accelerometer, and hand-to-mouth gesture) were collected from 12 subjects in free-living conditions for 24 h. Sensor signals were divided into 10-second epochs, and for each epoch a combination of time- and frequency-domain features was computed. We present a comparison of three different ensemble techniques, boosting (AdaBoost), bootstrap aggregation (bagging), and stacking, each trained with three different weak classifiers (decision trees, linear discriminant analysis (LDA), and logistic regression). The type of feature normalization used can also impact classification results, so for each ensemble method three feature normalization techniques (no normalization, z-score normalization, and min-max normalization) were tested. A 12-fold cross-validation scheme was used to evaluate the performance of each model in terms of precision, recall, and accuracy. The best results achieved here show an improvement of about 4% over our previous algorithms.
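The comparison grid described in this record (three ensemble techniques crossed with feature-normalization schemes) maps directly onto standard scikit-learn components. A compact sketch on synthetic data standing in for the chewing epochs; the classifier and scaler settings are illustrative defaults, not the paper's configuration:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Boosting, bagging, and stacking built from weak learners, each evaluated
# with z-score and min-max feature normalization via a pipeline, so the
# scaler is fit inside each cross-validation fold.

X, y = make_classification(n_samples=600, n_features=20, random_state=0)

ensembles = {
    "AdaBoost": AdaBoostClassifier(),      # default base: decision stumps
    "Bagging":  BaggingClassifier(),       # default base: decision trees
    "Stacking": StackingClassifier(
        estimators=[("lda", LinearDiscriminantAnalysis()),
                    ("lr", LogisticRegression(max_iter=1000))]),
}
scalers = {"z-score": StandardScaler(), "min-max": MinMaxScaler()}

for ename, ens in ensembles.items():
    for sname, scaler in scalers.items():
        acc = cross_val_score(make_pipeline(scaler, ens), X, y, cv=5).mean()
        print(f"{ename:9s} + {sname:8s}: accuracy {acc:.3f}")
```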
  405. Medium-Range Forecast Skill for Extraordinary Arctic Cyclones in Summer of 2008-2016

    NASA Astrophysics Data System (ADS)

    Yamagami, Akio; Matsueda, Mio; Tanaka, Hiroshi L.

    2018-05-01

    Arctic cyclones (ACs) are a severe atmospheric phenomenon that affects the Arctic environment. This study assesses the forecast skill of five leading operational medium-range ensemble forecasts for 10 extraordinary ACs that occurred in summer during 2008-2016. The average existence probability of the predicted ACs was >0.9 at lead times of ≤3.5 days. The average central position error of the predicted ACs was less than half of the mean radius of the 10 ACs (469.1 km) at lead times of 2.5-4.5 days, and the average central pressure error was 5.5-10.7 hPa at such lead times. The operational ensemble prediction systems therefore generally predict the position of ACs to within 469.1 km 2.5-4.5 days before they mature. The forecast skill for these extraordinary ACs is lower than that for midlatitude cyclones in the Northern Hemisphere but similar to that in the Southern Hemisphere.

  406. Shear-stress fluctuations and relaxation in polymer glasses

    NASA Astrophysics Data System (ADS)

    Kriuchevskyi, I.; Wittmer, J. P.; Meyer, H.; Benzerara, O.; Baschnagel, J.

    2018-01-01

    We investigate by means of molecular dynamics simulation a coarse-grained polymer glass model, focusing on (quasistatic and dynamical) shear-stress fluctuations as a function of temperature T and sampling time Δt. The linear response is characterized using (ensemble-averaged) expectation values of the contributions (time averaged for each shear plane) to the stress-fluctuation relation μsf for the shear modulus and the shear-stress relaxation modulus G(t). Using 100 independent configurations, we pay attention to the respective standard deviations. While the ensemble-averaged modulus μsf(T) decreases continuously with increasing T for all Δt sampled, its standard deviation δμsf(T) is nonmonotonic, with a striking peak at the glass transition. The question of whether the shear modulus is continuous or has a jump singularity at the glass transition is thus ill posed. Confirming the effective time-translational invariance of our systems, the Δt dependence of μsf and related quantities can be understood using a weighted integral over G(t).
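The stress-fluctuation estimator referred to above has the generic form μsf = μA − βV(⟨σ²⟩ − ⟨σ⟩²), with μA the affine (Born) contribution, β the inverse temperature, V the volume, and σ the instantaneous shear stress. A minimal numerical sketch of the two-level averaging (time average per configuration, then ensemble statistics over configurations), with mock stress series and an assumed value of μA:

```python
import numpy as np

# Stress-fluctuation estimator of the shear modulus,
#   mu_sf = mu_A - beta * V * (<sigma^2> - <sigma>^2),
# evaluated per independent configuration (time average) and then
# summarized over the ensemble, as done per shear plane in the paper.
# mu_A, beta, V, and the stress series are mock inputs.

rng = np.random.default_rng(1)
beta, V, mu_A = 1.0, 1000.0, 20.0        # 1/kT, volume, Born term (assumed)

n_config, n_samples = 100, 5000
sigma = 0.05 + rng.normal(0.0, 0.1, (n_config, n_samples))  # mock stresses

mu_per_config = mu_A - beta * V * sigma.var(axis=1) / V     # per configuration
print(f"ensemble mean   mu_sf       = {mu_per_config.mean():.3f}")
print(f"std deviation   delta mu_sf = {mu_per_config.std():.3f}")
```

It is exactly the standard deviation computed in the last line whose peak at the glass transition is the central observation of the record.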
  407. ENSURF: multi-model sea level forecast - implementation and validation results for the IBIROOS and Western Mediterranean regions

    NASA Astrophysics Data System (ADS)

    Pérez, B.; Brouwer, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hackett, B.; Verlaan, M.; Alvarez Fanjul, E.

    2011-04-01

    ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that makes use of existing storm surge or circulation models operational in Europe today, as well as near-real-time tide gauge data in the region, with the following main goals: (1) providing easy access to existing forecasts, as well as to their performance and model validation, by means of an adequate visualization tool; and (2) generation of better sea level forecasts, including confidence intervals, by means of the Bayesian Model Averaging (BMA) technique. The system was developed and implemented within the ECOOP European project (contract no. 036355) for the NOOS and IBIROOS regions, based on the MATROOS visualization tool developed by Deltares; both systems are today operational at Deltares and Puertos del Estado, respectively. The BMA technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecast PDFs; the weights represent the probability that a model will give the correct forecast PDF and are determined and updated operationally based on the performance of the models during a recent training period. This implies that the technique needs sea level data from tide gauges in near-real time. Results of the validation of the different models and of the BMA implementation for the main harbours are presented for the IBIROOS and Western Mediterranean regions, where this kind of activity is performed for the first time. The work has proved useful to detect problems in some of the circulation models not previously well calibrated with sea level data, to identify the differences between baroclinic and barotropic models for sea level applications, and to confirm the general improvement of the BMA forecasts.
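The BMA combination step used by ENSURF is a weighted mixture of per-model predictive PDFs. A minimal sketch with Gaussian component PDFs; the weights, bias corrections, and spreads below are assumed numbers (in the operational system they are fitted, typically by EM, over a recent training period of tide-gauge data):

```python
import numpy as np
from scipy.stats import norm

# BMA predictive PDF: p(y) = sum_k w_k * N(y; f_k + a_k, s_k^2), i.e. a
# weighted average of the individual (bias-corrected) forecast PDFs.

forecasts = np.array([0.42, 0.35, 0.50])   # sea level forecasts (m), 3 models
weights   = np.array([0.5, 0.2, 0.3])      # BMA weights, sum to 1 (assumed)
bias      = np.array([-0.02, 0.01, 0.00])  # per-model bias corrections
sigma     = np.array([0.05, 0.08, 0.06])   # per-model predictive spreads

y = np.linspace(0.1, 0.8, 701)
pdf = sum(w * norm.pdf(y, f + b, s)
          for w, f, b, s in zip(weights, forecasts, bias, sigma))

mean = np.trapz(y * pdf, y)                     # mixture mean
cdf = np.cumsum(pdf) * (y[1] - y[0])            # numerical CDF
lo, hi = y[np.searchsorted(cdf, 0.05)], y[np.searchsorted(cdf, 0.95)]
print(f"BMA mean {mean:.3f} m, 90% interval [{lo:.3f}, {hi:.3f}] m")
```

The confidence intervals quoted in the record are exactly such quantiles of the mixture PDF.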
  408. ENSURF: multi-model sea level forecast - implementation and validation results for the IBIROOS and Western Mediterranean regions

    NASA Astrophysics Data System (ADS)

    Pérez, B.; Brouwer, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hackett, B.; Verlaan, M.; Fanjul, E. A.

    2012-03-01

    ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that makes use of several storm surge or circulation models and near-real-time tide gauge data in the region, with the following main goals: (1) providing easy access to existing forecasts, as well as to their performance and model validation, by means of an adequate visualization tool; and (2) generation of better sea level forecasts, including confidence intervals, by means of the Bayesian Model Averaging (BMA) technique. The BMA technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecast PDFs; the weights represent the Bayesian likelihood that a model will give the correct forecast and are continuously updated based on the performance of the models during a recent training period. This implies that the technique needs sea level data from tide gauges in near-real time. The system was implemented for the European Atlantic facade (IBIROOS region) and the Western Mediterranean coast, based on the MATROOS visualization tool developed by Deltares. Results of the validation of the different models and of the BMA implementation for the main harbours are presented for these regions, where this kind of activity is performed for the first time. The system is currently operational at Puertos del Estado and has proved useful in the detection of calibration problems in some of the circulation models, in the identification of systematic differences between baroclinic and barotropic models for sea level forecasts, and in demonstrating the feasibility of providing an overall probabilistic forecast based on the BMA method.

  409. Negative Correlation Learning for Customer Churn Prediction: A Comparison Study

    PubMed Central

    Faris, Hossam

    2015-01-01

    Recently, telecommunication companies have been paying more attention to the problem of identifying customer churn behavior. In business, it is well known to service providers that attracting new customers is much more expensive than retaining existing ones. Therefore, adopting accurate models that are able to predict customer churn can effectively help in customer retention campaigns and in maximizing profit. In this paper we utilize an ensemble of multilayer perceptrons (MLPs) trained using negative correlation learning (NCL) for predicting customer churn in a telecommunication company. Experimental results confirm that the NCL-based MLP ensemble can achieve better generalization performance (higher churn-rate detection) compared with an ensemble of MLPs without NCL (a flat ensemble) and with other common data mining techniques used for churn analysis. PMID:25879060

  410. Analytical Applications of Monte Carlo Techniques

    ERIC Educational Resources Information Center

    Guell, Oscar A.; Holcombe, James A.

    1990-01-01

    Described are analytical applications of the theory of random processes, in particular solutions obtained by using the statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensembles, annealing, and explicit simulation are discussed.
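Monte Carlo integration, the most elementary of the techniques listed in the preceding record, fits in a few lines and shows the characteristic 1/√N convergence of an ensemble average of random samples. A self-contained sketch with an arbitrarily chosen integrand:

```python
import numpy as np

# Monte Carlo integration: estimate I = integral of exp(-x^2) on [0, 1]
# by averaging the integrand at uniformly random points. The statistical
# error shrinks like 1/sqrt(N), independent of dimension.

rng = np.random.default_rng(42)
for n in (100, 10_000, 1_000_000):
    x = rng.uniform(0.0, 1.0, n)
    estimate = np.exp(-x**2).mean()
    print(f"N = {n:>9,d}: I ~ {estimate:.5f}")
# exact value: sqrt(pi)/2 * erf(1) = 0.74682...
```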
  411. Structural Changes in Isometrically Contracting Insect Flight Muscle Trapped following a Mechanical Perturbation

    PubMed Central

    Wu, Shenping; Liu, Jun; Perz-Edwards, Robert J.; Tregear, Richard T.; Winkler, Hanspeter; Franzini-Armstrong, Clara; Sasaki, Hiroyuki; Goldman, Yale E.; Reedy, Michael K.; Taylor, Kenneth A.

    2012-01-01

    The application of rapidly applied length steps to actively contracting muscle is a classic method for synchronizing the response of myosin cross-bridges so that the average response of the ensemble can be measured. Alternatively, electron tomography (ET) is a technique that can report the structure of the individual members of the ensemble. We probed the structure of active myosin motors (cross-bridges) by applying 0.5% changes in length (either a stretch or a release) within 2 ms to isometrically contracting insect flight muscle (IFM) fibers, followed after 5-6 ms by rapid freezing against a liquid-helium-cooled copper mirror. ET of freeze-substituted fibers, embedded and thin-sectioned, provides 3-D cross-bridge images, sorted by multivariate data analysis into ∼40 classes distinct in average structure, population size, and lattice distribution. Individual actin subunits are resolved, facilitating quasi-atomic modeling of each class average to determine its binding strength (weak or strong) to actin. ∼98% of strong-binding acto-myosin attachments present after a length perturbation are confined to "target zones" of only two actin subunits located exactly midway between successive troponin complexes along each long-pitch helical repeat of actin. Significant changes in the types, distribution, and structure of actin-myosin attachments occurred in a manner consistent with the mechanical transients. Most dramatic is the near disappearance, after either length perturbation, of a class of weak-binding cross-bridges attached within the target zone that are highly likely to be precursors of strong-binding cross-bridges. These weak-binding cross-bridges were originally observed in isometrically contracting IFM. Their disappearance following a quick stretch or release can be explained by a recent kinetic model for muscle contraction, as behaviour consistent with their identification as precursors of strong-binding cross-bridges. The results provide a detailed model for contraction in IFM that may be applicable to contraction in other types of muscle. PMID:22761792
  412. The mean and turbulent flow structure of a weak hydraulic jump

    NASA Astrophysics Data System (ADS)

    Misra, S. K.; Kirby, J. T.; Brocchini, M.; Veron, F.; Thomas, M.; Kambhamettu, C.

    2008-03-01

    The turbulent air-water interface and flow structure of a weak, turbulent hydraulic jump are analyzed in detail using particle image velocimetry measurements. The study is motivated by the need to understand the detailed dynamics of turbulence generated in steady spilling breakers and the relative importance of the reverse-flow and breaker shear-layer regions, with attention to their topology, mean flow, and turbulence structure. The intermittency factor derived from turbulent fluctuations of the air-water interface in the breaker region is found to fit theoretical distributions of turbulent interfaces well. A conditional averaging technique is used to calculate ensemble-averaged properties of the flow, and the computed mean velocity field accurately satisfies mass conservation. A thin, curved shear layer oriented parallel to the surface is responsible for most of the turbulence production, with the turbulence intensity decaying rapidly away from the toe of the breaker (the location of largest surface curvature) with both increasing depth and downstream distance. The reverse-flow region, localized about the ensemble-averaged free surface, is characterized by a weak downslope mean flow and entrainment of water from below. The Reynolds shear stress is negative in the breaker shear layer, which shows that momentum diffuses upward into the shear layer from the flow underneath, and it is positive just below the mean surface, indicating a downward flux of momentum from the reverse-flow region into the shear layer. The turbulence structure of the breaker shear layer resembles that of a mixing layer originating from the toe of the breaker, and the streamwise variations of the length scale and growth rate are found to be in good agreement with observed values in typical mixing layers. All evidence suggests that breaking is driven by a surface-parallel adverse pressure gradient and a streamwise flow deceleration at the toe of the breaker. Both effects force the shear layer to thicken rapidly, thereby inducing a sharp change in free-surface curvature at the toe.
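The conditional (ensemble) averaging used in this record can be illustrated generically: at each point, average a quantity only over the realizations in which a condition holds (here, the point lying below the instantaneous air-water interface), with the fraction of realizations satisfying the condition giving the intermittency factor. A sketch on mock snapshots, with all profiles and elevations being assumed stand-ins for PIV data:

```python
import numpy as np

# Conditional ensemble averaging: average u over the "wet" realizations
# at each elevation; the wet fraction is the intermittency factor gamma.

rng = np.random.default_rng(3)
n_snap, nz = 500, 40
z = np.linspace(0.0, 0.9, nz)                  # vertical coordinate (mock)
eta = 0.7 + 0.1 * rng.standard_normal(n_snap)  # fluctuating surface elevation
u = rng.normal(1.0, 0.2, (n_snap, nz))         # mock velocity snapshots

wet = z[None, :] < eta[:, None]                # indicator: point below surface
gamma = wet.mean(axis=0)                       # intermittency factor vs z
u_avg = np.nanmean(np.where(wet, u, np.nan), axis=0)  # conditional average

for k in (20, 30, 39):
    print(f"z = {z[k]:.2f}: gamma = {gamma[k]:.2f}, <u|wet> = {u_avg[k]:.3f}")
```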
  413. Computer simulation of surface and film processes

    NASA Technical Reports Server (NTRS)

    Tiller, W. A.; Halicioglu, M. T.

    1984-01-01

    All of the investigations performed employed, in one way or another, computer simulation techniques based on atomistic-level considerations. In general, three types of simulation methods were used for modeling systems of discrete particles that interact via well-defined potential functions: molecular dynamics (a general method for solving the classical equations of motion of a model system); Monte Carlo (the use of a Markov-chain ensemble-averaging technique to model equilibrium properties of a system); and molecular statics (which provides properties of a system at T = 0 K). The effects of three-body forces on the vibrational frequencies of triatomic clusters were investigated. The multilayer relaxation phenomena for low-index planes of an fcc crystal were also analyzed as a function of the three-body interactions. Various surface properties of the Si and SiC systems were calculated. Results obtained from static simulation calculations of slip formation were presented, and the more elaborate molecular dynamics calculations on the propagation of cracks in two-dimensional systems were outlined.
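Of the three method families listed above, molecular dynamics is the easiest to sketch. A minimal velocity-Verlet integrator for a 1-D harmonic oscillator, with the time-averaged energy standing in for an ensemble average (ergodicity assumed); real applications replace the force with a many-body potential, e.g. one including three-body terms:

```python
import numpy as np

# Velocity-Verlet molecular dynamics for a 1-D harmonic oscillator.
# The symplectic update conserves energy well over long runs, which is
# why it is the workhorse integrator for MD.

k, m = 1.0, 1.0                     # spring constant, mass (arb. units)
force = lambda x: -k * x

x, v, dt = 1.0, 0.0, 0.01
energies = []
for _ in range(10_000):
    a = force(x) / m
    x += v * dt + 0.5 * a * dt**2   # position update
    a_new = force(x) / m
    v += 0.5 * (a + a_new) * dt     # velocity update with averaged force
    energies.append(0.5 * m * v**2 + 0.5 * k * x**2)

print(f"mean energy {np.mean(energies):.6f} (exact 0.5), "
      f"drift {energies[-1] - energies[0]:.2e}")
```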
  414. A Wind Forecasting System for Energy Application

    NASA Astrophysics Data System (ADS)

    Courtney, Jennifer; Lynch, Peter; Sweeney, Conor

    2010-05-01

    Accurate forecasting of available energy is crucial for the efficient management and use of wind power in the national power grid. With energy output critically dependent upon wind strength, there is a need to reduce the errors associated with wind forecasting. The objective of this research is to obtain the best possible wind forecasts for the wind energy industry. To achieve this goal, three methods are being applied. First, a mesoscale numerical weather prediction (NWP) model called WRF (Weather Research and Forecasting) is being used to predict wind values over Ireland. Currently, a grid resolution of 10 km is used, and higher model resolutions are being evaluated to establish whether they are economically viable given the forecast skill improvement they produce. Second, the WRF model is being used in conjunction with ECMWF (European Centre for Medium-Range Weather Forecasts) ensemble forecasts to produce a probabilistic weather forecasting product. Due to the chaotic nature of the atmosphere, a single deterministic weather forecast can have only limited skill. The ECMWF ensemble methods produce an ensemble of 51 global forecasts, twice a day, by perturbing the initial conditions of a 'control' forecast, which is the best estimate of the initial state of the atmosphere. This method provides an indication of the reliability of the forecast and a quantitative basis for probabilistic forecasting. The limitation of ensemble forecasting lies in the fact that the perturbed model runs behave differently under different weather patterns and each model run is equally likely to be closest to the observed weather situation. Models have biases, and they involve assumptions about physical processes and forcing factors such as the underlying topography. Third, Bayesian Model Averaging (BMA) is being applied to the output of the ensemble forecasts in order to statistically post-process the results and achieve a better wind forecasting system. BMA is a promising technique that offers calibrated probabilistic wind forecasts, which will be invaluable in wind energy management. In brief, this method turns the ensemble forecasts into a calibrated predictive probability distribution, with each ensemble member given a weight determined by its relative predictive skill over a training period of around 30 days. Forecasts are verified against observed wind data from operational wind farms and compared, in terms of skill scores, with existing forecasts produced by ECMWF and Met Eireann. We are developing decision-making models to show the benefits achieved using the data produced by our wind energy forecasting system. An energy trading model will be developed, based on the rules currently used by the Single Electricity Market Operator for energy trading in Ireland; this trading model will illustrate the potential for financial savings by using the forecast data generated by this research.

  415. On the assimilation of absolute geodetic dynamic topography in a global ocean model: impact on the deep ocean state

    NASA Astrophysics Data System (ADS)

    Androsov, Alexey; Nerger, Lars; Schnur, Reiner; Schröter, Jens; Albertella, Alberta; Rummel, Reiner; Savcenko, Roman; Bosch, Wolfgang; Skachko, Sergey; Danilov, Sergey

    2018-05-01

    General ocean circulation models are not perfect. Forced with observed atmospheric fluxes, they gradually drift away from measured distributions of temperature and salinity. We suggest data assimilation of absolute dynamic ocean topography (DOT) observed from space geodetic missions as an option to reduce these differences. Sea surface information from DOT is transferred into the deep ocean by defining the analysed ocean state as a weighted average of an ensemble of fully consistent model solutions, using an error-subspace ensemble Kalman filter technique. The success of the technique is demonstrated by assimilation into a global configuration of the ocean circulation model FESOM over 1 year. The dynamic ocean topography data are obtained from a combination of multi-satellite altimetry and geoid measurements. The assimilation result is assessed using independent temperature and salinity analyses derived from profiling buoys of the Argo float data set. The largest impact of the assimilation occurs at the first few analysis steps, where both the model ocean topography and the steric height (i.e. temperature and salinity) are improved. The continued data assimilation over 1 year further improves the model state gradually, and deep ocean fields quickly adjust in a sustained manner: a model forecast initialized from the model state estimated by the data assimilation after only 1 month shows that the improvements induced by the assimilation remain in the model state for a long time. Even after 11 months, the modelled ocean topography and temperature fields show smaller errors than the model forecast without any data assimilation.
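The mechanism by which an ensemble Kalman filter pushes surface information into unobserved state components is the cross-covariance of the forecast ensemble. A minimal sketch of one stochastic EnKF analysis step; the state size, member count, observation, and correlations are all mock values, not FESOM/DOT quantities:

```python
import numpy as np

# Stochastic EnKF analysis step: the gain K is built from ensemble
# covariances, so observing only component 0 also corrects component 1,
# which was made correlated with it.

rng = np.random.default_rng(7)
n_state, n_ens, r = 5, 40, 0.05            # state size, members, obs variance

X = rng.standard_normal((n_state, n_ens))  # mock forecast ensemble
X[1] += 0.8 * X[0]                         # correlate state 1 with state 0
H = np.zeros((1, n_state)); H[0, 0] = 1.0  # observe component 0 only
y = np.array([1.2])                        # the observation

A = X - X.mean(axis=1, keepdims=True)      # ensemble anomalies
P_HT = A @ (H @ A).T / (n_ens - 1)         # cross-covariance P H^T
S = H @ P_HT + r                           # innovation covariance
K = P_HT / S                               # Kalman gain (scalar observation)

y_pert = y + np.sqrt(r) * rng.standard_normal(n_ens)   # perturbed obs
Xa = X + K @ (y_pert[None, :] - H @ X)     # analysis ensemble
print("forecast mean:", np.round(X.mean(axis=1), 3))
print("analysis mean:", np.round(Xa.mean(axis=1), 3))
```

Note how the analysis mean moves for both component 0 (observed) and component 1 (correlated), while the uncorrelated components barely change; this is the "transfer into the deep ocean" of the record, in miniature.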
  416. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 2: numerical application

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with the uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.

  417. An ensemble of dynamic neural network identifiers for fault detection and isolation of gas turbine engines

    PubMed

    Amozegar, M; Khorasani, K

    2016-04-01

    In this paper, a new approach for fault detection and isolation (FDI) of gas turbine engines is proposed by developing an ensemble of dynamic neural network identifiers. For health monitoring of the gas turbine engine, its dynamics is first identified by constructing three separate dynamic neural network architectures: a dynamic multilayer perceptron (MLP), a dynamic radial-basis function (RBF) neural network, and a dynamic support vector machine (SVM) are trained to individually identify and represent the gas turbine engine dynamics. Next, three ensemble-based techniques are developed to represent the gas turbine engine dynamics, namely, two heterogeneous ensemble models and one homogeneous ensemble model. It is first shown that all ensemble approaches significantly improve the overall performance and accuracy of the developed system identification scheme when compared to each of the stand-alone solutions. The best stand-alone model (the dynamic RBF network) and the best ensemble architecture (the heterogeneous ensemble), in terms of their performance in achieving an accurate system identification, are then selected for solving the FDI task. The required residual signals are generated by using both a single-model-based solution and an ensemble-based solution under various gas turbine engine health conditions. Our extensive simulation studies demonstrate that the fault detection and isolation task achieved by using the residuals obtained from the dynamic ensemble scheme results in a significantly more accurate and reliable performance, as illustrated through detailed quantitative confusion matrix analysis and comparative studies.
  418. Benefits of an ultra large and multiresolution ensemble for estimating available wind power

    NASA Astrophysics Data System (ADS)

    Berndt, Jonas; Hoppe, Charlotte; Elbern, Hendrik

    2016-04-01

    In this study we investigate the benefits of an ultra-large ensemble with up to 1000 members, including multiple nesting with a target horizontal resolution of 1 km. The ensemble shall be used as a basis to detect events of extreme errors in wind power forecasting. The forecast value is the wind vector at wind turbine hub height (~100 m) in the short range (1 to 24 hours). Current wind power forecast systems already rest on NWP ensemble models; however, only calibrated ensembles from meteorological institutions have served as input so far, with limited spatial resolution (~10-80 km) and member number (~50), and perturbations related to the specific merits of wind power production are still missing. Thus, single extreme error events that are not detected by such ensemble power forecasts occur infrequently. The numerical forecast model used in this study is the Weather Research and Forecasting (WRF) model. Model uncertainties are represented by stochastic parametrization of sub-grid processes, via stochastically perturbed parametrization tendencies in conjunction with the complementary stochastic kinetic-energy backscatter scheme, both already provided by WRF. We perform continuous ensemble updates by comparing each ensemble member with available observations, using a sequential importance resampling filter to improve the model accuracy while maintaining ensemble spread. Additionally, we use different ensemble systems from global models (ECMWF and GFS) as input and boundary conditions to capture different synoptic conditions. Critical weather situations that are connected to extreme error events are located, and corresponding perturbation techniques are applied. The demanding computational effort is overcome by utilising the supercomputer JUQUEEN at Forschungszentrum Juelich.
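The sequential importance resampling step named in this record weights each ensemble member by the likelihood of the observations and then resamples, concentrating members near the data while retaining spread. A minimal sketch with a mock wind ensemble and an assumed Gaussian observation error:

```python
import numpy as np

# Sequential importance resampling: weight members by the observation
# likelihood, then draw a new equal-weight ensemble (systematic resampling).

rng = np.random.default_rng(11)
n_ens, obs, obs_err = 1000, 8.0, 1.0       # members, observed wind (m/s), error

members = rng.normal(10.0, 2.0, n_ens)     # mock forecast ensemble

w = np.exp(-0.5 * ((members - obs) / obs_err) ** 2)  # Gaussian likelihood
w /= w.sum()                                          # importance weights

positions = (np.arange(n_ens) + rng.uniform()) / n_ens
idx = np.searchsorted(np.cumsum(w), positions)        # systematic resampling
resampled = members[idx]

print(f"prior     {members.mean():.2f} +/- {members.std():.2f} m/s")
print(f"posterior {resampled.mean():.2f} +/- {resampled.std():.2f} m/s")
```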
  419. Selecting a climate model subset to optimise key ensemble properties

    NASA Astrophysics Data System (ADS)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method to find a subset that maintains certain key properties of the full ensemble is needed, but very little work has been done in this area. Users therefore typically make their own, somewhat subjective, subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates, owing to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean, while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria, but we also highlight the flexibility of the cost function, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
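The subset-selection idea above can be made concrete with a deliberately naive brute-force search: score every subset of a fixed size by a cost that trades off mean bias against a change in spread, and keep the best. The cost function, mock model values, and exhaustive search below are illustrative assumptions (the paper's tool uses a more sophisticated optimisation and model-as-truth validation):

```python
import numpy as np
from itertools import combinations

# Brute-force ensemble subset selection: pick the size-k subset whose
# mean best matches the observation while preserving the full spread.

rng = np.random.default_rng(5)
n_models, k = 12, 5
models = rng.normal(1.0, 0.6, n_models)      # mock climatological means
obs = 0.8                                    # mock observed value
full_spread = models.std()

def cost(subset):
    s = models[list(subset)]
    bias = abs(s.mean() - obs)               # mean-performance term
    spread_pen = abs(s.std() - full_spread)  # spread-preservation term
    return bias + spread_pen

best = min(combinations(range(n_models), k), key=cost)
s = models[list(best)]
print("selected members:", best)
print(f"subset mean {s.mean():.3f} vs obs {obs}; "
      f"subset spread {s.std():.3f} vs full {full_spread:.3f}")
```

For 12 models and k = 5 there are only 792 subsets, so exhaustive search is trivial; real ensembles need the kind of optimisation machinery the record describes.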
  420. Interactive vs. Non-Interactive Ensembles for Weather Prediction and Climate Projection

    NASA Astrophysics Data System (ADS)

    Duane, Gregory

    2013-04-01

    If the members of an ensemble of different models are allowed to interact with one another at run time, predictive skill can be improved compared with that of any individual model or any average of individual model outputs. Inter-model connections in such an interactive ensemble can be trained, using historical data, so that the resulting "supermodel" synchronizes with reality when used in weather-prediction mode, where the individual models perform data assimilation from each other (with trainable inter-model "observation error") as well as from real observations. In climate-projection mode, parameters of the individual models are changed, as might occur from an increase in GHG levels, and one obtains the relevant statistical properties of the new supermodel attractor. In simple cases, it has been shown that training the inter-model connections with the old parameter values gives a supermodel that is still predictive when the parameter values are changed. Here we inquire as to the circumstances under which supermodel performance can be expected to exceed that of the customary weighted average of model outputs. We consider a supermodel formed from quasigeostrophic channel models with different forcing coefficients and introduce an effective training scheme for the inter-model connections. We show that the blocked-zonal index cycle is reproduced better by the supermodel than by any non-interactive ensemble in the extreme case where the forcing coefficients of the different models are very large or very small. With realistic differences in forcing coefficients, as would be representative of actual differences among IPCC-class models, the usual linearity assumption is justified and a weighted average of model outputs is adequate. It is therefore hypothesized that supermodeling is likely to be useful in situations where there are qualitative model differences, arising for example from sub-gridscale parameterizations, that affect overall model behavior; otherwise, the usual ex post facto averaging will probably suffice. Previous results from an ENSO-prediction supermodel [Kirtman et al.] are re-examined in light of this hypothesis about the importance of qualitative inter-model differences.
  421. Dynamic clustering threshold reduces conformer ensemble size while maintaining a biologically relevant ensemble

    PubMed Central

    Yongye, Austin B.; Bender, Andreas

    2010-01-01

    Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds. Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value; this algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers: OMEGA, NMRCLUST, RMS filtering, and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes in the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining the biological relevance of the ensemble. It was observed that NMRCLUST (with on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying bioactive conformations with excellent and good matches (0.5 < RMSD < 1.0 Å). Furthermore, we propose thresholds for OMEGA root-mean-square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 Å for structures with low (1-4), medium (5-9) and high (10-15) numbers of rotatable bonds, respectively. The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and to alleviate the complexity of downstream data processing in virtual screening experiments. PMID:20499135
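A generic RMSD-filtering pass of the kind compared in this record can be sketched with a greedy rule: keep a conformer only if it differs from every already-kept conformer by more than a cutoff. The mock conformers below are assumed to be pre-aligned coordinate sets (a real pipeline would first superpose structures, e.g. with the Kabsch algorithm, before computing RMSD):

```python
import numpy as np

# Greedy RMSD filtering of a conformer ensemble. A random-walk over
# conformer index makes neighbouring mock conformers similar, so larger
# cutoffs keep fewer, more diverse representatives.

rng = np.random.default_rng(9)
n_conf, n_atoms = 200, 25
confs = rng.normal(0.0, 1.0, (n_conf, n_atoms, 3)).cumsum(axis=0) * 0.1

def rmsd(a, b):
    return np.sqrt(((a - b) ** 2).sum(axis=1).mean())

def greedy_filter(conformers, cutoff):
    kept = [conformers[0]]
    for c in conformers[1:]:
        if all(rmsd(c, k) > cutoff for k in kept):
            kept.append(c)
    return kept

# the rotor-count-dependent thresholds proposed in the record
for cutoff in (0.8, 1.0, 1.4):
    print(f"cutoff {cutoff} A: {len(greedy_filter(confs, cutoff))} "
          f"of {n_conf} conformers kept")
```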
  422. A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers

    PubMed Central

    2012-01-01

    Background: Biomarker panels derived separately from genomic and proteomic data, and with a variety of computational methods, have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of the individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? Results: The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single-time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. The performance of each ensemble is characterized by the area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for the individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all five ensembles, but typically at the cost of decreased specificity. Conclusion: Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway. PMID:23216969
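The two aggregation rules named in this record are simple to state in code. A sketch for one sample, with mock per-classifier rejection probabilities and an assumed common cutoff of 0.5 (the study's actual thresholds were tuned per classifier):

```python
import numpy as np

# Average Probability vs Vote Threshold aggregation of classifier scores.

probs = np.array([0.62, 0.55, 0.48, 0.71, 0.35])  # 5 classifiers, one sample
thresholds = np.full_like(probs, 0.5)             # assumed per-classifier cutoffs

# (1) Average Probability: average the scores, then threshold once
avg_prob = probs.mean()
avg_call = avg_prob >= 0.5

# (2) Vote Threshold: threshold each classifier, then take the majority
votes = int((probs >= thresholds).sum())
vote_call = votes > len(probs) / 2

print(f"Average Probability: {avg_prob:.2f} -> "
      f"{'reject' if avg_call else 'non-reject'}")
print(f"Vote Threshold: {votes}/{len(probs)} votes -> "
      f"{'reject' if vote_call else 'non-reject'}")
```

Vote Threshold lets a few confident classifiers carry a decision even when the averaged probability stays below the cutoff, which is consistent with the sensitivity/specificity trade-off reported above.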
  423. A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers

    PubMed

    Günther, Oliver P; Chen, Virginia; Freue, Gabriela Cohen; Balshaw, Robert F; Tebbutt, Scott J; Hollander, Zsuzsanna; Takhar, Mandeep; McMaster, W Robert; McManus, Bruce M; Keown, Paul A; Ng, Raymond T

    2012-12-08

    Biomarker panels derived separately from genomic and proteomic data, and with a variety of computational methods, have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of the individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single-time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. The performance of each ensemble is characterized by the area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for the individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all five ensembles, but typically at the cost of decreased specificity. Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway.
  424. Total probabilities of ensemble runoff forecasts

    NASA Astrophysics Data System (ADS)

    Skøien, Jon Olav; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2017-04-01

    Ensemble forecasting has a long history in meteorological modelling, as an indication of the uncertainty of the forecasts. However, it is necessary to calibrate and post-process the ensembles, as they often exhibit both bias and dispersion errors. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters varying in space and time, while giving a spatially and temporally consistent output; however, their method is computationally complex for our larger number of stations, which makes it unsuitable for our purpose. Our post-processing method for the ensembles is developed in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu), where we are making forecasts for the whole of Europe, based on observations from around 700 catchments. As the target is flood forecasting, we are more interested in improving the forecast skill for high flows than in a good prediction of the entire flow regime. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different meteorological forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we now post-process all model outputs to estimate the total probability, the post-processed mean, and the uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, but we add a spatial penalty in the calibration process to force a spatial correlation of the parameters. The penalty takes distance, stream connectivity, and the size of the catchment areas into account. This can in some cases have a slight negative impact on the calibration error, but it avoids large differences between the parameters of nearby locations, whether stream-connected or not. The spatial calibration also makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecast skill in general and for floods in particular.

    References: Berrocal, V. J., Raftery, A. E., and Gneiting, T.: Combining spatial statistical and ensemble information in probabilistic weather forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H., and Goldman, T.: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F., and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F., and Polakowski, M.: Using Bayesian model averaging to calibrate forecast ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.

  425. Ensemble Pulsar Time Scale

    NASA Astrophysics Data System (ADS)

    Yin, Dong-shan; Gao, Yu-ping; Zhao, Shu-hong

    2017-07-01

    Millisecond pulsars can generate another type of time scale that is totally independent of the atomic time scale, because the physical mechanisms of the pulsar time scale and the atomic time scale are quite different from each other. Usually pulsar timing observations are not evenly sampled, and the intervals between two data points range from several hours to more than half a month; furthermore, these data sets are sparse. All of this makes it difficult to generate an ensemble pulsar time scale, and hence a new algorithm to calculate the ensemble pulsar time scale is proposed. Firstly, a cubic spline interpolation is used to densify the data set and make the intervals between data points uniform. Then, the Vondrak filter is employed to smooth the data set and remove the high-frequency noise, and finally the weighted average method is adopted to generate the ensemble pulsar time scale. The newly released NANOGrav (North American Nanohertz Observatory for Gravitational Waves) 9-year data set, which includes 9 years of observational data for 37 millisecond pulsars observed by the 100-meter Green Bank telescope and the 305-meter Arecibo telescope, is used to generate the ensemble pulsar time scale. It is found that the algorithm used in this paper can effectively reduce the influence of the noise in the pulsar timing residuals and improve the long-term stability of the ensemble pulsar time scale. Results indicate that the long-term (>1 yr) stability of the ensemble pulsar time scale is better than 3.4 × 10⁻¹⁵.
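The three-stage pipeline of the preceding record (spline densification, smoothing, weighted averaging) can be sketched end to end on mock timing residuals. A simple moving average stands in for the Vondrak filter below, and the inverse-variance weights are an assumed weighting choice; everything numeric is illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Ensemble pulsar time scale sketch: (1) cubic-spline each pulsar's
# unevenly sampled residuals onto a common uniform grid, (2) smooth
# (moving average as a stand-in for the Vondrak filter), (3) form an
# inverse-variance weighted ensemble average.

rng = np.random.default_rng(17)
grid = np.linspace(0.0, 9.0, 400)               # years, uniform grid
smoothed = []
for _ in range(5):                              # 5 mock pulsars
    t = np.sort(rng.uniform(0.0, 9.0, 120))     # uneven observing epochs
    r = (1e-7 * np.sin(0.5 * t + rng.uniform(0, 6))
         + 5e-8 * rng.standard_normal(t.size))  # mock residuals (seconds)
    dense = CubicSpline(t, r)(grid)             # step 1: densify
    kernel = np.ones(15) / 15
    smoothed.append(np.convolve(dense, kernel, mode="same"))  # step 2

smoothed = np.array(smoothed)
weights = 1.0 / smoothed.var(axis=1)            # step 3: weights
weights /= weights.sum()
ensemble = weights @ smoothed                   # weighted ensemble time scale
print("weights:", np.round(weights, 3))
print(f"ensemble rms residual: {ensemble.std():.2e} s")
```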
Fracture of disordered solids in compression as a critical phenomenon. I. Statistical mechanics formalism.

PubMed

Toussaint, Renaud; Pride, Steven R

2002-09-01

This is the first of a series of three articles that treats fracture localization as a critical phenomenon. This first article establishes a statistical mechanics based on ensemble averages when fluctuations through time play no role in defining the ensemble. Ensembles are obtained by dividing a huge rock sample into many mesoscopic volumes. Because rocks are a disordered collection of grains in cohesive contact, we expect that once shear strain is applied and cracks begin to arrive in the system, the mesoscopic volumes will have a wide distribution of different crack states. These mesoscopic volumes are the members of our ensembles. We determine the probability of observing a mesoscopic volume to be in a given crack state by maximizing Shannon's measure of the emergent-crack disorder subject to constraints coming from the energy balance of brittle fracture. The laws of thermodynamics, the partition function, and the quantification of temperature are obtained for such cracking systems.
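The maximization step described above is the standard constrained-entropy argument; schematically, with p_j the probability of crack state j and E_j an associated energy (notation chosen here for illustration, not the paper's):

    \max_{\{p_j\}} \; -\sum_j p_j \ln p_j
    \quad \text{subject to} \quad
    \sum_j p_j = 1, \qquad \sum_j p_j E_j = \bar{E},

whose Lagrange-multiplier solution is the Gibbs form

    p_j = \frac{e^{-\beta E_j}}{Z}, \qquad Z = \sum_j e^{-\beta E_j},

with the multiplier β playing the role of an inverse temperature and Z the partition function mentioned in the abstract.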
Bayesian network ensemble as a multivariate strategy to predict radiation pneumonitis risk

DOE Office of Scientific and Technical Information (OSTI.GOV)

Lee, Sangkyu; Ybarra, Norma; Jeyaseelan, Krishinima

2015-05-15

Purpose: Prediction of radiation pneumonitis (RP) has been shown to be challenging due to the involvement of a variety of factors including dose-volume metrics and radiosensitivity biomarkers. Some of these factors are highly correlated and might affect prediction results when combined. A Bayesian network (BN) provides a probabilistic framework to represent variable dependencies in a directed acyclic graph. The aim of this study is to integrate the BN framework and a systems-biology approach to detect possible interactions among RP risk factors and exploit these relationships to enhance both the understanding and prediction of RP. Methods: The authors studied 54 non-small-cell lung cancer patients who received curative 3D-conformal radiotherapy. Nineteen RP events were observed (common toxicity criteria for adverse events grade 2 or higher). Serum concentrations of the following four candidate biomarkers were measured at baseline and midtreatment: alpha-2-macroglobulin, angiotensin converting enzyme (ACE), transforming growth factor, and interleukin-6. Dose-volumetric and clinical parameters were also included as covariates. Feature selection was performed using a Markov blanket approach based on the Koller-Sahami filter. The Markov chain Monte Carlo technique estimated the posterior distribution of BN graphs built from the observed data of the selected variables and causality constraints. RP probability was estimated using a limited number of high-posterior graphs (ensemble) and was averaged for the final RP estimate using Bayes' rule. A resampling method based on bootstrapping was applied to model training and validation in order to control under- and overfitting pitfalls. Results: RP prediction power of the BN ensemble approach reached its optimum at a size of 200. The optimized performance of the BN model recorded an area under the receiver operating characteristic curve (AUC) of 0.83, which was significantly higher than multivariate logistic regression (0.77), mean heart dose (0.69), and a pre-to-midtreatment change in ACE (0.66). When RP prediction was made only with pretreatment information, the AUC ranged from 0.76 to 0.81 depending on the ensemble size. Bootstrap validation of graph features in the ensemble quantified the confidence of association between variables in the graphs, where ten interactions were statistically significant. Conclusions: The presented BN methodology provides the flexibility to model hierarchical interactions between RP covariates, which is applied to probabilistic inference on RP. The authors' preliminary results demonstrate that such a framework combined with an ensemble method can possibly improve prediction of RP under real-life clinical circumstances such as missing data or treatment plan adaptation.

Drug-target interaction prediction using ensemble learning and dimensionality reduction.

PubMed

Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong

2017-10-01

Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the amount of their interactions, is increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve classification performance, it is also worthwhile to design an ensemble learning framework to enhance performance for drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as an evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction.
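The three-step recipe (subspace, reduce, aggregate) can be sketched generically as below; this is an illustration of the idea on synthetic data, not the authors' EnsemDT implementation, and the subspace size, PCA dimension and tree depth are invented:

    # Subspace / reduce / aggregate ensemble sketch on synthetic data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(2)
    X = rng.normal(size=(600, 200))               # drug-target pair features
    y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=600) > 0).astype(int)
    train, test = np.arange(0, 450), np.arange(450, 600)

    scores, n_models = np.zeros(test.size), 10
    for i in range(n_models):
        cols = rng.choice(X.shape[1], size=100, replace=False)  # 1) subspacing
        pca = PCA(n_components=20).fit(X[np.ix_(train, cols)])  # 2) reduction
        clf = DecisionTreeClassifier(max_depth=5, random_state=i)
        clf.fit(pca.transform(X[np.ix_(train, cols)]), y[train])  # 3) base learner
        scores += clf.predict_proba(pca.transform(X[np.ix_(test, cols)]))[:, 1]
    scores /= n_models                            # aggregate averaged scores
    print("test accuracy:", ((scores > 0.5).astype(int) == y[test]).mean())

Randomizing the feature subspace per member is what injects the diversity that makes averaging the scores worthwhile.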
Smart climate ensemble exploring approaches: the example of climate impacts on air pollution in Europe.

NASA Astrophysics Data System (ADS)

Lemaire, Vincent; Colette, Augustin; Menut, Laurent

2016-04-01

Because of its sensitivity to weather patterns, climate change will have an impact on air pollution, so that, in the future, a climate penalty could jeopardize the expected efficiency of air pollution mitigation measures. A common method to assess the impact of climate on air quality consists in implementing chemistry-transport models forced by climate projections. However, at present, such impact assessments lack multi-model ensemble approaches to address uncertainties because of the substantial computing cost. Therefore, as a preliminary step towards exploring large climate ensembles with air quality models, we developed an ensemble exploration technique in order to point out the climate models that should be investigated in priority. Using a training dataset from a deterministic projection of climate and air quality over Europe, we identified the main meteorological drivers of air quality for eight regions in Europe and developed statistical models that could be used to estimate future air pollutant concentrations. Applying this statistical model to the whole EuroCordex ensemble of climate projections, we find a climate penalty for six subregions out of eight (including Eastern Europe, France, the Iberian Peninsula, Mid Europe and Northern Italy). On the contrary, a climate benefit for PM2.5 was identified for three regions (Eastern Europe, Mid Europe and Northern Italy). The uncertainty of this statistical model, however, limits the confidence we can attribute to the associated quantitative projections. The technique does allow selecting a subset of relevant regional climate model members that should be used in priority for future deterministic projections in order to propose an adequate coverage of uncertainties. We are thereby proposing a smart ensemble exploration strategy that can also be used for other impact studies beyond air quality.

Insights into the conformations and dynamics of intrinsically disordered proteins using single-molecule fluorescence.

PubMed

Gomes, Gregory-Neal; Gradinaru, Claudiu C

2017-11-01

Most proteins are not static structures; many exist in a dynamic state, exchanging conformations on various time scales as a key aspect of their biological function. An entire spectrum of structural disorder exists in proteins, and obtaining a satisfactory quantitative description of these states remains a challenge. Single-molecule fluorescence spectroscopy techniques are uniquely suited for this task, by measuring conformations without ensemble averaging and kinetics without interference from asynchronous processes. In this paper we review some of the recent successes in applying single-molecule fluorescence to different disordered protein systems, including interactions with their cellular targets and self-aggregation processes. We also discuss the implementation of computational methods and polymer physics models that are essential for inferring global dimension parameters for these proteins from smFRET data. Regarding future directions, 3- or 4-color FRET methods can provide multiple distances within a disordered ensemble simultaneously. In addition, integrating complementary experimental data from smFRET, NMR and SAXS will provide meaningful constraints for molecular simulations and will lead to more accurate structural representations of disordered proteins.
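As a minimal example of the smFRET-to-dimension inference mentioned above, a measured mean transfer efficiency can be inverted through the Förster relation E = 1/(1 + (r/R0)^6); the Förster radius and efficiencies below are assumed illustrative values, not data from the paper:

    # Convert a mean smFRET efficiency to an apparent donor-acceptor distance
    # via the Foerster relation. R0 and the E values are assumed for
    # illustration.
    def distance_from_efficiency(E, R0):
        """Invert E = 1/(1+(r/R0)^6) for the distance r (same units as R0)."""
        return R0 * (1.0 / E - 1.0) ** (1.0 / 6.0)

    R0 = 5.4  # Foerster radius in nm for a typical dye pair (assumed)
    for E in [0.2, 0.5, 0.8]:
        print(f"E = {E:.1f} -> r = {distance_from_efficiency(E, R0):.2f} nm")

For a disordered chain the measured efficiency is an average over the end-to-end distance distribution P(r), so this single-distance inversion yields only an apparent distance; the polymer-physics models discussed in the review exist to correct for exactly that.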
Context dependent anti-aliasing image reconstruction

NASA Technical Reports Server (NTRS)

Beaudet, Paul R.; Hunt, A.; Arlia, N.

1989-01-01

Image reconstruction has mostly been confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator, with or without an enhancement filter. Here, anti-aliasing context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolation/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble-average statistics using high-resolution training imagery from which the lower-resolution image array data are obtained (simulation). A quadratic least-squares (LS) context-free image quality model is described from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori spatial character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected from the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made for using a quality model which is now finding application in data compression.
Global Ocean Evaporation Increases Since 1960 in Climate Reanalyses: How Accurate Are They?

NASA Technical Reports Server (NTRS)

Robertson, Franklin R.; Roberts, Jason B.; Bosilovich, Michael G.

2016-01-01

The abstract surveys candidate data sources for assessing ocean evaporation trends:

- AGCMs with specified SSTs (AMIP): GEOS-5, ERA-20CM ensembles. These incorporate the best historical estimates of SST, sea ice and radiative forcing, but atmospheric "weather noise" is inconsistent with the specified SST, and instantaneous surface fluxes can have the wrong sign (e.g., Indian Ocean Monsoon, high-latitude oceans). Averaging over ensemble members helps isolate the SST-forced signal.
- Reduced observational reanalyses: NOAA 20CR V2C, ERA-20C, JRA-55C. These incorporate observed surface pressure (20CR), marine winds (ERA-20C) and rawinsondes (JRA-55C) to recover much of the true synoptic variability without the shock of new satellite observations.
- Comprehensive reanalyses (MERRA-2): the full suite of observational constraints, both conventional and remote sensing, but with substantial uncertainties owing to the evolving satellite observing system.
- Multi-source statistically blended products: OAFlux, Large-Yeager. These blend reanalysis, satellite and ocean buoy information; while climatological biases are removed, non-physical trends or variations in components remain.
- Satellite retrievals: GSSTF3, SeaFlux, HOAPS3. Global coverage; retrieved near-surface wind speed and humidity are used with SST to drive accurate bulk aerodynamic flux estimates. Satellite inter-calibration and spacecraft pointing variations are crucial, and the record is short (late 1987-present).
- In situ measurements: ICOADS, IVAD, research cruises. VOS and buoys offer direct measurements, but data coverage is sparse (especially south of 30°S) and measurement techniques have changed over time (e.g., shipboard anemometer height).

Deep neural ensemble for retinal vessel segmentation in fundus images towards achieving label-free angiography.

PubMed

Lahiri, A; Roy, Abhijit Guha; Sheet, Debdoot; Biswas, Prabir Kumar

2016-08-01

Automated segmentation of retinal blood vessels in label-free fundus images plays a pivotal role in computer-aided diagnosis of ophthalmic pathologies, viz., diabetic retinopathy, hypertensive disorders and cardiovascular diseases. The challenge remains active in medical image analysis research due to the varied distribution of blood vessels, which manifest variations in their physical appearance against a noisy background. In this paper we formulate the segmentation challenge as a classification task. Specifically, we employ unsupervised hierarchical feature learning using an ensemble of two levels of sparsely trained denoising stacked autoencoders.
First-level training with bootstrap samples ensures decoupling, and the second-level ensemble formed by different network architectures ensures architectural revision. We show that ensemble training of autoencoders fosters diversity in learning a dictionary of visual kernels for vessel segmentation. A softmax classifier is used for fine-tuning each member autoencoder, and multiple strategies are explored for two-level fusion of ensemble members. On the DRIVE dataset, we achieve a maximum average accuracy of 95.33% with an impressively low standard deviation of 0.003 and a Kappa agreement coefficient of 0.708. Comparison with other major algorithms substantiates the high efficacy of our model.

Sensory processing patterns predict the integration of information held in visual working memory.

PubMed

Lowe, Matthew X; Stevenson, Ryan A; Wilson, Kristin E; Ouslis, Natasha E; Barense, Morgan D; Cant, Jonathan S; Ferber, Susanne

2016-02-01

Given the limited resources of visual working memory, multiple items may be remembered as an averaged group or ensemble. As a result, local information may be ill-defined, but these ensemble representations provide accurate diagnostics of the natural world by combining gist information with item-level information held in visual working memory. Some neurodevelopmental disorders are characterized by sensory processing profiles that predispose individuals to avoid or seek out sensory stimulation, fundamentally altering their perceptual experience. Here, we report that such processing styles affect the computation of ensemble statistics in the general population. We identified stable adult sensory processing patterns to demonstrate that individuals with low sensory thresholds who show a greater proclivity to engage in active response strategies to prevent sensory overstimulation are less likely to integrate mean size information across a set of similar items and are therefore more likely to be biased away from the mean size representation of an ensemble display. We therefore propose that the study of ensemble processing should extend beyond the statistics of the display and should also consider the statistics of the observer.

Electronegativity determination of individual surface atoms by atomic force microscopy.

PubMed

Onoda, Jo; Ondráček, Martin; Jelínek, Pavel; Sugimoto, Yoshiaki

2017-04-26

Electronegativity is a fundamental concept in chemistry. Despite its importance, its experimental determination has been limited to ensemble-averaged techniques. Here, we report a methodology to evaluate the electronegativity of individual surface atoms by atomic force microscopy. By measuring bond energies on the surface atoms using different tips, we find characteristic linear relations between the bond energies of different chemical species. We show that the linear relation can be rationalized by Pauling's equation for polar covalent bonds. This opens the possibility to characterize the electronegativity of individual surface atoms. Moreover, we demonstrate that the method is sensitive to variations in the electronegativity of a given atomic species on a surface due to different chemical environments. Our findings open up ways of analysing surface chemical reactivity at the atomic scale.
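For reference, Pauling's equation for polar covalent bonds, which the authors invoke, relates bond dissociation energies to the electronegativity difference of the partners (standard textbook form with the usual constant, not reproduced from the paper):

    D(A\text{-}B) = \tfrac{1}{2}\left[D(A\text{-}A) + D(B\text{-}B)\right]
    + k\,(\chi_A - \chi_B)^2,
    \qquad k \approx 96.5\ \mathrm{kJ\,mol^{-1}}\ (23.06\ \mathrm{kcal\,mol^{-1}}),

so a linear relation between bond energies measured with different tips can be traced back to differences in the electronegativity χ.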
Adaptive noise canceling of electrocardiogram artifacts in single channel electroencephalogram.

PubMed

Cho, Sung Pil; Song, Mi Hye; Park, Young Cheol; Choi, Ho Seon; Lee, Kyoung Joung

2007-01-01

A new method for estimating and eliminating electrocardiogram (ECG) artifacts from single-channel scalp electroencephalogram (EEG) is proposed. The proposed method consists of emphasizing the QRS complex in the EEG using a least squares acceleration (LSA) filter, generating a pulse synchronized with the R-peak, and estimating and eliminating the ECG artifacts using an adaptive filter. The performance of the proposed method was evaluated using simulated and real EEG recordings; the ECG artifacts were successfully estimated and eliminated, in comparison with conventional multi-channel techniques such as independent component analysis (ICA) and the ensemble average (EA) method. From this we conclude that the proposed method is useful for detecting and eliminating ECG artifacts in single-channel EEG and is simple enough to use in ambulatory/portable EEG monitoring systems.
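A bare-bones version of the adaptive-filter stage (a textbook LMS noise canceller driven by an R-peak-synchronized reference, not the authors' implementation; signals, filter length and step size are synthetic/assumed):

    # LMS adaptive noise canceller: an R-peak-synchronized pulse train serves
    # as the reference input, and the filter learns the ECG artifact waveform
    # to subtract from the contaminated EEG. All signals are synthetic.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 2560                                 # 10 s of "EEG" at 256 Hz
    eeg = rng.normal(0, 1.0, n)              # background EEG (white, simplified)
    ref = np.zeros(n)
    ref[np.arange(100, n, 200)] = 1.0        # R-peak-synchronized pulse reference
    artifact = np.convolve(ref, np.hanning(30) * 2.0)[:n]  # ECG leakage shape
    x = eeg + artifact                       # contaminated recording

    taps, mu = 40, 0.25                      # sparse reference allows a large mu
    w = np.zeros(taps)
    clean = np.zeros(n)
    for i in range(taps - 1, n):
        u = ref[i - taps + 1:i + 1][::-1]    # reference history (newest first)
        e = x[i] - w @ u                     # error = cleaned EEG sample
        w += 2 * mu * e * u                  # LMS weight update
        clean[i] = e

    print("artifact power before/after:",
          np.mean((x - eeg) ** 2), np.mean((clean[taps:] - eeg[taps:]) ** 2))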
Generalized ensemble theory with non-extensive statistics

NASA Astrophysics Data System (ADS)

Shen, Ke-Ming; Zhang, Ben-Wei; Wang, En-Ke

2017-12-01

The non-extensive canonical ensemble theory is reconsidered with the method of Lagrange multipliers by maximizing the Tsallis entropy, with the constraint that the normalization term of the Tsallis q-average of physical quantities, the sum Σ_j p_j^q, is independent of the probability p_i for Tsallis parameter q. The self-referential problem in the deduced probability and thermal quantities in non-extensive statistics is thus avoided, and thermodynamic relationships are obtained in a consistent and natural way. We also extend the study to the non-extensive grand canonical ensemble theory and obtain the q-deformed Bose-Einstein distribution as well as the q-deformed Fermi-Dirac distribution. The theory is further applied to the generalized Planck law to demonstrate the distinct behaviors of the various generalized q-distribution functions discussed in the literature.
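For orientation, the standard Tsallis forms behind this discussion (textbook definitions; the paper's specific contribution is the treatment of the Σ_j p_j^q normalization term, which is not reproduced here):

    S_q = k\,\frac{1 - \sum_j p_j^q}{q - 1},
    \qquad
    \lim_{q \to 1} S_q = -k \sum_j p_j \ln p_j,

and maximizing S_q subject to normalization and an energy constraint yields a q-exponential distribution,

    p_j \propto \bigl[\,1 - (1-q)\,\beta E_j\,\bigr]^{1/(1-q)}
    \;\longrightarrow\; e^{-\beta E_j} \quad (q \to 1),

recovering the ordinary canonical ensemble in the extensive limit.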
Ensemble stump classifiers and gene expression signatures in lung cancer.

PubMed

Frey, Lewis; Edgerton, Mary; Fisher, Douglas; Levy, Shawn

2007-01-01

Microarray data sets for cancer tumor tissue generally have very few samples, each sample having thousands of probes (i.e., continuous variables). The sparsity of samples makes it difficult for machine learning techniques to discover probes relevant to the classification of tumor tissue. By combining data from different platforms (i.e., data sources), data sparsity is reduced, but this typically requires normalizing data from the different platforms, which can be non-trivial. This paper proposes a variant on the idea of ensemble learners to circumvent the need for normalization. To facilitate comprehension we build ensembles of very simple classifiers known as decision stumps: decision trees of one test each. The Ensemble Stump Classifier (ESC) identifies an mRNA signature having three probes and high accuracy for distinguishing between adenocarcinoma and squamous cell carcinoma of the lung across four data sets. In terms of accuracy, ESC outperforms a decision tree classifier on all four data sets, outperforms ensembles of decision trees on three data sets, and outperforms simple stump classifiers on two data sets.
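A decision stump is small enough that the whole ensemble idea fits in a few lines; the sketch below (majority vote over the most accurate single-probe stumps, on synthetic expression data) illustrates the flavor of ESC rather than the published algorithm:

    # Decision-stump ensemble sketch: rank one-test trees by accuracy and
    # combine the best few by voting. Data are synthetic stand-ins.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(4)
    X = rng.normal(size=(120, 1000))         # 120 samples x 1000 probes
    y = (X[:, 7] + X[:, 42] + X[:, 99] > 0).astype(int)  # signal in 3 probes

    stumps = []
    for probe in range(X.shape[1]):
        stump = DecisionTreeClassifier(max_depth=1).fit(X[:, [probe]], y)
        stumps.append((stump.score(X[:, [probe]], y), probe, stump))

    top = sorted(stumps, reverse=True)[:3]   # keep the 3 most accurate stumps
    votes = np.mean([s.predict(X[:, [p]]) for _, p, s in top], axis=0)
    pred = (votes >= 0.5).astype(int)
    print("selected probes:", [p for _, p, _ in top],
          "training accuracy:", (pred == y).mean())

Because each stump thresholds a single probe, no cross-platform normalization is needed beyond what the threshold itself absorbs, which is the point the abstract makes.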
A Proposed Methodology to Classify Frontier Capital Markets

DTIC Science & Technology

2011-07-31

Only fragments of the abstract are recoverable: "...but because it is the surest route to our common good." (Inaugural speech by President Barack Obama, Jan 2009) ... This project involves basic research in machine learning. The algorithm consists of a unique binary classifier mechanism that combines three methods: k-Nearest Neighbors (kNN), ensemble ... kNN ensemble classification techniques ... capital market classification based on capital flows and trading architecture ...

Dropout Prediction in E-Learning Courses through the Combination of Machine Learning Techniques

ERIC Educational Resources Information Center

Lykourentzou, Ioanna; Giannoukos, Ioannis; Nikolopoulos, Vassilis; Mpardis, George; Loumos, Vassili

2009-01-01

In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to…
Bayesian refinement of protein structures and ensembles against SAXS data using molecular dynamics

PubMed Central

Shevchuk, Roman; Hub, Jochen S.

2017-01-01

Small-angle X-ray scattering is an increasingly popular technique used to detect protein structures and ensembles in solution. However, the refinement of structures and ensembles against SAXS data is often ambiguous due to the low information content of SAXS data, unknown systematic errors, and unknown scattering contributions from the solvent. We offer a solution to such problems by combining Bayesian inference with all-atom molecular dynamics simulations and explicit-solvent SAXS calculations. The Bayesian formulation correctly weights the SAXS data against prior physical knowledge, quantifies the precision or ambiguity of fitted structures and ensembles, and accounts for unknown systematic errors due to poor buffer matching. The method further provides a probabilistic criterion for identifying the number of states required to explain the SAXS data. The method is validated by refining ensembles of a periplasmic binding protein against calculated SAXS curves. Subsequently, we derive the solution ensembles of the eukaryotic chaperone heat shock protein 90 (Hsp90) against experimental SAXS data. We find that the SAXS data of the apo state of Hsp90 are compatible with a single wide-open conformation, whereas the SAXS data of Hsp90 bound to ATP or to an ATP analogue strongly suggest heterogeneous ensembles of a closed and a wide-open state.

pE-DB: a database of structural ensembles of intrinsically disordered and of unfolded proteins.

PubMed

Varadi, Mihaly; Kosol, Simone; Lebrun, Pierre; Valentini, Erica; Blackledge, Martin; Dunker, A Keith; Felli, Isabella C; Forman-Kay, Julie D; Kriwacki, Richard W; Pierattelli, Roberta; Sussman, Joel; Svergun, Dmitri I; Uversky, Vladimir N; Vendruscolo, Michele; Wishart, David; Wright, Peter E; Tompa, Peter

2014-01-01

The goal of pE-DB (http://pedb.vib.be) is to serve as an openly accessible database for the deposition of structural ensembles of intrinsically disordered proteins (IDPs) and of denatured proteins, based on nuclear magnetic resonance spectroscopy, small-angle X-ray scattering and other data measured in solution. Owing to the inherent flexibility of IDPs, solution techniques are particularly appropriate for characterizing their biophysical properties, and structural ensembles in agreement with these data provide a convenient tool for describing the underlying conformational sampling. Database entries consist of (i) primary experimental data with descriptions of the acquisition methods and algorithms used for the ensemble calculations, and (ii) the structural ensembles consistent with these data, provided as a set of models in Protein Data Bank format. pE-DB is open for submissions from the community and is intended as a forum for disseminating the structural ensembles and the methodologies used to generate them. While the need to represent IDP structures is clear, methods for determining and evaluating the structural ensembles are still evolving. The availability of the pE-DB database is expected to promote the development of new modeling methods and to lead to a better understanding of how function arises from disordered states.
On the error probability of general tree and trellis codes with applications to sequential decoding

NASA Technical Reports Server (NTRS)

Johannesson, R.

1973-01-01

An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

Assessment of a stochastic downscaling methodology in generating an ensemble of hourly future climate time series

NASA Astrophysics Data System (ADS)

Fatichi, S.; Ivanov, V. Y.; Caporali, E.

2013-04-01

This study extends a stochastic downscaling methodology to the generation of an ensemble of hourly time series of meteorological variables that express possible future climate conditions at a point scale. The stochastic downscaling uses general circulation model (GCM) realizations and an hourly weather generator, the Advanced WEather GENerator (AWE-GEN). Marginal distributions of factors of change are computed for several climate statistics using a Bayesian methodology that can weight GCM realizations based on the model's relative performance with respect to a historical climate and the degree of disagreement in projecting future conditions. A Monte Carlo technique is used to sample the factors of change from their respective marginal distributions. As a comparison with traditional approaches, factors of change are also estimated by averaging GCM realizations. With either approach, the derived factors of change are applied to the climate statistics inferred from historical observations to re-evaluate the parameters of the weather generator. The re-parameterized generator yields hourly time series of meteorological variables that can be considered representative of future climate conditions. In this study, the time series are generated in an ensemble mode to fully reflect the uncertainty of GCM projections, climate stochasticity, and the uncertainties of the downscaling procedure. Applications of the methodology in reproducing future climate conditions for the periods 2000-2009, 2046-2065 and 2081-2100, using the period 1962-1992 as the historical baseline, are discussed for the location of Firenze (Italy). The inferences of the methodology for the period 2000-2009 are tested against observations to assess the reliability of the stochastic downscaling procedure in reproducing statistics of meteorological variables at different time scales.
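The factor-of-change logic itself is compact; a sketch for a single climate statistic, with invented numbers and simple equal-weight Monte Carlo sampling in place of the Bayesian GCM weighting used in the study:

    # Factor-of-change (delta-change) sketch: scale an observed statistic by
    # the ratio of GCM future to GCM historical values, sampling across GCM
    # realizations. All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(5)
    obs_mean_precip = 2.4                      # observed historical statistic (mm/h)
    gcm_hist = np.array([2.1, 2.6, 2.3, 2.8])  # statistic per GCM, control run
    gcm_fut = np.array([2.0, 2.2, 2.1, 2.4])   # same statistic, future scenario

    factors = gcm_fut / gcm_hist               # per-GCM factors of change
    samples = rng.choice(factors, size=1000, replace=True)  # equal-weight MC
    future_stat = obs_mean_precip * samples    # would re-parameterize AWE-GEN
    print("future mean precip: %.2f +/- %.2f mm/h"
          % (future_stat.mean(), future_stat.std()))

Each sampled factor would re-parameterize one weather-generator run, so the spread of the resulting hourly ensemble carries the GCM disagreement through to the downscaled series.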
Optical Rabi Oscillations in a Quantum Dot Ensemble

NASA Astrophysics Data System (ADS)

Kujiraoka, Mamiko; Ishi-Hayase, Junko; Akahane, Kouichi; Yamamoto, Naokatsu; Ema, Kazuhiro; Sasaki, Masahide

2010-09-01

We have investigated Rabi oscillations of exciton polarization in a self-assembled InAs quantum dot ensemble. The four-wave mixing signals, measured as a function of the average pulse area, showed large in-plane anisotropy and non-harmonic oscillations. The experimental results can be well reproduced by a two-level model calculation including three types of inhomogeneities without any fitting parameter. The large anisotropy can be well explained by the anisotropic dipole moments. We also find that the non-harmonic behaviors partly originate from polarization interference.

A random matrix approach to credit risk.

PubMed

Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas

2014-01-01

We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by random matrix theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
ensembleBMA: An R Package for Probabilistic Forecasting using Ensembles and Bayesian Model Averaging

DTIC Science & Technology

2007-08-15

The indexed text preserves only fragments of an R session from the package documentation; cleaned up, they read:

    > bluescale <- function(n) hsv(4/6, s = seq(from = 1/8, to = 1, length = n), v = 1)
    > plotBMAforecast(probFreeze290104, lon = srftGridData$lon, lat = srftGridData$lat,
    +                 type = "image", col = bluescale(100))
    > title("Probability of ...")
    > ...(probPrecip130103)  # used to determine zlim in plots
    [1] 0.02832709 0.99534860
    > plotBMAforecast(probPrecip130103[, ...], lon = prcpGridData$lon, lat = ...

Improving Flood Prediction By the Assimilation of Satellite Soil Moisture in Poorly Monitored Catchments.

NASA Astrophysics Data System (ADS)

Alvarez-Garreton, C. D.; Ryu, D.; Western, A. W.; Crow, W. T.; Su, C. H.; Robertson, D. E.

2014-12-01

Flood prediction in poorly monitored catchments is among the greatest challenges faced by hydrologists. To address this challenge, an increasing number of studies in the last decade have explored methods to integrate various existing observations from the ground and from satellites. One approach in particular is the assimilation of satellite soil moisture (SM-DA) into rainfall-runoff models. The rationale is that satellite soil moisture (SSM) can be used to correct model soil water states, enabling more accurate prediction of catchment response to precipitation and thus better streamflow. However, there is still no consensus on the most effective SM-DA scheme and how this might depend on catchment scale, climate characteristics, runoff mechanisms, and the model and SSM products used. In this work, an operational SM-DA scheme was set up in the poorly monitored, large (>40,000 km2), semi-arid Warrego catchment situated in eastern Australia. We assimilated passive and active SSM products into the probability distributed model (PDM) using an ensemble Kalman filter. We explored factors influencing the SM-DA framework, including relatively new techniques to remove model-observation bias, estimate observation errors and represent model errors. Furthermore, we explored the advantages of accounting for the spatial distribution of forcing and channel routing processes within the catchment by implementing and comparing lumped and semi-distributed model setups. Flood prediction is improved by SM-DA, with a 30% reduction in the average root-mean-squared difference of the ensemble prediction, a 20% reduction in the false alarm ratio and a 40% increase in the ensemble mean Nash-Sutcliffe efficiency. SM-DA skill does not change significantly under different observation error assumptions, but the skill strongly depends on the observational bias correction technique used and, more importantly, on the performance of the open-loop model before assimilation. Our findings imply that proper pre-processing of SSM is important for the efficacy of SM-DA and that assimilation performance is critically affected by the quality of model calibration. We therefore recommend focusing efforts on these two factors, while further evaluating the trade-offs between model complexity and data availability.
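The analysis step of the ensemble Kalman filter used in such SM-DA schemes can be sketched in scalar form (a textbook perturbed-observation update, not the authors' PDM configuration; all numbers are invented):

    # Bare-bones EnKF analysis step: a scalar soil-moisture state is updated
    # with one satellite retrieval, using the perturbed-observation variant
    # of Burgers et al. (1998).
    import numpy as np

    rng = np.random.default_rng(6)
    ens = rng.normal(0.30, 0.05, size=100)   # forecast ensemble of soil moisture
    obs, obs_err = 0.24, 0.03                # satellite retrieval and its std dev

    P = ens.var(ddof=1)                      # forecast error variance
    K = P / (P + obs_err**2)                 # Kalman gain (scalar case)

    obs_pert = obs + rng.normal(0.0, obs_err, size=ens.size)
    analysis = ens + K * (obs_pert - ens)    # updated (analysis) ensemble

    print("forecast mean %.3f -> analysis mean %.3f, spread %.3f -> %.3f"
          % (ens.mean(), analysis.mean(), ens.std(), analysis.std()))

The bias-correction step the abstract emphasizes would rescale the retrieval into the model's climatology before this update, since the filter assumes unbiased observations.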
Establishment of a New National Reference Ensemble of Water Triple Point Cells

NASA Astrophysics Data System (ADS)

Senn, Remo

2017-10-01

The results of the bilateral comparison EURAMET.T-K3.5 (with VSL, the Netherlands), carried out to link Switzerland's ITS-90 realization (Ar to Al) to the latest key comparisons, gave strong indications of a discrepancy in the realization of the triple point of water. Given the age of the cells, about twenty years, it was decided to replace the complete reference ensemble with new state-of-the-art cells. Three new water triple point cells from three different suppliers were purchased, as well as a new maintenance bath for an additional improvement of the realization. Measurements were taken in several loops, each cell of both ensembles was intercompared, and the deviations and characteristics were determined. The measurements show a significantly lower average value for the old ensemble, by 0.59 ± 0.25 mK (k=2), in comparison with the new one. Likewise, the behavior of the old cells is very unstable, with a downward drift during the realization of the triple point. Based on these results, the impact of the new ensemble on the ITS-90 realization from Ar to Al was calculated and set in the context of calibrations performed in the past and their related uncertainties. This paper presents the instrumentation, cells, measurement procedure, results, uncertainties and the impact of the new national reference ensemble of water triple point cells on the current ITS-90 realization in Switzerland.
Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

NASA Astrophysics Data System (ADS)

Kasiviswanathan, K.; Sudheer, K.

2013-05-01

Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists owing to their potential for accurate prediction of flood flows compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that an ANN's point prediction lacks reliability, since the uncertainty of the predictions is not quantified, and this limits its use in practical applications. A major concern in applying traditional uncertainty analysis techniques in a neural network framework is its parallel computing architecture with large degrees of freedom, which makes uncertainty assessment a challenging task. Very limited studies have considered the assessment of predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed to construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: in stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; in stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble with an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived prediction interval for a selected hydrograph in the validation data set shows that most of the observed flows lie within the constructed interval, which therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that when the ensemble mean value is considered as a forecast, the peak flows are predicted with improved accuracy compared to traditional single-point-forecast ANNs.
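The two interval-quality objectives in stage 2 are easy to state in code; a sketch with a synthetic ensemble (the 95% interval bounds and member count are arbitrary choices, not the study's settings):

    # Interval-quality metrics from an ensemble of model predictions:
    # coverage (fraction of observations inside the interval), average
    # interval width, and RMSE of the ensemble mean.
    import numpy as np

    rng = np.random.default_rng(7)
    obs = 100 + 20 * rng.random(200)                       # observed flows (m3/s)
    ens = obs[None, :] + rng.normal(0, 8, size=(50, 200))  # 50 ensemble members

    lower = np.percentile(ens, 2.5, axis=0)                # 95% prediction interval
    upper = np.percentile(ens, 97.5, axis=0)

    coverage = np.mean((obs >= lower) & (obs <= upper))    # objective (ii): maximize
    avg_width = np.mean(upper - lower)                     # objective (iii): minimize
    rmse_mean = np.sqrt(np.mean((ens.mean(axis=0) - obs) ** 2))  # objective (i)
    print(f"coverage {coverage:.2%}, width {avg_width:.1f} m3/s, "
          f"RMSE {rmse_mean:.2f} m3/s")

The three quantities pull in different directions (wider intervals raise coverage but hurt width), which is why the calibration is posed as a multi-objective problem.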
Stochastic dynamics of small ensembles of non-processive molecular motors: The parallel cluster model

DOE Office of Scientific and Technical Information (OSTI.GOV)

Erdmann, Thorsten; Albert, Philipp J.; Schwarz, Ulrich S.

2013-11-07

Non-processive molecular motors have to work together in ensembles in order to generate appreciable levels of force or movement. In skeletal muscle, for example, hundreds of myosin II molecules cooperate in thick filaments. In non-muscle cells, by contrast, small groups with a few tens of non-muscle myosin II motors contribute to essential cellular processes such as transport, shape changes, or mechanosensing. Here we introduce a detailed and analytically tractable model for this important situation. Using a three-state crossbridge model for the myosin II motor cycle and exploiting the assumptions of fast power-stroke kinetics and equal load sharing between motors in equivalent states, we reduce the stochastic reaction network to a one-step master equation for the binding and unbinding dynamics (parallel cluster model) and derive the rules for ensemble movement. We find that for constant external load, ensemble dynamics is strongly shaped by the catch-bond character of myosin II, which leads to an increase in the fraction of bound motors under load and thus to firm attachment even for small ensembles. This adaptation to load results in a concave force-velocity relation described by a Hill relation. For external load provided by a linear spring, myosin II ensembles dynamically adjust themselves towards an isometric state with constant average position and load. The dynamics of the ensembles is then determined mainly by the distribution of motors over the different kinds of bound states. For increasing stiffness of the external spring, there is a sharp transition beyond which myosin II can no longer perform the power stroke. Slow unbinding from the pre-power-stroke state protects the ensembles against detachment.

Multi-objective optimization for generating a weighted multi-model ensemble

NASA Astrophysics Data System (ADS)

Lee, H.

2017-12-01

Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, it confronts a big challenge when there are multiple metrics under consideration: a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly, with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide more reliable future projections.
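A minimal version of the multi-objective step: identify the Pareto (non-dominated) set of models under several error metrics and assign weights only within it. The inverse-mean-error weighting below is a simple stand-in for the paper's optimization, and the error table is invented:

    # Pareto screening of models under multiple error metrics, then weighting.
    import numpy as np

    # rows: models, columns: error metrics (lower is better); invented values
    errors = np.array([[0.9, 1.4, 0.7],
                       [1.1, 0.8, 0.9],
                       [1.5, 1.6, 1.8],   # dominated by other models
                       [0.8, 1.0, 1.1]])

    def pareto_mask(err):
        """True for models not dominated by any other model."""
        n = err.shape[0]
        mask = np.ones(n, dtype=bool)
        for i in range(n):
            for j in range(n):
                if j != i and np.all(err[j] <= err[i]) and np.any(err[j] < err[i]):
                    mask[i] = False
        return mask

    front = pareto_mask(errors)
    w = np.where(front, 1.0 / errors.mean(axis=1), 0.0)
    w /= w.sum()                    # ensemble weights, zero for dominated models
    print("Pareto-optimal models:", np.flatnonzero(front),
          "weights:", np.round(w, 3))

Dominated models are excluded outright rather than merely down-weighted, which is what distinguishes this from averaging scores or ranks across metrics.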
Avoided climate impacts of urban and rural heat and cold waves over the U.S. using large climate model ensembles for RCP8.5 and RCP4.5

PubMed Central

Anderson, G.B.; Jones, B.; McGinnis, S.A.; Sanderson, B.

2015-01-01

Previous studies examining future changes in heat/cold waves using climate model ensembles have been limited to grid-cell-average quantities. Here, we make use of an urban parameterization in the Community Earth System Model (CESM) that represents the urban heat island effect, which can exacerbate extreme heat but may ameliorate extreme cold in urban relative to rural areas. Heat/cold wave characteristics are derived for U.S. regions from a bias-corrected CESM 30-member ensemble for climate outcomes driven by the RCP8.5 forcing scenario and a 15-member ensemble driven by RCP4.5. Significant differences are found between urban and grid-cell-average heat/cold wave characteristics. Most notably, urban heat waves for 1981-2005 are more intense than grid-cell-average ones by 2.1°C (southeast) to 4.6°C (southwest), while cold waves are less intense. We assess the avoided climate impacts of urban heat/cold waves in 2061-2080 when following the lower forcing scenario. Urban heat wave days per year increase from 6 in 1981-2005 to up to 92 (southeast) in RCP8.5. Following RCP4.5 reduces heat wave days by about 50%. Large avoided impacts are demonstrated for individual communities; e.g., the longest heat wave for Houston in RCP4.5 is 38 days, while in RCP8.5 there is one heat wave per year that is longer than a month, with some lasting the entire summer. Heat waves also start later in the season in RCP4.5 (the earliest are in early May) than in RCP8.5 (mid-April), compared to 1981-2005 (late May). In some communities, cold wave events decrease from 2 per year for 1981-2005 to one-in-five-year events in RCP4.5 and one-in-ten-year events in RCP8.5.

Robust Deterministic Controlled Phase-Flip Gate and Controlled-Not Gate Based on Atomic Ensembles Embedded in Double-Sided Optical Cavities

NASA Astrophysics Data System (ADS)

Liu, A.-Peng; Cheng, Liu-Yong; Guo, Qi; Zhang, Shou

2018-02-01

We first propose a scheme for a controlled phase-flip gate between a flying photon qubit and the collective spin wave (magnon) of an atomic ensemble assisted by double-sided cavity quantum systems. Then we propose a deterministic controlled-not gate on magnon qubits with parity-check building blocks. Both gates can be accomplished with 100% success probability in principle. An atomic ensemble is employed so that light-matter coupling is remarkably improved by collective enhancement. We assess the performance of the gates and the results show that they can be faithfully constituted with current experimental techniques.
Model selection was based on the AIC (Akaike Information Criterion).

458. Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method

NASA Astrophysics Data System (ADS)

Gilbreth, C. N.; Alhassid, Y.

2015-03-01

Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.

459. Enhancing coronary Wave Intensity Analysis robustness by high order central finite differences

PubMed Central

Rivolo, Simone; Asrress, Kaleab N.; Chiribiri, Amedeo; Sammut, Eva; Wesolowski, Roman; Bloch, Lars Ø.; Grøndal, Anne K.; Hønge, Jesper L.; Kim, Won Y.; Marber, Michael; Redwood, Simon; Nagel, Eike; Smith, Nicolas P.; Lee, Jack

2014-01-01

Background: Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. Studies have identified WIA-derived indices that are closely correlated with several disease processes and predictive of functional recovery following myocardial infarction. The clinical application of cWIA has, however, been limited by technical challenges, including a lack of standardization across different studies and the sensitivity of the derived indices to the processing parameters. Specifically, a critical step in WIA is the noise removal needed to evaluate derivatives of the acquired signals, typically performed by applying a Savitzky-Golay filter to reduce the high-frequency acquisition noise. Methods: The impact of the filter parameter selection on cWIA output, and on the derived clinical metrics (integral areas and peaks of the major waves), is first analysed. The sensitivity analysis is performed either by using the filter as a differentiator to calculate the signals' time derivative or by applying the filter to smooth the ensemble-averaged waveforms. Furthermore, the power spectrum of the ensemble-averaged waveforms contains little high-frequency content, which motivated us to propose an alternative approach to compute the time derivatives of the acquired waveforms using a central finite difference scheme. Results and Conclusion: The cWIA output, and consequently the derived clinical metrics, are significantly affected by the filter parameters, irrespective of the filter's use as a smoothing filter or a differentiator. The proposed approach is parameter-free and, when applied to the 10 in-vivo human datasets and the 50 in-vivo animal datasets, enhances cWIA robustness by significantly reducing the outcome variability (by 60%). PMID:25187852
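The parameter-free differentiator at the heart of this record is straightforward to reproduce. Below is a minimal sketch of a fourth-order central finite difference applied to a synthetic waveform; the interior stencil is the standard one, but the first-order boundary handling and the test signal are assumptions of this illustration, not details taken from the paper.

```python
import numpy as np

def central_diff_4th(y, dt):
    """Fourth-order central finite difference derivative of a uniformly
    sampled signal, with first-order one-sided estimates at the edges."""
    dy = np.empty_like(y)
    # interior: f'(x) ~ (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h)
    dy[2:-2] = (-y[4:] + 8 * y[3:-1] - 8 * y[1:-3] + y[:-4]) / (12 * dt)
    dy[:2] = (y[1:3] - y[0:2]) / dt      # forward differences at the start
    dy[-2:] = (y[-2:] - y[-3:-1]) / dt   # backward differences at the end
    return dy

# Illustrative ensemble-averaged pressure waveform (one cardiac cycle)
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
p = 80 + 40 * np.sin(2 * np.pi * t) ** 2
dp = central_diff_4th(p, dt)
print(dp[:5])
```

Unlike a Savitzky-Golay differentiator, this scheme has no window length or polynomial order to tune, which is the robustness argument the abstract makes.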
460. Multi-centennial upper-ocean heat content reconstruction using online data assimilation

NASA Astrophysics Data System (ADS)

Perkins, W. A.; Hakim, G. J.

2017-12-01

The Last Millennium Reanalysis (LMR) provides an advanced paleoclimate ensemble data assimilation framework for multi-variate climate field reconstructions over the Common Era. Although reconstructions in this framework with full Earth system models remain prohibitively expensive, recent work has shown improved ensemble reconstruction validation using computationally inexpensive linear inverse models (LIMs). Here we leverage these techniques in pursuit of a new multi-centennial field reconstruction of upper-ocean heat content (OHC), synthesizing model dynamics with observational constraints from proxy records. OHC is an important indicator of internal climate variability and responds to planetary energy imbalances; a consistent extension of the OHC record in time will therefore help inform aspects of low-frequency climate variability. We use the Community Climate System Model version 4 (CCSM4) and Max Planck Institute (MPI) last millennium simulations to derive the LIMs, and the PAGES2K v.2.0 proxy database to perform annually resolved reconstructions of upper-OHC, surface air temperature, and wind stress over the last 500 years. Annual OHC reconstructions and uncertainties for both the global mean and regional basins are compared against observational and reanalysis data. We then investigate differences in dynamical behavior at decadal and longer time scales between the reconstruction and simulations in the last-millennium Coupled Model Intercomparison Project version 5 (CMIP5). Preliminary investigation of 1-year forecast skill for an OHC-only LIM shows largely positive spatial grid point local anomaly correlations (LAC), with a global average LAC of 0.37. Compared to 1-year OHC persistence forecasts (global average LAC of 0.30), the LIM outperforms persistence in the tropical Indo-Pacific region, the equatorial Atlantic, and certain regions near the Antarctic Circumpolar Current.
In other regions, the forecast correlations are less than the persistence case but still positive overall.

461. Multi-model ensemble hydrologic prediction using Bayesian model averaging

NASA Astrophysics Data System (ADS)

Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh

2007-05-01

The multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of the Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighing individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split-sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
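The consensus step of BMA is easy to state in code. Assuming Gaussian conditional PDFs with known weights and spreads (in practice these are fit on a training period, e.g. by EM), the predictive mean and variance of the mixture follow directly; all numbers below are invented for the sketch.

```python
import numpy as np

def bma_moments(forecasts, weights, sigmas):
    """Predictive mean and variance of a Gaussian BMA mixture:
    p(y) = sum_k w_k * N(y; f_k, sigma_k^2)."""
    forecasts, weights, sigmas = map(np.asarray, (forecasts, weights, sigmas))
    mean = np.sum(weights * forecasts)
    # total variance = within-model spread + between-model spread
    var = np.sum(weights * sigmas**2) + np.sum(weights * (forecasts - mean)**2)
    return mean, var

# Hypothetical three-member hydrologic ensemble (m^3/s)
f = [120.0, 150.0, 135.0]
w = [0.5, 0.2, 0.3]       # BMA weights from a training period (assumed)
s = [15.0, 25.0, 18.0]    # per-model predictive spreads (assumed)
print(bma_moments(f, w, s))
```

The between-model term is what makes the BMA PDF wider than any single model's PDF when the members disagree, which is the source of the better calibrated total uncertainty the abstract describes.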
462. A hybrid method for classifying cognitive states from fMRI data.

PubMed

Parida, S; Dehuri, S; Cho, S-B; Cacha, L A; Poznanski, R R

2015-09-01

Functional magnetic resonance imaging (fMRI) makes it possible to detect brain activity in order to elucidate cognitive states. The complex nature of fMRI data requires understanding of the analyses applied to produce possible avenues for developing models of cognitive-state classification and improving brain activity prediction. While many models for the classification task in fMRI data analysis have been developed, in this paper we present a novel hybrid technique, combining the best attributes of genetic algorithms (GAs) and ensemble decision-tree techniques, that consistently outperforms the other methods in use for cognitive-state classification. Specifically, this paper illustrates the combined use of a decision-tree ensemble and GAs for feature selection through an extensive simulation study, and discusses the classification performance with respect to fMRI data. We show that our proposed method significantly reduces the number of features while holding a clear edge in classification accuracy over an ensemble of decision trees alone.

463. Quantum memory operations in a flux qubit - spin ensemble hybrid system

NASA Astrophysics Data System (ADS)

Saito, S.; Zhu, X.; Amsuss, R.; Matsuzaki, Y.; Kakuyanagi, K.; Shimo-Oka, T.; Mizuochi, N.; Nemoto, K.; Munro, W. J.; Semba, K.

2014-03-01

Superconducting quantum bits (qubits) are among the most promising candidates for a future large-scale quantum processor. However, for larger-scale realizations, the currently reported coherence times of these macroscopic objects have not yet reached those of microscopic systems (electron spins, nuclear spins, etc.). In this context, a superconductor-spin ensemble hybrid system has attracted considerable attention. The spin ensemble could operate as a quantum memory for superconducting qubits.
We have experimentally demonstrated quantum memory operations in a superconductor-diamond hybrid system. An excited state and a superposition state prepared in the flux qubit can be transferred to, stored in, and retrieved from the NV spin ensemble in diamond. From these experiments, we have found that the coherence time of the spin ensemble is limited by the inhomogeneous broadening of the electron spin (4.4 MHz) and by the hyperfine coupling to nitrogen nuclear spins (2.3 MHz). In the future, spin echo techniques could eliminate these effects and extend the coherence time. Our results are a significant first step towards utilizing the spin ensemble as a long-lived quantum memory for superconducting flux qubits. This work was supported by the FIRST program and NICT.

464. Alternative Approaches to Land Initialization for Seasonal Precipitation and Temperature Forecasts

NASA Technical Reports Server (NTRS)

Koster, Randal; Suarez, Max; Liu, Ping; Jambor, Urszula

2004-01-01

The seasonal prediction system of the NASA Global Modeling and Assimilation Office is used to generate ensembles of summer forecasts utilizing realistic soil moisture initialization. To derive the realistic land states, we drive the system's land model offline with realistic meteorological forcing over the period 1979-1993 (in cooperation with the Global Land Data Assimilation System project at GSFC) and then extract the state variables' values on the chosen forecast start dates. A parallel series of forecast ensembles is performed with a random (though climatologically consistent) set of land initial conditions; by comparing the two sets of ensembles, we can isolate the impact of land initialization on forecast skill from that of the imposed SSTs. The base initialization experiment is supplemented with several forecast ensembles that use alternative initialization techniques. One ensemble addresses the impact of minimizing climate drift in the system through the scaling of the initial conditions, and another is designed to isolate the importance of the precipitation signal from that of all other signals in the antecedent offline forcing. A third ensemble includes a more realistic initialization of the atmosphere along with the land initialization.
The impact of each variation on forecast skill is quantified.

465. Total probabilities of ensemble runoff forecasts

NASA Astrophysics Data System (ADS)

Skøien, Jon Olav; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

2016-04-01

Ensemble forecasting has long been used in meteorological modelling to indicate the uncertainty of forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian model averaging (Raftery et al., 2005) and ensemble model output statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters that differ in space and time while still giving spatially and temporally consistent output. However, their method is computationally complex for our larger number of stations and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows than the forecast skill at lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to find a total probability, the post-processed mean, and the uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, while assuring that they have some spatial correlation by adding a spatial penalty in the calibration process. This can in some cases have a slight negative impact on the calibration error, but makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecast skill in general and for floods in particular.

References:
Berrocal, V. J., Raftery, A. E., and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007.
Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014.
Gneiting, T., Raftery, A. E., Westveld, A. H., and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005.
Hemri, S., Fundel, F., and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013.
Raftery, A. E., Gneiting, T., Balabdaoui, F., and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.
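As a concrete illustration of the EMOS post-processing cited above (Gneiting et al., 2005), the sketch below fits the four parameters of a Gaussian predictive distribution by minimizing the closed-form CRPS over a synthetic training set; the data, the optimizer choice, and the variance floor are assumptions of the example, not details from this abstract.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a normal predictive distribution N(mu, sigma^2)."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))

def fit_emos(ens_mean, ens_var, obs):
    """EMOS form N(a + b*ens_mean, c + d*ens_var); parameters by minimum CRPS."""
    def loss(p):
        a, b, c, d = p
        mu = a + b * ens_mean
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))  # variance floor
        return crps_gaussian(mu, sigma, obs).mean()
    res = minimize(loss, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
    return res.x

# Hypothetical training data: raw ensemble stats and verifying runoff
rng = np.random.default_rng(0)
truth = rng.gamma(4.0, 25.0, size=500)
ens_mean = 0.8 * truth + rng.normal(0, 10, 500)   # biased raw forecasts
ens_var = np.full(500, 64.0)                      # underdispersive spread
print(fit_emos(ens_mean, ens_var, truth))         # fitted a, b, c, d
```

Because runoff errors are far from Gaussian, in practice a transformation of the kind discussed in the abstract would be applied before such a fit.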
466. Boiling point determination using adiabatic Gibbs ensemble Monte Carlo simulations: application to metals described by embedded-atom potentials.

PubMed

Gelb, Lev D; Chakraborty, Somendra Nath

2011-12-14

The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton-Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase. © 2011 American Institute of Physics

467. Simulating ensembles of source water quality using a K-nearest neighbor resampling approach.

PubMed

Towler, Erin; Rajagopalan, Balaji; Seidel, Chad; Summers, R Scott

2009-03-01

Climatological, geological, and water management factors can cause significant variability in surface water quality. As drinking water quality standards become more stringent, the ability to quantify the variability of source water quality becomes more important for decision-making and planning in water treatment for regulatory compliance. However, the paucity of long-term water quality data makes it challenging to apply traditional simulation techniques. To overcome this limitation, we have developed and applied a robust nonparametric K-nearest neighbor (K-nn) bootstrap approach utilizing the United States Environmental Protection Agency's Information Collection Rule (ICR) data. In this technique, an appropriate "feature vector" is first formed from the best available explanatory variables. The nearest neighbors to the feature vector are identified from the ICR data and are resampled using a weight function. Repeating this process yields water quality ensembles, and consequently the distribution and quantification of the variability. The main strengths of the approach are its flexibility, simplicity, and the ability to use a large amount of spatial data with limited temporal extent to provide water quality ensembles for any given location. We demonstrate this approach by applying it to simulate monthly ensembles of total organic carbon for two utilities in the U.S. with very different watersheds, and to alkalinity and bromide at two other U.S. utilities.
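A minimal version of the K-nn bootstrap is sketched below. The decreasing 1/rank kernel and the sqrt(n) neighborhood size are common defaults in the K-nn resampling literature rather than details confirmed by this abstract, and the feature and TOC data are synthetic stand-ins for the ICR records.

```python
import numpy as np

def knn_resample(feature, library_features, library_values, k=None,
                 n_draws=1000, rng=None):
    """K-nearest-neighbor bootstrap: resample historical values whose
    feature vectors are close to the conditioning feature, using a
    decreasing kernel w(j) ~ 1/j over neighbor ranks."""
    rng = rng or np.random.default_rng()
    n = len(library_values)
    k = k or int(np.sqrt(n))                 # heuristic neighborhood size
    dist = np.linalg.norm(library_features - feature, axis=1)
    order = np.argsort(dist)[:k]             # indices of k nearest neighbors
    w = 1.0 / np.arange(1, k + 1)            # rank-based resampling weights
    w /= w.sum()
    return library_values[rng.choice(order, size=n_draws, p=w)]

# Hypothetical library: (flow, temperature) features -> total organic carbon
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
toc = 3.0 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, 200)
ensemble = knn_resample(np.array([0.5, -0.2]), X, toc, rng=rng)
print(ensemble.mean(), np.percentile(ensemble, [5, 95]))
```

Because it resamples observed values rather than fitting a distribution, the method needs no assumption about the shape of the water quality variable, which is the flexibility the abstract emphasizes.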
468. Simulation of an ensemble of future climate time series with an hourly weather generator

NASA Astrophysics Data System (ADS)

Caporali, E.; Fatichi, S.; Ivanov, V. Y.; Kim, J.

2010-12-01

There is evidence that climate change is occurring in many regions of the world. The necessity of climate change predictions at the local scale and fine temporal resolution is thus warranted for hydrological, ecological, geomorphological, and agricultural applications that can provide thematic insights into the corresponding impacts. Numerous downscaling techniques have been proposed to bridge the gap between the spatial scales adopted in General Circulation Models (GCMs) and regional analyses. Nevertheless, the time and spatial resolutions obtained, as well as the type of meteorological variables, may not be sufficient for detailed studies of climate change effects at the local scale. In this context, this study presents a stochastic downscaling technique that makes use of an hourly weather generator to simulate time series of predicted future climate. Using a Bayesian approach, the downscaling procedure derives distributions of factors of change for several climate statistics from a multi-model ensemble of GCMs. Factors of change are sampled from their distributions using a Monte Carlo technique to fully account for the probabilistic information obtained with the Bayesian multi-model ensemble. Factors of change are subsequently applied to the statistics derived from observations to re-evaluate the parameters of the weather generator. The weather generator can reproduce a wide set of climate variables and statistics over a range of temporal scales, from extremes to the low-frequency inter-annual variability. The final result of such a procedure is the generation of an ensemble of hourly time series of meteorological variables that can be considered representative of future climate, as inferred from GCMs. The generated ensemble of scenarios also accounts for the uncertainty derived from the multiple GCMs used in downscaling. Applications of the procedure in reproducing present and future climates are presented for different locations worldwide: Tucson (AZ), Detroit (MI), and Firenze (Italy).
The stochastic downscaling is carried out with eight GCMs from the CMIP3 multi-model dataset (IPCC 4AR, A1B scenario).

469. Ensemble-type numerical uncertainty information from single model integrations

DOE Office of Scientific and Technical Information (OSTI.GOV)

Rauser, Florian; Marotzke, Jochem; Korn, Peter

2015-07-01

We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.

470. Characterizing sources of uncertainty from global climate models and downscaling techniques

USGS Publications Warehouse

Wootten, Adrienne; Terando, Adam; Reich, Brian J.; Boyles, Ryan; Semazzi, Fred

2017-01-01

In recent years climate model experiments have been increasingly oriented towards providing information that can support local and regional adaptation to the expected impacts of anthropogenic climate change. This shift has magnified the importance of downscaling as a means to translate coarse-scale global climate model (GCM) output to a finer scale that more closely matches the scale of interest. Applying this technique, however, introduces a new source of uncertainty into any resulting climate model ensemble. Here we present a method, based on a previously established variance decomposition method, to partition and quantify the uncertainty in climate model ensembles that is attributable to downscaling. We apply the method to the Southeast U.S. using five downscaled datasets that represent both statistical and dynamical downscaling techniques. The combined ensemble is highly fragmented, in that only a small portion of the complete set of downscaled GCMs and emission scenarios is typically available. The results indicate that the uncertainty attributable to downscaling approaches ~20% for large areas of the Southeast U.S. for precipitation and ~30% for extreme heat days (> 35°C) in the Appalachian Mountains. However, attributable quantities are significantly lower for time periods when the full ensemble is considered but only a sub-sample of all models is available, suggesting that overconfidence could be a serious problem in studies that employ a single set of downscaled GCMs. We conclude with recommendations to advance the design of climate model experiments so that the uncertainty that accrues when downscaling is employed is more fully and systematically considered.
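One simple way to see how such a variance decomposition works is a two-way ANOVA-style partition on a complete GCM-by-downscaling matrix of projections, as sketched below. The study's actual method handles a fragmented ensemble; the complete factorial design, the decomposition details, and all numbers here are illustrative assumptions.

```python
import numpy as np

# Hypothetical projections (°C) on a GCM x downscaling-method grid;
# a complete factorial design, unlike the fragmented real ensemble.
proj = np.array([
    [2.1, 2.4, 1.9],   # GCM 1 under three downscaling methods
    [2.8, 3.1, 2.6],   # GCM 2
    [1.7, 2.2, 1.5],   # GCM 3
])

grand = proj.mean()
# main effects of each factor (two-way layout without replication)
gcm_effect = proj.mean(axis=1) - grand
ds_effect = proj.mean(axis=0) - grand
resid = proj - grand - gcm_effect[:, None] - ds_effect[None, :]

var_gcm = np.var(gcm_effect)    # spread attributable to GCM choice
var_ds = np.var(ds_effect)      # spread attributable to downscaling choice
var_int = np.var(resid)         # interaction / residual spread
total = var_gcm + var_ds + var_int
print(f"downscaling share of variance: {var_ds / total:.1%}")
```

The fragmentation problem the abstract highlights arises precisely because, with missing GCM-downscaling combinations, these main effects can no longer be separated this cleanly.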
471. Pairwise Classifier Ensemble with Adaptive Sub-Classifiers for fMRI Pattern Analysis.

PubMed

Kim, Eunwoo; Park, HyunWook

2017-02-01

The multi-voxel pattern analysis technique is applied to fMRI data for classification of high-level brain functions using pattern information distributed over multiple voxels. In this paper, we propose a classifier ensemble for multiclass classification in fMRI analysis, exploiting the fact that specific neighboring voxels can contain spatial pattern information. The proposed method converts the multiclass classification to a pairwise classifier ensemble, and each pairwise classifier consists of multiple sub-classifiers using an adaptive feature set for each class pair. Simulated and real fMRI data were used to verify the proposed method. Intra- and inter-subject analyses were performed to compare the proposed method with several well-known classifiers, including single and ensemble classifiers. The comparison results showed that the proposed method can be generally applied to multiclass classification in both simulations and real fMRI analyses.

472. Metainference: A Bayesian inference method for heterogeneous systems.

PubMed

Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

2016-01-01

Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system.
This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaged modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components that interconvert between different states, taking into account all possible sources of errors.

473. Synchronized Trajectories in a Climate "Supermodel"

NASA Astrophysics Data System (ADS)

Duane, Gregory; Schevenhoven, Francine; Selten, Frank

2017-04-01

Differences in climate projections among state-of-the-art models can be resolved by connecting the models at run time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. Since it is clearly established that averaging model outputs typically yields an improvement over any individual model output, averaged re-initializations at typical analysis time intervals also seem appropriate. The resulting "supermodel" is more like a single model than an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest, such as probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.
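The synchronization idea behind a supermodel can be demonstrated with a toy system. The sketch below nudges two imperfect Lorenz-63 copies toward each other with a connection coefficient K; the SPEEDO-based supermodel is far richer, so everything here (parameter sets, coupling strength, the Euler integrator) is an assumption for illustration only.

```python
import numpy as np

def lorenz(state, sigma, rho, beta):
    """Lorenz-63 tendencies for one model copy."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def supermodel_step(states, params, K, dt):
    """Euler step of two imperfect models nudged toward each other;
    K controls how strongly the trajectories are pulled together."""
    s1, s2 = states
    d1 = lorenz(s1, *params[0]) + K * (s2 - s1)
    d2 = lorenz(s2, *params[1]) + K * (s1 - s2)
    return [s1 + dt * d1, s2 + dt * d2]

# Two "imperfect" parameter sets bracketing the truth (10, 28, 8/3)
params = [(9.0, 26.0, 2.5), (11.0, 30.0, 2.9)]
states = [np.array([1.0, 1.0, 1.0]), np.array([2.0, -1.0, 0.5])]
for _ in range(5000):
    states = supermodel_step(states, params, K=5.0, dt=0.001)
print("inter-model distance:", np.linalg.norm(states[0] - states[1]))
```

Once the copies synchronize, the coupled system traces a single trajectory, which is why a supermodel can describe specific events rather than only ensemble statistics.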
474. Effects of bleed-hole geometry and plenum pressure on three-dimensional shock-wave/boundary-layer/bleed interactions

NASA Technical Reports Server (NTRS)

Chyu, Wei J.; Rimlinger, Mark J.; Shih, Tom I.-P.

1993-01-01

A numerical study was performed to investigate 3D shock-wave/boundary-layer interactions on a flat plate with bleed through one or more circular holes that vent into a plenum. This study was focused on how bleed-hole geometry and the pressure ratio across the bleed holes affect the bleed rate and the physics of the flow in the vicinity of the holes. The aspects of the bleed-hole geometry investigated include the angle of the bleed hole and the number of bleed holes. The plenum/freestream pressure ratios investigated range from 0.3 to 1.7. This study is based on the ensemble-averaged, 'full compressible' Navier-Stokes (N-S) equations closed by the Baldwin-Lomax algebraic turbulence model. Solutions to the ensemble-averaged N-S equations were obtained by an implicit finite-volume method using the partially split, two-factored algorithm of Steger on an overlapping Chimera grid.

475. Optimized nested Markov chain Monte Carlo sampling: theory

DOE Office of Scientific and Technical Information (OSTI.GOV)

Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D

2009-01-01

Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system, we maximize the average acceptance probability of composite moves, significantly lengthening the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
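A bare-bones canonical-ensemble (NVT) analogue of the nested scheme is sketched below: a short Metropolis sub-chain samples a cheap reference potential, and the composite move is then accepted using the difference of full and reference energy changes. The isothermal-isobaric bookkeeping and the reference-system optimization that the paper develops are omitted, and both potentials are toy functions.

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0              # inverse temperature of the target system

def e_full(x):          # expensive "full" potential (illustrative)
    return 0.5 * x**2 + 0.1 * x**4

def e_ref(x):           # cheap reference potential
    return 0.5 * x**2

def nested_move(x, n_sub=20, step=0.5):
    """Run a short Metropolis sub-chain on the reference potential, then
    accept the composite move with the difference of full and reference
    energy changes, correcting the reference-sampled path exactly."""
    y = x
    for _ in range(n_sub):                       # sub-chain sees only e_ref
        t = y + rng.uniform(-step, step)
        if rng.random() < np.exp(-beta * (e_ref(t) - e_ref(y))):
            y = t
    d_full = e_full(y) - e_full(x)
    d_ref = e_ref(y) - e_ref(x)
    if rng.random() < np.exp(-beta * (d_full - d_ref)):
        return y                                 # composite move accepted
    return x                                     # rejected: keep endpoint

x, samples = 0.0, []
for _ in range(2000):
    x = nested_move(x)
    samples.append(x)
print("<x^2> under full potential:", np.mean(np.square(samples)))
```

The full energy is evaluated only once per composite move, which is the source of the efficiency gain when e_full is an expensive electronic-structure calculation.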
476. Differences in single and aggregated nanoparticle plasmon spectroscopy.

PubMed

Singh, Pushkar; Deckert-Gaudig, Tanja; Schneidewind, Henrik; Kirsch, Konstantin; van Schrojenstein Lantman, Evelien M; Weckhuysen, Bert M; Deckert, Volker

2015-02-07

Vibrational spectroscopy usually provides structural information averaged over many molecules. We report a larger peak-position variation and reproducibly smaller FWHM of TERS spectra compared to SERS spectra, indicating that the number of molecules excited in a TERS experiment is extremely low. Thus, orientational averaging effects are suppressed and micro-ensembles are investigated. This is shown for a thiophenol molecule adsorbed on Au nanoplates and nanoparticles.

477. A new method for determining the optimal lagged ensemble

PubMed Central

DelSole, T.; Tippett, M. K.; Pegion, K.

2017-01-01

We propose a general methodology for determining the lagged ensemble that minimizes the mean square forecast error (MSE). The MSE of a lagged ensemble is shown to depend only on a quantity called the cross-lead error covariance matrix, which can be estimated from a short hindcast dataset and parameterized in terms of analytic functions of time. The resulting parameterization allows the skill of forecasts to be evaluated for an arbitrary ensemble size and initialization frequency. Remarkably, the parameterization also can estimate the MSE of a burst ensemble simply by taking the limit of an infinitely small interval between initialization times. This methodology is applied to forecasts of the Madden-Julian Oscillation (MJO) from the Climate Forecast System version 2 (CFSv2). For leads greater than a week, little improvement is found in MJO forecast skill when ensembles larger than 5 days are used or initializations more frequent than 4 times per day. We find that if the initialization frequency is too infrequent, important structures of the lagged error covariance matrix are lost. Lastly, we demonstrate that the forecast error at leads ≥ 10 days can be reduced by optimally weighting the lagged ensemble members. The weights are shown to depend only on the cross-lead error covariance matrix. While the methodology developed here is applied to CFSv2, the technique can be easily adapted to other forecast systems. PMID:28580050
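The weighting result has a compact closed form: for a cross-lead error covariance matrix C, the MSE of a weighted lagged-ensemble mean with weights w is wᵀCw, and minimizing it subject to the weights summing to one gives w = C⁻¹1 / (1ᵀC⁻¹1). The sketch below uses an invented covariance matrix to compare equal and optimal weights; the structure of C is an assumption of the example.

```python
import numpy as np

def lagged_ensemble_mse(C, w=None):
    """MSE of a weighted lagged-ensemble mean, given the cross-lead error
    covariance matrix C (C[i, j] = covariance of the errors of members
    initialized i and j steps before the verification time)."""
    L = C.shape[0]
    w = np.full(L, 1.0 / L) if w is None else np.asarray(w)
    return w @ C @ w

def optimal_weights(C):
    """Minimum-MSE weights under sum(w) = 1: the standard equality-
    constrained quadratic program, solved in closed form."""
    ones = np.ones(C.shape[0])
    x = np.linalg.solve(C, ones)
    return x / (ones @ x)

# Hypothetical 4-member cross-lead error covariance: older members have
# larger error variance, and errors are correlated across leads.
sig = np.array([1.0, 1.1, 1.3, 1.6])
corr = 0.6 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
C = corr * np.outer(sig, sig)
print("equal-weight MSE:  ", lagged_ensemble_mse(C))
print("optimal-weight MSE:", lagged_ensemble_mse(C, optimal_weights(C)))
```

Downweighting the older, noisier members reduces the MSE relative to the equally weighted mean, which is the effect the abstract reports at leads of 10 days and beyond.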
478. Three-model ensemble wind prediction in southern Italy

NASA Astrophysics Data System (ADS)

Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo

2016-03-01

The quality of wind prediction is of great importance, since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at the National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy, and the measurements used for forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30%, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecasts) for surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
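A minimal sketch of the bias-removal step behind an "unbiased model" ensemble is shown below: each model's mean bias over a training period is subtracted before the equally weighted mean is taken. The single fixed training window here is a simplification of the study's setup, and all numbers are synthetic.

```python
import numpy as np

def three_model_ensemble(fcst_train, obs_train, fcst_new):
    """Equally weighted multi-model mean after removing each model's mean
    bias estimated over a training period (one simple way to build the
    unbiased ensemble the study describes)."""
    bias = fcst_train.mean(axis=1) - obs_train.mean()   # per-model bias
    return (fcst_new - bias).mean()

# Hypothetical 10 m wind speed (m/s): 3 models x 30 training forecasts
rng = np.random.default_rng(3)
obs = rng.gamma(9.0, 0.5, 30)
model_bias = np.array([0.8, -0.5, 0.3])                 # assumed true biases
train = obs + model_bias[:, None] + rng.normal(0, 0.6, (3, 30))
new = np.array([5.2, 4.1, 4.9])                         # today's raw forecasts
print("TME forecast:", three_model_ensemble(train, obs, new))
```

The training-period length matters because the bias estimates are themselves noisy, which is presumably why the abstract reports a sensitivity analysis on it.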
479. Comparison of Basic and Ensemble Data Mining Methods in Predicting 5-Year Survival of Colorectal Cancer Patients.

PubMed

Pourhoseingholi, Mohamad Amin; Kheirian, Sedigheh; Zali, Mohammad Reza

2017-12-01

Colorectal cancer (CRC) is one of the most common malignancies and causes of cancer mortality worldwide. Given the importance of predicting the survival of CRC patients and the growing use of data mining methods, this study aims to compare the performance of models for predicting 5-year survival of CRC patients using a variety of basic and ensemble data mining methods. The CRC dataset from the Shahid Beheshti University of Medical Sciences Research Center for Gastroenterology and Liver Diseases was used for the prediction and comparative study of the basic and ensemble data mining techniques. Feature selection methods were used to select predictor attributes for classification. The WEKA toolkit and MedCalc software were used, respectively, for creating and comparing the models. The results showed that the predictive performance of the developed models was altogether high (all greater than 90%). Overall, the performance of ensemble models was higher than that of basic classifiers, and the best result was achieved by the ensemble voting model in terms of area under the ROC curve (AUC = 0.96). AUC comparison of the models showed that the ensemble voting method significantly outperformed all models except the Random Forest (RF) and Bayesian Network (BN) methods, considering their overlapping 95% confidence intervals. This result may indicate the high predictive power of these two methods, along with ensemble voting, for predicting 5-year survival of CRC patients.

480. On the non-stationary generalized Langevin equation

NASA Astrophysics Data System (ADS)

Meyer, Hugues; Voigtmann, Thomas; Schilling, Tanja

2017-12-01

In molecular dynamics simulations and single-molecule experiments, observables are usually measured along dynamic trajectories and then averaged over an ensemble ("bundle") of trajectories. Under stationary conditions, the time evolution of such averages is described by the generalized Langevin equation. By contrast, if the dynamics is not stationary, it is not a priori clear which form the equation of motion for an averaged observable takes. We employ the formalism of time-dependent projection operator techniques to derive the equation of motion for a non-equilibrium trajectory-averaged observable as well as for its non-stationary auto-correlation function. The equation is similar in structure to the generalized Langevin equation but exhibits a time-dependent memory kernel as well as a fluctuating force that implicitly depends on the initial conditions of the process. We also derive a relation between this memory kernel and the autocorrelation function of the fluctuating force that has a structure similar to a fluctuation-dissipation relation. In addition, we show how the choice of the projection operator allows us to relate the Taylor expansion of the memory kernel to data that are accessible in MD simulations and experiments, thus allowing us to construct the equation of motion.
As a numerical example, the procedure is applied to Brownian motion initialized in non-equilibrium conditions and is shown to be consistent with direct measurements from simulations.

481. Revisiting the synoptic-scale predictability of severe European winter storms using ECMWF ensemble reforecasts

NASA Astrophysics Data System (ADS)

Pantillon, Florian; Knippertz, Peter; Corsmeier, Ulrich

2017-10-01

New insights into the synoptic-scale predictability of 25 severe European winter storms of the 1995-2015 period are obtained using the homogeneous ensemble reforecast dataset from the European Centre for Medium-Range Weather Forecasts. The predictability of the storms is assessed with different metrics, including (a) the track and intensity, to investigate the storms' dynamics, and (b) the Storm Severity Index, to estimate the impact of the associated wind gusts. The storms are well predicted by the whole ensemble up to 2-4 days ahead. At longer lead times, the number of members predicting the observed storms decreases, and the ensemble average is not clearly defined for the track and intensity. The Extreme Forecast Index and Shift of Tails are therefore computed from the deviation of the ensemble from the model climate. Based on these indices, the model has some skill in forecasting the area covered by extreme wind gusts up to 10 days ahead, which indicates a clear potential for early warnings. However, large variability is found between the individual storms. The poor predictability of outliers appears related to their physical characteristics, such as explosive intensification or small size.
Longer datasets with more cases would be needed to further substantiate these points.

482. Single Aerosol Particle Studies Using Optical Trapping Raman and Cavity Ringdown Spectroscopy

NASA Astrophysics Data System (ADS)

Gong, Z.; Wang, C.; Pan, Y. L.; Videen, G.

2017-12-01

Due to the physical and chemical complexity of aerosol particles and the interdisciplinary nature of aerosol science, which involves physics, chemistry, and biology, our knowledge of aerosol particles is rather incomplete; our current understanding is limited by averaged (over size, composition, shape, and orientation) and/or ensemble (over time, size, and multi-particle) measurements. Physically, single aerosol particles are the fundamental units of any large aerosol ensemble. Chemically, single aerosol particles carry individual chemical components (properties and constituents) in particle ensemble processes. Therefore, the study of single aerosol particles can bridge the gap between aerosol ensembles and bulk/surface properties and provide a hierarchical progression from a simple benchmark single-component system to a mixed-phase multicomponent system. A single aerosol particle can be an effective reactor for studying heterogeneous surface chemistry in multiple phases. The latest technological advances provide exciting new opportunities to study single aerosol particles and to further develop single-aerosol-particle instrumentation. We present updates on our recent studies of single aerosol particles optically trapped in air using optical-trapping Raman and cavity ringdown spectroscopy.

483. Cortical ensemble activity increasingly predicts behaviour outcomes during learning of a motor task

NASA Astrophysics Data System (ADS)

Laubach, Mark; Wessberg, Johan; Nicolelis, Miguel A. L.

2000-06-01

When an animal learns to make movements in response to different stimuli, changes in activity in the motor cortex seem to accompany and underlie this learning. The precise nature of modifications in cortical motor areas during the initial stages of motor learning, however, is largely unknown. Here we address this issue by chronically recording from neuronal ensembles located in the rat motor cortex, throughout the period required for rats to learn a reaction-time task. Motor learning was demonstrated by a decrease in the variance of the rats' reaction times and an increase in the time the animals were able to wait for a trigger stimulus. These behavioural changes were correlated with a significant increase in our ability to predict the correct or incorrect outcome of single trials based on three measures of neuronal ensemble activity: average firing rate, temporal patterns of firing, and correlated firing.
This increase in prediction indicates that an association between sensory cues and movement emerged in the motor cortex as the task was learned. Such modifications in cortical ensemble activity may be critical for the initial learning of motor tasks.

484. Decimated Input Ensembles for Improved Generalization

NASA Technical Reports Server (NTRS)

Tumer, Kagan; Oza, Nikunj C.; Norvig, Peter (Technical Monitor)

1999-01-01

Recently, many researchers have demonstrated that using classifier ensembles (e.g., averaging the outputs of multiple classifiers before reaching a classification decision) leads to improved performance for many difficult generalization problems. However, in many domains there are serious impediments to such "turnkey" classification accuracy improvements. Most notable among these is the deleterious effect of highly correlated classifiers on ensemble performance. One particular solution to this problem is generating "new" training sets by sampling the original one. However, with a finite number of patterns, this reduces the number of training patterns each classifier sees, often resulting in considerably worsened generalization performance for each individual classifier (particularly for high-dimensional data domains). Generally, this drop in individual classifier accuracy more than offsets any potential gains from combining, unless diversity among classifiers is actively promoted. In this work, we introduce a method that: (1) reduces the correlation among the classifiers; (2) reduces the dimensionality of the data, thus lessening the impact of the 'curse of dimensionality'; and (3) improves the classification performance of the ensemble.
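The decimation idea can be sketched with scikit-learn. Below, each ensemble member is trained on a subset of the input features, which both decorrelates the members and reduces dimensionality; the random subsets stand in for the informed, per-classifier feature selection of the paper, and the dataset is synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic high-dimensional data standing in for the paper's domains
X, y = make_classification(n_samples=600, n_features=60, n_informative=15,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
probs = []
for _ in range(15):
    # each member sees a decimated input: a subset of the features,
    # which decorrelates the members and shrinks their input dimension
    cols = rng.choice(X.shape[1], size=20, replace=False)
    clf = LogisticRegression(max_iter=1000).fit(Xtr[:, cols], ytr)
    probs.append(clf.predict_proba(Xte[:, cols])[:, 1])

# average the members' output probabilities before thresholding
pred = (np.mean(probs, axis=0) > 0.5).astype(int)
print("ensemble accuracy:", (pred == yte).mean())
```

Unlike bagging, every member still trains on all patterns, so no training data is sacrificed to obtain the diversity.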
CABS-flex predictions of protein flexibility compared with NMR ensembles

PubMed Central

Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian

2014-01-01

Motivation: Identification of flexible regions of protein structures is important for understanding their biological functions. Recently, we have developed a fast approach for predicting protein structure fluctuations from a single protein model: CABS-flex. CABS-flex was shown to be an efficient alternative to conventional all-atom molecular dynamics (MD). In this work, we evaluate CABS-flex and MD predictions by comparison with protein structural variations within NMR ensembles. Results: Based on a benchmark set of 140 proteins, we show that the relative fluctuations of protein residues obtained from CABS-flex are well correlated with those of NMR ensembles. On average, this correlation is stronger than that between MD and NMR ensembles. In conclusion, CABS-flex is useful and complementary to MD in predicting protein regions that undergo conformational changes as well as the extent of such changes. Availability and implementation: CABS-flex is freely available to all users at http://biocomp.chem.uw.edu.pl/CABSflex. Contact: sekmi@chem.uw.edu.pl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24735558

CABS-flex predictions of protein flexibility compared with NMR ensembles.

PubMed

Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian

2014-08-01

Identification of flexible regions of protein structures is important for understanding their biological functions. Recently, we have developed a fast approach for predicting protein structure fluctuations from a single protein model: CABS-flex. CABS-flex was shown to be an efficient alternative to conventional all-atom molecular dynamics (MD). In this work, we evaluate CABS-flex and MD predictions by comparison with protein structural variations within NMR ensembles. Based on a benchmark set of 140 proteins, we show that the relative fluctuations of protein residues obtained from CABS-flex are well correlated with those of NMR ensembles. On average, this correlation is stronger than that between MD and NMR ensembles. In conclusion, CABS-flex is useful and complementary to MD in predicting protein regions that undergo conformational changes as well as the extent of such changes. CABS-flex is freely available at http://biocomp.chem.uw.edu.pl/CABSflex. Contact: sekmi@chem.uw.edu.pl. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

Predicting drug-induced liver injury using ensemble learning methods and molecular fingerprints.

PubMed

Ai, Haixin; Chen, Wen; Zhang, Li; Huang, Liangchao; Yin, Zimo; Hu, Huan; Zhao, Qi; Zhao, Jian; Liu, Hongsheng

2018-05-21

Drug-induced liver injury (DILI) is a major safety concern in the drug-development process, and various methods have been proposed to predict the hepatotoxicity of compounds during the early stages of drug trials. In this study, we developed an ensemble model using three machine learning algorithms and 12 molecular fingerprints from a dataset containing 1,241 diverse compounds. The ensemble model achieved an average accuracy of 71.1±2.6%, sensitivity of 79.9±3.6%, specificity of 60.3±4.8%, and area under the receiver operating characteristic curve (AUC) of 0.764±0.026 in five-fold cross-validation, and an accuracy of 84.3%, sensitivity of 86.9%, specificity of 75.4%, and AUC of 0.904 on an external validation dataset of 286 compounds collected from the Liver Toxicity Knowledge Base (LTKB). Compared with previous methods, the ensemble model achieved relatively high accuracy and sensitivity. We also identified several substructures related to DILI. In addition, we provide a web server offering access to our models (http://ccsipb.lnu.edu.cn/toxicity/HepatoPred-EL/).
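The setup described here (several learners whose probabilistic outputs are combined, scored by five-fold cross-validation) can be sketched with scikit-learn's soft-voting ensemble. The random binary matrix stands in for the molecular fingerprints, and the three estimators chosen below are assumptions for illustration, not necessarily the paper's algorithms.

```python
# Soft-voting ensemble sketch with five-fold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(1241, 256)).astype(float)  # stand-in fingerprints
y = rng.integers(0, 2, size=1241)                       # stand-in DILI labels

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC(probability=True))],
    voting="soft")  # average the members' predicted probabilities

print(cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc").mean())
```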
A framework of multitemplate ensemble for fingerprint verification

NASA Astrophysics Data System (ADS)

Yin, Yilong; Ning, Yanbin; Ren, Chunxiao; Liu, Li

2012-12-01

How to improve the performance of an automatic fingerprint verification system (AFVS) is a persistent challenge in the biometric verification field. Recently, it has become popular to improve AFVS performance by using ensemble learning to fuse related fingerprint information. In this article, we propose a novel fingerprint verification framework based on the multitemplate ensemble method. The framework consists of three stages. In the first stage, enrollment, an effective template selection method picks the fingerprints that best represent a finger; a polyhedron is then created from the matching results of the multiple template fingerprints, and a virtual centroid of the polyhedron is computed. In the second stage, verification, we measure the distance between the centroid of the polyhedron and a query image. In the final stage, a fusion rule chooses a proper distance from a distance set. Experimental results on the FVC2004 database demonstrate the improved effectiveness of the new framework in fingerprint verification. With a minutiae-based matching method, the average EER over the four FVC2004 databases drops from 10.85 to 0.88, and with a ridge-based matching method, the average EER of these four databases decreases from 14.58 to 2.51.
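A hedged sketch of the enrollment/verification geometry: the enrolled templates are embedded via their match-score vectors, their mean acts as the "virtual centroid of the polyhedron", and a query is accepted by its distance to that centroid. The match() function and the threshold are hypothetical placeholders for a real fingerprint matcher and a tuned operating point.

```python
# Multitemplate-centroid sketch; match() is a hypothetical matcher.
import numpy as np

def match(query, templates):
    """Hypothetical matcher: similarity score of `query` against each template."""
    return np.array([float(np.dot(query, t)) for t in templates])

templates = [np.random.rand(64) for _ in range(4)]   # enrolled templates
# Vertices of the 'polyhedron': each template matched against all templates.
vertices = np.stack([match(t, templates) for t in templates])
centroid = vertices.mean(axis=0)                     # virtual centroid

def verify(query, threshold):
    # Accept if the query's score vector lies close to the centroid.
    return np.linalg.norm(match(query, templates) - centroid) < threshold
```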
Furthermore, these approaches may contribute to our understanding of the pathological mechanisms associated with human memory disorders and, ultimately, may aid the development of therapeutic strategies to ameliorate these diseases.

Optical properties of an atomic ensemble coupled to a band edge of a photonic crystal waveguide

NASA Astrophysics Data System (ADS)

Munro, Ewan; Kwek, Leong Chuan; Chang, Darrick E.

2017-08-01

We study the optical properties of an ensemble of two-level atoms coupled to a 1D photonic crystal waveguide (PCW), which mediates long-range coherent dipole-dipole interactions between the atoms. We show that the long-range interactions can dramatically alter the linear and nonlinear optical behavior compared to a typical atomic ensemble. In particular, in the linear regime, we find that the transmission spectrum contains multiple transmission dips, whose properties we characterize. Moreover, we show how the linear spectrum may be used to infer the number of atoms present in the system, constituting an important experimental tool in a regime where techniques for conventional ensembles break down. We also show that some of the transmission dips are associated with an effective 'two-level' resonance that forms due to the long-range interactions. In particular, under strong global driving and appropriate conditions, we find that the atomic ensemble is only capable of absorbing and emitting single collective excitations at a time. Our results are of direct relevance to atom-PCW experiments that should soon be realizable.

Internal variability of a dynamically downscaled climate over North America

DOE Office of Scientific and Technical Information (OSTI.GOV)

Wang, Jiali; Bessac, Julie; Kotamarthi, Rao

This study investigates the internal variability (IV) of a regional climate model and considers the impacts of horizontal resolution and spectral nudging on the IV. A 16-member simulation ensemble was conducted using the Weather Research and Forecasting model for three model configurations: simulations at spatial resolutions of 50 km and 12 km without spectral nudging, and simulations at a spatial resolution of 12 km with spectral nudging. All the simulations were generated over the same domain, which covered much of North America. The degree of IV was measured as the spread between the individual members of the ensemble during the integration period. The IV of the 12 km simulation with spectral nudging was also compared with a future climate change simulation projected by the same model configuration. The analysis focuses on precipitation and near-surface air temperature. While the IVs show a clear annual cycle with larger values in summer and smaller values in winter, the seasonal IV is smaller at 50-km spatial resolution than at 12-km resolution when nudging is not applied. Applying spectral nudging to the 12-km simulation reduces the IV by a factor of two and produces smaller IV than the 50-km simulation without nudging. Nudging also changes the geographic distribution of IV in all examined variables. The IV is much smaller than the inter-annual variability at seasonal scales for regionally averaged temperature and precipitation. The IV is also smaller than the projected changes in air temperature for the mid- and late 21st century. However, the IV is larger than the projected changes in precipitation for the mid- and late 21st century.
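Measuring IV as the spread between ensemble members is commonly implemented as a standard deviation across the member dimension at each grid point and time. A toy sketch on a synthetic (member, time, lat, lon) array; the array shape and the monthly layout are illustrative assumptions:

```python
# Internal variability as ensemble spread across members.
import numpy as np

sims = np.random.rand(16, 120, 50, 60)        # 16 members, 120 monthly fields
iv = sims.std(axis=0, ddof=1)                 # spread across members per point/time
# Mean annual cycle of IV: average the 10 years for each calendar month.
seasonal_iv = iv.reshape(10, 12, 50, 60).mean(axis=0)
print(seasonal_iv.shape)                      # (12, 50, 60)
```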
Hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis method for mid-frequency analysis of built-up systems with epistemic uncertainties

NASA Astrophysics Data System (ADS)

Yin, Shengwen; Yu, Dejie; Yin, Hui; Lü, Hui; Xia, Baizhan

2017-09-01

Considering the epistemic uncertainties within the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model when it is used for the response analysis of built-up systems in the mid-frequency range, the hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis (ETFE/SEA) model is established by introducing evidence theory. Based on the hybrid ETFE/SEA model and the sub-interval perturbation technique, the hybrid Sub-interval Perturbation and Evidence Theory-based Finite Element/Statistical Energy Analysis (SIP-ETFE/SEA) approach is proposed. In the hybrid ETFE/SEA model, the uncertainty in the SEA subsystem is modeled by a non-parametric ensemble, while the uncertainty in the FE subsystem is described by focal elements and basic probability assignments (BPAs) and handled with evidence theory. Within the hybrid SIP-ETFE/SEA approach, the mid-frequency responses of interest, such as the ensemble average of the energy response and the cross-spectrum response, are calculated analytically using the conventional hybrid FE/SEA method. Inspired by probability theory, intervals of the mean value, variance and cumulative distribution are used to describe the distribution characteristics of the mid-frequency responses of built-up systems with epistemic uncertainties. To alleviate the computational burden of the extreme value analysis, the sub-interval perturbation technique based on a first-order Taylor series expansion is used in the ETFE/SEA model to obtain the lower and upper bounds of the mid-frequency responses over each focal element. Three numerical examples are given to illustrate the feasibility and effectiveness of the proposed method.
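The sub-interval perturbation step can be illustrated on a toy response function: an uncertain interval is split into sub-intervals, and a first-order Taylor expansion about each midpoint bounds the response there. The function f and the interval below are assumptions for illustration; in the paper the responses come from the FE/SEA model, not a closed-form expression.

```python
# First-order sub-interval perturbation bounds on a toy response.
import numpy as np

def f(x):            # toy response function
    return np.sin(x) + 0.1 * x**2

def df(x):           # its derivative
    return np.cos(x) + 0.2 * x

lo, hi, n_sub = 0.0, 2.0, 4          # uncertain interval and number of sub-intervals
edges = np.linspace(lo, hi, n_sub + 1)
for a, b in zip(edges[:-1], edges[1:]):
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    lower = f(mid) - abs(df(mid)) * half   # first-order lower bound
    upper = f(mid) + abs(df(mid)) * half   # first-order upper bound
    print(f"[{a:.2f}, {b:.2f}] -> [{lower:.3f}, {upper:.3f}]")
```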
Robust techniques for polarization and detection of nuclear spin ensembles

NASA Astrophysics Data System (ADS)

Scheuer, Jochen; Schwartz, Ilai; Müller, Samuel; Chen, Qiong; Dhand, Ish; Plenio, Martin B.; Naydenov, Boris; Jelezko, Fedor

2017-11-01

Highly sensitive nuclear spin detection is crucial in many scientific areas, including nuclear magnetic resonance spectroscopy, magnetic resonance imaging (MRI) and quantum computing. The tiny thermal nuclear spin polarization represents a major obstacle towards this goal, which may be overcome by dynamic nuclear spin polarization (DNP) methods. The latter often rely on the transfer of thermally polarized electron spins to nearby nuclear spins, which is limited by the Boltzmann distribution of the former. Here we utilize microwave dressed states to transfer the high (>92%) nonequilibrium electron spin polarization of a single nitrogen-vacancy center (NV), induced by short laser pulses, to the surrounding 13C carbon nuclear spins. The NV is repeatedly repolarized optically, thus providing an effectively infinite polarization reservoir. A saturation of the polarization of the nearby nuclear spins is achieved, which is confirmed by the decay of the polarization transfer signal and shows excellent agreement with theoretical simulations. Hereby we introduce the polarization readout by polarization inversion method as a quantitative magnetization measure of the nuclear spin bath, which allows us to observe, by ensemble averaging, macroscopically hidden polarization dynamics such as Landau-Zener-Stückelberg oscillations. Moreover, we show that by using the integrated solid effect for both single- and double-quantum transitions, nuclear spin polarization can be achieved even when the static magnetic field is not aligned along the NV's crystal axis. This opens a path for the application of our DNP technique to spins in and outside of nanodiamonds, enabling their application as MRI tracers. Furthermore, the methods reported here can be applied to other solid-state systems where a central electron spin is coupled to a nuclear spin bath, e.g., phosphorus donors in silicon and color centers in silicon carbide.

Derivation of respiration rate from ambulatory ECG and PPG using Ensemble Empirical Mode Decomposition: Comparison and fusion.

PubMed

Orphanidou, Christina

2017-02-01

A new method for extracting the respiratory rate from ECG and PPG obtained via wearable sensors is presented. The proposed technique employs Ensemble Empirical Mode Decomposition (EEMD) to identify the respiration "mode" from the noise-corrupted heart rate variability/pulse rate variability and amplitude modulation signals extracted from the ECG and PPG. The technique was validated against a respiratory impedance pneumography (RIP) signal using the mean absolute and average relative errors for a group of ambulatory hospital patients. We compared approaches using single respiration-induced modulations of the ECG and PPG signals with approaches fusing the different modulations. Additionally, we investigated whether the presence of both simultaneously recorded ECG and PPG signals provided a benefit to overall system performance. Our method outperformed state-of-the-art ECG- and PPG-based algorithms and gave the best results over the whole database, with a mean error of 1.8 bpm for 1-min estimates when using the fused ECG modulations, corresponding to a relative error of 10.3%. No statistically significant differences were found when comparing the ECG-, PPG- and ECG/PPG-based approaches, indicating that the PPG can be used as a valid alternative to the ECG for applications using wearable sensors. While the presence of both the ECG and PPG signals did not improve the estimation error, it increased the proportion of windows for which an estimate was obtained by at least 9%, indicating that the use of two simultaneously recorded signals might be desirable in high-acuity cases where an RR estimate is required more frequently. Copyright © 2016 Elsevier Ltd. All rights reserved.
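A sketch of the EEMD step using the PyEMD package (assumed installed, e.g. via `pip install EMD-signal`); the surrogate series below stands in for the HRV/PRV and amplitude-modulation signals derived from ECG or PPG, and the 0.1-0.5 Hz respiratory band is an illustrative choice rather than the paper's exact criterion.

```python
# EEMD-based respiration-mode extraction sketch on a surrogate HRV series.
import numpy as np
from PyEMD import EEMD

fs = 4.0                                   # Hz; a typical HRV resampling rate
t = np.arange(0, 60, 1.0 / fs)
# Surrogate series: a 0.25 Hz (15 breaths/min) respiratory component in noise.
hrv = 0.3 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)

imfs = EEMD(trials=100).eemd(hrv, t)       # noise-assisted ensemble of EMD runs

# Pick the IMF whose dominant frequency falls in a plausible respiratory band,
# then read the respiratory rate off that spectral peak.
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peaks = np.array([freqs[np.abs(np.fft.rfft(imf)).argmax()] for imf in imfs])
in_band = np.where((peaks >= 0.1) & (peaks <= 0.5))[0]
if in_band.size:
    print("respiratory rate: %.1f breaths/min" % (60 * peaks[in_band[0]]))
```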
Ensemble and single particle fluorimetric techniques in concerted action to study the diffusion and aggregation of the glycine receptor α3 isoforms in the cell plasma membrane.

PubMed

Notelaers, Kristof; Smisdom, Nick; Rocha, Susana; Janssen, Daniel; Meier, Jochen C; Rigo, Jean-Michel; Hofkens, Johan; Ameloot, Marcel

2012-12-01

The spatio-temporal membrane behavior of glycine receptors (GlyRs) is known to influence receptor homeostasis and functionality. In this work, an elaborate fluorimetric strategy was applied to study the GlyR α3K and α3L isoforms. Previously established differential clustering, desensitization and synaptic localization of these isoforms imply that membrane behavior is crucial in determining GlyR α3 physiology. Therefore, diffusion and aggregation of homomeric α3 isoform-containing GlyRs were studied in HEK 293 cells. A unique combination of multiple diffraction-limited ensemble-average methods and subdiffraction single-particle techniques was used to achieve an integrated view of receptor properties. Static measurements of aggregation were performed with image correlation spectroscopy (ICS) and, single-particle based, direct stochastic optical reconstruction microscopy (dSTORM). Receptor diffusion was measured by means of raster image correlation spectroscopy (RICS), temporal image correlation spectroscopy (TICS), fluorescence recovery after photobleaching (FRAP) and single particle tracking (SPT). The results show a significant difference in diffusion coefficient and cluster size between the isoforms. This reveals a positive correlation between desensitization and diffusion and disproves the notion that receptor aggregation is a universal mechanism for accelerated desensitization. The difference in diffusion coefficient between the clustering GlyR α3L and the non-clustering GlyR α3K cannot be explained by normal diffusion. SPT measurements indicate that the α3L receptors undergo transient trapping and directed motion, while the GlyR α3K displays mildly hindered diffusion. These findings are suggestive of differential molecular interaction of the isoforms after incorporation in the membrane. Copyright © 2012 Elsevier B.V. All rights reserved.
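For the SPT part, a diffusion coefficient is typically estimated from the slope of the mean-squared displacement, MSD(τ) ≈ 4Dτ for free two-dimensional diffusion. A minimal sketch on a simulated Brownian trajectory (not GlyR data; all values are illustrative):

```python
# Estimate D from the MSD slope of a simulated 2-D Brownian trajectory.
import numpy as np

D_true, dt, n = 0.1, 0.05, 2000                # µm²/s, s per frame, steps
steps = np.sqrt(2 * D_true * dt) * np.random.randn(n, 2)
traj = np.cumsum(steps, axis=0)                # (x, y) positions in µm

lags = np.arange(1, 21)                        # time lags in frames
msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l])**2, axis=1))
                for l in lags])
D_est = np.polyfit(lags * dt, msd, 1)[0] / 4.0  # MSD = 4 D tau in 2-D
print(f"estimated D = {D_est:.3f} µm²/s (true {D_true})")
```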
Thermal Aging of Oceanic Asthenosphere

NASA Astrophysics Data System (ADS)

Paulson, E.; Jordan, T. H.

2013-12-01

To investigate the depth extent of mantle thermal aging beneath ocean basins, we project 3D Voigt-averaged S-velocity variations from an ensemble of global tomographic models onto a 1°×1° age-based regionalization and average over bins delineated by equal increments in the square root of crustal age. From comparisons among the bin-averaged S-wave profiles, we estimate age-dependent convergence depths (minimum depths at which the age variations become statistically insignificant) as well as S travel times from these depths to a shallow reference surface. Using recently published techniques (Jordan & Paulson, JGR, doi:10.1002/jgrb.50263, 2013), we account for the aleatory variability in the bin-averaged S-wave profiles using the angular correlation functions of the individual tomographic models, we correct the convergence depths for vertical-smearing bias using their radial correlation functions, and we account for epistemic uncertainties through Bayesian averaging over the tomographic model ensemble. From this probabilistic analysis, we can assert with 90% confidence that the age-correlated variations in Voigt-averaged S velocities persist to depths greater than 170 km, i.e., more than 100 km below the mean depth of the G discontinuity (~70 km). Moreover, the S travel time above the convergence depth decays almost linearly with the square root of crustal age out to 200 Ma, consistent with a half-space cooling model. Given the strong evidence that the G discontinuity approximates the lithosphere-asthenosphere boundary (LAB) beneath ocean basins, we conclude that the upper (and probably weakest) part of the oceanic asthenosphere, like the oceanic lithosphere, participates in the cooling that forms the kinematic plates, or tectosphere. In other words, the thermal boundary layer of a mature oceanic plate appears to be more than twice the thickness of its mechanical boundary layer. We do not discount the possibility that small-scale convection creates heterogeneities in the oceanic upper mantle; however, the large-scale flow evidently advects these small-scale heterogeneities along with the plates, allowing the upper part of the asthenosphere to continue cooling with lithospheric age. The dominance of this large-scale horizontal flow may be related to the high stresses associated with its channelization in a thin (~100 km) asthenosphere, as well as the possible focusing of the subtectospheric strain in a low-viscosity channel immediately above the 410-km discontinuity. These speculations aside, the observed thermal aging of oceanic asthenosphere is inconsistent with a tenet of plate tectonics, the LAB hypothesis, which holds that lithospheric plates are decoupled from deeper mantle flow by a shear zone in the upper part of the asthenosphere.
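The bin-averaging step can be sketched as follows: values are grouped into bins of equal width in the square root of crustal age (the natural variable for half-space cooling) and averaged per bin. The synthetic ages and velocity anomalies are placeholders for the tomographic ensemble, and the bin count is an illustrative choice.

```python
# Average a toy S-velocity anomaly in equal sqrt(age) bins.
import numpy as np

age = np.random.uniform(0, 200, 5000)               # crustal age, Ma
dv = -0.5 * np.sqrt(age) + np.random.randn(5000)    # toy anomaly, cooling-like trend

n_bins = 10
edges = np.linspace(0, np.sqrt(200), n_bins + 1)    # equal increments in sqrt(age)
idx = np.digitize(np.sqrt(age), edges) - 1          # bin index per sample
bin_means = np.array([dv[idx == k].mean() for k in range(n_bins)])
print(bin_means)                                    # one averaged value per age bin
```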
A scanning laser-velocimeter technique for measuring two-dimensional wake-vortex velocity distributions. [Langley Vortex Research Facility]

NASA Technical Reports Server (NTRS)

Gartrell, L. R.; Rhodes, D. B.

1980-01-01

A rapidly scanning two-dimensional laser velocimeter (LV) has been used to measure simultaneously the vortex vertical and axial velocity distributions in the Langley Vortex Research Facility. The system utilized a two-dimensional Bragg cell to remove flow-direction ambiguity by translating the optical frequency for each velocity component, which was separated by band-pass filters. A rotational scan mechanism provided an incremental rapid scan to compensate for the large displacement of the vortex with time. The data were processed with a digital counter and an on-line minicomputer. Vaporized kerosene (0.5 to 5 micron particle sizes) was used for flow visualization and as LV scattering centers. The overall measured mean-velocity uncertainty is less than 2 percent. These measurements were obtained by ensemble averaging of individual realizations.
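Ensemble averaging of individual realizations suppresses random measurement noise roughly as 1/√N. A toy sketch with a synthetic velocity profile; the profile shape, noise level and realization count are illustrative assumptions:

```python
# Ensemble-average noisy velocity realizations of a toy profile.
import numpy as np

y = np.linspace(-1, 1, 100)                 # position across the vortex
true_v = np.tanh(5 * y)                     # toy velocity profile
realizations = true_v + 0.5 * np.random.randn(400, y.size)  # 400 noisy samples
mean_v = realizations.mean(axis=0)          # ensemble average
print("rms error:", np.sqrt(np.mean((mean_v - true_v) ** 2)))
```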
An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.

PubMed

Ranganayaki, V; Deepa, S N

2016-01-01

Various criteria have been proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the criteria evolved, an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble wind speed forecast is formed by averaging the forecasted values from multiple neural network models, including a multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back-propagation neural network (BPN) and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting; this paper aims to avoid both problems. The number of hidden neurons is selected here using 102 criteria, which are verified against the computed error values. The proposed criteria for fixing hidden neurons are validated using the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model, with respect to the considered error factors, in comparison with earlier models available in the literature.

An ensemble predictive modeling framework for breast cancer classification.

PubMed

Nagarajan, Radhakrishnan; Upreti, Meenakshi

2017-12-01

Molecular changes often precede the clinical presentation of diseases and can be useful surrogates with the potential to assist informed clinical decision making. Recent studies have demonstrated the usefulness of modeling approaches such as classification that can predict clinical outcomes from molecular expression profiles. While useful, a majority of these approaches implicitly use all molecular markers as features in the classification process, often resulting in a sparse, high-dimensional projection of the samples whose dimensionality is comparable to the sample size. In this study, a variant of the recently proposed ensemble classification approach is used for predicting good- and poor-prognosis breast cancer samples from their molecular expression profiles. In contrast to traditional single and ensemble classifiers, the proposed approach uses multiple base classifiers with varying feature sets, obtained from a two-dimensional projection of the samples, in conjunction with a majority-voting strategy for predicting the class labels. In contrast to our earlier implementation, base classifiers in the ensembles are chosen for maximal sensitivity and minimal redundancy by selecting only those with low average cosine distance. The resulting ensemble sets are subsequently modeled as undirected graphs. The performance of four different classification algorithms is shown to be better within the proposed ensemble framework than when they are used as traditional single-classifier systems. The significance of a subset of genes with high-degree centrality in the network abstractions across the poor-prognosis samples is also discussed. Copyright © 2017 Elsevier Inc. All rights reserved.
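A sketch of the selection-plus-voting scheme under stated assumptions: base classifiers are trained on random two-dimensional feature projections, members are kept by low average cosine distance between their prediction vectors, and labels are decided by majority vote. The data, classifier type and ensemble sizes below are illustrative, not the paper's pipeline.

```python
# Ensemble selection by average cosine distance, then majority voting.
import numpy as np
from scipy.spatial.distance import cosine
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=300, n_features=50, random_state=2)

preds = []
for _ in range(15):
    cols = rng.choice(50, size=2, replace=False)   # 2-D feature projection
    m = DecisionTreeClassifier(max_depth=3).fit(X[:, cols], y)
    preds.append(m.predict(X))

# Keep members whose predictions have low average cosine distance to the rest.
P = np.array(preds, dtype=float)
avg_d = np.array([np.mean([cosine(P[i], P[j])
                           for j in range(len(P)) if j != i])
                  for i in range(len(P))])
keep = np.argsort(avg_d)[:7]                       # 7 least-distant members

vote = (P[keep].mean(axis=0) > 0.5).astype(int)    # majority vote
print("ensemble train accuracy:", (vote == y).mean())
```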
Simultaneous calibration of ensemble river flow predictions over an entire range of lead times

NASA Astrophysics Data System (ADS)

Hemri, S.; Fundel, F.; Zappa, M.

2013-10-01

Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as the main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff are typically biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work, Bayesian model averaging (BMA) is applied to statistically postprocess raw ensemble runoff forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts were obtained using deterministic and ensemble meteorological forcing models with different forecast lead-time ranges. First, BMA is applied based on mixtures of univariate normal distributions, under the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well-calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the forecast system changes due to model availability.
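The Box-Cox/BMA combination can be sketched as follows, with assumed weights and kernel spread standing in for values that would normally be fitted by EM on training data: transform runoff toward normality, form the weighted predictive mean from the transformed member forecasts, and map back to runoff units. The member values and training sample are synthetic placeholders.

```python
# Box-Cox transform plus a BMA-style weighted predictive mean (sketch).
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

members = np.array([42.0, 55.0, 48.0])           # member forecasts, m^3/s
train = np.random.gamma(3.0, 15.0, 1000)         # stand-in training runoff (positive, skewed)
_, lam = stats.boxcox(train)                     # fit the Box-Cox lambda on training data
f = stats.boxcox(members, lmbda=lam)             # transform the member forecasts

w = np.array([0.5, 0.2, 0.3])                    # BMA weights (assumed; EM-fitted in practice)
sigma = 0.4                                      # common kernel std dev (assumed)

# BMA predictive density: a w-weighted mixture of normals around the members.
grid = np.linspace(f.min() - 2, f.max() + 2, 500)
pdf = sum(wk * stats.norm.pdf(grid, fk, sigma) for wk, fk in zip(w, f))

mean_t = np.dot(w, f)                            # predictive mean, transformed space
print("BMA predictive mean:", inv_boxcox(mean_t, lam), "m^3/s")
```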