Sample records for local ensemble transform

  1. Canonical-ensemble state-averaged complete active space self-consistent field (SA-CASSCF) strategy for problems with more diabatic than adiabatic states: Charge-bond resonance in monomethine cyanines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, Seth, E-mail: seth.olsen@uq.edu.au

    2015-01-28

    This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed (“microcanonical”) SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with “more diabatic than adiabatic” states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse “temperature,” unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint.
The method is illustrated with a complete active space valence-bond (CASVB) analysis of the charge/bond resonance electronic structure of a monomethine cyanine: Michler’s hydrol blue. The diabatic CASVB representation is shown to vary weakly for “temperatures” corresponding to visible photon energies. Canonical-ensemble SA-CASSCF enables the resolution of energies and couplings for all covalent and ionic CASVB structures contributing to the SA-CASSCF ensemble. The CASVB solution describes resonance of charge- and bond-localized electronic structures interacting via bridge resonance superexchange. The resonance couplings can be separated into channels associated with either covalent charge delocalization or chemical bonding interactions, with the latter significantly stronger than the former.

  3. The Local Ensemble Transform Kalman Filter with the Weather Research and Forecasting Model: Experiments with Real Observations

    NASA Astrophysics Data System (ADS)

    Miyoshi, Takemasa; Kunii, Masaru

    2012-03-01

    The local ensemble transform Kalman filter (LETKF) is implemented with the Weather Research and Forecasting (WRF) model, and real observations are assimilated to assess the newly developed WRF-LETKF system. The WRF model is a widely used mesoscale numerical weather prediction model, and the LETKF is an ensemble Kalman filter (EnKF) algorithm that is particularly efficient on parallel computer architectures. This study aims to provide the basis for future research on mesoscale data assimilation using the WRF-LETKF system, an additional testbed to the existing EnKF systems with the WRF model used in previous studies. The particular LETKF system adopted in this study is based on a system initially developed in 2004 and has been continuously improved through theoretical studies and wide application to many kinds of dynamical models, including realistic geophysical models. The most recent and important improvements include an adaptive covariance inflation scheme which accounts for the spatial and temporal inhomogeneity of the inflation parameters. Experiments show that the LETKF successfully assimilates real observations and that adaptive inflation is advantageous. Additional experiments with various ensemble sizes show that using more ensemble members improves the analyses consistently.
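The analysis step of an LETKF can be written compactly in ensemble-weight space. The following is a minimal single-region sketch of the commonly used transform formulation, not the WRF-LETKF code itself; a diagonal observation-error covariance and the multiplicative inflation parameter are simplifying assumptions:

```python
import numpy as np

def letkf_analysis(Xb, Yb, y, R_diag, infl=1.0):
    """One LETKF analysis step for a single local region.
    Xb: (n, k) background ensemble of states; Yb: (p, k) ensemble mapped to
    observation space; y: (p,) observations; R_diag: (p,) obs-error variances."""
    n, k = Xb.shape
    xb_mean = Xb.mean(axis=1, keepdims=True)
    Xp = Xb - xb_mean                          # state perturbations
    yb_mean = Yb.mean(axis=1, keepdims=True)
    Yp = Yb - yb_mean                          # obs-space perturbations
    C = Yp.T / R_diag                          # (k, p) = Yp^T R^{-1} (diagonal R)
    Pa_tilde_inv = (k - 1) / infl * np.eye(k) + C @ Yp
    evals, evecs = np.linalg.eigh(Pa_tilde_inv)
    Pa_tilde = evecs @ np.diag(1.0 / evals) @ evecs.T
    # symmetric square root of (k-1) * Pa_tilde gives the weight perturbations
    Wa = evecs @ np.diag(np.sqrt((k - 1) / evals)) @ evecs.T
    wa_mean = Pa_tilde @ C @ (y - yb_mean.ravel())
    W = Wa + wa_mean[:, None]                  # combined mean + perturbation weights
    return xb_mean + Xp @ W                    # (n, k) analysis ensemble
```

In a full system this update is repeated independently at every grid point, using only the observations within the localization radius, which is what makes the scheme embarrassingly parallel.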

  4. Mass Conservation and Positivity Preservation with Ensemble-type Kalman Filter Algorithms

    NASA Technical Reports Server (NTRS)

    Janjic, Tijana; McLaughlin, Dennis B.; Cohn, Stephen E.; Verlaan, Martin

    2013-01-01

    Maintaining conservative physical laws numerically has long been recognized as important in the development of numerical weather prediction (NWP) models. In the broader context of data assimilation, concerted efforts to maintain conservation laws numerically and to understand the significance of doing so have begun only recently. In order to enforce physically based conservation laws of total mass and positivity in the ensemble Kalman filter, we incorporate constraints to ensure that the filter ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. We show that the analysis steps of the ensemble transform Kalman filter (ETKF) and ensemble Kalman filter (EnKF) algorithms can conserve the mass integral but do not preserve positivity. Further, if localization is applied or if negative values are simply set to zero, then the total mass is not conserved either. In order to ensure mass conservation, a projection matrix that corrects for localization effects is constructed. In order to maintain both mass conservation and positivity preservation through the analysis step, we construct a data assimilation algorithm based on quadratic programming and ensemble Kalman filtering. Mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate constraints. Some simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. The results show clear improvements in both analyses and forecasts, particularly in the presence of localized features. The behavior of the algorithm is also tested in the presence of model error.
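The core idea, formulating the constrained update as a quadratic program, can be illustrated on a single ensemble member. This is a minimal sketch rather than the paper's actual algorithm: it simply projects an unconstrained analysis vector onto the feasible set of nonnegative states with a prescribed total mass, using SciPy's generic SLSQP solver as an assumed stand-in for a dedicated QP solver:

```python
import numpy as np
from scipy.optimize import minimize

def constrained_update(xa, total_mass):
    """Least-squares projection of an unconstrained analysis member xa onto
    {x >= 0, sum(x) = total_mass}: solve min ||x - xa||^2 subject to the
    constraints (a small quadratic program)."""
    n = len(xa)
    res = minimize(
        lambda x: 0.5 * np.sum((x - xa) ** 2),     # QP objective
        x0=np.maximum(xa, 0.0),
        jac=lambda x: x - xa,
        method="SLSQP",
        bounds=[(0.0, None)] * n,                  # positivity
        constraints=[{"type": "eq",
                      "fun": lambda x: x.sum() - total_mass,   # mass conservation
                      "jac": lambda x: np.ones(n)}],
    )
    return res.x
```

Applied to every member after the EnKF/ETKF update, such a projection keeps the posterior ensemble both nonnegative and mass-conserving, which is the effect the abstract describes.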

  5. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational cost of directly decomposing the local correlation matrix C is still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach avoids direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decompositions at low resolution. This procedure is followed by a 1-D spline interpolation to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the Kronecker product of the resulting 1-D decompositions. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach, and of its efficient localization implementation in the nonlinear least-squares four-dimensional variational assimilation, is further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
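The efficiency gain rests on a standard linear-algebra property: for a separable correlation model, the 3-D correlation matrix is a Kronecker product of small 1-D matrices, and the eigen-decompositions of the 1-D factors combine into the decomposition of the full matrix. A toy numpy illustration (the tiny grid and Gaussian correlation model are assumptions for demonstration, not the paper's configuration):

```python
import numpy as np

def corr_1d(coords, length):
    """Gaussian correlation matrix along one coordinate direction."""
    d = coords[:, None] - coords[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# Separable correlation on a small 4 x 3 x 2 grid
Cx = corr_1d(np.arange(4.0), 2.0)
Cy = corr_1d(np.arange(3.0), 2.0)
Cz = corr_1d(np.arange(2.0), 1.0)

# Full 24 x 24 correlation matrix as a Kronecker product
C = np.kron(np.kron(Cx, Cy), Cz)

# Decompose only the tiny 1-D matrices ...
wx, Vx = np.linalg.eigh(Cx)
wy, Vy = np.linalg.eigh(Cy)
wz, Vz = np.linalg.eigh(Cz)

# ... and combine: eigenpairs of C follow by Kronecker products
w = np.kron(np.kron(wx, wy), wz)      # eigenvalues of C
V = np.kron(np.kron(Vx, Vy), Vz)      # eigenvectors of C
assert np.allclose(V @ np.diag(w) @ V.T, C)
```

For an n_x * n_y * n_z grid this replaces one decomposition of size (n_x n_y n_z)^2 by three decompositions of sizes n_x, n_y, n_z, which is the source of the cost reduction the abstract claims.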

  6. Mesoscale data assimilation for a local severe rainfall event with the NHM-LETKF system

    NASA Astrophysics Data System (ADS)

    Kunii, M.

    2013-12-01

    This study aims to improve forecasts of local severe weather events through data assimilation and ensemble forecasting approaches. Here, the local ensemble transform Kalman filter (LETKF) is implemented with the Japan Meteorological Agency's nonhydrostatic model (NHM). The newly developed NHM-LETKF contains an adaptive inflation scheme and a spatial covariance localization scheme based on physical distance. One-way nested analysis, in which a finer-resolution LETKF is conducted using the outputs of an outer model, also becomes feasible. These new capabilities should enhance the potential of the LETKF for convective-scale events. The NHM-LETKF is applied to a local severe rainfall event in Japan in 2012. Comparison of the root mean square errors between the model first guess and the analysis reveals that the system assimilates observations appropriately. Analysis ensemble spreads show a significant increase around the time the torrential rainfall occurred, which would imply an increase in the uncertainty of the environmental fields. Forecasts initialized with LETKF analyses successfully capture the intense rainfall, suggesting that the system can work effectively for local severe weather. Investigation of probabilistic forecasts by ensemble forecasting indicates that this could become a reliable data source for decision making in the future. A one-way nested data assimilation scheme is also tested. The experimental results demonstrate that assimilation with a finer-resolution model provides an advantage in the quantitative precipitation forecasting of local severe weather conditions.

  7. A comparison of breeding and ensemble transform vectors for global ensemble generation

    NASA Astrophysics Data System (ADS)

    Deng, Guo; Tian, Hua; Li, Xiaoli; Chen, Jing; Gong, Jiandong; Jiao, Meiyan

    2012-02-01

    To compare the initial perturbation techniques using breeding vectors and ensemble transform vectors, three ensemble prediction systems using both initial perturbation methods but with different ensemble sizes, based on the spectral model T213/L31, are constructed at the National Meteorological Center, China Meteorological Administration (NMC/CMA). A series of ensemble verification scores, such as forecast skill of the ensemble mean, ensemble resolution, and ensemble reliability, are introduced to identify the most important attributes of ensemble forecast systems. The results indicate that the ensemble transform technique is superior to the breeding vector method as measured by the anomaly correlation coefficient (ACC), a deterministic attribute of the ensemble mean; the root-mean-square error (RMSE) and spread, which are probabilistic attributes; and the continuous ranked probability score (CRPS) and its decomposition. The advantage of the ensemble transform approach is attributed to the orthogonality of its ensemble perturbations as well as its consistency with the data assimilation system. Therefore, this study may serve as a reference for configuring the best ensemble prediction system for operational use.
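The deterministic verification scores named above are straightforward to compute from ensemble output. A minimal sketch (array shapes, the single-field setting, and the climatology input are assumptions; operational verification aggregates over many fields and lead times):

```python
import numpy as np

def ensemble_scores(ens, verif, clim):
    """ACC and RMSE of the ensemble mean, plus the ensemble spread, for one
    forecast field. ens: (k, n) ensemble members; verif: (n,) verifying
    analysis; clim: (n,) climatology used to form anomalies."""
    mean = ens.mean(axis=0)
    fa, va = mean - clim, verif - clim              # forecast / verifying anomalies
    acc = fa @ va / np.sqrt((fa @ fa) * (va @ va))  # anomaly correlation coefficient
    rmse = np.sqrt(np.mean((mean - verif) ** 2))    # error of the ensemble mean
    spread = np.sqrt(np.mean(ens.var(axis=0, ddof=1)))  # mean intra-ensemble std
    return acc, rmse, spread
```

A well-calibrated system has spread comparable to RMSE; comparing these numbers between the breeding and ensemble transform configurations is the kind of evaluation the abstract describes.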

  8. Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods

    NASA Astrophysics Data System (ADS)

    Thurin, J.; Brossier, R.; Métivier, L.

    2017-12-01

    Uncertainty estimation is a key feature of tomographic applications for robust interpretation. However, this information is often missing in large-scale linearized inversions, and only the results at convergence are shown, despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion (FWI) community. While a few methodologies have already been proposed in the literature, standard FWI workflows do not yet include any systematic uncertainty quantification method, and often try to assess the result's quality through cross-comparison with other seismic results or with other geophysical data. With the development of large seismic networks and surveys, the increase in computational power, and the increasingly systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows, in order to address the uncertainty quantification problem faced for near-surface targets, crustal exploration, and regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach which takes advantage of the Ensemble Transform Kalman Filter (ETKF) proposed by Bishop et al. (2001) to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, allowing us to evaluate some uncertainty information about the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we chose to combine a conventional FWI, based on local optimization, with the ETKF strategies. This scheme combines the efficiency of local optimization for solving large-scale inverse problems with sampling of the local solution space, made possible by its embarrassingly parallel property.
References: Bishop, C. H., Etherton, B. J. and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R. and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June 2017). Thurin, J., Brossier, R. and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2012.
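The low-rank posterior covariance estimate at the heart of this approach follows directly from the ETKF transform of Bishop et al. (2001). A sketch under simplifying assumptions (diagonal data-error covariance, pre-centered perturbation matrices; not the authors' implementation):

```python
import numpy as np

def etkf_posterior_sqrt(Xp, Yp, R_diag):
    """Low-rank square root Za of the ETKF posterior covariance, Pa ~= Za @ Za.T,
    with rank at most the ensemble size k.
    Xp: (n, k) centered model-parameter perturbations; Yp: (p, k) centered
    predicted-data perturbations; R_diag: (p,) data-error variances."""
    k = Xp.shape[1]
    S = Yp / np.sqrt(R_diag)[:, None]          # R^{-1/2}-scaled data perturbations
    evals, evecs = np.linalg.eigh(S.T @ S)     # only a small (k, k) eigenproblem
    T = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 0.0) + (k - 1))) @ evecs.T
    return Xp @ T           # since Pa = Xp ((k-1) I + S^T S)^{-1} Xp^T
```

The diagonal of `Za @ Za.T` then gives pointwise posterior variance estimates over the model, which is the kind of uncertainty map the abstract targets, at the cost of one forward simulation per ensemble member.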

  9. Data Assimilation in the ADAPT Photospheric Flux Transport Model

    DOE PAGES

    Hickmann, Kyle S.; Godinez, Humberto C.; Henney, Carl J.; ...

    2015-03-17

    Global maps of the solar photospheric magnetic flux are fundamental drivers for simulations of the corona and solar wind and therefore are important predictors of geoeffective events. However, observations of the solar photosphere are only made intermittently over approximately half of the solar surface. The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model uses localized ensemble Kalman filtering techniques to adjust a set of photospheric simulations to agree with the available observations. At the same time, this information is propagated to areas of the simulation that have not been observed. ADAPT implements a local ensemble transform Kalman filter (LETKF) to accomplish data assimilation, allowing the covariance structure of the flux-transport model to influence assimilation of photosphere observations while eliminating spurious correlations between ensemble members arising from a limited ensemble size. We give a detailed account of the implementation of the LETKF into ADAPT. Advantages of the LETKF scheme over previously implemented assimilation methods are highlighted.

  10. Multi-Model Ensemble Approaches to Data Assimilation Using the 4D-Local Ensemble Transform Kalman Filter

    DTIC Science & Technology

    2013-09-30

    accuracy of the analysis. Root mean square difference (RMSD) is much smaller for RIP than for either Simple Ocean Data Assimilation or Incremental Analysis Update, globally for temperature as well as salinity. Regionally the same results were found, with only one exception in which the salinity RMSD ... short-term forecast using a numerical model with the observations taken within the forecast time window. The resulting state is the so-called "analysis" ...

  11. A mesoscale hybrid data assimilation system based on the JMA nonhydrostatic model

    NASA Astrophysics Data System (ADS)

    Ito, K.; Kunii, M.; Kawabata, T. T.; Saito, K. K.; Duc, L. L.

    2015-12-01

    This work evaluates the potential of a hybrid ensemble Kalman filter and four-dimensional variational (4D-Var) data assimilation system for predicting severe weather events from a deterministic point of view. This hybrid system is an adjoint-based 4D-Var system using a background error covariance matrix constructed from a mixture of the so-called NMC method and perturbations from a local ensemble transform Kalman filter data assimilation system, both of which are based on the Japan Meteorological Agency nonhydrostatic model. To construct the background error covariance matrix, we investigated two types of schemes. One is a spatial localization scheme, and the other is a neighboring ensemble approach, which regards the result at a horizontally shifted point in each ensemble member as one obtained from a different realization of the ensemble simulation. Assimilation of a pseudo single observation located to the north of a tropical cyclone (TC) yielded an analysis increment of wind and temperature physically consistent with what is expected for a mature TC in both hybrid systems, whereas the analysis increment in a 4D-Var system using a static background error covariance distorted the structure of the mature TC. Real data assimilation experiments applied to 4 TCs and 3 local heavy rainfall events showed that the hybrid systems and the EnKF provided better initial conditions than the NMC-based 4D-Var, both for the intensity and track forecasts of the TCs and for the location and amount of rainfall in the local heavy rainfall events.

  12. A simple new filter for nonlinear high-dimensional data assimilation

    NASA Astrophysics Data System (ADS)

    Tödter, Julian; Kirchgessner, Paul; Ahrens, Bodo

    2015-04-01

    The ensemble Kalman filter (EnKF) and its deterministic variants, mostly square root filters such as the ensemble transform Kalman filter (ETKF), represent a popular alternative to variational data assimilation schemes and are applied in a wide range of operational and research activities. Their forecast step employs an ensemble integration that fully respects the nonlinear nature of the analyzed system. In the analysis step, they implicitly assume the prior state and observation errors to be Gaussian. Consequently, in nonlinear systems, the analysis mean and covariance are biased, and these filters remain suboptimal. In contrast, the fully nonlinear, non-Gaussian particle filter (PF) only relies on Bayes' theorem, which guarantees an exact asymptotic behavior, but because of the so-called curse of dimensionality it is exposed to weight collapse. This work shows how to obtain a new analysis ensemble whose mean and covariance exactly match the Bayesian estimates. This is achieved by a deterministic matrix square root transformation of the forecast ensemble, and subsequently a suitable random rotation that significantly contributes to filter stability while preserving the required second-order statistics. The forecast step remains as in the ETKF. The proposed algorithm, which is fairly easy to implement and computationally efficient, is referred to as the nonlinear ensemble transform filter (NETF). The properties and performance of the proposed algorithm are investigated via a set of Lorenz experiments. They indicate that such a filter formulation can increase the analysis quality, even for relatively small ensemble sizes, compared to other ensemble filters in nonlinear, non-Gaussian scenarios. Furthermore, localization enhances the potential applicability of this PF-inspired scheme in larger-dimensional systems. Finally, the novel algorithm is coupled to a large-scale ocean general circulation model. 
The NETF is stable, behaves reasonably, and shows good performance with a realistic ensemble size. The results confirm that, in principle, it can be applied successfully, and as simply as the ETKF, in high-dimensional problems without further modification of the algorithm, even though it is based only on the particle weights. This shows that the suggested method constitutes a useful filter for nonlinear, high-dimensional data assimilation, and is able to overcome the curse of dimensionality even in deterministic systems.
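The deterministic core of the transform described above, producing an analysis ensemble whose mean and covariance exactly match the particle-weight (Bayesian) estimates, can be written in a few lines; the random mean-preserving rotation of the published scheme is omitted here for brevity, and a Gaussian likelihood with diagonal observation errors is assumed:

```python
import numpy as np

def netf_analysis(X, y, hx, R_diag):
    """One NETF analysis step (deterministic part only).
    X: (n, m) forecast ensemble; hx: (p, m) observation operator applied to
    each member; y: (p,) observations; R_diag: (p,) obs-error variances."""
    m = X.shape[1]
    innov = y[:, None] - hx
    logw = -0.5 * np.sum(innov ** 2 / R_diag[:, None], axis=0)
    w = np.exp(logw - logw.max())
    w /= w.sum()                              # normalized particle weights
    xa_mean = X @ w                           # exact Bayesian posterior mean
    A = np.diag(w) - np.outer(w, w)           # weighted-covariance kernel; A @ 1 = 0
    evals, evecs = np.linalg.eigh(A)          # symmetric square root of A
    T = evecs @ np.diag(np.sqrt(np.maximum(evals, 0.0))) @ evecs.T
    Xa_pert = np.sqrt(m) * (X @ T)            # row sums ~ 0, so the mean is preserved
    return xa_mean[:, None] + Xa_pert
```

With the divisor m, the analysis ensemble covariance reproduces the weighted posterior covariance exactly, which is the matching property the abstract claims; the forecast step is identical to the ETKF's.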

  13. Products of random matrices from fixed trace and induced Ginibre ensembles

    NASA Astrophysics Data System (ADS)

    Akemann, Gernot; Cikovic, Milan

    2018-05-01

    We investigate the microcanonical version of the complex induced Ginibre ensemble, by introducing a fixed trace constraint for its second moment. As for the canonical Ginibre ensemble, its complex eigenvalues can be interpreted as a two-dimensional Coulomb gas, which is now subject to a constraint and a modified, collective confining potential. Despite the lack of determinantal structure in this fixed trace ensemble, we compute all its density correlation functions at finite matrix size and compare to a fixed trace ensemble of normal matrices, representing a different Coulomb gas. Our main tool of investigation is the Laplace transform, which maps the fixed trace ensemble back to the induced Ginibre ensemble. Products of random matrices have been used to study the Lyapunov and stability exponents for chaotic dynamical systems, where the latter are based on the complex eigenvalues of the product matrix. Because little is known about the universality of the eigenvalue distribution of such product matrices, we then study the product of m induced Ginibre matrices with a fixed trace constraint (which are clearly non-Gaussian) and M − m such Ginibre matrices without constraint. Using an m-fold inverse Laplace transform, we obtain a concise result for the spectral density of such a mixed product matrix at finite matrix size, for arbitrary fixed m and M. Very recently, local and global universality was proven by the authors and their coworker for a more general, single elliptic fixed trace ensemble in the bulk of the spectrum. Here, we argue that the spectral density of mixed products is in the same universality class as the product of M independent induced Ginibre ensembles.
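A quick numerical illustration of the objects involved (matrix size and seed are arbitrary): sampling a product of two fixed-trace Ginibre matrices and checking it against the known asymptotic radial law for Ginibre products, under which the fraction of eigenvalues within radius r of the origin tends to r^(2/m) on the unit disk:

```python
import numpy as np

np.random.seed(0)
N, m = 200, 2

def fixed_trace_ginibre(N):
    """A complex Ginibre matrix rescaled so that tr(X X^dagger) = N, i.e. with
    its second moment fixed (the fixed trace constraint of the abstract,
    induced parameter set to zero)."""
    X = (np.random.randn(N, N) + 1j * np.random.randn(N, N)) / np.sqrt(2 * N)
    return X * np.sqrt(N / np.trace(X @ X.conj().T).real)

# Product of m = 2 fixed-trace matrices; its eigenvalues fill the unit disk
P = fixed_trace_ginibre(N) @ fixed_trace_ginibre(N)
ev = np.linalg.eigvals(P)

# For m = 2 the radial CDF is ~ r, so the median modulus should sit near 0.5
median_modulus = np.median(np.abs(ev))
```

This is only a consistency check of the limiting density, not of the finite-N correlation functions computed in the paper.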

  14. Hierarchical ensemble of global and local classifiers for face recognition.

    PubMed

    Su, Yu; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2009-08-01

    In the literature of psychophysics and neurophysiology, many studies have shown that both global and local features are crucial for face representation and recognition. This paper proposes a novel face recognition method which exploits both global and local discriminative features. In this method, global features are extracted from the whole face images by keeping the low-frequency coefficients of Fourier transform, which we believe encodes the holistic facial information, such as facial contour. For local feature extraction, Gabor wavelets are exploited considering their biological relevance. After that, Fisher's linear discriminant (FLD) is separately applied to the global Fourier features and each local patch of Gabor features. Thus, multiple FLD classifiers are obtained, each embodying different facial evidences for face recognition. Finally, all these classifiers are combined to form a hierarchical ensemble classifier. We evaluate the proposed method using two large-scale face databases: FERET and FRGC version 2.0. Experiments show that the results of our method are impressively better than the best known results with the same evaluation protocol.
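The global-feature step, keeping only the low-frequency 2-D Fourier coefficients of the face image, can be sketched as follows; the cutoff `keep` and the image size in the test are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def global_fourier_features(img, keep=8):
    """Holistic descriptor of a face image: the low-frequency block of its
    centered 2-D Fourier transform, flattened into a real feature vector."""
    F = np.fft.fftshift(np.fft.fft2(img))      # zero frequency at the center
    c0, c1 = F.shape[0] // 2, F.shape[1] // 2
    low = F[c0 - keep:c0 + keep, c1 - keep:c1 + keep]   # low-frequency block
    return np.concatenate([low.real.ravel(), low.imag.ravel()])
```

In the method described above, vectors like this (and Gabor features from local patches) would each be passed through Fisher's linear discriminant, and the resulting per-feature classifiers combined into the hierarchical ensemble.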

  15. Spatiotemporal characterization of Ensemble Prediction Systems - the Mean-Variance of Logarithms (MVL) diagram

    NASA Astrophysics Data System (ADS)

    Gutiérrez, J. M.; Primo, C.; Rodríguez, M. A.; Fernández, J.

    2008-02-01

    We present a novel approach to characterize and graphically represent the spatiotemporal evolution of ensembles using a simple diagram. To this end we analyze the fluctuations obtained as differences between each member of the ensemble and the control. The lognormal character of these fluctuations suggests a characterization in terms of the first two moments of the logarithmically transformed values. On the one hand, the mean is associated with the exponential growth in time. On the other hand, the variance accounts for the spatial correlation and localization of the fluctuations. In this paper we introduce the MVL (Mean-Variance of Logarithms) diagram to intuitively represent the interplay and evolution of these two quantities. We show that this diagram uncovers useful information about the spatiotemporal dynamics of the ensemble. Some universal features of the diagram are also described, associated either with the nonlinear system or with the ensemble method, and illustrated using both toy models and numerical weather prediction systems.
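One point of the MVL diagram is just the first two moments of the log-transformed fluctuation field at a given lead time. A minimal sketch (the small `eps` guard against log of zero is an implementation assumption):

```python
import numpy as np

def mvl_point(ensemble, control, eps=1e-12):
    """One (M, V) point of the MVL diagram: the mean and variance of the
    logarithm of the absolute fluctuations about the control, pooled over
    members and space. ensemble: (k, n) member states; control: (n,)."""
    logs = np.log(np.abs(ensemble - control) + eps)   # log fluctuations
    return logs.mean(), logs.var()
```

Evaluating this at successive lead times and plotting V against M traces the curve whose shape the abstract interprets: the mean axis captures exponential growth, the variance axis the spatial localization of the fluctuations.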

  16. Generalized thermalization for integrable system under quantum quench.

    PubMed

    Muralidharan, Sushruth; Lochan, Kinjalk; Shankaranarayanan, S

    2018-01-01

    We investigate equilibration and generalized thermalization of the quantum harmonic chain under a local quantum quench. The quench action we consider connects two disjoint harmonic chains of different sizes, and the system jumps between two integrable settings. We verify the validity of the generalized Gibbs ensemble description for this infinite-dimensional Hilbert space system and also identify equilibration between the subsystems as in classical systems. Using Bogoliubov transformations, we show that the eigenstates of the system prior to the quench evolve toward the generalized Gibbs ensemble description. Eigenstates that are more delocalized (in the sense of the inverse participation ratio) prior to the quench tend to equilibrate more rapidly. Further, through the phase space properties of a generalized Gibbs ensemble and the strength of stimulated emission, we identify the necessary criterion on the initial states for such relaxation at late times and also identify the states that would not be described by the generalized Gibbs ensemble description.

  17. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    NASA Astrophysics Data System (ADS)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades, three methods (and variations of them) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors, and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error, and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter, which does not necessarily focus on the large-scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research, we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis, and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for assessing fast growing error modes in mesoscale limited area models. The so-called self-breeding method is a development based on the breeding of growing modes technique.
Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) for a given norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the subspace spanned by the ensemble. By choosing different norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of error (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
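The breed-rescale-orthogonalize cycle described above can be caricatured in a few lines. In this sketch a QR factorization stands in for the ensemble transform used to orthogonalize the perturbations, and the linear toy dynamics in the test are purely illustrative; the real method uses a full mesoscale model and configurable norms:

```python
import numpy as np

def self_breeding(step, x0, P0, n_cycles, norm_size):
    """Toy self-breeding cycle: integrate perturbed states forward, subtract
    the control run, orthogonalize the bred vectors (QR as a stand-in for the
    ensemble transform), and rescale them to a fixed norm before re-adding.
    step: function advancing a state over one breeding cycle;
    x0: (n,) control state; P0: (n, k) initial perturbations."""
    P = P0.copy()
    for _ in range(n_cycles):
        ctrl = step(x0)
        grown = np.column_stack([step(x0 + p) for p in P.T]) - ctrl[:, None]
        Q, _ = np.linalg.qr(grown)        # keep perturbations from collapsing
        P = norm_size * Q                 # rescale to the chosen norm
    return P
```

For linear dynamics this is simultaneous (subspace) iteration, so the bred vectors converge to the leading local Lyapunov directions while staying mutually orthogonal, which is exactly the behavior the method exploits.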

  18. IASI Radiance Data Assimilation in Local Ensemble Transform Kalman Filter

    NASA Astrophysics Data System (ADS)

    Cho, K.; Hyoung-Wook, C.; Jo, Y.

    2016-12-01

    The Korea Institute of Atmospheric Prediction Systems (KIAPS) is developing an NWP model with data assimilation systems. The Local Ensemble Transform Kalman Filter (LETKF) system, one of these data assimilation systems, has been developed for the KIAPS Integrated Model (KIM), which is based on a cubed-sphere grid, and has successfully assimilated real data. The LETKF data assimilation system has been extended to a 4D-LETKF, which considers time-evolving error covariance within the assimilation window, and to IASI radiance data assimilation using KPOP (KIAPS package for observation processing) with RTTOV (Radiative Transfer for TOVS). The LETKF system has been running semi-operational predictions including conventional (sonde, aircraft) observations and AMSU-A (Advanced Microwave Sounding Unit-A) radiance data since April. Recently, in July, the semi-operational prediction system added radiance observations including GPS-RO, AMV, and IASI (Infrared Atmospheric Sounding Interferometer) data. A set of simulations of KIM with ne30np4 resolution and 50 vertical levels (model top at 0.3 hPa) was carried out for short-range forecasts (10 days) within the semi-operational prediction LETKF system with a 50-member ensemble forecast. To isolate the IASI impact, our experiments added only conventional and IASI radiance data to the same semi-operational prediction set. We carried out sensitivity tests for the IASI thinning method (3D and 4D). The number of IASI observations was increased by temporal (4D) thinning, and an improved impact of the IASI radiance data on the forecast skill of the model is expected.

  19. Ensemble experiments using a nested LETKF system to reproduce intense vortices associated with tornadoes of 6 May 2012 in Japan

    NASA Astrophysics Data System (ADS)

    Seko, Hiromu; Kunii, Masaru; Yokota, Sho; Tsuyuki, Tadashi; Miyoshi, Takemasa

    2015-12-01

    Experiments simulating intense vortices associated with tornadoes that occurred on 6 May 2012 on the Kanto Plain, Japan, were performed with a nested local ensemble transform Kalman filter (LETKF) system. Intense vortices were reproduced by downscale experiments with a 12-member ensemble in which the initial conditions were obtained from the nested LETKF system analyses. The downscale experiments successfully generated intense vortices in three regions similar to the observed vortices, whereas only one tornado was reproduced by a deterministic forecast. The intense vorticity of the strongest tornado, which was observed in the southernmost region, was successfully reproduced by 10 of the 12 ensemble members. An examination of the results of the ensemble downscale experiments showed that the duration of intense vorticities tended to be longer when the vertical shear of the horizontal wind was larger and the lower airflow was more humid. Overall, the study results show that ensemble forecasts have the following merits: (1) probabilistic forecasts of the outbreak of intense vortices associated with tornadoes are possible; (2) the miss rate of outbreaks should decrease; and (3) environmental factors favoring outbreaks can be obtained by comparing the multiple possible scenarios of the ensemble forecasts.

  20. A Note on NCOM Temperature Forecast Error Calibration Using the Ensemble Transform

    DTIC Science & Technology

    2009-01-01

    ...problem, local unbiased (correlation) and persistent errors (bias) of the Navy Coastal Ocean Modeling (NCOM) System nested in global ocean domains, are...system were made available in real-time without performing local data assimilation, though remote sensing and global data was assimilated on the

  1. Kalman filter data assimilation: targeting observations and parameter estimation.

    PubMed

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
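    The targeting criterion above, placing observations at the grid points of largest ensemble variance, can be sketched in a few lines. This is a generic illustration; the array layout and function name are assumptions, not the paper's code.

```python
import numpy as np

def target_observations(ensemble, n_obs):
    """Pick observation locations at the grid points of largest ensemble variance.

    ensemble: array of shape (k, n) -- k members, n grid points.
    Returns the indices of the n_obs highest-variance grid points.
    """
    var = ensemble.var(axis=0, ddof=1)       # sample variance per grid point
    return np.argsort(var)[::-1][:n_obs]     # indices sorted by descending variance
```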

  2. Kalman filter data assimilation: Targeting observations and parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellsky, Thomas, E-mail: bellskyt@asu.edu; Kostelich, Eric J.; Mahalov, Alex

    2014-06-15

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  3. Special Features of Strain Localization and Nanodipoles of Partial Disclinations in the Region of Elastic Distortions

    NASA Astrophysics Data System (ADS)

    Tyumentsev, A. N.; Ditenberg, I. A.; Sukhanov, I. I.

    2018-02-01

    In the zones of strain localization in the region of elastic distortions and nanodipoles of partial disclinations representing the defects of elastically deformed medium, a theoretical analysis of the elastically stressed state and the energy of these defects, including the cases of their transformation into more complex ensembles of interrelated disclinations, is performed. Using the analytical results, the mechanisms of strain localization are discussed in the stages of nucleation and propagation of the bands of elastic and plastic strain localization formed in these zones (including the cases of nanocrystalline structure formation).

  4. Towards the Operational Ensemble-based Data Assimilation System for the Wave Field at the National Weather Service

    NASA Astrophysics Data System (ADS)

    Flampouris, Stylianos; Penny, Steve; Alves, Henrique

    2017-04-01

    The National Centers for Environmental Prediction (NCEP) of the National Oceanic and Atmospheric Administration (NOAA) provides the operational wave forecast for the US National Weather Service (NWS). As part of continuous efforts to improve the forecast, NCEP is developing an ensemble-based data assimilation system based on the local ensemble transform Kalman filter (LETKF), the existing operational global wave ensemble system (GWES), and satellite and in-situ observations. While the LETKF was designed for atmospheric applications (Hunt et al 2007) and has been adapted for several ocean models (e.g. Penny 2016), this is the first time it has been applied to ocean wave assimilation. This new wave assimilation system provides a global estimate of the surface sea state and its approximate uncertainty. It achieves this by analyzing the 21-member ensemble of significant wave height provided by GWES every 6 h. Observations from four altimeters and all the available in-situ measurements are used in this analysis. The analysis of the significant wave height is used for initializing the next forecasting cycle; the data assimilation system is currently being tested for operational use.
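    The local analysis step of the LETKF (following the formulation of Hunt et al. 2007 cited above) can be sketched as follows. The diagonal observation-error covariance and the simple multiplicative inflation are simplifying assumptions for illustration.

```python
import numpy as np

def letkf_analysis(xb, yb, y, r_diag, rho=1.0):
    """One local LETKF analysis (sketch after Hunt et al. 2007).

    xb: background ensemble, shape (n, k)
    yb: background ensemble in observation space H(xb), shape (m, k)
    y:  local observations, shape (m,)
    r_diag: observation error variances (diagonal R), shape (m,)
    rho: multiplicative covariance inflation factor
    """
    k = xb.shape[1]
    xb_mean = xb.mean(axis=1)
    Xb = xb - xb_mean[:, None]               # background perturbations
    yb_mean = yb.mean(axis=1)
    Yb = yb - yb_mean[:, None]
    C = Yb.T / r_diag                        # Yb^T R^{-1}
    Pa = np.linalg.inv((k - 1) / rho * np.eye(k) + C @ Yb)
    w_mean = Pa @ C @ (y - yb_mean)          # mean update weights
    # symmetric square root gives the ensemble transform matrix
    evals, evecs = np.linalg.eigh((k - 1) * Pa)
    W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return xb_mean[:, None] + Xb @ (W + w_mean[:, None])
```

With very accurate observations lying in the ensemble subspace, the analysis mean is drawn essentially onto the observations, as expected for the transform.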

  5. Hydrometeorology as an Inversion Problem: Can River Discharge Observations Improve the Atmosphere by Ensemble Data Assimilation?

    NASA Astrophysics Data System (ADS)

    Sawada, Yohei; Nakaegawa, Tosiyuki; Miyoshi, Takemasa

    2018-01-01

    We examine the potential of assimilating river discharge observations into the atmosphere by strongly coupled river-atmosphere ensemble data assimilation. The Japan Meteorological Agency's Non-Hydrostatic atmospheric Model (JMA-NHM) is first coupled with a simple rainfall-runoff model. Next, the local ensemble transform Kalman filter is used for this coupled model to assimilate the observations of the rainfall-runoff model variables into the JMA-NHM model variables. This system makes it possible to do hydrometeorology backward, i.e., to inversely estimate atmospheric conditions from the information of river flows or a flood on land surfaces. We perform a proof-of-concept Observing System Simulation Experiment, which reveals that the assimilation of river discharge observations into the atmospheric model variables can improve the skill of the short-term severe rainfall forecast.

  6. Improving precipitation forecast with hybrid 3DVar and time-lagged ensembles in a heavy rainfall event

    NASA Astrophysics Data System (ADS)

    Wang, Yuanbing; Min, Jinzhong; Chen, Yaodeng; Huang, Xiang-Yu; Zeng, Mingjian; Li, Xin

    2017-01-01

    This study evaluates the performance of three-dimensional variational (3DVar) assimilation and of a hybrid data assimilation system using time-lagged ensembles in a heavy rainfall event. The time-lagged ensembles are constructed by sampling from a moving time window of 3 h along a model trajectory, which is economical and easy to implement. The proposed hybrid data assimilation system introduces flow-dependent error covariance derived from the time-lagged ensemble into the variational cost function without significantly increasing computational cost. Single-observation tests are performed to document the characteristics of the hybrid system. The sensitivity of precipitation forecasts to the ensemble covariance weight and localization scale is investigated. Additionally, the time-lagged ensemble hybrid (TLEn-Var) is evaluated and compared to the ETKF (ensemble transform Kalman filter)-based hybrid assimilation within a continuously cycling framework, through which new hybrid analyses are produced every 3 h over 10 days. The 24-h accumulated precipitation, moisture, and wind are compared between 3DVar and the hybrid assimilation using time-lagged ensembles. Results show that model states and precipitation forecast skill are improved by the hybrid assimilation using time-lagged ensembles compared with 3DVar. Simulation of the precipitable water and the structure of the wind are also improved. Cyclonic wind increments are generated near the rainfall center, leading to an improved precipitation forecast. This study indicates that hybrid data assimilation using time-lagged ensembles is a viable alternative or supplement in complex models for weather service agencies that have limited computing resources for running large ensembles.
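    Building background perturbations from a time-lagged ensemble and blending them into a hybrid covariance can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the blending weight `beta` and the simple anomaly scaling are assumptions.

```python
import numpy as np

def time_lagged_ensemble(trajectory):
    """Flow-dependent background perturbations from a time-lagged ensemble.

    trajectory: model states sampled within the lag window, shape (L, n).
    Returns the normalized perturbation matrix X (shape (n, L)) such that
    the ensemble covariance is X @ X.T.
    """
    X = trajectory - trajectory.mean(axis=0)      # deviations from the window mean
    return X.T / np.sqrt(len(trajectory) - 1)

def hybrid_covariance(B_static, X, beta):
    """Blend the flow-dependent ensemble part into the static 3DVar B.

    beta in [0, 1] weights the ensemble covariance against B_static."""
    return (1.0 - beta) * B_static + beta * (X @ X.T)
```

With `beta = 0` this recovers pure 3DVar; with `beta = 1` the background term is the sample covariance of the lagged states.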

  7. Residue-level global and local ensemble-ensemble comparisons of protein domains.

    PubMed

    Clark, Sarah A; Tronrud, Dale E; Karplus, P Andrew

    2015-09-01

    Many methods of protein structure generation such as NMR-based solution structure determination and template-based modeling do not produce a single model, but an ensemble of models consistent with the available information. Current strategies for comparing ensembles lose information because they use only a single representative structure. Here, we describe the ENSEMBLATOR and its novel strategy to directly compare two ensembles containing the same atoms to identify significant global and local backbone differences between them on per-atom and per-residue levels, respectively. The ENSEMBLATOR has four components: eePREP (ee for ensemble-ensemble), which selects atoms common to all models; eeCORE, which identifies atoms belonging to a cutoff-distance dependent common core; eeGLOBAL, which globally superimposes all models using the defined core atoms and calculates for each atom the two intraensemble variations, the interensemble variation, and the closest approach of members of the two ensembles; and eeLOCAL, which performs a local overlay of each dipeptide and, using a novel measure of local backbone similarity, reports the same four variations as eeGLOBAL. The combination of eeGLOBAL and eeLOCAL analyses identifies the most significant differences between ensembles. We illustrate the ENSEMBLATOR's capabilities by showing how using it to analyze NMR ensembles and to compare NMR ensembles with crystal structures provides novel insights compared to published studies. One of these studies leads us to suggest that a "consistency check" of NMR-derived ensembles may be a useful analysis step for NMR-based structure determinations in general. The ENSEMBLATOR 1.0 is available as a first generation tool to carry out ensemble-ensemble comparisons. © 2015 The Protein Society.

  8. Residue-level global and local ensemble-ensemble comparisons of protein domains

    PubMed Central

    Clark, Sarah A; Tronrud, Dale E; Andrew Karplus, P

    2015-01-01

    Many methods of protein structure generation such as NMR-based solution structure determination and template-based modeling do not produce a single model, but an ensemble of models consistent with the available information. Current strategies for comparing ensembles lose information because they use only a single representative structure. Here, we describe the ENSEMBLATOR and its novel strategy to directly compare two ensembles containing the same atoms to identify significant global and local backbone differences between them on per-atom and per-residue levels, respectively. The ENSEMBLATOR has four components: eePREP (ee for ensemble-ensemble), which selects atoms common to all models; eeCORE, which identifies atoms belonging to a cutoff-distance dependent common core; eeGLOBAL, which globally superimposes all models using the defined core atoms and calculates for each atom the two intraensemble variations, the interensemble variation, and the closest approach of members of the two ensembles; and eeLOCAL, which performs a local overlay of each dipeptide and, using a novel measure of local backbone similarity, reports the same four variations as eeGLOBAL. The combination of eeGLOBAL and eeLOCAL analyses identifies the most significant differences between ensembles. We illustrate the ENSEMBLATOR's capabilities by showing how using it to analyze NMR ensembles and to compare NMR ensembles with crystal structures provides novel insights compared to published studies. One of these studies leads us to suggest that a “consistency check” of NMR-derived ensembles may be a useful analysis step for NMR-based structure determinations in general. The ENSEMBLATOR 1.0 is available as a first generation tool to carry out ensemble-ensemble comparisons. PMID:26032515

  9. Generalized ensemble method applied to study systems with strong first order transitions

    DOE PAGES

    Malolepsza, E.; Kim, J.; Keyes, T.

    2015-09-28

    At strong first-order phase transitions, the entropy versus energy or, at constant pressure, enthalpy, exhibits convex behavior, and the statistical temperature curve correspondingly exhibits an S-loop or back-bending. In the canonical and isothermal-isobaric ensembles, with temperature as the control variable, the probability density functions become bimodal with peaks localized outside of the S-loop region. Inside, states are unstable, and as a result simulation of equilibrium phase coexistence becomes impossible. To overcome this problem, a method was proposed by Kim, Keyes and Straub, where optimally designed generalized ensemble sampling was combined with replica exchange, and denoted generalized replica exchange method (gREM). This new technique uses parametrized effective sampling weights that lead to a unimodal energy distribution, transforming unstable states into stable ones. In the present study, the gREM, originally developed as a Monte Carlo algorithm, was implemented to work with molecular dynamics in an isobaric ensemble and coded into LAMMPS, a highly optimized open source molecular simulation package. Lastly, the method is illustrated in a study of the very strong solid/liquid transition in water.

  10. Generalized ensemble method applied to study systems with strong first order transitions

    NASA Astrophysics Data System (ADS)

    Małolepsza, E.; Kim, J.; Keyes, T.

    2015-09-01

    At strong first-order phase transitions, the entropy versus energy or, at constant pressure, enthalpy, exhibits convex behavior, and the statistical temperature curve correspondingly exhibits an S-loop or back-bending. In the canonical and isothermal-isobaric ensembles, with temperature as the control variable, the probability density functions become bimodal with peaks localized outside of the S-loop region. Inside, states are unstable, and as a result simulation of equilibrium phase coexistence becomes impossible. To overcome this problem, a method was proposed by Kim, Keyes and Straub [1], where optimally designed generalized ensemble sampling was combined with replica exchange, and denoted generalized replica exchange method (gREM). This new technique uses parametrized effective sampling weights that lead to a unimodal energy distribution, transforming unstable states into stable ones. In the present study, the gREM, originally developed as a Monte Carlo algorithm, was implemented to work with molecular dynamics in an isobaric ensemble and coded into LAMMPS, a highly optimized open source molecular simulation package. The method is illustrated in a study of the very strong solid/liquid transition in water.

  11. Comparison of initial perturbation methods for the mesoscale ensemble prediction system of the Meteorological Research Institute for the WWRP Beijing 2008 Olympics Research and Development Project (B08RDP)

    NASA Astrophysics Data System (ADS)

    Saito, Kazuo; Hara, Masahiro; Kunii, Masaru; Seko, Hiromu; Yamaguchi, Munehiko

    2011-05-01

    Different initial perturbation methods for the mesoscale ensemble prediction were compared by the Meteorological Research Institute (MRI) as a part of the intercomparison of mesoscale ensemble prediction systems (EPSs) of the World Weather Research Programme (WWRP) Beijing 2008 Olympics Research and Development Project (B08RDP). Five initial perturbation methods for mesoscale ensemble prediction were developed for B08RDP and compared at MRI: (1) a downscaling method of the Japan Meteorological Agency (JMA)'s operational one-week EPS (WEP), (2) a targeted global model singular vector (GSV) method, (3) a mesoscale model singular vector (MSV) method based on the adjoint model of the JMA non-hydrostatic model (NHM), (4) a mesoscale breeding growing mode (MBD) method based on the NHM forecast and (5) a local ensemble transform (LET) method based on the local ensemble transform Kalman filter (LETKF) using NHM. These perturbation methods were applied to the preliminary experiments of the B08RDP Tier-1 mesoscale ensemble prediction with a horizontal resolution of 15 km. To make the comparison easier, the same horizontal resolution (40 km) was employed for the three mesoscale model-based initial perturbation methods (MSV, MBD and LET). The GSV method completely outperformed the WEP method, confirming the advantage of targeting in mesoscale EPS. The GSV method generally performed well with regard to root mean square errors of the ensemble mean, large growth rates of ensemble spreads throughout the 36-h forecast period, and high detection rates and high Brier skill scores (BSSs) for weak rains. On the other hand, the mesoscale model-based initial perturbation methods showed good detection rates and BSSs for intense rains. The MSV method showed a rapid growth in the ensemble spread of precipitation up to a forecast time of 6 h, which suggests suitability of the mesoscale SV for short-range EPSs, but the initial large growth of the perturbation did not last long. 
The performance of the MBD method was good for ensemble prediction of intense rain with a relatively small computing cost. The LET method showed similar characteristics to the MBD method, but the spread and growth rate were slightly smaller and the relative operating characteristic area skill score and BSS did not surpass those of MBD. These characteristic features of the five methods were confirmed by checking the evolution of the total energy norms and their growth rates. Characteristics of the initial perturbations obtained by four methods (GSV, MSV, MBD and LET) were examined for the case of a synoptic low-pressure system passing over eastern China. With GSV and MSV, the regions of large spread were near the low-pressure system, but with MSV, the distribution was more concentrated on the mesoscale disturbance. On the other hand, large-spread areas were observed southwest of the disturbance in MBD and LET. The horizontal pattern of LET perturbation was similar to that of MBD, but the amplitude of the LET perturbation reflected the observation density.

  12. Rain radar measurement error estimation using data assimilation in an advection-based nowcasting system

    NASA Astrophysics Data System (ADS)

    Merker, Claire; Ament, Felix; Clemens, Marco

    2017-04-01

    The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. A detailed description of the spatial and temporal structure of errors is especially beneficial for making the best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars are assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.

  13. Analyzing the carbon cycle with the local ensemble transform Kalman filter, online transport model and real observation data

    NASA Astrophysics Data System (ADS)

    Maki, T.; Sekiyama, T. T.; Shibata, K.; Miyazaki, K.; Miyoshi, T.; Yamada, K.; Yokoo, Y.; Iwasaki, T.

    2011-12-01

    In current carbon cycle analysis, inverse modeling plays an important role. However, it requires enormous computational resources when we deal with more flux regions and more observations. The local ensemble transform Kalman filter (LETKF) is an alternative approach that reduces these problems. We constructed a carbon cycle analysis system with the LETKF and the MRI (Meteorological Research Institute) online transport model (MJ98-CDTM). In MJ98-CDTM, an off-line transport model (CDTM) is directly coupled with the MRI/JMA GCM (MJ98). We further improved the vertical transport processes in MJ98-CDTM relative to a previous study. The LETKF includes enhanced features such as a smoother to assimilate future observations, adaptive inflation, and a bias correction scheme. In this study, we use CO2 observations from surface data (continuous and flask), aircraft data (CONTRAIL) and satellite data (GOSAT); we also plan to assimilate AIRS tropospheric CO2 data. We developed a quality control system. We estimated the 3-day-mean CO2 flux at a resolution of T42. Here, only CO2 concentrations and fluxes are analyzed, whereas the meteorological fields are nudged towards the Japanese reanalysis (JCDAS). The horizontal localization length scale and assimilation window are chosen to be 1000 km and 3 days, respectively. The results indicate that the assimilation system works properly, performing better than a free transport model run when validated against independent CO2 concentration observations and CO2 analysis data.

  14. Conservation of Mass and Preservation of Positivity with Ensemble-Type Kalman Filter Algorithms

    NASA Technical Reports Server (NTRS)

    Janjic, Tijana; Mclaughlin, Dennis; Cohn, Stephen E.; Verlaan, Martin

    2014-01-01

    This paper considers the incorporation of constraints to enforce physically based conservation laws in the ensemble Kalman filter. In particular, constraints are used to ensure that the ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. In certain situations, filtering algorithms such as the ensemble Kalman filter (EnKF) and ensemble transform Kalman filter (ETKF) yield updated ensembles that conserve mass but contain negative values, even though the actual states must be nonnegative. In such situations, if negative values are set to zero, or a log transform is introduced, the total mass will not be conserved. In this study, mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate non-negativity constraints. Simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. In two examples, an update that includes a non-negativity constraint is able to properly describe the transport of a sharp feature (e.g., a triangle or cone). A number of implementation questions still need to be addressed, particularly the need to develop a computationally efficient quadratic programming update for large ensembles.
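    For an identity-weighted objective, the quadratic program described above reduces to the Euclidean projection of each updated ensemble member onto the set {x >= 0, sum(x) = mass}, which has a closed-form, sort-based solution. The following is a sketch of that special case only, not the paper's full QP formulation.

```python
import numpy as np

def project_mass_nonneg(x, mass):
    """Euclidean projection of a state vector onto {z >= 0, sum(z) = mass},
    i.e. the QP  min ||z - x||^2  subject to mass conservation and positivity.
    Sort-based scaled-simplex projection (cf. Duchi et al. 2008)."""
    u = np.sort(x)[::-1]                      # components in descending order
    css = np.cumsum(u) - mass
    ks = np.arange(1, len(x) + 1)
    k = ks[u - css / ks > 0][-1]              # largest k keeping entries positive
    tau = css[k - 1] / k                      # uniform shift enforcing the mass
    return np.maximum(x - tau, 0.0)
```

Applying this to each member after the EnKF/ETKF update restores non-negativity without losing total mass; already-feasible members are left unchanged.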

  15. Locally Weighted Ensemble Clustering.

    PubMed

    Huang, Dong; Wang, Chang-Dong; Lai, Jian-Huang

    2018-05-01

    Due to its ability to combine multiple base clusterings into a probably better and more robust clustering, the ensemble clustering technique has been attracting increasing attention in recent years. Despite this significant success, one limitation of most existing ensemble clustering methods is that they generally treat all base clusterings equally regardless of their reliability, which makes them vulnerable to low-quality base clusterings. Although some efforts have been made to (globally) evaluate and weight the base clusterings, these methods tend to view each base clustering as an individual and neglect the local diversity of clusters inside the same base clustering. It remains an open problem how to evaluate the reliability of clusters and exploit the local diversity in the ensemble to enhance the consensus performance, especially when there is no access to data features or specific assumptions on data distribution. To address this, in this paper, we propose a novel ensemble clustering approach based on ensemble-driven cluster uncertainty estimation and a local weighting strategy. In particular, the uncertainty of each cluster is estimated by considering the cluster labels in the entire ensemble via an entropic criterion. A novel ensemble-driven cluster validity measure is introduced, and a locally weighted co-association matrix is presented to serve as a summary for the ensemble of diverse clusters. With the local diversity in ensembles exploited, two novel consensus functions are further proposed. Extensive experiments on a variety of real-world datasets demonstrate the superiority of the proposed approach over the state-of-the-art.
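    One way the entropic uncertainty criterion and the locally weighted co-association matrix might be realized is sketched below. This is an illustrative reading of the approach, not the authors' code; the weight form `exp(-H / (theta * M))` and the parameter `theta` are assumptions.

```python
import numpy as np

def cluster_entropy(members, labels_all):
    """Uncertainty of one cluster: summed entropy of how its member objects
    distribute over the clusters of every base clustering."""
    H = 0.0
    for lab in labels_all:                    # lab: labels of one base clustering
        counts = np.bincount(lab[members])
        p = counts[counts > 0] / len(members)
        H += -(p * np.log(p)).sum()
    return H

def lwca(labels_all, theta=0.4):
    """Locally weighted co-association matrix (sketch).

    labels_all: M integer label arrays of length N (M base clusterings).
    Reliable (low-entropy) clusters contribute larger co-association weights."""
    labels_all = np.asarray(labels_all)
    M, N = labels_all.shape
    A = np.zeros((N, N))
    for lab in labels_all:
        for c in np.unique(lab):
            members = np.where(lab == c)[0]
            weight = np.exp(-cluster_entropy(members, labels_all) / (theta * M))
            A[np.ix_(members, members)] += weight
    return A / M
```

A consensus clustering can then be obtained by running any similarity-based method (e.g. hierarchical clustering) on `A`.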

  16. Post-processing ECMWF precipitation and temperature ensemble reforecasts for operational hydrologic forecasting at various spatial scales

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Reggiani, P.; Weerts, A. H.

    2013-09-01

    The ECMWF temperature and precipitation ensemble reforecasts are evaluated for biases in the mean, spread and forecast probabilities, and how these biases propagate to streamflow ensemble forecasts. The forcing ensembles are subsequently post-processed to reduce bias and increase skill, and to investigate whether this leads to improved streamflow ensemble forecasts. Multiple post-processing techniques are used: quantile-to-quantile transform, linear regression with an assumption of bivariate normality and logistic regression. Both the raw and post-processed ensembles are run through a hydrologic model of the river Rhine to create streamflow ensembles. The results are compared using multiple verification metrics and skill scores: relative mean error, Brier skill score and its decompositions, mean continuous ranked probability skill score and its decomposition, and the ROC score. Verification of the streamflow ensembles is performed at multiple spatial scales: relatively small headwater basins, large tributaries and the Rhine outlet at Lobith. The streamflow ensembles are verified against simulated streamflow, in order to isolate the effects of biases in the forcing ensembles and any improvements therein. The results indicate that the forcing ensembles contain significant biases, and that these cascade to the streamflow ensembles. Some of the bias in the forcing ensembles is unconditional in nature; this was resolved by a simple quantile-to-quantile transform. Improvements in conditional bias and skill of the forcing ensembles vary with forecast lead time, amount, and spatial scale, but are generally moderate. The translation to streamflow forecast skill is further muted, and several explanations are considered, including limitations in the modelling of the space-time covariability of the forcing ensembles and the presence of storages.
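    The quantile-to-quantile transform mentioned above maps each forecast value through the empirical CDF of the model climatology and inverts the observed climatology's CDF at the same quantile. A minimal sketch, assuming simple linear interpolation between empirical quantiles:

```python
import numpy as np

def quantile_map(x, model_clim, obs_clim):
    """Quantile-to-quantile transform: replace a forecast value by the
    observed-climatology value at the same empirical quantile."""
    model_sorted = np.sort(model_clim)
    obs_sorted = np.sort(obs_clim)
    # empirical CDF of the model climatology, evaluated at x
    q = np.interp(x, model_sorted, np.linspace(0.0, 1.0, len(model_sorted)))
    # invert the observed climatology's empirical CDF at that quantile
    return np.interp(q, np.linspace(0.0, 1.0, len(obs_sorted)), obs_sorted)
```

This removes unconditional bias: if the model climatology is systematically half the observed one, forecasts are doubled.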

  17. A new transform for the analysis of complex fractionated atrial electrograms

    PubMed Central

    2011-01-01

    Background Representation of independent biophysical sources using Fourier analysis can be inefficient because the basis is sinusoidal and general. When complex fractionated atrial electrograms (CFAE) are acquired during atrial fibrillation (AF), the electrogram morphology depends on the mix of distinct nonsinusoidal generators. Identification of these generators using efficient methods of representation and comparison would be useful for targeting catheter ablation sites to prevent arrhythmia reinduction. Method A data-driven basis and transform is described which utilizes the ensemble average of signal segments to identify and distinguish CFAE morphologic components and frequencies. Calculation of the dominant frequency (DF) of actual CFAE, and identification of simulated independent generator frequencies and morphologies embedded in CFAE, is done using a total of 216 recordings from 10 paroxysmal and 10 persistent AF patients. The transform is tested versus Fourier analysis to detect spectral components in the presence of phase noise and interference. Correspondence is shown between ensemble basis vectors of highest power and corresponding synthetic drivers embedded in CFAE. Results The ensemble basis is orthogonal, and efficient for representation of CFAE components as compared with Fourier analysis (p ≤ 0.002). When three synthetic drivers with additive phase noise and interference were decomposed, the top three peaks in the ensemble power spectrum corresponded to the driver frequencies more closely as compared with top Fourier power spectrum peaks (p ≤ 0.005). The synthesized drivers with phase noise and interference were extractable from their corresponding ensemble basis with a mean error of less than 10%. Conclusions The new transform is able to efficiently identify CFAE features using DF calculation and by discerning morphologic differences. 
Unlike the Fourier transform method, it does not distort CFAE signals prior to analysis, and is relatively robust to jitter in periodic events. Thus the ensemble method can provide a useful alternative for quantitative characterization of CFAE during clinical study. PMID:21569421
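
The ensemble-averaging idea behind the transform can be caricatured in a few lines of numpy. This is an illustrative sketch, not the paper's algorithm: the pulse-train driver, the period range, and the power measure are invented stand-ins. Averaging segments at a candidate period reinforces any morphology locked to that period while averaging down noise, so scanning periods yields a crude ensemble power spectrum whose peak marks the dominant frequency.

```python
import numpy as np

def ensemble_average_power(x, period):
    """Ensemble-average signal segments of a given length; return the
    mean segment (morphology estimate) and its power."""
    n_seg = len(x) // period
    segs = x[:n_seg * period].reshape(n_seg, period)
    mean_seg = segs.mean(axis=0)                  # ensemble average
    return mean_seg, float(np.mean(mean_seg ** 2))

# Synthetic nonsinusoidal driver: a pulse train of period 50 plus noise
rng = np.random.default_rng(0)
t = np.arange(5000)
x = ((t % 50) < 5).astype(float) + 0.3 * rng.standard_normal(t.size)

# Scanning candidate periods gives an "ensemble power spectrum"; only the
# aligned period concentrates coherent power in the averaged morphology.
powers = {p: ensemble_average_power(x, p)[1] for p in range(20, 100)}
best = max(powers, key=powers.get)
print(best)  # 50
```

Note that a purely sinusoidal basis would spread this pulse train's power over many harmonics, which is the inefficiency the abstract attributes to Fourier analysis.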

  18. Downscaling RCP8.5 daily temperatures and precipitation in Ontario using localized ensemble optimal interpolation (EnOI) and bias correction

    NASA Astrophysics Data System (ADS)

    Deng, Ziwang; Liu, Jinliang; Qiu, Xin; Zhou, Xiaolan; Zhu, Huaiping

    2017-10-01

    A novel method for daily temperature and precipitation downscaling is proposed in this study which combines the Ensemble Optimal Interpolation (EnOI) and bias correction techniques. For downscaling temperature, the day-to-day seasonal cycle of high-resolution temperature from the NCEP climate forecast system reanalysis (CFSR) is used as the background state. An enlarged ensemble of daily temperature anomalies relative to this seasonal cycle and information from global climate models (GCMs) are used to construct a gain matrix for each calendar day. Consequently, the relationship between large-scale and local-scale processes represented by the gain matrix changes accordingly. The gain matrix contains information on the realistic spatial correlation of temperature between different CFSR grid points, between CFSR grid points and GCM grid points, and between different GCM grid points. Therefore, this downscaling method keeps spatial consistency and reflects the interaction between local geographic and atmospheric conditions. Maximum and minimum temperatures are downscaled using the same method. For precipitation, because of the non-Gaussianity issue, a logarithmic transformation is applied to daily total precipitation prior to downscaling. Cross validation and independent data validation are used to evaluate this algorithm. Finally, data from a 29-member ensemble of phase 5 of the Coupled Model Intercomparison Project (CMIP5) GCMs are downscaled to CFSR grid points in Ontario for the period from 1981 to 2100. The results show that this method is capable of generating high-resolution details without changing large-scale characteristics. It results in much lower absolute errors in local-scale details at most grid points than simple spatial downscaling methods. Biases in the downscaled data inherited from GCMs are corrected with a linear method for temperatures and distribution mapping for precipitation. 
The downscaled ensemble projects significant warming, with amplitudes of 3.9 and 6.5 °C for the 2050s and 2080s relative to the 1990s in Ontario, respectively. Cooling degree days and hot days will significantly increase over southern Ontario, and heating degree days and cold days will significantly decrease in northern Ontario. Annual total precipitation will increase over Ontario, and heavy precipitation events will increase as well. These results are consistent with conclusions of many other studies in the literature.
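
The distribution-mapping step used for precipitation bias correction can be sketched as a minimal empirical quantile mapping. The gamma "observations", the factor-of-two model bias, and the sample sizes below are invented for illustration; in practice a log transform would typically precede such steps for precipitation, as the abstract notes for the downscaling itself.

```python
import numpy as np

def quantile_map(model, obs, values):
    """Map `values` from the model's empirical distribution onto the
    observed distribution (distribution-mapping bias correction)."""
    ranks = np.searchsorted(np.sort(model), values) / len(model)
    return np.quantile(obs, np.clip(ranks, 0.0, 1.0))

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 3.0, 5000)            # "observed" daily precipitation
model = 0.5 * rng.gamma(2.0, 3.0, 5000)    # biased model: too dry by half

corrected = quantile_map(model, obs, model)
print(corrected.mean() / obs.mean())       # close to 1 after correction
```

The same machinery applied to temperature would usually reduce to the linear (mean/variance) correction the abstract mentions, since temperature errors are closer to Gaussian.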

  19. Harnessing Orbital Debris to Sense the Space Environment

    NASA Astrophysics Data System (ADS)

    Mutschler, S.; Axelrad, P.; Matsuo, T.

    A key requirement for accurate space situational awareness (SSA) is knowledge of the non-conservative forces that act on space objects. These effects vary temporally and spatially, driven by the dynamical behavior of space weather. Existing SSA algorithms adjust space weather models based on observations of calibration satellites. However, lack of sufficient data and mismodeling of non-conservative forces cause inaccuracies in space object motion prediction. The uncontrolled nature of debris makes it particularly sensitive to variations in space weather. Our research takes advantage of this behavior by inverting observations of debris objects to infer the space environment parameters causing their motion. In addition, this research will produce more accurate predictions of the motion of debris objects. The hypothesis of this research is that it is possible to utilize a "cluster" of debris objects, objects within relatively close proximity of each other, to sense their local environment. We focus on deriving parameters of an atmospheric density model to more precisely predict the drag force on LEO objects. An Ensemble Kalman Filter (EnKF) is used for assimilation; the prior ensemble is transformed into the posterior ensemble during the measurement update in a manner that does not require inversion of large matrices. A prior ensemble is utilized to empirically determine the nonlinear relationship between measurements and density parameters. The filter estimates an extended state that includes position and velocity of the debris object, and atmospheric density parameters. The density is parameterized as a grid of values, distributed by latitude and local sidereal time over a spherical shell encompassing Earth. This research focuses on LEO object motion, but it can also be extended to additional orbital regimes for observation and refinement of magnetic field and solar radiation models. 
An observability analysis of the proposed approach is presented in terms of the measurement cadence necessary to estimate the local space environment.
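
A toy perturbed-observation EnKF update illustrates the key point that the measurement update never inverts a matrix of the full state dimension, only one of observation-space size. The three-component "extended state", observation operator, and noise levels below are assumptions for illustration, not the study's configuration.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF measurement update for ensemble X (n_state x n_ens).
    The only inversion is in the (small) observation space."""
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)         # obs-space anomalies
    K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (n - 1) * R)   # Kalman gain
    Y = y[:, None] + np.linalg.cholesky(R) @ rng.standard_normal((len(y), n))
    return X + K @ (Y - HX)                          # posterior ensemble

rng = np.random.default_rng(2)
truth = np.array([1.0, -2.0, 0.5])                   # e.g. two orbit states + a density parameter
X = truth[:, None] + rng.standard_normal((3, 50))    # 50-member prior ensemble
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])     # only the first two states observed
R = 0.01 * np.eye(2)
Xa = enkf_update(X, H @ truth, H, R, rng)
print(np.abs(Xa.mean(axis=1) - truth))               # observed components pulled toward truth
```

The unobserved third component is still corrected through its sample cross-covariance with the observed ones, which is how observations of debris motion can inform density parameters.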

  20. Local ensemble transform Kalman filter for ionospheric data assimilation: Observation influence analysis during a geomagnetic storm event

    NASA Astrophysics Data System (ADS)

    Durazo, Juan A.; Kostelich, Eric J.; Mahalov, Alex

    2017-09-01

    We propose a targeted observation strategy, based on the influence matrix diagnostic, that optimally selects where additional observations may be placed to improve ionospheric forecasts. This strategy is applied in data assimilation observing system experiments, where synthetic electron density vertical profiles, which represent those of Constellation Observing System for Meteorology, Ionosphere, and Climate/Formosa satellite 3, are assimilated into the Thermosphere-Ionosphere-Electrodynamics General Circulation Model using the local ensemble transform Kalman filter during the 26 September 2011 geomagnetic storm. During each analysis step, the observation vector is augmented with five synthetic vertical profiles optimally placed to target electron density errors, using our targeted observation strategy. Forecast improvement due to assimilation of augmented vertical profiles is measured with the root-mean-square error (RMSE) of analyzed electron density, averaged over 600 km regions centered around the augmented vertical profile locations. Assimilating vertical profiles with targeted locations yields about 60%-80% reduction in electron density RMSE, compared to a 15% average reduction when assimilating randomly placed vertical profiles. Assimilating vertical profiles whose locations target the zonal component of neutral winds (Un) yields on average a 25% RMSE reduction in Un estimates, compared to a 2% average improvement obtained with randomly placed vertical profiles. These results demonstrate that our targeted strategy can improve data assimilation efforts during extreme events by detecting regions where additional observations would provide the largest benefit to the forecast.
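
The influence-matrix diagnostic underlying the targeting strategy can be illustrated for a linear analysis, where the self-sensitivity matrix is S = HK and its diagonal measures how strongly each observation constrains its own analysis. The toy covariances below are invented for illustration.

```python
import numpy as np

# Observation influence (self-sensitivity) matrix S = H K for a linear
# analysis; large diagonal entries flag observations that most strongly
# constrain the analysis -- natural candidates for targeting.
P = np.diag([1.0, 4.0, 0.25])          # background error covariance (toy)
H = np.eye(3)                          # each state observed directly
R = np.eye(3)                          # unit observation error variances
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
S = H @ K
print(np.round(np.diag(S), 2))  # [0.5 0.8 0.2]: largest-spread state dominates
```

In the targeted-observation setting, candidate vertical-profile locations with the largest projected influence on the error-prone fields are the ones selected for augmentation.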

  1. Ocean state and uncertainty forecasts using HYCOM with the Local Ensemble Transform Kalman Filter (LETKF)

    NASA Astrophysics Data System (ADS)

    Wei, Mozheng; Hogan, Pat; Rowley, Clark; Smedstad, Ole-Martin; Wallcraft, Alan; Penny, Steve

    2017-04-01

    An ensemble forecast system based on the US Navy's operational HYCOM using Local Ensemble Transform Kalman Filter (LETKF) technology has been developed for ocean state and uncertainty forecasts. One of the advantages is that the best possible initial analysis states for the HYCOM forecasts are provided by the LETKF, which assimilates the operational observations using an ensemble method. The background covariance during this assimilation process is supplied by the ensemble, thus avoiding the difficulty of developing tangent-linear and adjoint models for 4D-VAR from the complicated hybrid isopycnal vertical coordinate in HYCOM. Another advantage is that the ensemble system provides a valuable uncertainty estimate corresponding to every state forecast from HYCOM. Uncertainty forecasts have proven critical for downstream users and managers to make more scientifically sound decisions in the numerical prediction community. In addition, the ensemble mean is generally more accurate and skillful than a single traditional deterministic forecast with the same resolution. We will introduce the ensemble system design and setup, present some results from a 30-member ensemble experiment, and discuss scientific, technical and computational issues and challenges, such as covariance localization, inflation, model-related uncertainties and sensitivity to the ensemble size.
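
For reference, the ensemble-space algebra that makes the LETKF attractive can be sketched for a single local patch, following the standard formulation in which every inversion happens in the small k-dimensional ensemble space rather than in state or observation space. The two-variable toy problem and error levels are assumptions for illustration.

```python
import numpy as np

def letkf_analysis(X, y, H, R):
    """LETKF analysis for one local patch: all inversions are performed in
    the k-dimensional ensemble space, never in state or obs space."""
    k = X.shape[1]
    xm = X.mean(axis=1, keepdims=True)
    Xp = X - xm                                   # background perturbations
    Yp = H @ Xp                                   # perturbations in obs space
    C = Yp.T @ np.linalg.inv(R)
    Pa = np.linalg.inv((k - 1) * np.eye(k) + C @ Yp)   # ens-space analysis cov
    wm = Pa @ C @ (y[:, None] - H @ xm)                # mean-update weights
    evals, evecs = np.linalg.eigh((k - 1) * Pa)
    Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T     # symmetric square root
    return xm + Xp @ (wm + Wa)                         # analysis ensemble

rng = np.random.default_rng(3)
truth = np.array([0.5, -1.0])
X = truth[:, None] + rng.standard_normal((2, 20))  # 20-member prior ensemble
H = np.eye(2)
R = 0.05 * np.eye(2)
Xa = letkf_analysis(X, H @ truth, H, R)
print(np.abs(Xa.mean(axis=1) - truth))             # analysis mean near truth
```

In a full system this update is repeated independently for each grid point using only nearby observations (the "local" in LETKF), with inflation applied to the weights.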

  2. Minimalist ensemble algorithms for genome-wide protein localization prediction.

    PubMed

    Lin, Jhih-Rong; Mondal, Ananda Mohan; Liu, Rong; Hu, Jianjun

    2012-07-03

    Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature selection based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that the high performance ensemble algorithms are usually composed of the predictors that together cover most of available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted voting based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual predictors. 
We proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi.
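
The classifier-based combination idea can be sketched with a small numpy logistic regression over hypothetical individual predictor scores. The predictors, their noise levels, and the use of |w| as a crude stand-in for the paper's contribution scores are all illustrative assumptions.

```python
import numpy as np

def fit_logreg(F, y, lr=0.5, steps=2000):
    """Fit a logistic-regression combiner on predictor outputs F
    (n_samples x n_predictors); returns weights plus a bias term."""
    Fb = np.hstack([F, np.ones((F.shape[0], 1))])     # add bias column
    w = np.zeros(Fb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(Fb @ w, -30, 30)))
        w -= lr * Fb.T @ (p - y) / len(y)             # gradient descent step
    return w

rng = np.random.default_rng(4)
y = rng.integers(0, 2, 500).astype(float)             # true localization labels
F = np.column_stack([
    y + 0.5 * rng.standard_normal(500),               # strong individual predictor
    y + 1.0 * rng.standard_normal(500),               # weaker predictor
    rng.standard_normal(500),                         # uninformative predictor
])
w = fit_logreg(F, y)
# |w| acts as a rough contribution score: the uninformative predictor
# gets the smallest weight and is a candidate for removal.
print(np.abs(w[:3]).argmin())
```

Pruning low-contribution predictors in this way is the spirit of the "minimalist" design: keep the few predictors that together cover most of the feature space.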

  3. Minimalist ensemble algorithms for genome-wide protein localization prediction

    PubMed Central

    2012-01-01

    Background Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. Results This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature selection based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that the high performance ensemble algorithms are usually composed of the predictors that together cover most of available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted voting based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual predictors. 
Conclusions We proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi. PMID:22759391

  4. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.
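
A heavily simplified caricature of the construction: sample local discretization errors from an assumed random process, weight them by goal sensitivities, and sum to obtain a posterior ensemble of the goal error. In the paper the error statistics come from multi-resolution integrations and the sensitivities from automatic differentiation; both are invented stand-ins here.

```python
import numpy as np

rng = np.random.default_rng(5)
n_cells, n_members = 100, 50
# Local discretization-error standard deviations (in practice inferred
# from short integrations at two resolutions; assumed here).
sigma_local = 1e-3 * (1.0 + rng.random(n_cells))
# Goal sensitivities dG/dx_i (in practice from automatic differentiation).
sens = rng.standard_normal(n_cells)
# Posterior ensemble of the goal error: weighted sums of sampled local errors,
# all obtained without re-running the model at multiple resolutions.
goal_err = (sens * sigma_local * rng.standard_normal((n_members, n_cells))).sum(axis=1)
print(goal_err.std())   # spread = numerical uncertainty estimate of the goal
```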

  5. 4D Hybrid Ensemble-Variational Data Assimilation for the NCEP GFS: Outer Loops and Variable Transforms

    NASA Astrophysics Data System (ADS)

    Kleist, D. T.; Ide, K.; Mahajan, R.; Thomas, C.

    2014-12-01

    The use of hybrid error covariance models has become quite popular for numerical weather prediction (NWP). One such method for incorporating localized covariances from an ensemble within the variational framework utilizes an augmented control variable (EnVar), and has been implemented in the operational NCEP data assimilation system (GSI). By taking the existing 3D EnVar algorithm in GSI and allowing for four-dimensional ensemble perturbations, coupled with the 4DVar infrastructure already in place, a 4D EnVar capability has been developed. The 4D EnVar algorithm has a few attractive qualities relative to 4DVar, including the lack of need for tangent-linear and adjoint models as well as reduced computational cost. Preliminary results using real observations have been encouraging, showing forecast improvements nearly as large as were found in moving from 3DVAR to hybrid 3D EnVar. 4D EnVar is the method of choice for the next-generation assimilation system for use with the operational NCEP global model, the global forecast system (GFS). The use of an outer loop has long been the method of choice for 4DVar data assimilation to help address nonlinearity. An outer loop involves the re-running of the (deterministic) background forecast from the updated initial condition at the beginning of the assimilation window, and proceeding with another inner-loop minimization. Within 4D EnVar, a similar procedure can be adopted since the solver evaluates a 4D analysis increment throughout the window, consistent with the valid times of the 4D ensemble perturbations. In this procedure, the ensemble perturbations are kept fixed and centered about the updated background state. This is analogous to the quasi-outer-loop idea developed for the EnKF. Here, we present results for both toy-model and real NWP systems demonstrating the impact of incorporating outer loops to address nonlinearity within the 4D EnVar context. 
The appropriate amplitudes for observation and background error covariances in subsequent outer loops will be explored. Lastly, variable transformations on the ensemble perturbations will be utilized to help address issues of non-Gaussianity. This may be particularly important for variables that clearly have non-Gaussian error characteristics such as water vapor and cloud condensate.
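
Why outer loops help with nonlinearity can be seen in a scalar toy problem: each outer loop re-linearizes the observation operator about the updated state, so successive inner-loop increments converge on the nonlinear solution. The quadratic operator and error variances below are invented for illustration.

```python
import numpy as np

def inner_loop(xb, y, h, H_lin, B, R):
    """One linearized (inner-loop) analysis increment for a scalar state."""
    d = y - h(xb)                            # innovation against nonlinear h
    K = B * H_lin(xb) / (H_lin(xb) ** 2 * B + R)
    return xb + K * d

h = lambda x: x ** 2                         # nonlinear observation operator
H_lin = lambda x: 2.0 * x                    # its tangent linear
y = 4.0                                      # observation of truth = 2.0
x = 1.2                                      # background state
for _ in range(3):                           # outer loops: re-linearize about the update
    x = inner_loop(x, y, h, H_lin, B=1.0, R=1e-4)
print(round(x, 2))  # 2.0
```

A single linearization about x = 1.2 would stop short of the solution; the outer loops recover it, which is the Gauss-Newton behavior the variational outer loop exploits.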

  6. Assessing the Predictability of Convection using Ensemble Data Assimilation of Simulated Radar Observations in an LETKF system

    NASA Astrophysics Data System (ADS)

    Lange, Heiner; Craig, George

    2014-05-01

    This study uses the Local Ensemble Transform Kalman Filter (LETKF) to perform storm-scale Data Assimilation of simulated Doppler radar observations into the non-hydrostatic, convection-permitting COSMO model. In perfect model experiments (OSSEs), it is investigated how the limited predictability of convective storms affects precipitation forecasts. The study compares a fine analysis scheme with small RMS errors to a coarse scheme that allows for errors in position, shape and occurrence of storms in the ensemble. The coarse scheme uses superobservations, a coarser grid for analysis weights, a larger localization radius and a larger observation error, which allow a broadening of the Gaussian error statistics. Three-hour forecasts of convective systems (with typical lifetimes exceeding 6 hours) from the detailed analyses of the fine scheme are found to be superior to those of the coarse scheme during the first 1-2 hours with respect to the predicted storm positions. After 3 hours in the convective regime used here, the forecast quality of the two schemes appears indiscernible, judging by RMSE and verification methods for rain fields and objects. It is concluded that, for operational assimilation systems, the analysis scheme might not necessarily need to be detailed to the grid scale of the model. Depending on the forecast lead time, and on the presence of orographic or synoptic forcing that enhances the predictability of storm occurrences, analyses from a coarser scheme might suffice.
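
The superobservation ingredient of the coarse scheme can be sketched in one dimension: fine-grid observations are block-averaged onto a coarser grid, which reduces random error, while the coarser representation is accounted for by prescribing a larger observation error in the filter. All numbers here are illustrative.

```python
import numpy as np

def superob(obs, factor):
    """Block-average fine-grid observations in groups of `factor` to form
    superobservations on a coarser grid (1-D illustration)."""
    n = len(obs) // factor * factor
    return obs[:n].reshape(-1, factor).mean(axis=1)

rng = np.random.default_rng(6)
field = np.sin(np.linspace(0, 2 * np.pi, 400))   # "true" reflectivity-like field
obs = field + 0.5 * rng.standard_normal(400)     # noisy fine-grid radar obs
so = superob(obs, 8)                             # 50 superobservations
# Averaging shrinks random error by ~1/sqrt(8); representativeness error
# is then handled by assigning a larger observation error in assimilation.
print(len(so))  # 50
```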

  7. Efficient data assimilation algorithm for bathymetry application

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, H.; Lee, J. H.; Farthing, M.; Hesser, T.; Kitanidis, P. K.; Darve, E. F.

    2017-12-01

    Information on the evolving state of the nearshore zone bathymetry is crucial to shoreline management, recreational safety, and naval operations. The high cost and complex logistics of using ship-based surveys for bathymetry estimation have encouraged the use of remote sensing techniques. Data assimilation methods combine the remote sensing data and nearshore hydrodynamic models to estimate the unknown bathymetry and the corresponding uncertainties. In particular, several recent efforts have combined Kalman filter-based techniques such as ensemble-based Kalman filters with indirect video-based observations to address the bathymetry inversion problem. However, these methods often suffer from ensemble collapse and uncertainty underestimation. Here, the Compressed State Kalman Filter (CSKF) method is used to estimate the bathymetry based on observed wave celerity. In order to demonstrate the accuracy and robustness of the CSKF method, we consider twin tests with synthetic observations of wave celerity, while the bathymetry profiles are chosen based on surveys taken by the U.S. Army Corps of Engineers Field Research Facility (FRF) in Duck, NC. The first test case is a bathymetry estimation problem for a spatially smooth and temporally constant bathymetry profile. The second test case is a bathymetry estimation problem for a bathymetry evolving in time from a smooth to a non-smooth profile. For both problems, we compare the results of CSKF with those obtained by the local ensemble transform Kalman filter (LETKF), a popular ensemble-based Kalman filter method.

  8. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-05-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
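
The Schur-product localization described above can be sketched for a single state variable. A Gaussian taper is used here purely for illustration; operational systems typically use a compactly supported function such as Gaspari-Cohn.

```python
import numpy as np

def localize(P, dist, L):
    """Schur (entry-wise) product of a sample covariance with a
    distance-dependent correlation taper of length scale L."""
    return P * np.exp(-(dist / L) ** 2)       # illustrative Gaussian taper

rng = np.random.default_rng(7)
n, k = 40, 10                                 # 40 grid points, 10 members
X = rng.standard_normal((n, k))               # anomalies of a white truth field
P = X @ X.T / (k - 1)                         # noisy sample covariance
dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
P_loc = localize(P, dist, L=3.0)
# Spurious long-range sample covariances (true value: zero) are damped,
# while the diagonal variances are left untouched.
print(np.abs(P_loc[dist > 10]).max() < np.abs(P[dist > 10]).max())  # True
```

The multivariate question the paper addresses is how to choose such tapers consistently when P mixes several physical variables, so that the tapered matrix remains a valid covariance.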

  9. Sensitivity to perturbations and quantum phase transitions.

    PubMed

    Wisniacki, D A; Roncaglia, A J

    2013-05-01

    The local density of states or its Fourier transform, usually called the fidelity amplitude, are important measures of quantum irreversibility due to imperfect evolution. In this Rapid Communication we study both quantities in a paradigmatic many-body system, the Dicke Hamiltonian, where a single-mode bosonic field interacts with an ensemble of N two-level atoms. This model exhibits a quantum phase transition in the thermodynamic limit, while for finite instances the system undergoes a transition from quasi-integrability to quantum chaos. We show that the width of the local density of states clearly points out the imprints of the transition from integrability to chaos, but no trace remains of the quantum phase transition. The connection with the decay of the fidelity amplitude is also established.

  10. Ocean Predictability and Uncertainty Forecasts Using the Local Ensemble Transform Kalman Filter (LETKF)

    NASA Astrophysics Data System (ADS)

    Wei, M.; Hogan, P. J.; Rowley, C. D.; Smedstad, O. M.; Wallcraft, A. J.; Penny, S. G.

    2017-12-01

    Ocean predictability and uncertainty are studied with an ensemble system that has been developed based on the US Navy's operational HYCOM using the Local Ensemble Transform Kalman Filter (LETKF) technology. One of the advantages of this method is that the best possible initial analysis states for the HYCOM forecasts are provided by the LETKF, which assimilates operational observations using an ensemble method. The background covariance during this assimilation process is implicitly supplied by the ensemble, avoiding the difficult task of developing tangent-linear and adjoint models out of HYCOM, with its complicated hybrid isopycnal vertical coordinate, for 4D-VAR. The flow-dependent background covariance from the ensemble will be an indispensable part of the next-generation hybrid 4D-Var/ensemble data assimilation system. The predictability and uncertainty of the ocean forecasts are studied initially for the Gulf of Mexico. The results are compared with another ensemble system using the Ensemble Transform (ET) method, which has been used in the Navy's operational center. The advantages and disadvantages are discussed.

  11. Data Assimilation and Predictability Studies on Typhoon Sinlaku (2008) Using the WRF-LETKF System

    NASA Astrophysics Data System (ADS)

    Miyoshi, T.; Kunii, M.

    2011-12-01

    Data assimilation and predictability studies on tropical cyclones, with a particular focus on intensity forecasts, are performed with the newly developed Local Ensemble Transform Kalman Filter (LETKF) system with the WRF model. Taking advantage of intensive observations from the internationally collaborative T-PARC (THORPEX Pacific Asian Regional Campaign) project, we focus on Typhoon Sinlaku (2008), which intensified rapidly before making landfall in Taiwan. This study includes a number of data assimilation experiments, higher-resolution forecasts, and sensitivity analysis which quantifies the impacts of observations on forecasts. This presentation includes the latest achievements up to the time of the conference.

  12. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-12-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.

  13. Ensemble Bayesian forecasting system Part I: Theory and algorithms

    NASA Astrophysics Data System (ADS)

    Herr, Henry D.; Krzysztofowicz, Roman

    2015-05-01

    The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. 
Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of the predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
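
The three-component Monte Carlo pipeline can be caricatured in a few lines. The gamma input distribution, the square-root "hydrologic model", and the additive output noise below are invented stand-ins for the IEF, the basin model, and the meta-Gaussian HUP.

```python
import numpy as np

rng = np.random.default_rng(8)
n_ens = 300
precip = rng.gamma(2.0, 10.0, n_ens)         # IEF: ensemble of uncertain inputs
model = lambda p: 0.8 * np.sqrt(p)           # toy deterministic hydrologic model
stage = model(precip)                        # deterministic transformation
# HUP stage: add hydrologic uncertainty stochastically around each output
predictand = stage + 0.2 * rng.standard_normal(n_ens)
print(np.quantile(predictand, [0.1, 0.5, 0.9]))   # probabilistic stage forecast
```

The randomization idea in the EBFSR corresponds to drawing several HUP samples per model run, so the expensive deterministic step need not be repeated hundreds of times.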

  14. Muscle activation described with a differential equation model for large ensembles of locally coupled molecular motors.

    PubMed

    Walcott, Sam

    2014-10-01

    Molecular motors, by turning chemical energy into mechanical work, are responsible for active cellular processes. Often groups of these motors work together to perform their biological role. Motors in an ensemble are coupled and exhibit complex emergent behavior. Although large motor ensembles can be modeled with partial differential equations (PDEs) by assuming that molecules function independently of their neighbors, this assumption is violated when motors are coupled locally. It is therefore unclear how to describe the ensemble behavior of the locally coupled motors responsible for biological processes such as calcium-dependent skeletal muscle activation. Here we develop a theory to describe locally coupled motor ensembles and apply the theory to skeletal muscle activation. The central idea is that a muscle filament can be divided into two phases: an active and an inactive phase. Dynamic changes in the relative size of these phases are described by a set of linear ordinary differential equations (ODEs). As the dynamics of the active phase are described by PDEs, muscle activation is governed by a set of coupled ODEs and PDEs, building on previous PDE models. With comparison to Monte Carlo simulations, we demonstrate that the theory captures the behavior of locally coupled ensembles. The theory also plausibly describes and predicts muscle experiments from molecular to whole muscle scales, suggesting that a micro- to macroscale muscle model is within reach.
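
The phase-fraction part of the description can be caricatured by a single linear ODE for the active fraction a(t) of a filament, with assumed attachment and detachment rates; the full model couples such ODEs to PDEs for the dynamics within the active phase, which this sketch omits.

```python
# Toy two-phase activation: the active fraction a(t) of the filament obeys
#   da/dt = k_on * (1 - a) - k_off * a,
# mirroring the linear-ODE description of phase sizes (rates are invented).
k_on, k_off, dt = 2.0, 1.0, 1e-3
a = 0.0                              # filament starts fully inactive
for _ in range(10000):               # forward Euler to t = 10
    a += dt * (k_on * (1 - a) - k_off * a)
print(round(a, 3))  # 0.667 = steady state k_on / (k_on + k_off)
```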

  15. Signal enhancement based on complex curvelet transform and complementary ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Dong, Lieqian; Wang, Deying; Zhang, Yimeng; Zhou, Datong

    2017-09-01

    Signal enhancement is a necessary step in seismic data processing. In this paper we utilize the complementary ensemble empirical mode decomposition (CEEMD) and complex curvelet transform (CCT) methods to separate signal from random noise and thereby improve the signal-to-noise (S/N) ratio. First, the noisy original data are decomposed into a series of intrinsic mode function (IMF) profiles with the aid of CEEMD. The noisy IMFs are then transformed into the CCT domain. By choosing a different threshold for each IMF profile, based on its noise level, the noise in the original data can be suppressed. Finally, we illustrate the effectiveness of the approach on simulated and field datasets.
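The per-IMF thresholding step can be sketched generically. Soft thresholding is a common choice for suppressing noise-dominated transform coefficients, though the paper does not specify its exact threshold rule; the scale factor `k` is an assumption.

```python
import numpy as np

def soft_threshold(coeffs, tau):
    """Shrink transform-domain coefficients toward zero: coefficients with
    magnitude below tau (noise-dominated) are zeroed, larger ones are kept
    but reduced by tau."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)

def denoise_imfs(imf_coeffs, noise_levels, k=3.0):
    # a different threshold per IMF profile, scaled to its noise level
    return [soft_threshold(c, k * s) for c, s in zip(imf_coeffs, noise_levels)]
```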

  16. A study of regional-scale aerosol assimilation using a Stretch-NICAM

    NASA Astrophysics Data System (ADS)

    Misawa, S.; Dai, T.; Schutgens, N.; Nakajima, T.

    2013-12-01

    Although aerosol is considered harmful to human health and has become a social issue, aerosol models and emission inventories contain large uncertainties. In recent studies, data assimilation has been applied to aerosol simulation to obtain more accurate aerosol fields and emission inventories. Most of these studies, however, are carried out only on the global scale, and little research addresses regional-scale aerosol assimilation. In this study, we have created and verified a regional-scale aerosol assimilation system, with the aim of reducing the error associated with the aerosol emission inventory. Our assimilation system has been developed using an atmospheric climate model, NICAM (Non-hydrostatic ICosahedral Atmospheric Model; Satoh et al., 2008), with a stretch grid system, coupled with an aerosol transport model, SPRINTARS (Takemura et al., 2000). The assimilation scheme is the local ensemble transform Kalman filter (LETKF). To validate the system, we used simulated observational data created by adding artificial errors to the surface aerosol fields constructed by Stretch-NICAM-SPRINTARS, together with a small perturbation of the original emission inventory. The assimilation with these modified observations and emission inventory was performed for the Kanto Plain region around Tokyo, Japan, and the results indicate that the system reduces the relative error of aerosol concentration by 20%. Furthermore, we examined the sensitivity of the assimilation system to the ensemble size (5, 10, and 15 members) and the local patch (domain) radius (50 km, 100 km, and 200 km), both of which are tuning parameters in the LETKF. The runs with 5, 10, and 15 members show that the larger the ensemble, the smaller the relative error becomes; this is consistent with ensemble Kalman filter theory and implies that the assimilation system works properly. 
We also found that the assimilation does not work well with a 200 km patch radius, while a 50 km radius is less efficient than a 100 km radius; we therefore expect the optimal patch size to lie somewhere between 50 km and 200 km. We will also show an analysis of real data from the suspended particulate matter (SPM) network in the Kanto Plain region.
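The analysis step underlying the LETKF (here without localization) can be sketched in the standard ensemble transform Kalman filter formulation of Hunt et al. (2007); the toy scalar example is illustrative only.

```python
import numpy as np

def etkf_update(X, y, H, R):
    """One ETKF analysis step (Hunt et al., 2007 formulation).
    X: (n, N) forecast ensemble; y: (p,) observations;
    H: (p, n) observation operator; R: (p, p) observation-error covariance."""
    n, N = X.shape
    xbar = X.mean(axis=1, keepdims=True)
    Xp = X - xbar                               # state perturbations
    Yp = H @ Xp                                 # observation-space perturbations
    ybar = (H @ xbar).ravel()
    Rinv = np.linalg.inv(R)
    A = (N - 1) * np.eye(N) + Yp.T @ Rinv @ Yp
    evals, evecs = np.linalg.eigh(A)
    Pa = evecs @ np.diag(1.0 / evals) @ evecs.T           # analysis cov in weight space
    wbar = Pa @ Yp.T @ Rinv @ (y - ybar)                  # mean update weights
    Wa = evecs @ np.diag(np.sqrt((N - 1) / evals)) @ evecs.T
    return xbar + Xp @ (wbar[:, None] + Wa)               # analysis ensemble

# toy example: scalar state, near-perfect observation pulls the ensemble to y
X = np.array([[0.0, 1.0, 2.0]])
Xa = etkf_update(X, np.array([10.0]), np.array([[1.0]]), np.array([[1e-4]]))
```

In the LETKF proper, this update is applied independently within each local patch (the radius tuned in the study above), with each grid point assimilating only nearby observations.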

  17. The NRL relocatable ocean/acoustic ensemble forecast system

    NASA Astrophysics Data System (ADS)

    Rowley, C.; Martin, P.; Cummings, J.; Jacobs, G.; Coelho, E.; Bishop, C.; Hong, X.; Peggion, G.; Fabre, J.

    2009-04-01

    A globally relocatable regional ocean nowcast/forecast system has been developed to support rapid implementation of new regional forecast domains. The system is in operational use at the Naval Oceanographic Office for a growing number of regional and coastal implementations. The new system is the basis for an ocean acoustic ensemble forecast and adaptive sampling capability. We present an overview of the forecast system and the ocean ensemble and adaptive sampling methods. The forecast system consists of core ocean data analysis and forecast modules, software for domain configuration, surface and boundary condition forcing processing, and job control, and global databases for ocean climatology, bathymetry, tides, and river locations and transports. The analysis component is the Navy Coupled Ocean Data Assimilation (NCODA) system, a 3D multivariate optimum interpolation system that produces simultaneous analyses of temperature, salinity, geopotential, and vector velocity using remotely-sensed SST, SSH, and sea ice concentration, plus in situ observations of temperature, salinity, and currents from ships, buoys, XBTs, CTDs, profiling floats, and autonomous gliders. The forecast component is the Navy Coastal Ocean Model (NCOM). The system supports one-way nesting and multiple assimilation methods. The ensemble system uses the ensemble transform technique with error variance estimates from the NCODA analysis to represent initial condition error. Perturbed surface forcing or an atmospheric ensemble is used to represent errors in surface forcing. The ensemble transform Kalman filter is used to assess the impact of adaptive observations on future analysis and forecast uncertainty for both ocean and acoustic properties.

  18. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates

    NASA Astrophysics Data System (ADS)

    Wessberg, Johan; Stambaugh, Christopher R.; Kralik, Jerald D.; Beck, Pamela D.; Laubach, Mark; Chapin, John K.; Kim, Jung; Biggs, S. James; Srinivasan, Mandayam A.; Nicolelis, Miguel A. L.

    2000-11-01

    Signals derived from the rat motor cortex can be used for controlling one-dimensional movements of a robot arm. It remains unknown, however, whether real-time processing of cortical signals can be employed to reproduce, in a robotic device, the kind of complex arm movements used by primates to reach objects in space. Here we recorded the simultaneous activity of large populations of neurons, distributed in the premotor, primary motor and posterior parietal cortical areas, as non-human primates performed two distinct motor tasks. Accurate real-time predictions of one- and three-dimensional arm movement trajectories were obtained by applying both linear and nonlinear algorithms to cortical neuronal ensemble activity recorded from each animal. In addition, cortically derived signals were successfully used for real-time control of robotic devices, both locally and through the Internet. These results suggest that long-term control of complex prosthetic robot arm movements can be achieved by simple real-time transformations of neuronal population signals derived from multiple cortical areas in primates.
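A linear decoding scheme of the kind applied here can be sketched on synthetic data; the firing rates, true weights, and noise level below are invented for illustration and are not from the recordings in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "population activity": 500 time bins x 30 neurons (spike counts)
T, n_neurons = 500, 30
rates = rng.poisson(5.0, size=(T, n_neurons)).astype(float)

# hypothetical linear relation between population activity and hand position
w_true = rng.standard_normal(n_neurons)
hand_x = rates @ w_true + 0.1 * rng.standard_normal(T)

# fit linear decoder weights by least squares, then predict the trajectory
W, *_ = np.linalg.lstsq(rates, hand_x, rcond=None)
pred = rates @ W
```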

  19. Ensemble control of Kondo screening in molecular adsorbates

    DOE PAGES

    Maughan, Bret; Zahl, Percy; Sutter, Peter; ...

    2017-04-06

    Switching the magnetic properties of organic semiconductors on a metal surface has thus far largely been limited to molecule-by-molecule tip-induced transformations in scanned probe experiments. Here we demonstrate with molecular resolution that collective control of activated Kondo screening can be achieved in thin-films of the organic semiconductor titanyl phthalocyanine on Cu(110) to obtain tunable concentrations of Kondo impurities. Using low-temperature scanning tunneling microscopy and spectroscopy, we show that a thermally activated molecular distortion dramatically shifts surface–molecule coupling and enables ensemble-level control of Kondo screening in the interfacial spin system. This is accompanied by the formation of a temperature-dependent Abrikosov–Suhl–Kondo resonance in the local density of states of the activated molecules. This enables coverage-dependent control over activation to the Kondo screening state. Finally, our study thus advances the versatility of molecular switching for Kondo physics and opens new avenues for scalable bottom-up tailoring of the electronic structure and magnetic texture of organic semiconductor interfaces at the nanoscale.

  20. Stress-stress fluctuation formula for elastic constants in the NPT ensemble

    NASA Astrophysics Data System (ADS)

    Lips, Dominik; Maass, Philipp

    2018-05-01

    Several fluctuation formulas are available for calculating elastic constants from equilibrium correlation functions in computer simulations, but the ones available for simulations at constant pressure exhibit slow convergence properties and cannot be used for the determination of local elastic constants. To overcome these drawbacks, we derive a stress-stress fluctuation formula in the NPT ensemble based on known expressions in the NVT ensemble. We validate the formula in the NPT ensemble by calculating elastic constants for the simple nearest-neighbor Lennard-Jones crystal and by comparing the results with those obtained in the NVT ensemble. For both local and bulk elastic constants we find an excellent agreement between the simulated data in the two ensembles. To demonstrate the usefulness of the formula, we apply it to determine the elastic constants of a simulated lipid bilayer.

  1. Ensemble perception of color in autistic adults.

    PubMed

    Maule, John; Stanworth, Kirstie; Pellicano, Elizabeth; Franklin, Anna

    2017-05-01

    Dominant accounts of visual processing in autism posit that autistic individuals have an enhanced access to details of scenes [e.g., weak central coherence] which is reflected in a general bias toward local processing. Furthermore, the attenuated priors account of autism predicts that the updating and use of summary representations is reduced in autism. Ensemble perception describes the extraction of global summary statistics of a visual feature from a heterogeneous set (e.g., of faces, sizes, colors), often in the absence of local item representation. The present study investigated ensemble perception in autistic adults using a rapidly presented (500 msec) ensemble of four, eight, or sixteen elements representing four different colors. We predicted that autistic individuals would be less accurate when averaging the ensembles, but more accurate in recognizing individual ensemble colors. The results were consistent with the predictions. Averaging was impaired in autism, but only when ensembles contained four elements. Ensembles of eight or sixteen elements were averaged equally accurately across groups. The autistic group also showed a corresponding advantage in rejecting colors that were not originally seen in the ensemble. The results demonstrate the local processing bias in autism, but also suggest that the global perceptual averaging mechanism may be compromised under some conditions. The theoretical implications of the findings and future avenues for research on summary statistics in autism are discussed. Autism Res 2017, 10: 839-851. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  2. Mixture models for protein structure ensembles.

    PubMed

    Hirsch, Michael; Habeck, Michael

    2008-10-01

    Protein structure ensembles provide important insight into the dynamics and function of a protein and contain information that is not captured with a single static structure. However, it is not clear a priori to what extent the variability within an ensemble is caused by internal structural changes. Additional variability results from overall translations and rotations of the molecule. Moreover, most experimental data do not provide the information needed to relate the structures to a common reference frame. To report meaningful values of intrinsic dynamics, structural precision, conformational entropy, etc., it is therefore important to disentangle local from global conformational heterogeneity. We consider the task of disentangling local from global heterogeneity as an inference problem. We use probabilistic methods to infer from the protein ensemble missing information on reference frames and stable conformational sub-states. To this end, we model a protein ensemble as a mixture of Gaussian probability distributions of either entire conformations or structural segments. We learn these models from a protein ensemble using the expectation-maximization algorithm. Our first model can be used to find multiple conformers in a structure ensemble. The second model partitions the protein chain into locally stable structural segments or core elements and less structured regions typically found in loops. Both models are simple to implement and contain only a single free parameter: the number of conformers or structural segments. Our models can be used to analyse experimental ensembles, molecular dynamics trajectories and conformational change in proteins. The Python source code for protein ensemble analysis is available from the authors upon request.
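The expectation-maximization fitting of a Gaussian mixture can be illustrated in one dimension; this toy (with invented sub-state positions and widths) stands in for clustering full conformations into stable sub-states.

```python
import numpy as np

rng = np.random.default_rng(6)

def em_two_gaussians(x, n_iter=200):
    """Expectation-maximization for a two-component 1-D Gaussian mixture,
    a toy stand-in for clustering conformations into sub-states."""
    mu = np.array([x.min(), x.max()])
    sig = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        d = x[:, None] - mu
        p = pi * np.exp(-0.5 * (d / sig) ** 2) / sig
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and widths
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu
        sig = np.sqrt((r * d ** 2).sum(axis=0) / nk)
    return pi, mu, sig

# two well-separated "conformational sub-states"
x = np.concatenate([rng.normal(-2.0, 0.3, 400), rng.normal(2.0, 0.3, 600)])
pi, mu, sig = em_two_gaussians(x)
```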

  3. Ensemble perception of color in autistic adults

    PubMed Central

    Stanworth, Kirstie; Pellicano, Elizabeth; Franklin, Anna

    2016-01-01

    Dominant accounts of visual processing in autism posit that autistic individuals have an enhanced access to details of scenes [e.g., weak central coherence] which is reflected in a general bias toward local processing. Furthermore, the attenuated priors account of autism predicts that the updating and use of summary representations is reduced in autism. Ensemble perception describes the extraction of global summary statistics of a visual feature from a heterogeneous set (e.g., of faces, sizes, colors), often in the absence of local item representation. The present study investigated ensemble perception in autistic adults using a rapidly presented (500 msec) ensemble of four, eight, or sixteen elements representing four different colors. We predicted that autistic individuals would be less accurate when averaging the ensembles, but more accurate in recognizing individual ensemble colors. The results were consistent with the predictions. Averaging was impaired in autism, but only when ensembles contained four elements. Ensembles of eight or sixteen elements were averaged equally accurately across groups. The autistic group also showed a corresponding advantage in rejecting colors that were not originally seen in the ensemble. The results demonstrate the local processing bias in autism, but also suggest that the global perceptual averaging mechanism may be compromised under some conditions. The theoretical implications of the findings and future avenues for research on summary statistics in autism are discussed. Autism Res 2017, 10: 839–851. © 2016 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research PMID:27874263

  4. Simultaneous assimilation of AIRS Xco2 and meteorological observations in a carbon climate model with an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Liu, Junjie; Fung, Inez; Kalnay, Eugenia; Kang, Ji-Sun; Olsen, Edward T.; Chen, Luke

    2012-03-01

    This study is our first step toward the generation of 6 hourly 3-D CO2 fields that can be used to validate CO2 forecast models by combining CO2 observations from multiple sources using ensemble Kalman filtering. We discuss a procedure to assimilate Atmospheric Infrared Sounder (AIRS) column-averaged dry-air mole fraction of CO2 (Xco2) in conjunction with meteorological observations with the coupled Local Ensemble Transform Kalman Filter (LETKF)-Community Atmospheric Model version 3.5. We examine the impact of assimilating AIRS Xco2 observations on CO2 fields by comparing the results from the AIRS-run, which assimilates both AIRS Xco2 and meteorological observations, to those from the meteor-run, which only assimilates meteorological observations. We find that assimilating AIRS Xco2 results in a surface CO2 seasonal cycle and an N-S surface gradient closer to the observations. When taking into account the CO2 uncertainty estimation from the LETKF, the CO2 analysis brackets the observed seasonal cycle. Verification against independent aircraft observations shows that assimilating AIRS Xco2 improves the accuracy of the CO2 vertical profiles by about 0.5-2 ppm depending on location and altitude. The results show that the CO2 analysis ensemble spread in AIRS Xco2 space is between 0.5 and 2 ppm, and the CO2 analysis ensemble spread around the peak level of the averaging kernels is between 1 and 2 ppm. This uncertainty estimation is consistent with the magnitude of the CO2 analysis error verified against AIRS Xco2 observations and the independent aircraft CO2 vertical profiles.

  5. Predicting Flow Reversals in a Computational Fluid Dynamics Simulated Thermosyphon Using Data Assimilation.

    PubMed

    Reagan, Andrew J; Dubief, Yves; Dodds, Peter Sheridan; Danforth, Christopher M

    2016-01-01

    A thermal convection loop is an annular chamber filled with water, heated on the bottom half and cooled on the top half. With sufficiently large forcing of heat, the direction of fluid flow in the loop oscillates chaotically, dynamics analogous to the Earth's weather. As is the case for state-of-the-art weather models, we only observe the statistics over a small region of state space, making prediction difficult. To overcome this challenge, data assimilation (DA) methods, and specifically ensemble methods, use the computational model itself to estimate the uncertainty of the model to optimally combine these observations into an initial condition for predicting the future state. Here, we build and verify four distinct DA methods, and then we perform a twin model experiment with the computational fluid dynamics simulation of the loop using the Ensemble Transform Kalman Filter (ETKF) to assimilate observations and predict flow reversals. We show that using adaptively shaped localized covariance outperforms static localized covariance with the ETKF, and allows for the use of fewer observations in predicting flow reversals. We also show that a Dynamic Mode Decomposition (DMD) of the temperature and velocity fields recovers the low dimensional system underlying reversals, finding specific modes which together are predictive of reversal direction.
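The dynamic mode decomposition step can be sketched in its standard exact-DMD form; the two-state linear system below is a toy stand-in for the temperature and velocity snapshot matrices of the paper.

```python
import numpy as np

def dmd_eigs(X, Xp, r):
    """Exact DMD: eigenvalues of the best-fit linear operator A with Xp ~ A X,
    computed in the rank-r POD subspace of the snapshot matrix X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

# toy linear system x_{k+1} = A x_k; DMD recovers the eigenvalues of A
A = np.array([[0.9, 0.2], [0.0, 0.5]])
X = np.zeros((2, 20))
X[:, 0] = [1.0, 1.0]
for k in range(19):
    X[:, k + 1] = A @ X[:, k]
lam = np.sort(dmd_eigs(X[:, :-1], X[:, 1:], r=2).real)
```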

  6. Predicting Flow Reversals in a Computational Fluid Dynamics Simulated Thermosyphon Using Data Assimilation

    PubMed Central

    Reagan, Andrew J.; Dubief, Yves; Dodds, Peter Sheridan; Danforth, Christopher M.

    2016-01-01

    A thermal convection loop is a annular chamber filled with water, heated on the bottom half and cooled on the top half. With sufficiently large forcing of heat, the direction of fluid flow in the loop oscillates chaotically, dynamics analogous to the Earth’s weather. As is the case for state-of-the-art weather models, we only observe the statistics over a small region of state space, making prediction difficult. To overcome this challenge, data assimilation (DA) methods, and specifically ensemble methods, use the computational model itself to estimate the uncertainty of the model to optimally combine these observations into an initial condition for predicting the future state. Here, we build and verify four distinct DA methods, and then, we perform a twin model experiment with the computational fluid dynamics simulation of the loop using the Ensemble Transform Kalman Filter (ETKF) to assimilate observations and predict flow reversals. We show that using adaptively shaped localized covariance outperforms static localized covariance with the ETKF, and allows for the use of less observations in predicting flow reversals. We also show that a Dynamic Mode Decomposition (DMD) of the temperature and velocity fields recovers the low dimensional system underlying reversals, finding specific modes which together are predictive of reversal direction. PMID:26849061

  7. Satellite Data Assimilation within KIAPS-LETKF system

    NASA Astrophysics Data System (ADS)

    Jo, Y.; Lee, S., Sr.; Cho, K.

    2016-12-01

    Korea Institute of Atmospheric Prediction Systems (KIAPS) has been developing an ensemble data assimilation system using a four-dimensional local ensemble transform Kalman filter (LETKF; Hunt et al., 2007) within the KIAPS Integrated Model (KIM), referred to as "KIAPS-LETKF". The KIAPS-LETKF system was successfully evaluated in various Observing System Simulation Experiments (OSSEs) with the NCAR Community Atmospheric Model - Spectral Element (Kang et al., 2013), which has fully unstructured quadrilateral meshes based on the cubed-sphere grid, the same grid system as KIM. Recently, assimilation of real observations has been conducted within the KIAPS-LETKF system with four-dimensional covariance functions over the 6-hr assimilation window. Conventional (e.g., sonde, aircraft, and surface) and satellite (e.g., AMSU-A, IASI, GPS-RO, and AMV) observations have been provided by the KIAPS Package for Observation Processing (KPOP). Wind-speed prediction benefits most from ingestion of AMV, while the improvement in temperature prediction is mostly due to ingestion of AMSU-A and IASI. However, some degradation is present in the upper stratosphere when GPS-RO is assimilated, even though GPS-RO leads to positive impacts on the analysis and forecasts. We plan to test the bias correction method and several vertical localization strategies for radiance observations to improve analysis and forecast impacts.

  8. Rapid sampling of local minima in protein energy surface and effective reduction through a multi-objective filter

    PubMed Central

    2013-01-01

    Background Many problems in protein modeling require obtaining a discrete representation of the protein conformational space as an ensemble of conformations. In ab-initio structure prediction, in particular, where the goal is to predict the native structure of a protein chain given its amino-acid sequence, the ensemble needs to satisfy energetic constraints. Given the thermodynamic hypothesis, an effective ensemble contains low-energy conformations which are similar to the native structure. The high-dimensionality of the conformational space and the ruggedness of the underlying energy surface currently make it very difficult to obtain such an ensemble. Recent studies have proposed that Basin Hopping is a promising probabilistic search framework to obtain a discrete representation of the protein energy surface in terms of local minima. Basin Hopping performs a series of structural perturbations followed by energy minimizations with the goal of hopping between nearby energy minima. This approach has been shown to be effective in obtaining conformations near the native structure for small systems. Recent work by us has extended this framework to larger systems through employment of the molecular fragment replacement technique, resulting in rapid sampling of large ensembles. Methods This paper investigates the algorithmic components in Basin Hopping to both understand and control their effect on the sampling of near-native minima. Realizing that such an ensemble is reduced before further refinement in full ab-initio protocols, we take an additional step and analyze the quality of the ensemble retained by ensemble reduction techniques. We propose a novel multi-objective technique based on the Pareto front to filter the ensemble of sampled local minima. 
Results and conclusions We show that controlling the magnitude of the perturbation allows directly controlling the distance between consecutively-sampled local minima and, in turn, steering the exploration towards conformations near the native structure. For the minimization step, we show that the addition of Metropolis Monte Carlo-based minimization is no more effective than a simple greedy search. Finally, we show that the size of the ensemble of sampled local minima can be effectively and efficiently reduced by a multi-objective filter to obtain a simpler representation of the probed energy surface. PMID:24564970
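The multi-objective filter reduces the ensemble to its Pareto-optimal (non-dominated) members; a minimal sketch over hypothetical (energy, distance) pairs, where lower is better in both objectives.

```python
def pareto_filter(points):
    """Keep only non-dominated points: p is dominated if some other point q
    is no worse in both objectives and is a different point."""
    def dominated(p):
        return any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
    return [p for p in points if not dominated(p)]

# hypothetical (energy, distance) pairs for sampled local minima
front = pareto_filter([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
```

Here (3, 3) and (4, 4) are dominated by (2, 2) and are dropped, leaving the three-point front; the paper's filter applies the same idea to its own objectives.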

  9. Ensemble density variational methods with self- and ghost-interaction-corrected functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pastorczak, Ewa; Pernal, Katarzyna, E-mail: pernalk@gmail.com

    2014-05-14

    Ensemble density functional theory (DFT) offers a way of predicting excited-state energies of atomic and molecular systems without referring to a density response function. Despite significant theoretical work, practical applications of the proposed approximations have been scarce and they do not allow for a fair judgement of the potential usefulness of ensemble DFT with available functionals. In the paper, we investigate two forms of ensemble density functionals formulated within the ensemble DFT framework: the Gross, Oliveira, and Kohn (GOK) functional proposed by Gross et al. [Phys. Rev. A 37, 2809 (1988)] alongside the orbital-dependent eDFT form of the functional introduced by Nagy [J. Phys. B 34, 2363 (2001)] (the acronym eDFT proposed in analogy to eHF – ensemble Hartree-Fock method). Local and semi-local ground-state density functionals are employed in both approaches. Approximate ensemble density functionals contain not only spurious self-interaction but also the so-called ghost-interaction which has no counterpart in the ground-state DFT. We propose how to correct the GOK functional for both kinds of interactions in approximations that go beyond the exact-exchange functional. Numerical applications lead to a conclusion that functionals free of the ghost-interaction by construction, i.e., eDFT, yield much more reliable results than the approximate self- and ghost-interaction-corrected GOK functional. Additionally, the local density functional corrected for self-interaction employed in the eDFT framework yields excitation energies of accuracy comparable to that of the uncorrected semi-local eDFT functional.

  10. Laser transit anemometer software development program

    NASA Technical Reports Server (NTRS)

    Abbiss, John B.

    1989-01-01

    Algorithms were developed for the extraction of two components of mean velocity, standard deviation, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values is also given.
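The velocity-space statistics (ensemble centroid, standard deviations, correlation coefficient) can be sketched directly from transformed samples; the synthetic data below assume the paper's two-dimensional Gaussian PDF model, with invented means and covariance, and estimate the moments from the sample rather than via the ellipse fit described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic two-component velocity samples from the assumed 2-D Gaussian PDF
mean = np.array([10.0, 2.0])
cov = np.array([[4.0, 1.2],
                [1.2, 1.0]])            # implies rho = 1.2 / (2.0 * 1.0) = 0.6
v = rng.multivariate_normal(mean, cov, size=200_000)

u_mean = v.mean(axis=0)                 # ensemble centroid -> mean velocities
C = np.cov(v, rowvar=False)
u_std = np.sqrt(np.diag(C))             # standard deviations
rho = C[0, 1] / (u_std[0] * u_std[1])   # correlation coefficient
```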

  11. Fidelity under isospectral perturbations: a random matrix study

    NASA Astrophysics Data System (ADS)

    Leyvraz, F.; García, A.; Kohler, H.; Seligman, T. H.

    2013-07-01

    The set of Hamiltonians generated by all unitary transformations from a single Hamiltonian is the largest set of isospectral Hamiltonians we can form. Taking advantage of the fact that the unitary group can be generated from Hermitian matrices we can take the ones generated by the Gaussian unitary ensemble with a small parameter as small perturbations. Similarly, the transformations generated by Hermitian antisymmetric matrices from orthogonal matrices form isospectral transformations among symmetric matrices. Based on this concept we can obtain the fidelity decay of a system that decays under a random isospectral perturbation with well-defined properties regarding time-reversal invariance. If we choose the Hamiltonian itself also from a classical random matrix ensemble, then we obtain solutions in terms of form factors in the limit of large matrices.
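The construction of an isospectral perturbation from a GUE-generated unitary can be sketched as follows; the matrix size and the small parameter `eps` are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def gue(n):
    """Hermitian matrix with GUE-style Gaussian entries."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2.0

H = gue(6)                  # the Hamiltonian
V = gue(6)                  # Hermitian generator of the perturbation
eps = 0.05                  # small parameter

# U = exp(i * eps * V) is unitary, so U H U^dagger is isospectral to H
w, Q = np.linalg.eigh(V)
U = Q @ np.diag(np.exp(1j * eps * w)) @ Q.conj().T
H_pert = U @ H @ U.conj().T
```

The spectra of `H` and `H_pert` agree to machine precision, while the eigenvectors (and hence the fidelity) change with `eps`.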

  12. Biased Metropolis Sampling for Rugged Free Energy Landscapes

    NASA Astrophysics Data System (ADS)

    Berg, Bernd A.

    2003-11-01

    Metropolis simulations of all-atom models of peptides (i.e. small proteins) are considered. Inspired by the funnel picture of Bryngelson and Wolynes, a transformation of the updating probabilities of the dihedral angles is defined, which uses probability densities from a higher temperature to improve the algorithmic performance at a lower temperature. The method is suitable for canonical as well as for generalized ensemble simulations. A simple approximation to the full transformation is tested at room temperature for Met-Enkephalin in vacuum. Integrated autocorrelation times are found to be reduced by factors close to two and a similar improvement due to generalized ensemble methods enters multiplicatively.
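The general idea (propose from a broader, higher-temperature density and correct the acceptance rule so the low-temperature ensemble is preserved) can be sketched as an independence Metropolis-Hastings sampler. The Gaussian target and proposal are illustrative stand-ins, not the dihedral-angle distributions of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def mh_biased(log_target, log_prop, sample_prop, n_steps, x0):
    """Independence Metropolis-Hastings: propose from a broad density and
    use the acceptance ratio to correct for the bias, so the narrow
    target distribution is sampled exactly."""
    x, lx = x0, log_target(x0) - log_prop(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        y = sample_prop()
        ly = log_target(y) - log_prop(y)
        if np.log(rng.random()) < ly - lx:
            x, lx = y, ly
        chain[i] = x
    return chain

beta = 4.0  # "low-temperature" target ~ exp(-beta * x^2); proposal is broader
chain = mh_biased(lambda x: -beta * x * x,
                  lambda x: -0.5 * x * x,        # unit-Gaussian proposal
                  lambda: rng.standard_normal(),
                  50_000, 0.0)
```

Because the proposal has heavier tails than the target, moves between basins are frequent, yet the stationary distribution remains exp(-beta x^2), with variance 1/(2 beta).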

  13. Short-Circuit Fault Detection and Classification Using Empirical Wavelet Transform and Local Energy for Electric Transmission Line.

    PubMed

    Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin

    2017-09-16

    In order to improve the classification accuracy of recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is used to deal with the original short-circuit fault signals from photoelectric voltage transformers, before the amplitude modulated-frequency modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Subsequently, the fault occurrence time is detected according to the modulus maxima of intrinsic mode function (IMF₂) from three-phase voltage signals processed by EWT. After this process, the feature vectors are constructed by calculating the LE of the fundamental frequency based on the three-phase voltage signals of one period after the fault occurred. Finally, the classifier based on support vector machine (SVM) which was constructed with the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN methods, the new method using EWT has a better ability to present the frequency in time. The difference in the characteristics of the energy distribution in the time domain between different types of short-circuit faults can be presented by the feature vectors of LE. Together, simulations and real-signal experiments demonstrate the validity and effectiveness of the new approach.

  14. Short-Circuit Fault Detection and Classification Using Empirical Wavelet Transform and Local Energy for Electric Transmission Line

    PubMed Central

    Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin

    2017-01-01

    In order to improve the classification accuracy of recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is used to deal with the original short-circuit fault signals from photoelectric voltage transformers, before the amplitude modulated-frequency modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Subsequently, the fault occurrence time is detected according to the modulus maxima of intrinsic mode function (IMF2) from three-phase voltage signals processed by EWT. After this process, the feature vectors are constructed by calculating the LE of the fundamental frequency based on the three-phase voltage signals of one period after the fault occurred. Finally, the classifier based on support vector machine (SVM) which was constructed with the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN methods, the new method using EWT has a better ability to present the frequency in time. The difference in the characteristics of the energy distribution in the time domain between different types of short-circuit faults can be presented by the feature vectors of LE. Together, simulations and real-signal experiments demonstrate the validity and effectiveness of the new approach. PMID:28926953

  15. Intelligent Ensemble Forecasting System of Stock Market Fluctuations Based on Symmetric and Asymmetric Wavelet Functions

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim; Boukadoum, Mounir

    2015-08-01

    We present a new ensemble system for stock market return prediction in which the continuous wavelet transform (CWT) is used to analyze return series and backpropagation neural networks (BPNNs) process the CWT coefficients, determine the optimal ensemble weights, and provide the final forecasts. Particle swarm optimization (PSO) finds the optimal weights and biases of each BPNN. To capture symmetry and asymmetry in the underlying data, three wavelet functions with different shapes are adopted. The proposed ensemble system was tested on three Asian stock markets: the Hang Seng, KOSPI, and Taiwan market data. Forecasting accuracy was evaluated with three statistical metrics: mean absolute error (MAE), root mean squared error (RMSE), and mean absolute deviation (MAD). Experimental results showed that the proposed ensemble system outperformed both the individual CWT-ANN models, each using a different wavelet function, and a conventional autoregressive moving average process. The ensemble system is therefore well suited to capturing symmetry and asymmetry in financial data fluctuations for better prediction accuracy.
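    The weight-determination step can be illustrated with a stand-in: the system above uses a BPNN tuned by PSO to combine member forecasts, whereas the sketch below simply fits ensemble weights by ordinary least squares on a validation window. All data are synthetic.

```python
import numpy as np

# Hypothetical sketch: combine three individual forecasts with weights
# chosen by least squares on a validation window (a stand-in for the
# BPNN/PSO combiner of the actual system).
rng = np.random.default_rng(0)
y = rng.normal(size=50)                          # validation targets
F = np.column_stack([y + rng.normal(scale=s, size=50)
                     for s in (0.1, 0.3, 0.5)])  # three member forecasts
w, *_ = np.linalg.lstsq(F, y, rcond=None)        # ensemble weights
ensemble = F @ w

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# the fitted combination cannot do worse than any single member in-sample
print(rmse(ensemble, y) <= min(rmse(F[:, i], y) for i in range(3)))
```

On the fitting window the combined forecast is guaranteed to be at least as accurate as each member, since each member is itself a feasible weight vector.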

  16. Transition-Metal Chalcogenide/Graphene Ensembles for Light-Induced Energy Applications.

    PubMed

    Kagkoura, Antonia; Skaltsas, Theodosis; Tagmatarchis, Nikos

    2017-09-21

    Nanomaterials that harvest solar energy and convert it into other forms of energy have recently attracted great interest. In this context, transition metal chalcogenides (TMCs) have been in the spotlight because their optoelectronic properties make them strong candidates for energy conversion applications. Integrating TMCs with a strong electron-accepting material such as graphene to yield novel TMC/graphene ensembles is highly significant, since photoinduced charge transfer, leading to intra-ensemble charge separation, may occur. In this review, we highlight the utility of TMC/graphene ensembles, with a specific focus on the latest trends in applications, and also discuss their synthetic routes. TMC/graphene ensembles are photocatalytically active and superior to intact TMC analogues for photocatalytic H2 evolution, dye degradation, and redox transformations of organic compounds. Moreover, TMC/graphene ensembles have shown excellent prospects in photovoltaic and biosensing applications. Finally, the future prospects of such materials are outlined.

  17. Performance Analysis of Local Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tong, Xin T.

    2018-03-01

    Ensemble Kalman filter (EnKF) is an important data assimilation method for high-dimensional geophysical systems. Efficient implementation of EnKF in practice often involves the localization technique, which updates each component using only information within a local radius. This paper rigorously analyzes the local EnKF (LEnKF) for linear systems and shows that the filter error can be dominated by the ensemble covariance, as long as (1) the ensemble size exceeds the logarithm of the state dimension times a constant that depends only on the local radius, and (2) the forecast covariance matrix admits a stable localized structure. In particular, this indicates that with small system and observation noises, the filter remains accurate at long times even if it is not accurately initialized. The analysis also reveals an intrinsic inconsistency caused by the localization technique, which a stable localized structure is necessary to control. While this structure is usually taken for granted in the operation of LEnKF, it can be rigorously proved for linear systems with sparse local observations and weak local interactions. These theoretical results are validated by numerical implementation of LEnKF on a simple stochastic turbulence model in two dynamical regimes.
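    The localization operation the analysis concerns can be sketched generically: the sample covariance is tapered (Schur product) with a compactly supported correlation so that each component is influenced only by components within the local radius. The triangular taper and all dimensions below are illustrative choices, not from the paper.

```python
import numpy as np

# Minimal sketch of covariance localization in an EnKF.
rng = np.random.default_rng(1)
d, m, radius = 40, 10, 5
ens = rng.normal(size=(m, d))                 # m members, d-dim state
P = np.cov(ens, rowvar=False)                 # raw sample covariance (d x d)
i, j = np.meshgrid(np.arange(d), np.arange(d))
taper = np.clip(1.0 - np.abs(i - j) / radius, 0.0, 1.0)  # triangular taper
P_loc = P * taper                             # Schur-product localization

print(abs(P_loc[0, d - 1]) == 0.0)  # spurious distant correlations removed
```

With only 10 members the raw sample covariance is rank-deficient and full of spurious long-range correlations; tapering suppresses exactly the entries the local update never uses.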

  18. Complete ensemble local mean decomposition with adaptive noise and its application to fault diagnosis for rolling bearings

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin

    2018-06-01

    Mode mixing caused by intermittent signals is a persistent problem of the local mean decomposition (LMD) method. Using a noise-assisted approach, the ensemble local mean decomposition (ELMD) method alleviates the mode mixing of LMD to some degree. However, the product functions (PFs) produced by ELMD often contain considerable residual noise, so a relatively large number of ensemble trials is required to eliminate it. Furthermore, because different realizations of Gaussian white noise are added to the original signal, different trials may generate different numbers of PFs, making it difficult to take the ensemble mean. In this paper, a novel method called complete ensemble local mean decomposition with adaptive noise (CELMDAN) is proposed to solve these two problems. The method adds a particular, adaptive noise at every decomposition stage of each trial. Moreover, a unique residue is obtained after separating each PF and is used as the input for the next stage. Two simulated signals are analyzed to illustrate the advantages of CELMDAN over ELMD and CEEMDAN. To further demonstrate its efficiency, CELMDAN is applied to rolling-bearing fault diagnosis in an experimental case and an engineering case. The diagnosis results indicate that CELMDAN extracts more fault characteristic information with less interference than ELMD.
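    The "complete ensemble" bookkeeping can be illustrated with a deliberately crude toy: every trial at a given stage perturbs the same residue, the stage's PF is the ensemble mean of the trial components, and one unique residue is passed on, so every trial contributes the same number of components. A moving average stands in for the LMD sifting operator; nothing here reproduces CELMDAN itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def smooth(x, w=9):                 # crude low-pass stand-in for sifting
    return np.convolve(x, np.ones(w) / w, mode="same")

t = np.linspace(0, 1, 400)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
trials, eps = 20, 0.1
residue = signal.copy()
pfs = []
for stage in range(2):
    comps = []
    for _ in range(trials):
        noisy = residue + eps * rng.normal(size=t.size)  # noise-assisted trial
        comps.append(noisy - smooth(noisy))   # high-frequency component
    pf = np.mean(comps, axis=0)               # ensemble-averaged PF
    pfs.append(pf)
    residue = residue - pf                    # unique residue to next stage
print(len(pfs) == 2 and residue.shape == signal.shape)
```

The key structural point is the single residue per stage: ensemble averaging always happens over components of the same order.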

  19. Toward an Accurate Theoretical Framework for Describing Ensembles for Proteins under Strongly Denaturing Conditions

    PubMed Central

    Tran, Hoang T.; Pappu, Rohit V.

    2006-01-01

    Our focus is on an appropriate theoretical framework for describing highly denatured proteins. In high concentrations of denaturants, proteins behave like polymers in a good solvent and ensembles for denatured proteins can be modeled by ignoring all interactions except excluded volume (EV) effects. To assay conformational preferences of highly denatured proteins, we quantify a variety of properties for EV-limit ensembles of 23 two-state proteins. We find that modeled denatured proteins can be best described as follows. Average shapes are consistent with prolate ellipsoids. Ensembles are characterized by large correlated fluctuations. Sequence-specific conformational preferences are restricted to local length scales that span five to nine residues. Beyond local length scales, chain properties follow well-defined power laws that are expected for generic polymers in the EV limit. The average available volume is filled inefficiently, and cavities of all sizes are found within the interiors of denatured proteins. All properties characterized from simulated ensembles match predictions from rigorous field theories. We use our results to distinguish between conflicting proposals for structure in ensembles of highly denatured states. PMID:16766618

  20. Modelling machine ensembles with discrete event dynamical system theory

    NASA Technical Reports Server (NTRS)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for the complex machine ensembles that will be required for future in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks under given constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. From the perspective of DEDS theory, a local model is described by: a set of system and transition states; an event alphabet that portrays the actions that take a submachine from one state to another; an initial system state; a partial function that maps the current state and event alphabet to the next state; and the time required for each event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models so that they can operate in parallel under the additional logistic and physical constraints imposed by submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or a feedback DEDS controller (closed-loop control).
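    A DEDS local model as enumerated above (states, event alphabet, initial state, partial transition function, event durations) can be encoded directly. The state and event names below are illustrative, not taken from the cited work.

```python
# Minimal sketch of one DEDS "local model" for a submachine.
local_model = {
    "states": {"idle", "moving", "gripping"},
    "initial": "idle",
    # partial transition function: (state, event) -> next state
    "delta": {("idle", "start"): "moving",
              ("moving", "arrive"): "gripping",
              ("gripping", "release"): "idle"},
    # time required for each event to occur
    "duration": {"start": 1.0, "arrive": 4.0, "release": 2.0},
}

def run(model, events):
    """Drive the submachine through an event string; return the final
    state and the total elapsed time."""
    state, elapsed = model["initial"], 0.0
    for e in events:
        state = model["delta"][(state, e)]
        elapsed += model["duration"][e]
    return state, elapsed

print(run(local_model, ["start", "arrive", "release"]))  # ('idle', 7.0)
```

A global model would compose several such local models and restrict their joint event sequences to those satisfying the interaction constraints.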

  1. Information flow in an atmospheric model and data assimilation

    NASA Astrophysics Data System (ADS)

    Yoon, Young-noh

    2011-12-01

    Weather forecasting consists of two processes, model integration and analysis (data assimilation). During the model integration, the state estimate produced by the analysis evolves to the next cycle time according to the atmospheric model to become the background estimate. The analysis then produces a new state estimate by combining the background state estimate with new observations, and the cycle repeats. In an ensemble Kalman filter, the probability distribution of the state estimate is represented by an ensemble of sample states, and the covariance matrix is calculated using the ensemble of sample states. We perform numerical experiments on toy atmospheric models introduced by Lorenz in 2005 to study the information flow in an atmospheric model in conjunction with ensemble Kalman filtering for data assimilation. This dissertation consists of two parts. The first part of this dissertation is about the propagation of information and the use of localization in ensemble Kalman filtering. If we can perform data assimilation locally by considering the observations and the state variables only near each grid point, then we can reduce the number of ensemble members necessary to cover the probability distribution of the state estimate, reducing the computational cost for the data assimilation and the model integration. Several localized versions of the ensemble Kalman filter have been proposed. Although tests applying such schemes have proven them to be extremely promising, a full basic understanding of the rationale and limitations of localization is currently lacking. We address these issues and elucidate the role played by chaotic wave dynamics in the propagation of information and the resulting impact on forecasts. The second part of this dissertation is about ensemble regional data assimilation using joint states. 
Assuming that we have a global model and a regional model of higher accuracy defined in a subregion inside the global region, we propose a data assimilation scheme that produces the analyses for the global and the regional model simultaneously, considering forecast information from both models. We show that our new data assimilation scheme produces better results both in the subregion and the global region than the data assimilation scheme that produces the analyses for the global and the regional model separately.

  2. Using ensemble models to identify and apportion heavy metal pollution sources in agricultural soils on a local scale.

    PubMed

    Wang, Qi; Xie, Zhiyi; Li, Fangbai

    2015-11-01

    This study aims to identify and apportion multi-source and multi-phase heavy metal pollution from natural and anthropogenic inputs in agricultural soils on a local scale, using ensemble models that include stochastic gradient boosting (SGB) and random forest (RF). The heavy metal pollution sources were quantitatively assessed, and the results illustrated the suitability of ensemble models for this kind of assessment. SGB and RF consistently demonstrated that anthropogenic sources contributed the most to the concentrations of Pb and Cd in agricultural soils in the study region, and SGB performed better than RF.

  3. On the Local Equivalence Between the Canonical and the Microcanonical Ensembles for Quantum Spin Systems

    NASA Astrophysics Data System (ADS)

    Tasaki, Hal

    2018-06-01

    We study a quantum spin system on the d-dimensional hypercubic lattice Λ with N = L^d sites and periodic boundary conditions, with an arbitrary translation-invariant short-ranged Hamiltonian. For this system, we consider both the canonical ensemble with inverse temperature β₀ and the microcanonical ensemble with the corresponding energy U_N(β₀). For an arbitrary self-adjoint operator Â whose support is contained in a hypercubic block B inside Λ, we prove that the expectation values of Â with respect to these two ensembles are close to each other for large N, provided that β₀ is sufficiently small and the number of sites in B is o(N^(1/2)). This establishes the equivalence of ensembles at the level of local states in a large but finite system. The result is essentially that of Brandao and Cramer (here restricted to the case of the canonical and the microcanonical ensembles), but we prove improved estimates in an elementary manner. We also review and prove standard results on the thermodynamic limits of thermodynamic functions and the equivalence of ensembles in terms of thermodynamic functions. The present paper assumes only elementary knowledge of quantum statistical mechanics and quantum spin systems.
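    The equivalence statement can be checked numerically on a deliberately trivial toy: N independent two-level sites (the simplest short-ranged model), comparing the canonical single-site occupation at β₀ with the microcanonical value in the energy shell matching U_N(β₀). All parameters are illustrative.

```python
import math

# Toy check of canonical/microcanonical agreement for a local observable.
N, beta0 = 20, 0.7
p1 = math.exp(-beta0) / (1 + math.exp(-beta0))  # canonical P(site excited)
U = N * p1                                      # matching mean energy U_N(beta0)
k = round(U)                                    # microcanonical shell: k sites excited
canonical = p1                                  # canonical <n_1>
micro = k / N                                   # microcanonical <n_1> by symmetry
print(abs(canonical - micro) < 0.05)            # local expectations nearly agree
```

Even at N = 20 the two local expectations differ only at the few-percent level; the theorem quantifies how this gap closes as N grows for genuinely interacting short-ranged Hamiltonians.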

  4. Eye pupil detection system using an ensemble of regression forest and fast radial symmetry transform with a near infrared camera

    NASA Astrophysics Data System (ADS)

    Jeong, Mira; Nam, Jae-Yeal; Ko, Byoung Chul

    2017-09-01

    In this paper, we focus on pupil center detection in video sequences that include varying head poses and changes in illumination. To detect the pupil center, we first find four eye landmarks in each eye using cascaded local regression based on a regression forest. A fast radial symmetry transform, seeded with the rough pupil location found previously, is then applied to refine the pupil center. As the final step, the pupil displacement between the previous and current frames is estimated to maintain accuracy against false localization results occurring in particular frames. We generated a new face dataset, called Keimyung University pupil detection (KMUPD), with an infrared camera. The proposed method was successfully applied to the KMUPD dataset, and the results indicate that it detects the pupil center more accurately than other methods, with a shorter processing time.

  5. Changing precipitation in western Europe, climate change or natural variability?

    NASA Astrophysics Data System (ADS)

    Aalbers, Emma; Lenderink, Geert; van Meijgaard, Erik; van den Hurk, Bart

    2017-04-01

    Multi-model RCM-GCM ensembles provide high-resolution climate projections, valuable for, among other uses, climate impact assessment studies. While the application of multiple models (both GCMs and RCMs) provides a certain robustness with respect to model uncertainty, the interpretation of differences between ensemble members - the combined result of model uncertainty and natural variability of the climate system - is not straightforward. Natural variability is intrinsic to the climate system and a potentially large source of uncertainty in climate change projections, especially on the local to regional scale. To quantify the natural variability and obtain a robust estimate of the forced climate change response (given a certain model and forcing scenario), large ensembles of simulations with the same climate model provide essential information. While for global climate models (GCMs) a number of such large single-model ensembles exist and have been analyzed, for regional climate models (RCMs) the number and size of single-model ensembles is limited, and the predictability of the forced climate response at the local to regional scale is still rather uncertain. We present a regional downscaling of a 16-member single-model ensemble over western Europe and the Alps at a resolution of 0.11 degrees (~12 km), similar to the highest-resolution EURO-CORDEX simulations. This 16-member ensemble was generated by the GCM EC-EARTH and downscaled with the RCM RACMO for the period 1951-2100. It has been investigated in terms of the ensemble mean response (our estimate of the forced climate response) as well as the differences between ensemble members, which measure natural variability. We focus on the response in seasonal mean and extreme precipitation (seasonal maxima and extremes with return periods up to 20 years) for the near to far future. For most precipitation indices we can reliably determine the climate change signal, given the applied model chain and forcing scenario. However, the analysis also shows how little information single ensemble members carry about the local-scale forced climate response, even for high levels of global warming when the forced response has emerged from natural variability. Analysis and application of multi-model ensembles like EURO-CORDEX should go hand in hand with single-model ensembles like the one presented here, so that fine-scale information can be correctly interpreted in terms of a forced signal and random noise due to natural variability.
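    The single-model-ensemble logic above can be sketched in a few lines: the forced response is estimated as the ensemble-mean change, and natural variability as the member-to-member spread of changes. The synthetic "members" below share one common trend plus independent noise; all numbers are illustrative.

```python
import numpy as np

# Forced signal vs. natural variability from a synthetic 16-member ensemble.
rng = np.random.default_rng(3)
members, years = 16, 150
trend = np.linspace(0.0, 2.0, years)               # common forced signal
runs = trend + rng.normal(scale=0.5, size=(members, years))
# change = mean of last 30 years minus mean of first 30 years, per member
change = runs[:, -30:].mean(axis=1) - runs[:, :30].mean(axis=1)
forced = change.mean()                             # ensemble-mean response
natural = change.std(ddof=1)                       # member-to-member spread
print(forced > 2 * natural)  # here the signal has emerged from the noise
```

A single member's change mixes both terms; only the ensemble separates them, which is the point the abstract makes about interpreting individual EURO-CORDEX members.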

  6. Cell-cell bioelectrical interactions and local heterogeneities in genetic networks: a model for the stabilization of single-cell states and multicellular oscillations.

    PubMed

    Cervera, Javier; Manzanares, José A; Mafe, Salvador

    2018-04-04

    Genetic networks operate in the presence of local heterogeneities in single-cell transcription and translation rates. Bioelectrical networks and spatio-temporal maps of cell electric potentials can influence multicellular ensembles. Could cell-cell bioelectrical interactions mediated by intercellular gap junctions contribute to the stabilization of multicellular states against local genetic heterogeneities? We theoretically analyze this question on the basis of two well-established experimental facts: (i) the membrane potential is a reliable read-out of the single-cell electrical state and (ii) when the cells are coupled together, their individual cell potentials can be influenced by ensemble-averaged electrical potentials. We propose a minimal biophysical model for the coupling between genetic and bioelectrical networks that associates the local changes occurring in the transcription and translation rates of an ion channel protein with abnormally low (depolarized) cell potentials. We then analyze the conditions under which the depolarization of a small region (patch) in a multicellular ensemble can be reverted by its bioelectrical coupling with the (normally polarized) neighboring cells. We show also that the coupling between genetic and bioelectric networks of non-excitable cells, modulated by average electric potentials at the multicellular ensemble level, can produce oscillatory phenomena. The simulations show the importance of single-cell potentials characteristic of polarized and depolarized states, the relative sizes of the abnormally polarized patch and the rest of the normally polarized ensemble, and intercellular coupling.
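    The central question above (can coupling to normally polarized neighbors revert a depolarized patch?) can be sketched with a minimal relaxation model: each cell relaxes toward its own resting potential but is also pulled toward the ensemble-averaged potential through gap-junction coupling. All parameter values are illustrative, not from the paper.

```python
import numpy as np

# A small depolarized patch coupled to a normally polarized majority.
n, patch = 20, 3
v_rest = np.full(n, -60.0)          # mV: normal polarized attractor
v_rest[:patch] = -10.0              # the patch's own (depolarized) attractor
v = v_rest.copy()
g = 5.0                             # gap-junction coupling strength
dt = 0.01
for _ in range(5000):
    v_mean = v.mean()               # ensemble-averaged potential
    # relax toward own resting state, pulled toward the ensemble average
    v += dt * ((v_rest - v) + g * (v_mean - v))
print(v[:patch].mean() < -30.0)     # patch dragged toward the polarized state
```

At steady state each cell sits at (v_rest + g·v̄)/(1 + g), so for sufficiently strong coupling the small patch ends up much closer to the polarized majority than to its own depolarized attractor.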

  7. Nonuniform fluids in the grand canonical ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percus, J.K.

    1982-01-01

    Nonuniform simple classical fluids are considered quite generally. The grand canonical ensemble is conceptually the most suitable in the leading approximation of local thermodynamics, which figuratively divides the system into approximately uniform spatial subsystems. The procedure by which this approach is systematically corrected for slowly varying density profiles is reviewed, and a model is suggested that carries the correction into the domain of local fluctuations. The latter is assessed for substrate-bounded fluids as well as for two-phase interfaces. The peculiarities of the grand ensemble in a two-phase region stem from the inherently very large number fluctuations. A primitive model shows how these are quenched in the canonical ensemble. This is taken advantage of by applying the Kac-Siegert representation of the van der Waals decomposition, with petit canonical corrections, to the two-phase regime.

  8. An optimized ensemble local mean decomposition method for fault detection of mechanical components

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Li, Zhixiong; Hu, Chao; Chen, Shuai; Wang, Jianguo; Zhang, Xiaogang

    2017-03-01

    Mechanical transmission systems are widely used in industrial applications, and their maintenance has attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components, helping prevent catastrophic failures and reduce maintenance costs. However, the performance of ELMD often depends heavily on proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimum set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (Relative RMSE) is used to evaluate the decomposition performance of ELMD for a given amplitude of the added white noise. Once the maximum Relative RMSE, corresponding to the optimal noise amplitude, is determined, OELMD identifies the optimal noise bandwidth and ensemble number based on the Relative RMSE and the signal-to-noise ratio (SNR), respectively. Thus all three critical parameters of ELMD (noise amplitude, noise bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (a rolling bearing, a gear, and a diesel engine) under faulty operating conditions.
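    The parameter-scan logic can be sketched as follows: candidate noise amplitudes are scored by a Relative RMSE between the signal and its ensemble-averaged decomposition, and the amplitude maximizing the score is kept. A moving-average "decomposition" stands in for ELMD itself, so the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 10 * t)

def decompose(sig, amp, trials=10):
    """Stand-in for ELMD: ensemble average of smoothed noisy copies."""
    comps = [np.convolve(sig + amp * rng.normal(size=sig.size),
                         np.ones(11) / 11, mode="same")
             for _ in range(trials)]
    return np.mean(comps, axis=0)

def relative_rmse(sig, rec):
    return np.sqrt(np.mean((sig - rec) ** 2)) / np.std(sig)

amps = [0.05, 0.1, 0.2, 0.4]
scores = [relative_rmse(x, decompose(x, a)) for a in amps]
best = amps[int(np.argmax(scores))]   # OELMD keeps the maximizing amplitude
print(best in amps)
```

In the real method the same index, together with the SNR, then fixes the remaining two parameters (noise bandwidth and ensemble number).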

  9. Sparsity guided empirical wavelet transform for fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Zhao, Yang; Yi, Cai; Tsui, Kwok-Leung; Lin, Jianhui

    2018-02-01

    Rolling element bearings are widely used in industrial machines such as electric motors, generators, pumps, gearboxes, railway axles, turbines, and helicopter transmissions. Fault diagnosis of rolling element bearings helps prevent unexpected accidents and reduce economic loss. Many bearing fault detection methods have been developed over the years. Recently, a new adaptive signal processing method called the empirical wavelet transform has attracted much attention from researchers and engineers, and its applications to bearing fault diagnosis have been reported. The main problem of the empirical wavelet transform is that the Fourier segments it requires depend strongly on the local maxima of the amplitude of the Fourier spectrum of a signal, which means that the segments are not always reliable and effective when the spectrum is complicated and overwhelmed by heavy noise and other strong vibration components. In this paper, a sparsity guided empirical wavelet transform is proposed to establish the required Fourier segments automatically for fault diagnosis of rolling element bearings. Industrial bearing fault signals caused by single and multiple railway axle bearing defects are used to verify the effectiveness of the proposed method. Results show that it automatically discovers the required Fourier segments and reveals single and multiple railway axle bearing defects. Comparisons with three popular signal processing methods, ensemble empirical mode decomposition, the fast kurtogram, and the fast spectral correlation, highlight the superiority of the proposed method.
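    The baseline behavior the paragraph critiques is the classic EWT boundary step: segment boundaries are placed between the largest local maxima of the Fourier magnitude spectrum. The sketch below shows only this maxima-dependent baseline (the sparsity-guided variant replaces it); the two-tone test signal is an illustrative assumption.

```python
import numpy as np

# Classic EWT-style boundary: midpoint between the two largest spectral peaks.
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
mag = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
# indices of local maxima of the magnitude spectrum
idx = np.where((mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]))[0] + 1
top = np.sort(idx[np.argsort(mag[idx])[-2:]])       # two largest peaks
boundary = 0.5 * (freqs[top[0]] + freqs[top[1]])    # midpoint boundary in Hz
print(50 < boundary < 120)
```

For this clean signal the boundary falls neatly between the two tones; with heavy noise the set of detected maxima becomes unstable, which is exactly the failure mode the sparsity-guided segmentation is designed to avoid.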

  10. Assimilation of MODIS Dark Target and Deep Blue Observations in the Dust Aerosol Component of NMMB-MONARCH version 1.0

    NASA Technical Reports Server (NTRS)

    Di Tomaso, Enza; Schutgens, Nick A. J.; Jorba, Oriol; Perez Garcia-Pando, Carlos

    2017-01-01

    A data assimilation capability has been built for the NMMB-MONARCH chemical weather prediction system, with a focus on mineral dust, a prominent type of aerosol. An ensemble-based Kalman filter technique (namely the local ensemble transform Kalman filter - LETKF) has been utilized to optimally combine model background and satellite retrievals. Our implementation of the ensemble is based on known uncertainties in the physical parametrizations of the dust emission scheme. Experiments showed that MODIS AOD retrievals using the Dark Target algorithm can help NMMB-MONARCH to better characterize atmospheric dust. This is particularly true for the analysis of the dust outflow in the Sahel region and over the African Atlantic coast. The assimilation of MODIS AOD retrievals based on the Deep Blue algorithm has a further positive impact in the analysis downwind from the strongest dust sources of the Sahara and in the Arabian Peninsula. An analysis-initialized forecast performs better (lower forecast error and higher correlation with observations) than a standard forecast, with the exception of underestimating dust in the long-range Atlantic transport and degradation of the temporal evolution of dust in some regions after day 1. Particularly relevant is the improved forecast over the Sahara throughout the forecast range thanks to the assimilation of Deep Blue retrievals over areas not easily covered by other observational datasets. The present study on mineral dust is a first step towards data assimilation with a complete aerosol prediction system that includes multiple aerosol species.

  11. Assimilation of MODIS Dark Target and Deep Blue observations in the dust aerosol component of NMMB-MONARCH version 1.0

    NASA Astrophysics Data System (ADS)

    Di Tomaso, Enza; Schutgens, Nick A. J.; Jorba, Oriol; Pérez García-Pando, Carlos

    2017-03-01

    A data assimilation capability has been built for the NMMB-MONARCH chemical weather prediction system, with a focus on mineral dust, a prominent type of aerosol. An ensemble-based Kalman filter technique (namely the local ensemble transform Kalman filter - LETKF) has been utilized to optimally combine model background and satellite retrievals. Our implementation of the ensemble is based on known uncertainties in the physical parametrizations of the dust emission scheme. Experiments showed that MODIS AOD retrievals using the Dark Target algorithm can help NMMB-MONARCH to better characterize atmospheric dust. This is particularly true for the analysis of the dust outflow in the Sahel region and over the African Atlantic coast. The assimilation of MODIS AOD retrievals based on the Deep Blue algorithm has a further positive impact in the analysis downwind from the strongest dust sources of the Sahara and in the Arabian Peninsula. An analysis-initialized forecast performs better (lower forecast error and higher correlation with observations) than a standard forecast, with the exception of underestimating dust in the long-range Atlantic transport and degradation of the temporal evolution of dust in some regions after day 1. Particularly relevant is the improved forecast over the Sahara throughout the forecast range thanks to the assimilation of Deep Blue retrievals over areas not easily covered by other observational datasets. The present study on mineral dust is a first step towards data assimilation with a complete aerosol prediction system that includes multiple aerosol species.
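    Both records above rely on the LETKF. Its core analysis step, shown here in its global ETKF form following the standard ensemble-space formulation (the LETKF applies the same transform within each local region), can be sketched as follows. Dimensions, the linear observation operator, and all data are synthetic.

```python
import numpy as np

# One ensemble-space analysis step: d state vars, m members, p observations.
rng = np.random.default_rng(6)
d, m, p = 8, 6, 4
Xb = rng.normal(size=(d, m))                  # background ensemble
H = np.eye(p, d)                              # observe the first p components
R = 0.5 * np.eye(p)                           # observation error covariance
truth = rng.normal(size=d)
y = H @ truth + rng.multivariate_normal(np.zeros(p), R)

xb = Xb.mean(axis=1, keepdims=True)           # background mean
Xpert = (Xb - xb) / np.sqrt(m - 1)            # normalized perturbations
Yb = H @ Xpert                                # observation-space perturbations
# analysis weights in ensemble space (no inflation, for brevity)
Pa_tilde = np.linalg.inv(np.eye(m) + Yb.T @ np.linalg.inv(R) @ Yb)
w = Pa_tilde @ Yb.T @ np.linalg.inv(R) @ (y - (H @ xb).ravel())
xa = xb.ravel() + Xpert @ w                   # analysis mean
print(xa.shape)
```

Because all matrix inversions happen in the m-dimensional ensemble space, the cost is independent of the (here tiny, in practice enormous) state dimension, which is what makes the localized version affordable for systems like NMMB-MONARCH.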

  12. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large-scale velocity fields is used to propose an ensemble-averaged version of the dynamic model. This produces local model parameters that depend only on the statistical properties of the flow. An important property of the ensemble-averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LESs provides statistics of the large-scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble-averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time-developing plane wake. It is found that the results are almost independent of the number of LESs in the statistical ensemble provided that the ensemble contains at least 16 realizations.

  13. State and parameter estimation of spatiotemporally chaotic systems illustrated by an application to Rayleigh-Bénard convection.

    PubMed

    Cornick, Matthew; Hunt, Brian; Ott, Edward; Kurtuldu, Huseyin; Schatz, Michael F

    2009-03-01

    Data assimilation refers to the process of estimating a system's state from a time series of measurements (which may be noisy or incomplete) in conjunction with a model for the system's time evolution. Here we demonstrate the applicability of a recently developed data assimilation method, the local ensemble transform Kalman filter, to nonlinear, high-dimensional, spatiotemporally chaotic flows in Rayleigh-Bénard convection experiments. Using this technique we are able to extract the full temperature and velocity fields from a time series of shadowgraph measurements. In addition, we describe extensions of the algorithm for estimating model parameters. Our results suggest the potential usefulness of our data assimilation technique to a broad class of experimental situations exhibiting spatiotemporal chaos.

  14. Hidden Structural Codes in Protein Intrinsic Disorder.

    PubMed

    Borkosky, Silvia S; Camporeale, Gabriela; Chemes, Lucía B; Risso, Marikena; Noval, María Gabriela; Sánchez, Ignacio E; Alonso, Leonardo G; de Prat Gay, Gonzalo

    2017-10-17

    Intrinsic disorder is a major structural category in biology, accounting for more than 30% of coding regions across the domains of life, yet consists of conformational ensembles in equilibrium, a major challenge in protein chemistry. Anciently evolved papillomavirus genomes constitute an unparalleled case for sequence-to-structure-function correlation in cases in which there are no folded structures. E7, the major transforming oncoprotein of human papillomaviruses, is a paradigmatic example among the intrinsically disordered proteins. Analysis of a large number of sequences of the same viral protein allowed for the identification of a handful of absolutely conserved residues, scattered along the sequence of its N-terminal intrinsically disordered domain, which intriguingly are mostly leucine residues. Mutation of these led to a pronounced increase in both α-helix and β-sheet structural content, reflected in drastic effects on equilibrium propensities and oligomerization kinetics, and uncovered the existence of local structural elements that oppose canonical folding. These folding relays suggest the existence of yet undefined hidden structural codes behind intrinsic disorder in this model protein. Thus, evolution pinpoints conformational hot spots that could not have been identified by direct experimental methods for analyzing or perturbing the equilibrium of an intrinsically disordered protein ensemble.

  15. Toward canonical ensemble distribution from self-guided Langevin dynamics simulation

    NASA Astrophysics Data System (ADS)

    Wu, Xiongwu; Brooks, Bernard R.

    2011-04-01

    This work derives a quantitative description of the conformational distribution in self-guided Langevin dynamics (SGLD) simulations. SGLD simulations employ guiding forces calculated from local average momenta to enhance low-frequency motion. This enhancement in low-frequency motion dramatically accelerates conformational search efficiency, but also induces certain perturbations in the conformational distribution. Through the local averaging, we separate properties of molecular systems into low-frequency and high-frequency portions. The guiding-force effect on the conformational distribution is quantitatively described using these low-frequency and high-frequency properties. This quantitative relation provides a way to convert between a canonical ensemble and a self-guided ensemble. Using example systems, we demonstrate how to utilize the relation to obtain canonical ensemble properties and conformational distributions from SGLD simulations. This development makes SGLD not only an efficient approach for conformational searching, but also an accurate means for conformational sampling.

  16. Preservation of physical properties with Ensemble-type Kalman Filter Algorithms

    NASA Astrophysics Data System (ADS)

    Janjic, T.

    2017-12-01

    We show the behavior of the localized ensemble Kalman filter (EnKF) with respect to preservation of positivity and conservation of mass, energy, and enstrophy in toy models that conserve these properties. In order to preserve physical properties in the analysis, as well as to deal with non-Gaussianity in an EnKF framework, Janjic et al. 2014 proposed the use of physically based constraints in the analysis step. In particular, constraints were used to ensure that the ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. In that study, mass and positivity were both preserved by formulating the filter update as a set of quadratic programming problems that incorporate nonnegativity constraints. Simple numerical experiments indicated that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. Moreover, in experiments designed to mimic the most important characteristics of convective motion, constraining the rain field to conserve mass and remain positive significantly suppresses the noise seen in localized EnKF results. This is highly desirable in order to avoid spurious storms appearing in forecasts started from such initial conditions (Lange and Craig 2014). In addition, the root mean square error is reduced for all fields and the total mass of rain is correctly simulated. Similarly, the enstrophy, divergence, and energy spectra can be strongly affected by the localization radius, thinning interval, and inflation, and depend on which variable is observed (Zeng and Janjic, 2016). We constructed an ensemble data assimilation algorithm that conserves mass, total energy, and enstrophy (Zeng et al., 2017).
    In 2D shallow water model experiments, it is found that enforcing enstrophy conservation within the data assimilation effectively avoids a spurious energy cascade in the rotational part of the flow and thereby successfully suppresses the noise generated by the data assimilation algorithm. The 14-day deterministic and ensemble free forecasts, started from initial conditions enforced by both the total energy and enstrophy constraints, produce the best prediction.
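The constrained analysis described above amounts to a quadratic program per update. A minimal stand-in for the core operation is the Euclidean projection of an analysis state onto the feasible set {x >= 0, sum(x) = m}, which has a well-known closed-form sorting solution. This is an illustration of the idea, not the filter of Janjic et al. 2014:

```python
import numpy as np

def project_mass_positive(x, m):
    """Euclidean projection of x onto {z : z >= 0, sum(z) = m}.

    Solves min ||z - x||^2 subject to nonnegativity and a fixed
    total mass m, i.e. the simplest form of a mass- and
    positivity-constrained analysis update.
    """
    u = np.sort(x)[::-1]                 # values in descending order
    css = np.cumsum(u) - m
    ks = np.arange(1, len(x) + 1)
    # largest k such that u_k stays above the running threshold
    k = ks[u > css / ks].max()
    tau = css[k - 1] / k                 # optimal shift (Lagrange multiplier)
    return np.maximum(x - tau, 0.0)
```

In the filter setting, each ensemble member's unconstrained analysis increment would be post-processed this way (or the constraint built directly into the QP), so that every member and the mean conserve mass and stay nonnegative.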

  17. A comparison between EDA-EnVar and ETKF-EnVar data assimilation techniques using radar observations at convective scales through a case study of Hurricane Ike (2008)

    NASA Astrophysics Data System (ADS)

    Shen, Feifei; Xu, Dongmei; Xue, Ming; Min, Jinzhong

    2017-07-01

    This study examines the impacts of assimilating radar radial velocity (Vr) data for the simulation of Hurricane Ike (2008) with two different ensemble generation techniques in the framework of the hybrid ensemble-variational (EnVar) data assimilation system of the Weather Research and Forecasting model. For the generation of ensemble perturbations we apply two techniques: the ensemble transform Kalman filter (ETKF) and the ensemble of data assimilation (EDA). For ETKF-EnVar, the forecast ensemble perturbations are updated by the ETKF, while for EDA-EnVar, the hybrid scheme is employed to update each ensemble member with perturbed observations. For both EnVar schemes, the ensemble mean is analyzed by the hybrid method with flow-dependent ensemble covariance. The sensitivity of the analyses and forecasts to the two ensemble generation techniques is investigated. It is found that the EnVar system is rather stable with respect to the ensemble update technique in terms of its skill in improving the analyses and forecasts. The EDA-EnVar-based ensemble perturbations tend to include slightly less organized spatial structures than those of ETKF-EnVar, whose perturbations are constructed more dynamically. Detailed diagnostics reveal that both EnVar schemes not only produce positive temperature increments around the hurricane center but also systematically adjust the hurricane location through the hurricane-specific error covariance. On average, the analyses and forecasts from ETKF-EnVar have slightly smaller errors than those from EDA-EnVar in terms of track, intensity, and precipitation. Moreover, ETKF-EnVar yields better forecasts when verified against conventional observations.

  18. Skill of ENSEMBLES seasonal re-forecasts for malaria prediction in West Africa

    NASA Astrophysics Data System (ADS)

    Jones, A. E.; Morse, A. P.

    2012-12-01

    This study examines the performance of malaria-relevant climate variables from the ENSEMBLES seasonal ensemble re-forecasts for sub-Saharan West Africa. A dynamic malaria model is used to transform temperature and rainfall forecasts into simulated malaria incidence, and these forecasts are verified against simulations obtained by driving the malaria model with General Circulation Model-derived reanalysis. Two subregions of forecast skill are identified. In the highlands of Cameroon, low temperatures limit simulated malaria during the forecast period and interannual variability in simulated malaria is closely linked to variability in temperature. In northern Nigeria/southern Niger, simulated malaria variability is strongly associated with rainfall variability during the peak rain months.

  19. Chimera states in an ensemble of linearly locally coupled bistable oscillators

    NASA Astrophysics Data System (ADS)

    Shchapin, D. S.; Dmitrichev, A. S.; Nekorkin, V. I.

    2017-11-01

    Chimera states in a system with linear local connections have been studied. The system is a ring ensemble of analog bistable self-excited oscillators with a resistive coupling. It has been shown that the existence of chimera states is not due to the nonidentity of oscillators and noise, which is always present in real experiments, but is due to the nonlinear dynamics of the system on invariant tori with various dimensions.

  20. Ensemble Linear Neighborhood Propagation for Predicting Subchloroplast Localization of Multi-Location Proteins.

    PubMed

    Wan, Shibiao; Mak, Man-Wai; Kung, Sun-Yuan

    2016-12-02

    In the postgenomic era, the number of unreviewed protein sequences is far larger than that of reviewed ones and is growing tremendously faster. However, existing methods for protein subchloroplast localization often ignore the information from these unlabeled proteins. This paper proposes a multi-label predictor based on ensemble linear neighborhood propagation (LNP), namely LNP-Chlo, which leverages hybrid sequence-based feature information from both labeled and unlabeled proteins for predicting the localization of both single- and multi-label chloroplast proteins. Experimental results on a stringent benchmark dataset and a novel independent dataset suggest that LNP-Chlo performs at least 6% (absolute) better than state-of-the-art predictors. This paper also demonstrates that ensemble LNP significantly outperforms LNP based on individual features. For readers' convenience, the online web server LNP-Chlo is freely available at http://bioinfo.eie.polyu.edu.hk/LNPChloServer/.

  1. Assessing a local ensemble Kalman filter: perfect model experiments with the National Centers for Environmental Prediction global model

    NASA Astrophysics Data System (ADS)

    Szunyogh, Istvan; Kostelich, Eric J.; Gyarmati, G.; Patil, D. J.; Hunt, Brian R.; Kalnay, Eugenia; Ott, Edward; Yorke, James A.

    2005-08-01

    The accuracy and computational efficiency of the recently proposed local ensemble Kalman filter (LEKF) data assimilation scheme is investigated on a state-of-the-art operational numerical weather prediction model using simulated observations. The model selected for this purpose is the T62 horizontal- and 28-level vertical-resolution version of the Global Forecast System (GFS) of the National Centers for Environmental Prediction. The performance of the data assimilation system is assessed for different configurations of the LEKF scheme. It is shown that a modest size (40-member) ensemble is sufficient to track the evolution of the atmospheric state with high accuracy. For this ensemble size, the computational time per analysis is less than 9 min on a cluster of PCs. The analyses are extremely accurate in the mid-latitude storm track regions. The largest analysis errors, which are typically much smaller than the observational errors, occur where parametrized physical processes play important roles. Because these are also the regions where model errors are expected to be the largest, limitations of a real-data implementation of the ensemble-based Kalman filter may be easily mistaken for model errors. In light of these results, the importance of testing the ensemble-based Kalman filter data assimilation systems on simulated observations is stressed.

  2. Independent Metrics for Protein Backbone and Side-Chain Flexibility: Time Scales and Effects of Ligand Binding.

    PubMed

    Fuchs, Julian E; Waldner, Birgit J; Huber, Roland G; von Grafenstein, Susanne; Kramer, Christian; Liedl, Klaus R

    2015-03-10

    Conformational dynamics are central for understanding biomolecular structure and function, since biological macromolecules are inherently flexible at room temperature and in solution. Computational methods are nowadays capable of providing valuable information on the conformational ensembles of biomolecules. However, analysis tools and intuitive metrics that capture dynamic information from in silico generated structural ensembles are limited. In standard work-flows, flexibility in a conformational ensemble is represented through residue-wise root-mean-square fluctuations or B-factors following a global alignment. Consequently, these approaches relying on global alignments discard valuable information on local dynamics. Results inherently depend on global flexibility, residue size, and connectivity. In this study we present a novel approach for capturing positional fluctuations based on multiple local alignments instead of one single global alignment. The method captures local dynamics within a structural ensemble independent of residue type by splitting individual local and global degrees of freedom of protein backbone and side-chains. Dependence on residue type and size in the side-chains is removed via normalization with the B-factors of the isolated residue. As a test case, we demonstrate its application to a molecular dynamics simulation of bovine pancreatic trypsin inhibitor (BPTI) on the millisecond time scale. This allows for illustrating different time scales of backbone and side-chain flexibility. Additionally, we demonstrate the effects of ligand binding on side-chain flexibility of three serine proteases. We expect our new methodology for quantifying local flexibility to be helpful in unraveling local changes in biomolecular dynamics.

  3. Model Independence in Downscaled Climate Projections: a Case Study in the Southeast United States

    NASA Astrophysics Data System (ADS)

    Gray, G. M. E.; Boyles, R.

    2016-12-01

    Downscaled climate projections are used to deduce how the climate will change in future decades at local and regional scales. It is important to use multiple models to characterize part of the uncertainty in future projections, given its impact on adaptation decision making. This is traditionally done through an equally-weighted ensemble of multiple GCMs downscaled using one technique. Newer practices include several downscaling techniques in an effort to increase the ensemble's representation of future uncertainty. However, this practice may add statistically dependent models to the ensemble. Previous research has shown a dependence problem in multiple generations of the GCM ensemble, but this has not been shown in the downscaled ensemble. In this case study, seven downscaled climate projections on the daily time scale are considered: CLAREnCE10, SERAP, BCCA (CMIP5 and CMIP3 versions), Hostetler, CCR, and MACA-LIVNEH. These data represent 83 ensemble members, 44 GCMs, and two generations of GCMs. Baseline periods are compared against the University of Idaho's METDATA gridded observation dataset. Hierarchical agglomerative clustering is applied to the correlated errors to determine dependent clusters. Redundant GCMs across different downscaling techniques show the most dependence, while smaller dependence signals are detected within downscaling datasets and across generations of GCMs. These results indicate that adding downscaled projections to increase the ensemble size must be done with care to avoid redundant GCMs, and that the process of downscaling may increase the dependence of those downscaled GCMs. The two generations of climate models do not appear dissimilar enough to be treated as separate statistical populations for ensemble building at the local and regional scales.

  4. Integral transforms of the quantum mechanical path integral: Hit function and path-averaged potential.

    PubMed

    Edwards, James P; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel

    2018-04-01

    We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).

  5. Integral transforms of the quantum mechanical path integral: Hit function and path-averaged potential

    NASA Astrophysics Data System (ADS)

    Edwards, James P.; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel

    2018-04-01

    We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).

  6. Sequential updating of multimodal hydrogeologic parameter fields using localization and clustering techniques

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta

    2009-07-01

    Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.

  7. An Iterative Local Updating Ensemble Smoother for Estimation and Uncertainty Assessment of Hydrologic Model Parameters With Multimodal Distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Lin, Guang; Li, Weixuan; Wu, Laosheng; Zeng, Lingzao

    2018-03-01

    Ensemble smoother (ES) has been widely used in inverse modeling of hydrologic systems. However, for problems where the distribution of model parameters is multimodal, using ES directly would be problematic. One popular solution is to use a clustering algorithm to identify each mode and update the clusters with ES separately. However, this strategy may not be very efficient when the dimension of the parameter space is high or the number of modes is large. Alternatively, we propose in this paper a very simple and efficient algorithm, the iterative local updating ensemble smoother (ILUES), to explore multimodal distributions of model parameters in nonlinear hydrologic systems. The ILUES algorithm works by updating the local ensemble of each sample with ES to explore possible multimodal distributions. To achieve satisfactory data matches in nonlinear problems, we adopt an iterative form of ES to assimilate the measurements multiple times. Numerical cases involving nonlinearity and multimodality are tested to illustrate the performance of the proposed method. It is shown that overall the ILUES algorithm can well quantify the parametric uncertainties of complex hydrologic models, regardless of whether multimodal distributions exist.
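The core operation in such schemes, updating an ensemble with ES, is a Kalman-type correction built from sample covariances between parameters and simulated data. The sketch below applies one such update to a toy linear model; the model, dimensions, and noise level are illustrative assumptions, and ILUES would apply this update to the local (nearest-neighbor) ensemble of each sample, iteratively:

```python
import numpy as np

def es_update(M, D, y, R, rng):
    """One ensemble-smoother update of a parameter ensemble.

    M : (p, k) parameter ensemble (k members, p parameters)
    D : (d, k) simulated data for each member
    y : (d,)   observed data
    R : (d, d) observation-error covariance
    """
    k = M.shape[1]
    Mp = M - M.mean(axis=1, keepdims=True)
    Dp = D - D.mean(axis=1, keepdims=True)
    Cmd = Mp @ Dp.T / (k - 1)          # parameter-data cross-covariance
    Cdd = Dp @ Dp.T / (k - 1)          # data covariance
    # Perturbed observations, one draw per member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=k).T
    return M + Cmd @ np.linalg.solve(Cdd + R, Y - D)
```

For a linear forward model d = 2m with an observation y = 6, the updated ensemble mean should land near the true parameter value m = 3.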

  8. Local Dynamics of Baroclinic Waves in the Martian Atmosphere

    NASA Astrophysics Data System (ADS)

    Kavulich, M. J.; Szunyogh, I.; Gyarmati, G.; Wilson, R.

    2010-12-01

    In this presentation, the spatio-temporal evolution of baroclinic waves in the GFDL Mars GCM is investigated. The study employs diagnostic techniques that were developed to analyze the life cycles of baroclinic waves in the terrestrial atmosphere. These techniques include a Hilbert-transform-based method to extract the packets of Rossby wave envelopes at the jet level, the eddy kinetic energy equation for the full atmospheric column, and ensemble-based diagnostics. The results show that, similar to the terrestrial atmosphere, coherent westward-propagating wave packets can be detected in the Martian atmosphere. These wave packets are composed of waves of wavenumber 2 through 5, in contrast to the wavenumber 4 through 9 waves that contribute to the upper-tropospheric wave packets of the terrestrial atmosphere. Additionally, as in the terrestrial atmosphere, the dominant part of the eddy kinetic energy is generated in regions of baroclinic energy conversion, which are strongly localized in both space and time. Implications of the results for predictability of the state of the Martian atmosphere are also discussed.

  9. Localization in covariance matrices of coupled heterogenous Ornstein-Uhlenbeck processes

    NASA Astrophysics Data System (ADS)

    Barucca, Paolo

    2014-12-01

    We define a random-matrix ensemble given by the infinite-time covariance matrices of Ornstein-Uhlenbeck processes at different temperatures coupled by a Gaussian symmetric matrix. The spectral properties of this ensemble are shown to be in qualitative agreement with some stylized facts of financial markets. Through the presented model formulas are given for the analysis of heterogeneous time series. Furthermore evidence for a localization transition in eigenvectors related to small and large eigenvalues in cross-correlations analysis of this model is found, and a simple explanation of localization phenomena in financial time series is provided. Finally we identify both in our model and in real financial data an inverted-bell effect in correlation between localized components and their local temperature: high- and low-temperature components are the most localized ones.

  10. Development of the NHM-LETKF regional reanalysis system assimilating conventional observations only

    NASA Astrophysics Data System (ADS)

    Fukui, S.; Iwasaki, T.; Saito, K. K.; Seko, H.; Kunii, M.

    2016-12-01

    Information about long-term high-resolution atmospheric fields is very useful for studying meso-scale responses to climate change or analyzing extreme events. We are developing an NHM-LETKF (the local ensemble transform Kalman filter with the nonhydrostatic model of the Japan Meteorological Agency (JMA)) regional reanalysis system assimilating only conventional observations that have been available for about 60 years, such as surface observations at observatories and upper-air observations with radiosondes. The domain covers Japan and its surroundings. Before the long-term reanalysis is performed, an experiment using the system was conducted over August 2014 in order to identify the effectiveness and problems of the regional reanalysis system. In this study, we investigated the six-hour accumulated precipitation obtained by integrating from the analysis fields. The reproduced precipitation was compared with JMA's Radar/Rain-gauge Analyzed Precipitation data over the Japanese islands and the precipitation of JRA-55, which is used as the lateral boundary conditions. The comparisons reveal an underestimation of precipitation in the regional reanalysis. The underestimation is improved by extending the forecast time. In the regional reanalysis system, the analysis fields are derived using the ensemble mean fields, in which the conflicting components among ensemble members are filtered out. Therefore, it is important to tune the inflation factor and lateral boundary perturbations so as not to smooth the analysis fields excessively, and to allow more time for the fields to spin up. In the extended run, the underestimation still remains. This implies that the underestimation is attributable to the forecast model itself as well as to the analysis scheme.

  11. Measuring excess free energies of self-assembled membrane structures.

    PubMed

    Norizoe, Yuki; Daoulas, Kostas Ch; Müller, Marcus

    2010-01-01

    Using computer simulation of a solvent-free, coarse-grained model for amphiphilic membranes, we study the excess free energy of hourglass-shaped connections (i.e., stalks) between two apposed bilayer membranes. In order to calculate the free energy by simulation in the canonical ensemble, we reversibly transfer two apposed bilayers into a configuration with a stalk in three steps. First, we gradually replace the intermolecular interactions by an external, ordering field. The latter is chosen such that the structure of the non-interacting system in this field closely resembles the structure of the original, interacting system in the absence of the external field. The absence of structural changes along this path suggests that it is reversible; a fact which is confirmed by expanded-ensemble simulations. Second, the external, ordering field is changed so as to transform the non-interacting system from the apposed-bilayer structure to two bilayers connected by a stalk. The final external field is chosen such that the structure of the non-interacting system resembles the structure of the stalk in the interacting system without a field. On the third branch of the transformation path, we reversibly replace the external, ordering field by non-bonded interactions. Using expanded-ensemble techniques, the free energy change along this reversible path can be obtained with an accuracy of 10^-3 kBT per molecule in the nVT ensemble. Calculating the chemical potential, we obtain the free energy of a stalk in the grand canonical ensemble, and employing semi-grand canonical techniques, we calculate the change of the excess free energy upon altering the molecular architecture. This computational strategy can be applied to compute the free energy of self-assembled phases in lipid and copolymer systems, and the excess free energy of defects or interfaces.

  12. Computer Experiments in Transformational Grammar; French I. Department of Computer and Communication Sciences Natural Language Studies No. 3.

    ERIC Educational Resources Information Center

    Morin, Yves Ch.

    Described in this paper is the implementation of Querido's French grammar ("Grammaire I, Description transformationelle d'un sous-ensemble du Francais," 1969) on the computer system for transformational grammar at the University of Michigan (Friedman 1969). The purpose was to demonstrate the ease of transcribing a relatively formal grammar into the…

  13. Generalized Gibbs ensembles for quantum field theories

    NASA Astrophysics Data System (ADS)

    Essler, F. H. L.; Mussardo, G.; Panfil, M.

    2015-05-01

    We consider the nonequilibrium dynamics in quantum field theories (QFTs). After being prepared in a density matrix that is not an eigenstate of the Hamiltonian, such systems are expected to relax locally to a stationary state. In the presence of local conservation laws, these stationary states are believed to be described by appropriate generalized Gibbs ensembles. Here we demonstrate that in order to obtain a correct description of the stationary state, it is necessary to take into account conservation laws that are not (ultra)local in the usual sense of QFTs, but fulfill a significantly weaker form of locality. We discuss the implications of our results for integrable QFTs in one spatial dimension.

  14. Development of a stacked ensemble model for forecasting and analyzing daily average PM2.5 concentrations in Beijing, China.

    PubMed

    Zhai, Binxu; Chen, Jianguo

    2018-04-18

    A stacked ensemble model is developed for forecasting and analyzing the daily average concentrations of fine particulate matter (PM2.5) in Beijing, China. Special feature extraction procedures, including simplification, polynomial, transformation, and combination, are conducted before modeling to identify potentially significant features based on an exploratory data analysis. Stability feature selection and tree-based feature selection methods are applied to select important variables and evaluate the degrees of feature importance. Single models including LASSO, Adaboost, XGBoost, and a multi-layer perceptron optimized by the genetic algorithm (GA-MLP) are established in the level-0 space and are then integrated by support vector regression (SVR) in the level-1 space via stacked generalization. A feature importance analysis reveals that nitrogen dioxide (NO2) and carbon monoxide (CO) concentrations measured in the city of Zhangjiakou are the most important pollution factors for forecasting PM2.5 concentrations. Local extreme wind speeds and maximal wind speeds are found to exert the strongest meteorological influence on the cross-regional transport of contaminants. Pollutants found in the cities of Zhangjiakou and Chengde have a stronger impact on air quality in Beijing than other surrounding factors. Our model evaluation shows that the ensemble model generally performs better than a single nonlinear forecasting model when applied to new data, with a coefficient of determination (R^2) of 0.90 and a root mean squared error (RMSE) of 23.69 μg/m^3. For single pollutant grade recognition, the proposed model performs better when applied to days characterized by good air quality than when applied to days registering high levels of pollution. The overall classification accuracy is 73.93%, with most misclassifications made among adjacent categories.
    The results demonstrate the interpretability and generalizability of the stacked ensemble model.
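Stacked generalization as described above trains level-0 models, generates their out-of-sample predictions, and fits a level-1 combiner on those predictions. The sketch below swaps the paper's level-0 learners (LASSO, XGBoost, GA-MLP) and SVR combiner for closed-form ridge models so the example is self-contained; the feature subsets and the simple holdout split are illustrative assumptions:

```python
import numpy as np

def ridge_fit(X, y, lam=1e-3):
    """Closed-form ridge regression weights (with intercept)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def ridge_predict(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

def stack_fit(X, y, subsets, lam=1e-3):
    """Two-level stacking: level-0 ridge models on feature subsets,
    level-1 ridge combiner trained on held-out level-0 predictions."""
    half = len(X) // 2
    # Level 0: one model per feature subset, fit on the first half
    level0 = [ridge_fit(X[:half][:, s], y[:half], lam) for s in subsets]
    # Level 1: combiner fit on level-0 predictions for the second half
    Z = np.column_stack([ridge_predict(w, X[half:][:, s])
                         for w, s in zip(level0, subsets)])
    meta = ridge_fit(Z, y[half:], lam)
    return level0, meta

def stack_predict(model, X, subsets):
    level0, meta = model
    Z = np.column_stack([ridge_predict(w, X[:, s])
                         for w, s in zip(level0, subsets)])
    return ridge_predict(meta, Z)
```

Training the combiner on predictions the level-0 models did not see is the key design point: it prevents the level-1 model from simply rewarding whichever base learner overfits the training data most.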

  15. Rényi entropy, abundance distribution, and the equivalence of ensembles.

    PubMed

    Mora, Thierry; Walczak, Aleksandra M

    2016-05-01

    Distributions of abundances or frequencies play an important role in many fields of science, from biology to sociology, as does the Rényi entropy, which measures the diversity of a statistical ensemble. We derive a mathematical relation between the abundance distribution and the Rényi entropy, by analogy with the equivalence of ensembles in thermodynamics. The abundance distribution is mapped onto the density of states, and the Rényi entropy to the free energy. The two quantities are related in the thermodynamic limit by a Legendre transform, by virtue of the equivalence between the micro-canonical and canonical ensembles. In this limit, we show how the Rényi entropy can be constructed geometrically from rank-frequency plots. This mapping predicts that non-concave regions of the rank-frequency curve should result in kinks in the Rényi entropy as a function of its order. We illustrate our results on simple examples, and emphasize the limitations of the equivalence of ensembles when a thermodynamic limit is not well defined. Our results help choose reliable diversity measures based on the experimental accuracy of the abundance distributions in particular frequency ranges.
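The Rényi entropy at the center of the result above is straightforward to compute from an abundance (frequency) distribution; a minimal implementation, with the order-1 case handled as the Shannon limit, might look like this:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha for a normalized distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                           # zero-probability states contribute nothing
    if np.isclose(alpha, 1.0):             # alpha -> 1 recovers the Shannon entropy
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)
```

For any fixed distribution the Rényi entropy is non-increasing in the order alpha, and for a uniform distribution it equals the log of the number of states at every order; both facts provide quick sanity checks.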

  16. A Localized Ensemble Kalman Smoother

    NASA Technical Reports Server (NTRS)

    Butala, Mark D.

    2012-01-01

    Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems usually involves their high dimensionality and the standard statistical methods prove computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.

  17. Balanced Atmospheric Data Assimilation

    NASA Astrophysics Data System (ADS)

    Hastermann, Gottfried; Reinhardt, Maria; Klein, Rupert; Reich, Sebastian

    2017-04-01

    The atmosphere's multi-scale structure poses several major challenges in numerical weather prediction. One of these arises in the context of data assimilation. The large-scale dynamics of the atmosphere are balanced in the sense that acoustic or rapid internal wave oscillations generally come with negligibly small amplitudes. If triggered artificially, however, through inappropriate initialization or by data assimilation, such oscillations can have a detrimental effect on forecast quality as they interact with the moist aerothermodynamics of the atmosphere. In the setting of sequential Bayesian data assimilation, we therefore investigate two different strategies to reduce these artificial oscillations induced by the analysis step. On the one hand, we develop a new modification for a local ensemble transform Kalman filter, which penalizes imbalances via a minimization problem. On the other hand, we modify the first steps of the subsequent forecast to push the ensemble members back to the slow evolution. We therefore propose the use of certain asymptotically consistent integrators that can blend between the balanced and the unbalanced evolution model seamlessly. In our work, we furthermore present numerical results and performance of the proposed methods for two nonlinear ordinary differential equation models, where we can identify the different scales clearly. The first one is a Lorenz 96 model coupled with a wave equation. In this case the balance relation is linear and the imbalances are caused only by the localization of the filter. The second one is the elastic double pendulum where the balance relation itself is already highly nonlinear. In both cases the methods perform very well and could significantly reduce the imbalances and therefore increase the forecast quality of the slow variables.

  18. Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes

    NASA Astrophysics Data System (ADS)

    Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping

    2017-01-01

Batch processes are typically characterized by nonlinearity and system uncertainty, so a single conventional model may be ill-suited. A soft sensor based on a local learning strategy with a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Multiple local Gaussian process regression (GPR) models are then developed, one for each local input variable subset. When a new test sample arrives, the posterior probability of each best-performing local model is estimated by Bayesian inference and used to combine the local GPR models into the final prediction. The proposed soft sensor is demonstrated on an industrial fed-batch chlortetracycline fermentation process.
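The combination step, weighting local models by Bayesian posterior probabilities, can be illustrated with a minimal sketch. For brevity this uses plain Gaussian predictive densities in place of full GPR models; the function name and interface are hypothetical, not the paper's implementation.

```python
import numpy as np

def bayesian_combine(means, variances, y_recent, prior=None):
    """Posterior weights for several local models, inferred from each model's
    Gaussian predictive likelihood of a recent labelled sample.

    means, variances : (k,) predictive mean/variance of each local model
    y_recent         : observed target for that sample
    """
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    if prior is None:
        prior = np.full(means.shape, 1.0 / means.size)  # uniform model prior
    lik = (np.exp(-0.5 * (y_recent - means) ** 2 / variances)
           / np.sqrt(2 * np.pi * variances))
    post = prior * lik
    return post / post.sum()

# Final prediction = posterior-weighted average of the local model means
means = np.array([1.0, 2.0, 5.0])
weights = bayesian_combine(means, variances=[0.5, 0.5, 0.5], y_recent=1.2)
prediction = weights @ means
```

The model whose prediction best explains the recent observation receives the largest weight, so the combination adapts locally as the batch evolves.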

  19. Local box-counting dimensions of discrete quantum eigenvalue spectra: Analytical connection to quantum spectral statistics

    NASA Astrophysics Data System (ADS)

    Sakhr, Jamal; Nieminen, John M.

    2018-03-01

Two decades ago, Wang and Ong [Phys. Rev. A 55, 1522 (1997)] hypothesized that the local box-counting dimension of a discrete quantum spectrum should depend exclusively on the nearest-neighbor spacing distribution (NNSD) of the spectrum. In this Rapid Communication, we validate their hypothesis by deriving an explicit formula for the local box-counting dimension of a countably infinite discrete quantum spectrum. This formula expresses the local box-counting dimension of a spectrum in terms of single and double integrals of the NNSD of the spectrum. As applications, we derive an analytical formula for Poisson spectra and closed-form approximations to the local box-counting dimension for spectra having Gaussian orthogonal ensemble (GOE), Gaussian unitary ensemble (GUE), and Gaussian symplectic ensemble (GSE) spacing statistics. In the Poisson and GOE cases, we compare our theoretical formulas with the published numerical data of Wang and Ong and observe excellent agreement between their data and our theory. We also study numerically the local box-counting dimensions of the Riemann zeta function zeros and the alternate levels of GOE spectra, which are often used as numerical models of spectra possessing GUE and GSE spacing statistics, respectively. In each case, the corresponding theoretical formula is found to accurately describe the numerically computed local box-counting dimension.
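A brute-force numerical estimate of a box-counting dimension for a spectrum with Poisson statistics (unit-mean exponential spacings) can be sketched as follows; this direct count is not the authors' analytical formula, and the scale range and names are illustrative.

```python
import numpy as np

def box_count_dimension(levels, eps_list):
    """Estimate a box-counting dimension of a 1-D point set: count boxes of
    size eps that contain at least one level, then fit log N(eps) ~ -d log eps."""
    levels = np.sort(np.asarray(levels, float))
    counts = [np.unique(np.floor(levels / eps)).size for eps in eps_list]
    slope, _ = np.polyfit(np.log(eps_list), np.log(counts), 1)
    return -slope

# Poisson spectrum: levels with independent unit-mean exponential spacings
rng = np.random.default_rng(1)
spectrum = np.cumsum(rng.exponential(1.0, size=20000))
# Local dimension estimated over box sizes below the mean level spacing
d_local = box_count_dimension(spectrum, eps_list=np.geomspace(0.05, 0.5, 8))
```

At scales well below the mean spacing the estimate tends toward 0 (isolated points), and toward 1 at scales well above it, which is why the scale-dependent local dimension carries spacing-statistics information.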

  20. Ensemble Deep Learning for Biomedical Time Series Classification

    PubMed Central

    2016-01-01

Ensemble learning has been proven to improve generalization ability effectively in both theory and practice. In this paper, we first briefly outline the current status of research on ensemble learning. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages over some well-known ensemble methods, such as Bagging and AdaBoost. PMID:27725828

  1. Advanced Atmospheric Ensemble Modeling Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckley, R.; Chiswell, S.; Kurzeja, R.

Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research extends last year's work and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two release cases were studied: a coastal release (SF6) and an inland release (Freon), which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating led to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data are assimilated into the simulation, and it enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.

  2. Understanding the Structural Ensembles of a Highly Extended Disordered Protein†

    PubMed Central

    Daughdrill, Gary W.; Kashtanov, Stepan; Stancik, Amber; Hill, Shannon E.; Helms, Gregory; Muschol, Martin

    2013-01-01

Developing a comprehensive description of the equilibrium structural ensembles for intrinsically disordered proteins (IDPs) is essential to understanding their function. The p53 transactivation domain (p53TAD) is an IDP that interacts with multiple protein partners and contains numerous phosphorylation sites. Multiple techniques were used to investigate the equilibrium structural ensemble of p53TAD in its native and chemically unfolded states. The results from these experiments show that the native state of p53TAD has dimensions similar to a classical random coil, while the chemically unfolded state is more extended. To investigate the molecular properties responsible for this behavior, a novel algorithm that generates diverse and unbiased structural ensembles of IDPs was developed. This algorithm was used to generate a large pool of plausible p53TAD structures that were reweighted to identify a subset of structures with the best fit to small angle X-ray scattering data. High-weight structures in the native state ensemble show features that are localized to protein binding sites and regions with high proline content. The features localized to the protein binding sites are mostly eliminated in the chemically unfolded ensemble, while the regions with high proline content remain relatively unaffected. Data from NMR experiments support these results, showing that residues from the protein binding sites experience larger environmental changes upon unfolding by urea than regions with high proline content. This behavior is consistent with the urea-induced exposure of nonpolar and aromatic side-chains in the protein binding sites that are partially excluded from solvent in the native state ensemble. PMID:21979461

  3. Far-from-equilibrium magnetic granular layers: dynamic patterns, magnetic order and self-assembled swimmers

    NASA Astrophysics Data System (ADS)

    Snezhko, Alexey

    2010-03-01

Ensembles of interacting particles subject to an external periodic forcing often develop nontrivial collective behavior and self-assembled dynamic patterns. We study emergent phenomena in magnetic granular ensembles suspended at liquid-air and liquid-liquid interfaces and subjected to a transversal alternating magnetic field. Experiments reveal a new type of nontrivially ordered dynamic self-assembled structures (in particular, ``magnetic snakes'', ``asters'', and ``clams'') emerging in such systems in a certain range of excitation parameters. These non-equilibrium dynamic structures emerge from the competition between magnetic and hydrodynamic forces and have complex magnetic ordering. Transitions between different self-assembled phases are observed as the parameters of the external driving magnetic field are varied. I will show that above some frequency threshold magnetic snakes spontaneously break the symmetry of the self-induced surface flows (a symmetry-breaking instability) and turn into swimmers. The symmetry of the self-induced surface flows can also be broken in a controlled fashion by introducing a large bead into a magnetic snake (a bead-snake hybrid), which transforms it into a robust self-locomoting entity. Some features of the self-localized structures can be understood in the framework of an amplitude equation for parametric waves coupled to the conservation-law equation describing the evolution of the magnetic particle density and the Navier-Stokes equation for hydrodynamic flows.

  4. Representational analysis of extended disorder in atomistic ensembles derived from total scattering data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neilson, James R.; McQueen, Tyrel M.

With the increased availability of high-intensity time-of-flight neutron and synchrotron X-ray scattering sources that can access wide ranges of momentum transfer, the pair distribution function method has become a standard analysis technique for studying disorder of local coordination spheres and at intermediate atomic separations. In some cases, rational modeling of the total scattering data (Bragg and diffuse) becomes intractable with least-squares approaches, necessitating reverse Monte Carlo simulations using large atomistic ensembles. However, the extraction of meaningful information from the resulting atomistic ensembles is challenging, especially at intermediate length scales. Representational analysis is used here to describe the displacements of atoms in reverse Monte Carlo ensembles from an ideal crystallographic structure in an approach analogous to tight-binding methods. Rewriting the displacements in terms of a local basis that is descriptive of the ideal crystallographic symmetry provides a robust approach to characterizing medium-range order (and disorder) and symmetry breaking in complex and disordered crystalline materials. Lastly, this method enables the extraction of statistically relevant displacement modes (orientation, amplitude and distribution) of the crystalline disorder and provides directly meaningful information in a locally symmetry-adapted basis set that is most descriptive of the crystal chemistry and physics.

  5. Representational analysis of extended disorder in atomistic ensembles derived from total scattering data

    DOE PAGES

    Neilson, James R.; McQueen, Tyrel M.

    2015-09-20

With the increased availability of high-intensity time-of-flight neutron and synchrotron X-ray scattering sources that can access wide ranges of momentum transfer, the pair distribution function method has become a standard analysis technique for studying disorder of local coordination spheres and at intermediate atomic separations. In some cases, rational modeling of the total scattering data (Bragg and diffuse) becomes intractable with least-squares approaches, necessitating reverse Monte Carlo simulations using large atomistic ensembles. However, the extraction of meaningful information from the resulting atomistic ensembles is challenging, especially at intermediate length scales. Representational analysis is used here to describe the displacements of atoms in reverse Monte Carlo ensembles from an ideal crystallographic structure in an approach analogous to tight-binding methods. Rewriting the displacements in terms of a local basis that is descriptive of the ideal crystallographic symmetry provides a robust approach to characterizing medium-range order (and disorder) and symmetry breaking in complex and disordered crystalline materials. Lastly, this method enables the extraction of statistically relevant displacement modes (orientation, amplitude and distribution) of the crystalline disorder and provides directly meaningful information in a locally symmetry-adapted basis set that is most descriptive of the crystal chemistry and physics.

  6. Internal and forced eddy variability in the Labrador Sea

    NASA Astrophysics Data System (ADS)

    Bracco, A.; Luo, H.; Zhong, Y.; Lilly, J.

    2009-04-01

    Water mass transformation in the Labrador Sea, widely believed to be one of the key regions in the Atlantic Meridional Overturning Circulation (AMOC), now appears to be strongly impacted by vortex dynamics of the unstable boundary current. Large interannual variations in both eddy shedding and buoyancy transport from the boundary current have been observed but not explained, and are apparently sensitive to the state of the inflowing current. Heat and salinity fluxes associated with the eddies drive ventilation changes not accounted for by changes in local surface forcing, particularly during occasional years of extreme eddy activity, and constitute a predominant source of "internal" oceanic variability. The nature of this variable eddy-driven restratification is one of the outstanding questions along the northern transformation pathway. Here we investigate the eddy generation mechanism and the associated buoyancy fluxes by combining realistic and idealized numerical modeling, data analysis, and theory. Theory, supported by idealized experiments, provides criteria to test hypotheses as to the vortex formation process (by baroclinic instability linked to the bottom topography). Ensembles of numerical experiments with a high-resolution regional model (ROMS) allow for quantifying the sensitivity of eddy generation and property transport to variations in local and external forcing parameters. For the first time, we reproduce with a numerical simulation the observed interannual variability in the eddy kinetic energy in the convective region of the Labrador Basin and along the West Greenland Current.

  7. Evaluating a Local Ensemble Transform Kalman Filter snow cover data assimilation method to estimate SWE within a high-resolution hydrologic modeling framework across Western US mountainous regions

    NASA Astrophysics Data System (ADS)

    Oaida, C. M.; Andreadis, K.; Reager, J. T., II; Famiglietti, J. S.; Levoe, S.

    2017-12-01

Accurately estimating how much snow water equivalent (SWE) is stored in mountainous regions characterized by complex terrain and snowmelt-driven hydrologic cycles is both highly desirable and a major challenge. Mountain snowpack exhibits high spatial variability across a broad range of spatial and temporal scales due to a multitude of physical and climatic factors, making it difficult to observe or estimate in its entirety. Combining remotely sensed data and high-resolution hydrologic modeling through data assimilation (DA) has the potential to provide a spatially and temporally continuous SWE dataset at horizontal scales that capture sub-grid snow spatial variability and are also relevant to stakeholders such as water resource managers. Here, we present the evaluation of a new snow DA approach that uses a Local Ensemble Transform Kalman Filter (LETKF) in tandem with the Variable Infiltration Capacity macro-scale hydrologic model across the Western United States, at a daily temporal resolution and a horizontal resolution of 1.75 km x 1.75 km. The LETKF is chosen for its relative simplicity, ease of implementation, and computational efficiency and scalability. The modeling/DA system assimilates daily MODIS Snow Covered Area and Grain Size (MODSCAG) fractional snow cover, and has been developed to efficiently calculate SWE estimates over extended periods of time and large regional-scale areas at relatively high spatial resolution, ultimately producing a snow reanalysis-type dataset. Here we focus on the assessment of SWE produced by the DA scheme over several basins in California's Sierra Nevada mountain range where Airborne Snow Observatory data are available, during the last five water years (2013-2017), which include both one of the driest and one of the wettest years.
Comparison against such a spatially distributed SWE observational product provides a greater understanding of the model's ability to estimate SWE and SWE spatial variability, and highlights under which conditions snow cover DA can add value in estimating SWE.

  8. Chimera regimes in a ring of oscillators with local nonlinear interaction

    NASA Astrophysics Data System (ADS)

    Shepelev, Igor A.; Zakharova, Anna; Vadivasova, Tatiana E.

    2017-03-01

One of the important problems concerning chimera states is the conditions for their existence and stability. Until now, it was assumed that chimeras could arise only in ensembles with a nonlocal character of interactions. However, this assumption is not entirely correct: in some special cases chimeras can be realized for a local type of coupling [1-3]. We propose a simple model of an ensemble with local coupling in which chimeras are realized. This model is a ring of linear oscillators with local nonlinear unidirectional interaction. Chimera structures in the ring are found using computer simulations over a wide range of parameter values. A diagram of the regimes on the plane of control parameters is plotted, and scenarios of chimera destruction are studied as the parameters are changed.

  9. Quasi-static ensemble variational data assimilation: a theoretical and numerical study with the iterative ensemble Kalman smoother

    NASA Astrophysics Data System (ADS)

    Fillion, Anthony; Bocquet, Marc; Gratton, Serge

    2018-04-01

    The analysis in nonlinear variational data assimilation is the solution of a non-quadratic minimization. Thus, the analysis efficiency relies on its ability to locate a global minimum of the cost function. If this minimization uses a Gauss-Newton (GN) method, it is critical for the starting point to be in the attraction basin of a global minimum. Otherwise the method may converge to a local extremum, which degrades the analysis. With chaotic models, the number of local extrema often increases with the temporal extent of the data assimilation window, making the former condition harder to satisfy. This is unfortunate because the assimilation performance also increases with this temporal extent. However, a quasi-static (QS) minimization may overcome these local extrema. It accomplishes this by gradually injecting the observations in the cost function. This method was introduced by Pires et al. (1996) in a 4D-Var context. We generalize this approach to four-dimensional strong-constraint nonlinear ensemble variational (EnVar) methods, which are based on both a nonlinear variational analysis and the propagation of dynamical error statistics via an ensemble. This forces one to consider the cost function minimizations in the broader context of cycled data assimilation algorithms. We adapt this QS approach to the iterative ensemble Kalman smoother (IEnKS), an exemplar of nonlinear deterministic four-dimensional EnVar methods. Using low-order models, we quantify the positive impact of the QS approach on the IEnKS, especially for long data assimilation windows. We also examine the computational cost of QS implementations and suggest cheaper algorithms.
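The quasi-static idea, growing the observation set gradually so that each Gauss-Newton solve starts inside the attraction basin of the previous solution, can be sketched on a deliberately multimodal scalar toy problem. The observation operators sin(kx) and all names below are illustrative; this is not the IEnKS itself.

```python
import numpy as np

def gauss_newton_1d(residual, jac, x0, iters=50):
    """Minimal 1-D Gauss-Newton loop for J(x) = 0.5 * sum(residual(x)**2)."""
    x = x0
    for _ in range(iters):
        r, J = residual(x), jac(x)
        x = x - (J @ r) / (J @ J)   # GN step: (J'J)^-1 J'r, scalar case
    return x

def quasi_static_analysis(obs, x0):
    """Inject observations one at a time; each minimization starts from the
    previous analysis, so the iterate tracks a global minimum quasi-statically."""
    x = x0
    for K in range(1, len(obs) + 1):
        k = np.arange(1, K + 1)
        x = gauss_newton_1d(lambda x: np.sin(k * x) - obs[:K],
                            lambda x: k * np.cos(k * x), x)
    return x

# Observations of a hidden state x = 0.4 through increasingly oscillatory
# (hence increasingly multimodal) operators h_k(x) = sin(k * x)
obs = np.sin(np.arange(1, 9) * 0.4)
x_analysis = quasi_static_analysis(obs, x0=0.3)
```

Starting the full eight-observation minimization cold from a distant first guess risks a spurious local minimum of the highly oscillatory cost; injecting the observations one by one keeps each solve a small, well-behaved correction.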

  10. Stability and Noise-induced Transitions in an Ensemble of Nonlocally Coupled Chaotic Maps

    NASA Astrophysics Data System (ADS)

    Bukh, Andrei V.; Slepnev, Andrei V.; Anishchenko, Vadim S.; Vadivasova, Tatiana E.

    2018-05-01

    The influence of noise on chimera states arising in ensembles of nonlocally coupled chaotic maps is studied. There are two types of chimera structures that can be obtained in such ensembles: phase and amplitude chimera states. In this work, a series of numerical experiments is carried out to uncover the impact of noise on both types of chimeras. The noise influence on a chimera state in the regime of periodic dynamics results in the transition to chaotic dynamics. At the same time, the transformation of incoherence clusters of the phase chimera to incoherence clusters of the amplitude chimera occurs. Moreover, it is established that the noise impact may result in the appearance of a cluster with incoherent behavior in the middle of a coherence cluster.

  11. A non-parametric postprocessor for bias-correcting multi-model ensemble forecasts of hydrometeorological and hydrologic variables

    NASA Astrophysics Data System (ADS)

    Brown, James; Seo, Dong-Jun

    2010-05-01

    Operational forecasts of hydrometeorological and hydrologic variables often contain large uncertainties, for which ensemble techniques are increasingly used. However, the utility of ensemble forecasts depends on the unbiasedness of the forecast probabilities. We describe a technique for quantifying and removing biases from ensemble forecasts of hydrometeorological and hydrologic variables, intended for use in operational forecasting. The technique makes no a priori assumptions about the distributional form of the variables, which is often unknown or difficult to model parametrically. The aim is to estimate the conditional cumulative distribution function (ccdf) of the observed variable given a (possibly biased) real-time ensemble forecast from one or several forecasting systems (multi-model ensembles). The technique is based on Bayesian optimal linear estimation of indicator variables, and is analogous to indicator cokriging (ICK) in geostatistics. By developing linear estimators for the conditional expectation of the observed variable at many thresholds, ICK provides a discrete approximation of the full ccdf. Since ICK minimizes the conditional error variance of the indicator expectation at each threshold, it effectively minimizes the Continuous Ranked Probability Score (CRPS) when infinitely many thresholds are employed. However, the ensemble members used as predictors in ICK, and other bias-correction techniques, are often highly cross-correlated, both within and between models. Thus, we propose an orthogonal transform of the predictors used in ICK, which is analogous to using their principal components in the linear system of equations. This leads to a well-posed problem in which a minimum number of predictors are used to provide maximum information content in terms of the total variance explained. 
The technique is used to bias-correct precipitation ensemble forecasts from the NCEP Global Ensemble Forecast System (GEFS), for which independent validation results are presented. Extension to multimodel ensembles from the NCEP GFS and Short Range Ensemble Forecast (SREF) systems is also proposed.
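Since ICK with infinitely many thresholds effectively minimizes the CRPS, the score itself is worth writing down. Below is a minimal sketch of the standard kernel (energy) form of the CRPS for an ensemble forecast; the function name is illustrative, and this is a scoring utility rather than the ICK estimator itself.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Continuous Ranked Probability Score of an ensemble forecast, using the
    kernel identity CRPS = E|X - y| - 0.5 * E|X - X'|, where X and X' are
    independent draws from the ensemble and y is the verifying observation."""
    x = np.asarray(members, float)
    accuracy = np.abs(x - obs).mean()                    # E|X - y|
    spread = np.abs(x[:, None] - x[None, :]).mean()      # E|X - X'|
    return accuracy - 0.5 * spread

# A sharp, well-centred ensemble scores better (lower) than a diffuse one
sharp = crps_ensemble([0.9, 1.0, 1.1], obs=1.0)
diffuse = crps_ensemble([0.0, 1.0, 2.0], obs=1.0)
```

The score rewards both calibration and sharpness at once, which is why minimizing the indicator error variance at every threshold amounts to CRPS minimization in the limit.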

  12. Localization of a variational particle smoother

    NASA Astrophysics Data System (ADS)

    Morzfeld, M.; Hodyss, D.; Poterjoy, J.

    2017-12-01

Given the success of 4D-variational methods (4D-Var) in numerical weather prediction, and recent efforts to merge ensemble Kalman filters with 4D-Var, we consider a method to merge particle methods and 4D-Var. This leads us to revisit variational particle smoothers (varPS). We study the collapse of varPS in high-dimensional problems and show how it can be prevented by weight localization. We test varPS on the Lorenz'96 model of dimensions n=40, n=400, and n=2000. In our numerical experiments, weight localization prevents the collapse of the varPS, and we note that the varPS yields results comparable to ensemble formulations of 4D-variational methods, while it outperforms the EnKF with tuned localization and inflation, and the localized standard particle filter. Additional numerical experiments suggest that using localized weights in varPS may not yield significant advantages over unweighted or linearized solutions in near-Gaussian problems.

  13. First-Order Interfacial Transformations with a Critical Point: Breaking the Symmetry at a Symmetric Tilt Grain Boundary

    NASA Astrophysics Data System (ADS)

    Yang, Shengfeng; Zhou, Naixie; Zheng, Hui; Ong, Shyue Ping; Luo, Jian

    2018-02-01

First-order interfacial phaselike transformations that break the mirror symmetry of the symmetric Σ5 (210) tilt grain boundary (GB) are discovered by combining a modified genetic algorithm with hybrid Monte Carlo and molecular dynamics simulations. Density functional theory calculations confirm this prediction. This first-order coupled structural and adsorption transformation, which produces two variants of asymmetric bilayers, vanishes at an interfacial critical point. A GB complexion (phase) diagram is constructed via semigrand canonical ensemble atomistic simulations for the first time.

  14. Surface drift prediction in the Adriatic Sea using hyper-ensemble statistics on atmospheric, ocean and wave models: Uncertainties and probability distribution areas

    USGS Publications Warehouse

    Rixen, M.; Ferreira-Coelho, E.; Signell, R.

    2008-01-01

Despite numerous and regular improvements in underlying models, surface drift prediction in the ocean remains a challenging task because of our yet limited understanding of all processes involved. Hence, deterministic approaches to the problem are often limited by empirical assumptions on underlying physics. Multi-model hyper-ensemble forecasts, which exploit the power of an optimal local combination of available information including ocean, atmospheric and wave models, may show superior forecasting skills when compared to individual models because they allow for local correction and/or bias removal. In this work, we explore in greater detail the potential and limitations of the hyper-ensemble method in the Adriatic Sea, using a comprehensive surface drifter database. The performance of the hyper-ensembles and the individual models are discussed by analyzing associated uncertainties and probability distribution maps. Results suggest that the stochastic method may reduce position errors significantly for 12 to 72 h forecasts and hence compete with pure deterministic approaches. © 2007 NATO Undersea Research Centre (NURC).
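The core of the hyper-ensemble idea, fitting a local linear combination of model forecasts against recent observations and then applying the learned weights forward, can be sketched as a bare least-squares version. The names are hypothetical and the sliding-window bookkeeping of the operational method is omitted.

```python
import numpy as np

def fit_hyperensemble(past_preds, past_obs):
    """Least-squares combination weights (plus an intercept correcting
    systematic bias) over a recent training window.

    past_preds : (t, k) matrix, one column per contributing model
    past_obs   : (t,) verifying observations
    """
    A = np.column_stack([past_preds, np.ones(len(past_obs))])
    w, *_ = np.linalg.lstsq(A, past_obs, rcond=None)
    return w

def predict_hyperensemble(preds, w):
    """Apply the learned combination to a new forecast vector (k,)."""
    return np.append(preds, 1.0) @ w

# Synthetic window: the 'truth' is a fixed linear blend of two models plus bias
rng = np.random.default_rng(2)
P = rng.normal(size=(20, 2))
y = 0.7 * P[:, 0] + 0.3 * P[:, 1] + 0.1
w = fit_hyperensemble(P, y)
```

Refitting the weights locally in space and time is what lets the combination correct region-dependent model biases that no single deterministic model removes.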

  15. Breaking of Ensemble Equivalence in Networks

    NASA Astrophysics Data System (ADS)

    Squartini, Tiziano; de Mol, Joey; den Hollander, Frank; Garlaschelli, Diego

    2015-12-01

    It is generally believed that, in the thermodynamic limit, the microcanonical description as a function of energy coincides with the canonical description as a function of temperature. However, various examples of systems for which the microcanonical and canonical ensembles are not equivalent have been identified. A complete theory of this intriguing phenomenon is still missing. Here we show that ensemble nonequivalence can manifest itself also in random graphs with topological constraints. We find that, while graphs with a given number of links are ensemble equivalent, graphs with a given degree sequence are not. This result holds irrespective of whether the energy is nonadditive (as in unipartite graphs) or additive (as in bipartite graphs). In contrast with previous expectations, our results show that (1) physically, nonequivalence can be induced by an extensive number of local constraints, and not necessarily by long-range interactions or nonadditivity, (2) mathematically, nonequivalence is determined by a different large-deviation behavior of microcanonical and canonical probabilities for a single microstate, and not necessarily for almost all microstates. The latter criterion, which is entirely local, is not restricted to networks and holds in general.

  16. Triplet correlation functions in liquid water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhabal, Debdas; Chakravarty, Charusita, E-mail: charus@chemistry.iitd.ac.in; Singh, Murari

Triplet correlations have been shown to play a crucial role in the transformation of simple liquids to anomalous tetrahedral fluids [M. Singh, D. Dhabal, A. H. Nguyen, V. Molinero, and C. Chakravarty, Phys. Rev. Lett. 112, 147801 (2014)]. Here we examine triplet correlation functions for water, arguably the most important tetrahedral liquid, under ambient conditions, using configurational ensembles derived from molecular dynamics (MD) simulations and reverse Monte Carlo (RMC) datasets fitted to experimental scattering data. Four different RMC data sets with widely varying hydrogen-bond topologies fitted to neutron and x-ray scattering data are considered [K. T. Wikfeldt, M. Leetmaa, M. P. Ljungberg, A. Nilsson, and L. G. M. Pettersson, J. Phys. Chem. B 113, 6246 (2009)]. Molecular dynamics simulations are performed for two rigid-body effective pair potentials (SPC/E and TIP4P/2005) and the monatomic water (mW) model. Triplet correlation functions are compared with other structural measures for tetrahedrality, such as the O–O–O angular distribution function and the local tetrahedral order distributions. In contrast to the pair correlation functions, which are identical for all the RMC ensembles, the O–O–O triplet correlation function can discriminate between ensembles with different degrees of tetrahedral network formation, with the maximally symmetric, tetrahedral SYM dataset displaying distinct signatures of tetrahedrality similar to those obtained from atomistic simulations of the SPC/E model. Triplet correlations from the RMC datasets conform closely to the Kirkwood superposition approximation, while those from MD simulations show deviations within the first two neighbour shells. The possibilities for experimental estimation of triplet correlations of water and other tetrahedral liquids are discussed.
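The Kirkwood superposition approximation mentioned above factorizes the triplet correlation function into a product of pair correlation functions, g3(r12, r13, r23) ≈ g2(r12) g2(r13) g2(r23); a short sketch, where the model g2 is purely illustrative:

```python
import numpy as np

def kirkwood_g3(g2, r12, r13, r23):
    """Kirkwood superposition approximation: the triplet correlation is the
    product of the three pair correlations over the triangle's sides."""
    return g2(r12) * g2(r13) * g2(r23)

# Sanity check with an ideal gas, where g2 = 1 everywhere, hence g3 = 1
ideal_g2 = lambda r: np.ones_like(np.asarray(r, float))
g3_ideal = kirkwood_g3(ideal_g2, 1.0, 1.2, 1.5)
```

Deviations of a measured or simulated g3 from this product, as the abstract reports within the first two neighbour shells of MD water, quantify genuine three-body structure beyond pair information.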

  17. Influence of Aerosol Heating on the Stratospheric Transport of the Mt. Pinatubo Eruption

    NASA Technical Reports Server (NTRS)

    Aquila, Valentina; Oman, Luke D.; Stolarski, Richard S.

    2011-01-01

On June 15, 1991, the eruption of Mt. Pinatubo (15.1 deg N, 120.3 deg E) in the Philippines injected about 20 Tg of sulfur dioxide into the stratosphere, which was transformed into sulfuric acid aerosol. The large perturbation of the background aerosol caused an increase in temperature in the lower stratosphere of 2-3 K. Even though stratospheric winds climatologically tend to hinder air mixing between the two hemispheres, observations have shown that a large part of the SO2 emitted by Mt. Pinatubo was transported from the Northern to the Southern Hemisphere. We simulate the eruption of Mt. Pinatubo with the Goddard Earth Observing System (GEOS) version 5 global climate model, coupled to the aerosol module GOCART and the stratospheric chemistry module StratChem, to investigate the influence of the eruption of Mt. Pinatubo on the stratospheric transport pattern. We perform two ensembles of simulations: the first consists of runs without coupling between aerosol and radiation; in these simulations the plume of aerosols is treated as a passive tracer and the atmosphere is unperturbed. In the second ensemble of simulations, aerosols and radiation are coupled. We show that the set of runs with interactive aerosol produces a larger cross-equatorial transport of the Pinatubo cloud. In our simulations the local heating perturbation caused by the sudden injection of volcanic aerosol changes the pattern of the stratospheric winds, causing more intrusion of air from the Northern into the Southern Hemisphere. Furthermore, we perform simulations changing the injection height of the cloud and study the transport of the plume resulting from the different scenarios. Comparisons of model results with SAGE II and AVHRR satellite observations will be shown.

  18. A preliminary experiment for the long-term regional reanalysis over Japan assimilating conventional observations with NHM-LETKF

    NASA Astrophysics Data System (ADS)

    Fukui, Shin; Iwasaki, Toshiki; Saito, Kazuo; Seko, Hiromu; Kunii, Masaru

    2016-04-01

Several long-term global reanalyses have been produced by major operational centres and have contributed considerably to the advance of weather and climate research. Although the horizontal resolutions of these global reanalyses are getting higher, partly due to the development of computing technology, they are still too coarse to reproduce local circulations and precipitation realistically. To solve this problem, dynamical downscaling is often employed. However, forcing from the lateral boundaries alone cannot necessarily control the inner fields, especially in long-term dynamical downscaling. Regional reanalysis is expected to overcome this difficulty. To maintain the long-term consistency of the analysis quality, it is better to assimilate only the conventional observations that are available over a long period. To confirm the effectiveness of the regional reanalysis, some assimilation experiments are performed. In the experiments, only conventional observations (SYNOP, SHIP, BUOY, TEMP, PILOT, TC-Bogus) are assimilated with the NHM-LETKF system, which consists of the nonhydrostatic model (NHM) of the Japan Meteorological Agency (JMA) and the local ensemble transform Kalman filter (LETKF). The horizontal resolution is 25 km and the domain covers Japan and its surroundings. The Japanese 55-year reanalysis (JRA-55) is adopted as the initial and lateral boundary conditions for the NHM-LETKF forecast-analysis cycles. The ensemble size is 10. The experimental period is August 2014, as a representative of the warm season for the region. The results are verified against the JMA's operational Meso-scale Analysis, which is produced by assimilating observation data, including various remote sensing observations, using a 4D-Var scheme, and compared with those of a simple dynamical downscaling experiment without data assimilation. The effects of implementing lateral boundary perturbations derived from an EOF analysis of JRA-55 over the targeted domain are also examined.
The comparison suggests that the assimilation system can reproduce more accurate fields than dynamical downscaling. The experiment with lateral boundary perturbations implies that they contribute to providing more appropriate ensemble spreads, though the perturbations are not necessarily consistent with those of the inner fields given by NHM-LETKF.

  19. A probabilistic verification score for contours demonstrated with idealized ice-edge forecasts

    NASA Astrophysics Data System (ADS)

    Goessling, Helge; Jung, Thomas

    2017-04-01

    We introduce a probabilistic verification score for ensemble-based forecasts of contours: the Spatial Probability Score (SPS). Defined as the spatial integral of local (Half) Brier Scores, the SPS can be considered the spatial analog of the Continuous Ranked Probability Score (CRPS). Applying the SPS to idealized seasonal ensemble forecasts of the Arctic sea-ice edge in a global coupled climate model, we demonstrate that the SPS responds properly to ensemble size, bias, and spread. When applied to individual forecasts or ensemble means (or quantiles), the SPS is reduced to the 'volume' of mismatch, in case of the ice edge corresponding to the Integrated Ice Edge Error (IIEE).
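The definition above translates directly into array code. A minimal sketch (function name hypothetical; numpy assumed) computes the SPS of an ensemble of binary fields as the area-weighted sum of local half Brier scores:

```python
import numpy as np

def spatial_probability_score(ens_fields, obs_field, cell_area=1.0):
    """SPS: spatial integral of local (half) Brier scores.

    ens_fields : (n_members, ny, nx) boolean array, e.g. ice presence
    obs_field  : (ny, nx) boolean array for the verifying observation
    cell_area  : grid-cell area, a scalar or an (ny, nx) array
    """
    p = ens_fields.mean(axis=0)              # local ensemble probability
    bs = (p - obs_field.astype(float)) ** 2  # local half Brier score
    return np.sum(bs * cell_area)
```

For a single deterministic field (n_members = 1) the local probabilities are 0 or 1 and the SPS collapses to the area of mismatch, which for the ice edge is the IIEE, consistent with the reduction described in the abstract.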

  20. Quantifying Nucleic Acid Ensembles with X-ray Scattering Interferometry.

    PubMed

    Shi, Xuesong; Bonilla, Steve; Herschlag, Daniel; Harbury, Pehr

    2015-01-01

The conformational ensemble of a macromolecule is the complete description of the macromolecule's solution structures and can reveal important aspects of macromolecular folding, recognition, and function. However, most experimental approaches determine an average or predominant structure, or follow transitions between states that each can only be described by an average structure. Ensembles have been extremely difficult to characterize experimentally. We present the unique advantages and capabilities of a new biophysical technique, X-ray scattering interferometry (XSI), for probing and quantifying structural ensembles. XSI measures the interference of scattered waves from two heavy metal probes attached site-specifically to a macromolecule. A Fourier transform of the interference pattern gives the fractional abundance of different probe separations, directly representing the multiple conformational states populated by the macromolecule. These probe-probe distance distributions can then be used to define the structural ensemble of the macromolecule. XSI provides accurate, calibrated distances in a model-independent fashion with angstrom-scale sensitivity. XSI data can be compared in a straightforward manner to atomic coordinates determined experimentally or predicted by molecular dynamics simulations. We describe the conceptual framework for XSI and provide a detailed protocol for carrying out an XSI experiment. © 2015 Elsevier Inc. All rights reserved.

  1. Exploring the calibration of a wind forecast ensemble for energy applications

    NASA Astrophysics Data System (ADS)

    Heppelmann, Tobias; Ben Bouallegue, Zied; Theis, Susanne

    2015-04-01

In the German research project EWeLiNE, Deutscher Wetterdienst (DWD) and the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES) are collaborating with three German Transmission System Operators (TSOs) in order to provide the TSOs with improved probabilistic power forecasts. Probabilistic power forecasts are derived from probabilistic weather forecasts, themselves derived from ensemble prediction systems (EPS). Since the considered raw ensemble wind forecasts suffer from underdispersiveness and bias, calibration methods are developed for the correction of the model bias and the ensemble spread bias. The overall aim is to improve the ensemble forecasts such that the uncertainty of the possible weather development is depicted by the ensemble spread from the first forecast hours onward. Additionally, the ensemble members should remain physically consistent scenarios after calibration. We focus on probabilistic hourly wind forecasts with a horizon of 21 h delivered by the convection-permitting high-resolution ensemble system COSMO-DE-EPS, which became operational at DWD in 2012. The ensemble consists of 20 members driven by four different global models. The model area covers the whole of Germany and parts of Central Europe with a horizontal resolution of 2.8 km and 50 vertical model levels. For verification we use wind mast measurements at around 100 m height, which corresponds to the hub height of the wind energy plants belonging to wind farms within the model area. Calibration of the ensemble forecasts can be performed by different statistical methods applied to the raw ensemble output. Here, we explore local bivariate Ensemble Model Output Statistics at individual sites and quantile regression with different predictors. Applying these methods, we already show an improvement of ensemble wind forecasts from COSMO-DE-EPS for energy applications.
In addition, an ensemble copula coupling approach transfers the time dependencies of the raw ensemble to the calibrated ensemble. The calibrated wind forecasts are evaluated first with univariate probabilistic scores and additionally with diagnostics of wind ramps in order to assess the time consistency of the calibrated ensemble members.
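The ensemble copula coupling step mentioned above admits a compact sketch: at each lead time the calibrated quantiles are reordered to follow the rank order of the raw members, transferring the raw ensemble's temporal dependence structure to the calibrated one (an ECC-Q-style sketch; function name hypothetical, numpy assumed):

```python
import numpy as np

def ecc_reorder(raw_ens, calibrated_samples):
    """Ensemble copula coupling (quantile-reordering sketch).

    raw_ens, calibrated_samples : (n_members, n_times) arrays.
    Returns calibrated members that carry the rank structure (and hence
    the temporal dependence) of the raw ensemble.
    """
    out = np.empty_like(calibrated_samples)
    for t in range(raw_ens.shape[1]):
        # rank of each raw member at lead time t (0 = smallest)
        ranks = np.argsort(np.argsort(raw_ens[:, t]))
        # assign each member the calibrated quantile matching its raw rank
        out[:, t] = np.sort(calibrated_samples[:, t])[ranks]
    return out
```

Each output member is thus a physically plausible trajectory built from calibrated marginals, which is what makes the calibrated ensemble usable for ramp diagnostics.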

  2. Multi-model ensemble hydrologic prediction using Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh

    2007-05-01

A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighing individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split-sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble.
The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
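The BMA predictive density described above is just a weighted mixture of component PDFs. A minimal sketch (assuming Gaussian components, as is natural for Box-Cox-transformed streamflow; function name hypothetical — in practice the weights and component spreads come from an EM fit on a training period):

```python
import numpy as np
from scipy.stats import norm

def bma_pdf(y, member_preds, weights, sigmas):
    """BMA predictive PDF: a weighted mixture of Gaussians, each centred
    on one (bias-corrected) member prediction.

    y            : scalar or array of evaluation points
    member_preds : per-member point predictions
    weights      : BMA weights (sum to 1), reflecting member skill
    sigmas       : per-member predictive spreads
    """
    y = np.atleast_1d(np.asarray(y, float))
    comps = [w * norm.pdf(y, loc=m, scale=s)
             for w, m, s in zip(weights, member_preds, sigmas)]
    return np.sum(comps, axis=0)
```

Because the weights sum to one and each component is a density, the mixture integrates to one, so the BMA output is itself a proper predictive PDF rather than a point forecast.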

  3. Modeling the intense 2012-2013 dense water formation event in the northwestern Mediterranean Sea: Evaluation with an ensemble simulation approach

    NASA Astrophysics Data System (ADS)

    Waldman, Robin; Somot, Samuel; Herrmann, Marine; Bosse, Anthony; Caniaux, Guy; Estournel, Claude; Houpert, Loic; Prieur, Louis; Sevault, Florence; Testor, Pierre

    2017-02-01

The northwestern Mediterranean Sea is a well-observed ocean deep convection site. Winter 2012-2013 was an intense and intensely documented dense water formation (DWF) event. We evaluate this DWF event in an ensemble configuration of the regional ocean model NEMOMED12. We then assess for the first time the impact of ocean intrinsic variability on DWF with a novel perturbed-initial-state ensemble method. Finally, we identify the main physical mechanisms driving water mass transformations. NEMOMED12 accurately reproduces the deep convection chronology between late January and March, its location off the Gulf of Lions (although with a southward shift) and its magnitude. It fails to reproduce the salinification and warming of the Western Mediterranean Deep Waters, consistent with too strong a surface heat loss. The ocean intrinsic variability modulates half of the DWF area, especially in the open sea where the bathymetry slope is low. It modulates the integrated DWF rate only marginally (3-5%), but its increase with time suggests its impact could be larger at interannual timescales. We conclude that ensemble frameworks are necessary to evaluate numerical simulations of DWF accurately. Each phase of DWF has distinct diapycnal and thermohaline regimes: during preconditioning, the Mediterranean thermohaline circulation is driven by exchanges with the Algerian basin. During the intense mixing phase, surface heat fluxes trigger deep convection and internal mixing largely determines the resulting deep water properties. During restratification, lateral exchanges and internal mixing are enhanced. Finally, isopycnal mixing was shown to play a large role in water mass transformations during the preconditioning and restratification phases.

  4. Efficient and Unbiased Sampling of Biomolecular Systems in the Canonical Ensemble: A Review of Self-Guided Langevin Dynamics

    PubMed Central

    Wu, Xiongwu; Damjanovic, Ana; Brooks, Bernard R.

    2013-01-01

This review provides a comprehensive description of the self-guided Langevin dynamics (SGLD) and self-guided molecular dynamics (SGMD) methods and their applications. Example systems are included to provide guidance on the optimal application of these methods in simulation studies. SGMD/SGLD has an enhanced ability to overcome energy barriers and accelerate rare events to affordable time scales. It has been demonstrated that with moderate parameters, SGLD can routinely cross energy barriers of 20 kT at the rate at which molecular dynamics (MD) or Langevin dynamics (LD) crosses 10 kT barriers. The core of these methods is the use of local averages of forces and momenta in a direct manner that can preserve the canonical ensemble. The use of such local averages results in methods where low-frequency motion “borrows” energy from high-frequency degrees of freedom when a barrier is approached and then returns that excess energy after the barrier is crossed. This self-guiding effect also results in accelerated diffusion that enhances conformational sampling efficiency. The resulting SGLD ensemble deviates in a small way from the canonical ensemble, and that deviation can be corrected with either an on-the-fly or a post-processing reweighting procedure that provides an excellent canonical ensemble for systems with a limited number of accelerated degrees of freedom. Since reweighting procedures are generally not size-extensive, a newer method, SGLDfp, uses local averages of both momenta and forces to preserve the ensemble without reweighting. The SGLDfp approach is size-extensive and can be used to accelerate low-frequency motion in large systems, or in systems with explicit solvent where solvent diffusion is also to be enhanced. Since these methods are direct and straightforward, they can be used in conjunction with many other sampling methods or free energy methods by simply replacing the integration of degrees of freedom that are normally sampled by MD or LD. PMID:23913991
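To make the local-averaging idea concrete, here is a minimal 1-D sketch (not the actual SGMD/SGLD implementation; the simple Euler-type integrator and all parameter names and values are illustrative): an exponential moving average of the momentum supplies a guiding force that boosts slow, low-frequency motion.

```python
import numpy as np

def sgld_1d(force, n_steps=10000, dt=0.01, gamma=1.0, kT=1.0,
            lam=0.2, t_avg=1.0, m=1.0, seed=0):
    """Self-guided Langevin dynamics, 1-D illustrative sketch.

    pbar is a local (exponential moving) average of the momentum over the
    time scale t_avg; lam * pbar / t_avg is the self-guiding force.
    """
    rng = np.random.default_rng(seed)
    x, p, pbar = 0.0, 0.0, 0.0
    sig = np.sqrt(2.0 * gamma * m * kT * dt)   # fluctuation-dissipation noise
    traj = np.empty(n_steps)
    for i in range(n_steps):
        pbar += (dt / t_avg) * (p - pbar)      # local average of momentum
        guide = lam * pbar / t_avg             # self-guiding force
        p += dt * (force(x) + guide - gamma * p / m) + sig * rng.standard_normal()
        x += dt * p / m
        traj[i] = x
    return traj

# Double-well potential U(x) = (x^2 - 1)^2, so F(x) = -4 x (x^2 - 1):
traj = sgld_1d(lambda x: -4.0 * x * (x * x - 1.0), n_steps=2000)
```

With lam = 0 this reduces to plain Langevin dynamics; increasing lam raises the rate of barrier crossings between the two wells, which is the qualitative effect the review quantifies.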

  5. Distinguishing high and low flow domains in urban drainage systems 2 days ahead using numerical weather prediction ensembles

    NASA Astrophysics Data System (ADS)

    Courdent, Vianney; Grum, Morten; Mikkelsen, Peter Steen

    2018-01-01

    Precipitation constitutes a major contribution to the flow in urban storm- and wastewater systems. Forecasts of the anticipated runoff flows, created from radar extrapolation and/or numerical weather predictions, can potentially be used to optimize operation in both wet and dry weather periods. However, flow forecasts are inevitably uncertain and their use will ultimately require a trade-off between the value of knowing what will happen in the future and the probability and consequence of being wrong. In this study we examine how ensemble forecasts from the HIRLAM-DMI-S05 numerical weather prediction (NWP) model subject to three different ensemble post-processing approaches can be used to forecast flow exceedance in a combined sewer for a wide range of ratios between the probability of detection (POD) and the probability of false detection (POFD). We use a hydrological rainfall-runoff model to transform the forecasted rainfall into forecasted flow series and evaluate three different approaches to establishing the relative operating characteristics (ROC) diagram of the forecast, which is a plot of POD against POFD for each fraction of concordant ensemble members and can be used to select the weight of evidence that matches the desired trade-off between POD and POFD. In the first approach, the rainfall input to the model is calculated for each of 25 ensemble members as a weighted average of rainfall from the NWP cells over the catchment where the weights are proportional to the areal intersection between the catchment and the NWP cells. In the second approach, a total of 2825 flow ensembles are generated using rainfall input from the neighbouring NWP cells up to approximately 6 cells in all directions from the catchment. 
In the third approach, the first approach is extended spatially by successively increasing the area covered and for each spatial increase and each time step selecting only the cell with the highest intensity resulting in a total of 175 ensemble members. While the first and second approaches have the disadvantage of not covering the full range of the ROC diagram and being computationally heavy, respectively, the third approach leads to both a broad coverage of the ROC diagram range at a relatively low computational cost. A broad coverage of the ROC diagram offers a larger selection of prediction skill to choose from to best match to the prediction purpose. The study distinguishes itself from earlier research in being the first application to urban hydrology, with fast runoff and small catchments that are highly sensitive to local extremes. Furthermore, no earlier reference has been found on the highly efficient third approach using only neighbouring cells with the highest threat to expand the range of the ROC diagram. This study provides an efficient and robust approach to using ensemble rainfall forecasts affected by bias and misplacement errors for predicting flow threshold exceedance in urban drainage systems.
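The ROC diagram used to compare the three approaches reduces to counting, over many forecast events, how often a warning (issued when the fraction of concordant ensemble members passes a threshold) coincides with an observed exceedance. A minimal sketch, with hypothetical names:

```python
import numpy as np

def roc_points(ens_exceed_frac, obs_exceed):
    """POD and POFD for each ensemble-fraction warning threshold.

    ens_exceed_frac : (n_events,) fraction of members forecasting exceedance
    obs_exceed      : (n_events,) boolean, whether exceedance was observed
    """
    frac = np.asarray(ens_exceed_frac, float)
    obs = np.asarray(obs_exceed, bool)
    pods, pofds = [], []
    for thr in np.unique(frac):
        warn = frac >= thr
        pods.append(np.mean(warn[obs]))     # hits / observed events
        pofds.append(np.mean(warn[~obs]))   # false alarms / non-events
    return np.array(pods), np.array(pofds)
```

Sweeping the threshold from 0 to 1 traces the ROC curve; the wider the spread of available fractions (as in the third approach), the more of the POD-POFD plane the operator can choose from.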

  6. A Method of Successive Corrections of the Control Subspace in the Reduced-Order Variational Data Assimilation

    DTIC Science & Technology

    2009-02-01

    Evensen, G., 2003: The ensemble Kalman filter : Theoretical formulation and practical implementation. Ocean Dyn., 53, 343–357, doi:10.1007/s10236-003...0036-9. ——, 2006: Data Assimilation: The Ensemble Kalman Filter . Springer, 288 pp. Fang, F., C. C. Pain, I. M. Navon, G. J. Gorman, M. D. Piggott, P. A...E. J. Kostelich, M. Corazza, E. Kalnay, and D. J. Patil, 2004: A local ensemble Kalman filter for atmospheric data assimilation. Tellus, 56A, 415–428

  7. Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics

    NASA Astrophysics Data System (ADS)

    Lazarus, S. M.; Holman, B. P.; Splitt, M. E.

    2017-12-01

A computationally efficient method is developed that performs gridded post-processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations is generated to provide physically consistent high-resolution winds over a coastal domain characterized by an intricate land/water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withdrawal and 28 east central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both the deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicates the post-processed forecasts are calibrated. Two downscaling case studies are presented: a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
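As an illustration of the EMOS step (a generic Gaussian formulation, not necessarily the authors' exact one), the predictive distribution at a site can be modelled as N(a + b·m, c + d·s²), with m and s² the ensemble mean and variance and the four coefficients fitted on training data; a minimal maximum-likelihood sketch with scipy:

```python
import numpy as np
from scipy.optimize import minimize

def fit_emos(ens_mean, ens_var, obs):
    """Fit Gaussian EMOS coefficients (a, b, c, d) by maximum likelihood.

    Predictive distribution: N(a + b * ens_mean, c + d * ens_var).
    CRPS minimisation is the other common fitting choice.
    """
    ens_mean, ens_var, obs = map(np.asarray, (ens_mean, ens_var, obs))

    def nll(params):
        a, b, c, d = params
        mu = a + b * ens_mean
        var = np.maximum(c + d * ens_var, 1e-6)  # keep variance positive
        return np.sum(0.5 * np.log(2 * np.pi * var) + (obs - mu) ** 2 / (2 * var))

    res = minimize(nll, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead",
                   options={"maxiter": 4000, "maxfev": 4000})
    return res.x  # a, b, c, d
```

The fitted mean and variance parameters are exactly the local quantities that the gridded method then spreads away from the observation sites.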

  8. Impact of Representing Model Error in a Hybrid Ensemble-Variational Data Assimilation System for Track Forecast of Tropical Cyclones over the Bay of Bengal

    NASA Astrophysics Data System (ADS)

    Kutty, Govindan; Muraleedharan, Rohit; Kesarkar, Amit P.

    2018-03-01

Uncertainties in numerical weather prediction models are generally not well represented in ensemble-based data assimilation (DA) systems. The performance of an ensemble-based DA system becomes suboptimal if the sources of error are undersampled in the forecast system. The present study examines the effect of accounting for model error treatments in the hybrid ensemble transform Kalman filter-three-dimensional variational (3DVAR) DA system (hybrid) on the track forecast of two tropical cyclones, Hudhud and Thane, formed over the Bay of Bengal, using the Advanced Research Weather Research and Forecasting (ARW-WRF) model. We investigated the effect of two types of model error treatment schemes and their combination on the hybrid DA system: (i) a multiphysics approach, which uses different combinations of cumulus, microphysics and planetary boundary layer schemes; (ii) the stochastic kinetic energy backscatter (SKEB) scheme, which perturbs the horizontal wind and potential temperature tendencies; and (iii) a combination of both the multiphysics and SKEB schemes. Substantial improvements are noticed in the track positions of both cyclones when flow-dependent ensemble covariance is used in the 3DVAR framework. Explicit model error representation is found to be beneficial in treating underdispersive ensembles. Among the model error schemes used in this study, the combination of multiphysics and SKEB schemes outperformed the other two, with improved track forecasts for both tropical cyclones.

  9. Soil moisture estimation by assimilating L-band microwave brightness temperature with geostatistics and observation localization.

    PubMed

    Han, Xujun; Li, Xin; Rigon, Riccardo; Jin, Rui; Endrizzi, Stefano

    2015-01-01

Observations can be used to reduce model uncertainties through data assimilation. But if observations cannot cover the whole model area, due to spatial availability or instrument limitations, how should data assimilation be done at locations not covered by any observation? Two commonly used strategies are first described: one is covariance localization (CL); the other is observation localization (OL). Compared with CL, OL is easy to parallelize and more efficient for large-scale analysis. This paper evaluates OL for soil moisture profile characterization, in which a geostatistical semivariogram is used to fit the spatially correlated characteristics of synthetic L-band microwave brightness temperature measurements. The fitted semivariogram model and the local ensemble transform Kalman filter algorithm are combined to weight and assimilate the observations within a local region surrounding the grid cell of the land surface model to be analyzed. Six scenarios were compared: 1_Obs with the one nearest observation assimilated, 5_Obs with no more than the five nearest local observations assimilated, and 9_Obs with no more than the nine nearest local observations assimilated. Scenarios with no more than 16, 25, and 36 local observations were also compared. From the results we can conclude that involving more local observations in the assimilation improves the estimates, with an upper bound of nine observations in this case. This study demonstrates the potential of geostatistical correlation representation in OL to improve data assimilation of catchment-scale soil moisture using synthetic L-band microwave brightness temperature, which cannot cover the study area fully in space due to vegetation effects.
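The distance weighting at the core of the OL scheme can be sketched as follows (an illustrative exponential semivariogram model with hypothetical parameter names; the paper fits the semivariogram to the brightness-temperature field itself): the fitted semivariogram γ(h) is converted into a correlation-like weight that down-weights each local observation by its distance from the analyzed grid cell.

```python
import numpy as np

def exp_semivariogram(h, nugget, sill, corr_range):
    """Exponential semivariogram: rises from the nugget toward the sill."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / corr_range))

def localization_weights(dists, nugget=0.0, sill=1.0, corr_range=10.0):
    """Observation-localization weights from a fitted semivariogram:
    weight = 1 - gamma(h)/sill, decaying smoothly with distance h."""
    gamma = exp_semivariogram(np.asarray(dists, float), nugget, sill, corr_range)
    return np.clip(1.0 - gamma / sill, 0.0, 1.0)
```

These weights would then scale the observation-error influence inside the local ensemble transform Kalman filter update for the surrounding observations.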

  11. Distinct cognitive mechanisms involved in the processing of single objects and object ensembles

    PubMed Central

    Cant, Jonathan S.; Sun, Sol Z.; Xu, Yaoda

    2015-01-01

    Behavioral research has demonstrated that the shape and texture of single objects can be processed independently. Similarly, neuroimaging results have shown that an object's shape and texture are processed in distinct brain regions with shape in the lateral occipital area and texture in parahippocampal cortex. Meanwhile, objects are not always seen in isolation and are often grouped together as an ensemble. We recently showed that the processing of ensembles also involves parahippocampal cortex and that the shape and texture of ensemble elements are processed together within this region. These neural data suggest that the independence seen between shape and texture in single-object perception would not be observed in object-ensemble perception. Here we tested this prediction by examining whether observers could attend to the shape of ensemble elements while ignoring changes in an unattended texture feature and vice versa. Across six behavioral experiments, we replicated previous findings of independence between shape and texture in single-object perception. In contrast, we observed that changes in an unattended ensemble feature negatively impacted the processing of an attended ensemble feature only when ensemble features were attended globally. When they were attended locally, thereby making ensemble processing similar to single-object processing, interference was abolished. Overall, these findings confirm previous neuroimaging results and suggest that distinct cognitive mechanisms may be involved in single-object and object-ensemble perception. Additionally, they show that the scope of visual attention plays a critical role in determining which type of object processing (ensemble or single object) is engaged by the visual system. PMID:26360156

  12. Application Bayesian Model Averaging method for ensemble system for Poland

    NASA Astrophysics Data System (ADS)

    Guzikowski, Jakub; Czerwinska, Agnieszka

    2014-05-01

The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) model and calibrating these data by means of a Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF models. The WRF models have 35 vertical levels and a 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test case we chose a period with a heat wave and convective weather conditions over Poland, from 23 July to 1 August 2013. From 23 July to 29 July 2013 the temperature oscillated around 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in the number of patients hospitalized with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses into the area of Poland caused a strong convective event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, injuries, and a direct threat to life. The meteorological data from the ensemble system are compared with data recorded at 74 weather stations located in Poland. We prepare a set of model-observation pairs; the data from the single ensemble members and the median from the WRF BMA system are then evaluated using the deterministic error statistics root mean square error (RMSE) and mean absolute error (MAE).
To evaluate the probabilistic data, the Brier Score (BS) and the Continuous Ranked Probability Score (CRPS) are used. Finally, a comparison between the BMA-calibrated data and the data from the ensemble members is presented.
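Both verification scores used here have compact ensemble forms; a minimal sketch (the kernel form of the CRPS; function names hypothetical, numpy assumed):

```python
import numpy as np

def brier_score(prob_fcst, obs_binary):
    """Brier score: mean squared error of probability forecasts
    against binary outcomes (0 = perfect)."""
    return np.mean((np.asarray(prob_fcst, float) - np.asarray(obs_binary, float)) ** 2)

def crps_ensemble(ens, obs):
    """CRPS of one ensemble forecast against a scalar observation,
    via the kernel form: E|X - y| - 0.5 E|X - X'|."""
    ens = np.asarray(ens, float)
    term1 = np.mean(np.abs(ens - obs))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2
```

For a one-member "ensemble" the CRPS reduces to the absolute error, which is why it is often read as a probabilistic generalization of the MAE used in the deterministic evaluation above.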

  13. Multiple-instance ensemble learning for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Ergul, Ugur; Bilgin, Gokhan

    2017-10-01

An ensemble framework for multiple-instance (MI) learning (MIL) is introduced for use in hyperspectral images (HSIs), inspired by the bagging (bootstrap aggregation) method in ensemble learning. Ensemble-based bagging is performed with a small percentage of the training samples, and MI bags are formed by a local windowing process with variable window sizes on the selected instances. In addition to bootstrap aggregation, random subspace selection is used to diversify the base classifiers. The proposed method is implemented using four MIL classification algorithms. The classifier model learning phase is carried out with MI bags, and the estimation phase is performed over single test instances. In the experimental part of the study, two different HSIs that have ground-truth information are used, and comparative results are demonstrated against state-of-the-art classification methods. In general, the MI ensemble approach produces more compact results in terms of both diversity and error compared to equipollent non-MIL algorithms.
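As a rough illustration of the bag-construction step described above (our reading only; all function names, shapes and defaults are hypothetical), bags can be formed by bootstrapping a small fraction of labelled pixels, cutting a variable-sized window around each, and keeping a random band subspace:

```python
import numpy as np

def make_mi_bags(hsi, labels, n_bags=50, frac=0.02, max_win=5, n_bands=30, seed=0):
    """Form multiple-instance bags from an HSI cube.

    hsi    : (H, W, B) hyperspectral cube
    labels : (H, W) integer ground truth, 0 = unlabelled
    Each bag: all pixels in a variable-sized window around a bootstrapped
    labelled pixel, restricted to a random band subspace.
    """
    rng = np.random.default_rng(seed)
    H, W, B = hsi.shape
    ys, xs = np.nonzero(labels)
    bags = []
    for _ in range(n_bags):
        # bootstrap a small percentage of the labelled pixels
        idx = rng.choice(len(ys), size=max(1, int(frac * len(ys))), replace=True)
        # random subspace: keep a random subset of spectral bands
        bands = rng.choice(B, size=min(n_bands, B), replace=False)
        for i in idx:
            r = int(rng.integers(1, max_win + 1))   # variable window radius
            y0, y1 = max(ys[i] - r, 0), min(ys[i] + r + 1, H)
            x0, x1 = max(xs[i] - r, 0), min(xs[i] + r + 1, W)
            inst = hsi[y0:y1, x0:x1, :][..., bands].reshape(-1, len(bands))
            bags.append((inst, labels[ys[i], xs[i]]))
    return bags
```

Each MIL base classifier would then be trained on the bags from one bootstrap/subspace draw, giving the diversity that the bagging ensemble exploits.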

  14. Potentialities of ensemble strategies for flood forecasting over the Milano urban area

    NASA Astrophysics Data System (ADS)

    Ravazzani, Giovanni; Amengual, Arnau; Ceppi, Alessandro; Homar, Víctor; Romero, Romu; Lombardi, Gabriele; Mancini, Marco

    2016-08-01

    Analysis of ensemble forecasting strategies, which can provide a tangible backing for flood early warning procedures and mitigation measures over the Mediterranean region, is one of the fundamental motivations of the international HyMeX programme. Here, we examine two severe hydrometeorological episodes that affected the Milano urban area and for which the complex flood protection system of the city did not completely succeed. Indeed, flood damage have exponentially increased during the last 60 years, due to industrial and urban developments. Thus, the improvement of the Milano flood control system needs a synergism between structural and non-structural approaches. First, we examine how land-use changes due to urban development have altered the hydrological response to intense rainfalls. Second, we test a flood forecasting system which comprises the Flash-flood Event-based Spatially distributed rainfall-runoff Transformation, including Water Balance (FEST-WB) and the Weather Research and Forecasting (WRF) models. Accurate forecasts of deep moist convection and extreme precipitation are difficult to be predicted due to uncertainties arising from the numeric weather prediction (NWP) physical parameterizations and high sensitivity to misrepresentation of the atmospheric state; however, two hydrological ensemble prediction systems (HEPS) have been designed to explicitly cope with uncertainties in the initial and lateral boundary conditions (IC/LBCs) and physical parameterizations of the NWP model. No substantial differences in skill have been found between both ensemble strategies when considering an enhanced diversity of IC/LBCs for the perturbed initial conditions ensemble. Furthermore, no additional benefits have been found by considering more frequent LBCs in a mixed physics ensemble, as ensemble spread seems to be reduced. 
These findings could help to design the most appropriate ensemble strategies in advance of such hydrometeorological extremes, given the computational cost of running such advanced HEPSs for operational purposes.

  15. Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry

    2014-07-01

    The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Owing to the high computational cost of large ensembles, the EnKF is limited to small ensemble sets in practice. This gives rise to spurious correlations in the covariance structure, leading to incorrect updates or even divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious-correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four thresholding functions are considered for thresholding the forecast covariance and gain matrices: hard, soft, lasso, and Smoothly Clipped Absolute Deviation (SCAD). Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water-flooding cases (in petroleum reservoirs) with different levels of heterogeneity/nonlinearity. Besides adaptive thresholding, standard distance-dependent localization and the bootstrap Kalman gain are also implemented for comparison. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding, and that it should be applied judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
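
    The three pointwise thresholding rules named above have simple closed forms; a minimal sketch (not the authors' code, and with an illustrative fixed threshold rather than the paper's adaptively chosen one) might look like:

    ```python
    import numpy as np

    def threshold(c, t, kind="soft", a=3.7):
        """Apply a thresholding rule to a covariance entry c with threshold t.

        kind: 'hard', 'soft', or 'scad' (Smoothly Clipped Absolute Deviation).
        'a' is the SCAD shape parameter (3.7 is the conventional choice).
        """
        s = np.sign(c)
        x = abs(c)
        if kind == "hard":
            return c if x > t else 0.0
        if kind == "soft":
            return s * max(x - t, 0.0)
        # SCAD: soft near t, linear interpolation up to a*t, identity beyond
        if x <= 2 * t:
            return s * max(x - t, 0.0)
        if x <= a * t:
            return ((a - 1) * c - s * a * t) / (a - 2)
        return c

    def threshold_matrix(C, t, kind="soft"):
        """Entry-wise thresholding of a forecast covariance matrix,
        keeping the diagonal (the variances) untouched."""
        C = np.asarray(C, dtype=float)
        R = np.array([[threshold(C[i, j], t, kind) for j in range(C.shape[1])]
                      for i in range(C.shape[0])])
        np.fill_diagonal(R, np.diag(C))
        return R
    ```

    The SCAD branch is continuous at both breakpoints (|c| = 2t and |c| = at), which is what makes it gentler than hard thresholding on moderately sized covariances.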

  16. A consensual neural network

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Ersoy, O. K.; Swain, P. H.

    1991-01-01

    A neural network architecture called a consensual neural network (CNN) is proposed for the classification of data from multiple sources. Its relation to hierarchical and ensemble neural networks is discussed. The CNN is based on statistical consensus theory and uses nonlinearly transformed input data. The input data are transformed several times, and the different transformed data are treated as independent inputs. These inputs are classified by stage neural networks, whose outputs are then weighted and combined to reach a decision. Experimental results based on remote-sensing and geographic data are given.
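
    The weighted combination of stage-network outputs described above can be sketched as follows (an illustrative toy in which the stage weights are supplied by the caller, rather than learned as in the paper):

    ```python
    import numpy as np

    def consensus_decision(stage_outputs, weights):
        """Combine class-probability outputs of several stage networks by a
        normalized weighted sum and return the consensus class.

        stage_outputs : (n_stages, n_classes) array of per-stage probabilities
        weights       : (n_stages,) reliability weights for the stages
        """
        P = np.asarray(stage_outputs, dtype=float)
        w = np.asarray(weights, dtype=float)
        combined = (w[:, None] * P).sum(axis=0) / w.sum()
        return combined.argmax(), combined
    ```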

  17. Application of an Ensemble Smoother to Precipitation Assimilation

    NASA Technical Reports Server (NTRS)

    Zhang, Sara; Zupanski, Dusanka; Hou, Arthur; Zupanski, Milija

    2008-01-01

    Assimilation of precipitation in a global modeling system poses a special challenge in that the observation operators for precipitation processes are highly nonlinear. In the variational approach, substantial development work and model simplifications are required to include precipitation-related physical processes in the tangent linear model and its adjoint. An ensemble based data assimilation algorithm "Maximum Likelihood Ensemble Smoother (MLES)" has been developed to explore the ensemble representation of the precipitation observation operator with nonlinear convection and large-scale moist physics. An ensemble assimilation system based on the NASA GEOS-5 GCM has been constructed to assimilate satellite precipitation data within the MLES framework. The configuration of the smoother takes the time dimension into account for the relationship between state variables and observable rainfall. The full nonlinear forward model ensembles are used to represent components involving the observation operator and its transpose. Several assimilation experiments using satellite precipitation observations have been carried out to investigate the effectiveness of the ensemble representation of the nonlinear observation operator and the data impact of assimilating rain retrievals from the TMI and SSM/I sensors. Preliminary results show that this ensemble assimilation approach is capable of extracting information from nonlinear observations to improve the analysis and forecast, provided the ensemble size is adequate and a suitable localization scheme is applied. In addition to a dynamically consistent precipitation analysis, the assimilation system produces a statistical estimate of the analysis uncertainty.

  18. Controlling the delocalization-localization transition of light via electromagnetically induced transparency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng Jing; Huang Guoxiang; State Key Laboratory of Precision Spectroscopy, East China Normal University, Shanghai 200062

    2011-05-15

    We propose a scheme to realize a transition from delocalization to localization of light waves via electromagnetically induced transparency. The suggested system is a resonant cold atomic ensemble with an N-type configuration, with a control field consisting of two pairs of laser beams with different cross angles, which produce an electromagnetically induced quasiperiodic waveguide (EIQPW) for the propagation of a signal field. By appropriately tuning the incommensurate ratio or relative modulation strength between the two pairs of control-field components, the signal field can exhibit the delocalization-localization transition as it transports inside the atomic ensemble. The delocalization-localization transition point is determined, and the propagation properties of the signal field are studied in detail. Our work provides a way of realizing wave localization via atomic coherence, which is quite different from the conventional Aubry-Andre model based on an off-resonant mechanism, and the great controllability of the EIQPW also allows easy manipulation of the delocalization-localization transition.

  19. Transformations of dislocation martensite in tempering secondary-hardening steel

    NASA Astrophysics Data System (ADS)

    Gorynin, I. V.; Rybin, V. V.; Malyshevskii, V. A.; Semicheva, T. G.; Sherokhina, L. G.

    1999-09-01

    Analysis of the evolution of the fine structure of secondary-hardening steel in tempering makes it possible to understand the nature of processes that cause changes in the strength and ductility. They are connected with the changes that occur in the solid solution, the ensemble of disperse segregations of the carbide phase, and the dislocation structure of martensite. These transformations are interrelated, and their specific features are determined by the chemical composition of the steel.

  20. Improvement in T2* via Cancellation of Spin Bath Induced Dephasing in Solid-State Spins

    NASA Astrophysics Data System (ADS)

    Bauch, Erik; Hart, Connor; Schloss, Jennifer; Turner, Matthew; Barry, John; Walsworth, Ronald L.

    2017-04-01

    In measurements using ensembles of nitrogen vacancy (NV) centers in diamond, the magnetic field sensitivity can be improved by increasing the NV spin dephasing time, T2*. For NV ensembles, T2* is limited by dephasing arising from variations in the local environment sensed by individual NVs, such as applied magnetic fields, noise induced by other nearby spins, and strain. Here, we describe a systematic study of parameters influencing the NV ensemble T2*, and efforts to mitigate sources of inhomogeneity with demonstrated T2* improvements exceeding one order of magnitude.

  1. Tracking single mRNA molecules in live cells

    NASA Astrophysics Data System (ADS)

    Moon, Hyungseok C.; Lee, Byung Hun; Lim, Kiseong; Son, Jae Seok; Song, Minho S.; Park, Hye Yoon

    2016-06-01

    mRNAs inside cells interact with numerous RNA-binding proteins, microRNAs, and ribosomes that together compose a highly heterogeneous population of messenger ribonucleoprotein (mRNP) particles. Perhaps one of the best ways to investigate the complex regulation of mRNA is to observe individual molecules. Single molecule imaging allows the collection of quantitative and statistical data on subpopulations and transient states that are otherwise obscured by ensemble averaging. In addition, single particle tracking reveals the sequence of events that occur in the formation and remodeling of mRNPs in real time. Here, we review the current state-of-the-art techniques in tagging, delivery, and imaging to track single mRNAs in live cells. We also discuss how these techniques are applied to extract dynamic information on the transcription, transport, localization, and translation of mRNAs. These studies demonstrate how single molecule tracking is transforming the understanding of mRNA regulation in live cells.

  2. Ionospheric Data Assimilation and Targeted Observation Strategies: Proof of Concept Analysis in a Geomagnetic Storm Event

    NASA Astrophysics Data System (ADS)

    Kostelich, Eric; Durazo, Juan; Mahalov, Alex

    2017-11-01

    The dynamics of the ionosphere involve complex interactions between the atmosphere, solar wind, cosmic radiation, and Earth's magnetic field. Geomagnetic storms arising from solar activity can perturb these dynamics sufficiently to disrupt radio and satellite communications. Efforts to predict ``space weather,'' including ionospheric dynamics, require the development of a data assimilation system that combines observing systems with appropriate forecast models. This talk will outline a proof-of-concept targeted observation strategy, consisting of the Local Ensemble Transform Kalman Filter, coupled with the Thermosphere Ionosphere Electrodynamics Global Circulation Model, to select optimal locations where additional observations can be made to improve short-term ionospheric forecasts. Initial results using data and forecasts from the geomagnetic storm of 26-27 September 2011 will be described. Work supported by the Air Force Office of Scientific Research (Grant Number FA9550-15-1-0096) and by the National Science Foundation (Grant Number DMS-0940314).
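
    The Local Ensemble Transform Kalman Filter mentioned above has a compact standard formulation (Hunt et al. 2007). A minimal single-gridpoint sketch, assuming a linear observation operator and omitting the spatial localization loop of a full implementation:

    ```python
    import numpy as np

    def letkf_analysis(Xf, y, H, R, rho=1.0):
        """One LETKF analysis step in ensemble (weight) space.

        Xf : (n, k) forecast ensemble (k members, n state variables)
        y  : (p,) observation vector
        H  : (p, n) linear observation operator
        R  : (p, p) observation-error covariance
        rho: multiplicative covariance inflation factor
        """
        n, k = Xf.shape
        xbar = Xf.mean(axis=1)
        Xp = Xf - xbar[:, None]            # state perturbations
        Yp = H @ Xp                        # perturbations in observation space
        ybar = H @ xbar
        C = Yp.T @ np.linalg.inv(R)
        # analysis error covariance in the k-dimensional weight space
        Pa = np.linalg.inv((k - 1) / rho * np.eye(k) + C @ Yp)
        wbar = Pa @ C @ (y - ybar)         # mean-update weights
        # symmetric square root gives the perturbation-update weights
        evals, evecs = np.linalg.eigh((k - 1) * Pa)
        W = evecs @ np.diag(np.sqrt(np.maximum(evals, 0.0))) @ evecs.T
        return xbar[:, None] + Xp @ (W + wbar[:, None])   # analysis ensemble
    ```

    In a full LETKF this update is repeated at every grid point using only nearby observations, which is what makes the scheme "local".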

  3. Three-dimensional localization of nanoscale battery reactions using soft X-ray tomography.

    PubMed

    Yu, Young-Sang; Farmand, Maryam; Kim, Chunjoong; Liu, Yijin; Grey, Clare P; Strobridge, Fiona C; Tyliszczak, Tolek; Celestre, Rich; Denes, Peter; Joseph, John; Krishnan, Harinarayan; Maia, Filipe R N C; Kilcoyne, A L David; Marchesini, Stefano; Leite, Talita Perciano Costa; Warwick, Tony; Padmore, Howard; Cabana, Jordi; Shapiro, David A

    2018-03-02

    Battery function is determined by the efficiency and reversibility of the electrochemical phase transformations at solid electrodes. The microscopic tools available to study the chemical states of matter with the required spatial resolution and chemical specificity are intrinsically limited when studying complex architectures by their reliance on two-dimensional projections of thick material. Here, we report the development of soft X-ray ptychographic tomography, which resolves chemical states in three dimensions at 11 nm spatial resolution. We study an ensemble of nano-plates of lithium iron phosphate extracted from a battery electrode at 50% state of charge. Using a set of nanoscale tomograms, we quantify the electrochemical state and resolve phase boundaries throughout the volume of individual nanoparticles. These observations reveal multiple reaction points, intra-particle heterogeneity, and size effects that highlight the importance of multi-dimensional analytical tools in providing novel insight to the design of the next generation of high-performance devices.

  4. Coherence rephasing combined with spin-wave storage using chirped control pulses

    NASA Astrophysics Data System (ADS)

    Demeter, Gabor

    2014-06-01

    Photon-echo based optical quantum memory schemes often employ intermediate steps to transform optical coherences to spin coherences for longer storage times. We analyze a scheme that uses three identical chirped control pulses for coherence rephasing in an inhomogeneously broadened ensemble of three-level Λ systems. The pulses induce a cyclic permutation of the atomic populations in the adiabatic regime. Optical coherences created by a signal pulse are stored as spin coherences at an intermediate time interval, and are rephased for echo emission when the ensemble is returned to the initial state. Echo emission during a possible partial rephasing when the medium is inverted can be suppressed with an appropriate choice of control pulse wave vectors. We demonstrate that the scheme works in an optically dense ensemble, despite control pulse distortions during propagation. It integrates conveniently the spin-wave storage step into memory schemes based on a second rephasing of the atomic coherences.

  5. Fast adaptive flat-histogram ensemble to enhance the sampling in large systems

    NASA Astrophysics Data System (ADS)

    Xu, Shun; Zhou, Xin; Jiang, Yi; Wang, YanTing

    2015-09-01

    A novel, efficient algorithm was developed to estimate the density of states (DOS) for large systems by calculating ensemble means of an extensive physical variable, such as the potential energy U, in generalized canonical ensembles, in order to interpolate the interior reverse-temperature curve β(U) = dS(U)/dU, where S(U) is the logarithm of the DOS. This curve is computed with different accuracies in different energy regions to capture the dependence of the reverse temperature on U without setting a prior grid in the U space. By combining with a U-compression transformation, we decrease the computational complexity from O(N^{3/2}) in normal Wang-Landau-type methods to O(N^{1/2}) in the current algorithm, where N is the number of degrees of freedom of the system. The efficiency of the algorithm is demonstrated by applying it to Lennard-Jones fluids with various N, along with its ability to find different macroscopic states, including metastable states.
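
    The reverse-temperature curve β(U) = dS/dU can be estimated numerically from tabulated log-DOS values; a trivial finite-difference sketch (illustrative only, not the paper's adaptive interpolation scheme):

    ```python
    import numpy as np

    def reverse_temperature(U, S):
        """Finite-difference estimate of the reverse-temperature curve
        beta(U) = dS/dU from tabulated log-DOS values S(U)."""
        return np.gradient(np.asarray(S, dtype=float),
                           np.asarray(U, dtype=float))
    ```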

  6. The Contribution of Object Shape and Surface Properties to Object Ensemble Representation in Anterior-medial Ventral Visual Cortex.

    PubMed

    Cant, Jonathan S; Xu, Yaoda

    2017-02-01

    Our visual system can extract summary statistics from large collections of objects without forming detailed representations of the individual objects in the ensemble. In a region in ventral visual cortex encompassing the collateral sulcus and the parahippocampal gyrus and overlapping extensively with the scene-selective parahippocampal place area (PPA), we have previously reported fMRI adaptation to object ensembles when ensemble statistics repeated, even when local image features differed across images (e.g., two different images of the same strawberry pile). We additionally showed that this ensemble representation is similar to (but still distinct from) how visual texture patterns are processed in this region and is not explained by appealing to differences in the color of the elements that make up the ensemble. To further explore the nature of ensemble representation in this brain region, here we used PPA as our ROI and investigated in detail how the shape and surface properties (i.e., both texture and color) of the individual objects constituting an ensemble affect the ensemble representation in anterior-medial ventral visual cortex. We photographed object ensembles of stone beads that varied in shape and surface properties. A given ensemble always contained beads of the same shape and surface properties (e.g., an ensemble of star-shaped rose quartz beads). A change to the shape and/or surface properties of all the beads in an ensemble resulted in a significant release from adaptation in PPA compared with conditions in which no ensemble feature changed. In contrast, in the object-sensitive lateral occipital area (LO), we only observed a significant release from adaptation when the shape of the ensemble elements varied, and found no significant results in additional scene-sensitive regions, namely, the retrosplenial complex and occipital place area. 
Together, these results demonstrate that the shape and surface properties of the individual objects comprising an ensemble both contribute significantly to object ensemble representation in anterior-medial ventral visual cortex and further demonstrate a functional dissociation between object- (LO) and scene-selective (PPA) visual cortical regions and within the broader scene-processing network itself.

  7. Using the fast fourier transform in binding free energy calculations.

    PubMed

    Nguyen, Trung Hai; Zhou, Huan-Xiang; Minh, David D L

    2018-04-30

    According to implicit ligand theory, the standard binding free energy is an exponential average of the binding potential of mean force (BPMF), which is itself an exponential average of the interaction energy between the unbound ligand ensemble and a rigid receptor. Here, we use the fast Fourier transform (FFT) to efficiently evaluate BPMFs by calculating interaction energies as rigid ligand configurations from the unbound ensemble are discretely translated across rigid receptor conformations. Results for standard binding free energies between T4 lysozyme and 141 small organic molecules are in good agreement with previous alchemical calculations based on (1) a flexible complex (R ≈ 0.9 for 24 systems) and (2) a flexible ligand with multiple rigid receptor configurations (R ≈ 0.8 for 141 systems). While the FFT is routinely used for molecular docking, to our knowledge this is the first time the algorithm has been used for rigorous binding free energy calculations. © 2017 Wiley Periodicals, Inc.
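
    The core FFT trick, evaluating the interaction energy at every discrete ligand translation in one pass via the correlation theorem, together with the exponential (Boltzmann) average that defines the BPMF, can be sketched as follows (an illustrative periodic-grid toy, not the authors' implementation):

    ```python
    import numpy as np

    def interaction_energies(receptor_grid, ligand_grid):
        """Interaction energy of a rigid ligand at every discrete translation t
        on a periodic grid:  E(t) = sum_r receptor(r) * ligand(r - t).
        Both inputs are potential/occupancy grids of the same shape; the sum
        over all translations is computed at once via the correlation theorem.
        """
        F_rec = np.fft.fftn(receptor_grid)
        F_lig = np.fft.fftn(ligand_grid)
        return np.fft.ifftn(F_rec * np.conj(F_lig)).real

    def bpmf(E, beta=1.0):
        """Exponential (Boltzmann) average of interaction energies over
        translations: BPMF = -(1/beta) * ln < exp(-beta * E) >."""
        return -np.log(np.mean(np.exp(-beta * np.asarray(E)))) / beta
    ```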

  8. Model Parameter Estimation Using Ensemble Data Assimilation: A Case with the Nonhydrostatic Icosahedral Atmospheric Model NICAM and the Global Satellite Mapping of Precipitation Data

    NASA Astrophysics Data System (ADS)

    Kotsuki, Shunji; Terasaki, Koji; Yashiro, Hisashi; Tomita, Hirofumi; Satoh, Masaki; Miyoshi, Takemasa

    2017-04-01

    This study aims to improve precipitation forecasts from numerical weather prediction (NWP) models through effective use of satellite-derived precipitation data. Kotsuki et al. (2016, JGR-A) successfully improved the precipitation forecasts by assimilating the Japan Aerospace eXploration Agency (JAXA)'s Global Satellite Mapping of Precipitation (GSMaP) data into the Nonhydrostatic Icosahedral Atmospheric Model (NICAM) at 112-km horizontal resolution. Kotsuki et al. mitigated the non-Gaussianity of the precipitation variables by applying a Gaussian transform to observed and forecasted precipitation using the previous 30-day precipitation data. This study extends that work and explores an online estimation of model parameters using ensemble data assimilation. We choose two globally uniform parameters: the cloud-to-rain auto-conversion parameter of Berry's scheme for large-scale condensation, and the relative humidity threshold of the Arakawa-Schubert cumulus parameterization scheme. We perform online estimation of the two model parameters with an ensemble transform Kalman filter by assimilating the GSMaP precipitation data. The estimated parameters improve the analyzed and forecasted mixing ratio in the lower troposphere. Therefore, parameter estimation would be a useful technique for improving NWP models and their forecasts. This presentation will include the most recent progress up to the time of the symposium.
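
    The Gaussian transform mentioned above is typically a quantile mapping of precipitation amounts onto a standard normal using a recent climatological sample; a minimal sketch under that assumption (not the authors' code):

    ```python
    import numpy as np
    from statistics import NormalDist

    def gaussian_transform(values, climatology):
        """Map precipitation values to standard-normal space by quantile
        mapping against a climatological sample (e.g. the previous 30 days),
        a common 'Gaussian anamorphosis' used before assimilating rain data.
        """
        clim = np.sort(np.asarray(climatology, dtype=float))
        n = clim.size
        out = []
        for v in np.atleast_1d(values):
            # empirical cumulative probability, kept strictly inside (0, 1)
            p = (np.searchsorted(clim, v, side="right") + 0.5) / (n + 1)
            p = min(max(p, 0.5 / (n + 1)), 1.0 - 0.5 / (n + 1))
            out.append(NormalDist().inv_cdf(p))
        return np.array(out)
    ```

    Assimilation is then carried out on the transformed variable, where Gaussian error assumptions are more defensible.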

  9. Quantum formalism for classical statistics

    NASA Astrophysics Data System (ADS)

    Wetterich, C.

    2018-06-01

    In static classical statistical systems the problem of information transport from a boundary to the bulk finds a simple description in terms of wave functions or density matrices. While the transfer matrix formalism is a type of Heisenberg picture for this problem, we develop here the associated Schrödinger picture that keeps track of the local probabilistic information. The transport of the probabilistic information between neighboring hypersurfaces obeys a linear evolution equation, and therefore admits the superposition principle for its solutions. Operators are associated to local observables, with rules for the computation of expectation values similar to quantum mechanics. We discuss how non-commutativity naturally arises in this setting. Other features characteristic of quantum mechanics, such as a complex structure, changes of basis, or symmetry transformations, can likewise be found in classical statistics once formulated in terms of wave functions or density matrices. We construct for every quantum system an equivalent classical statistical system, such that time in quantum mechanics corresponds to the location of hypersurfaces in the classical probabilistic ensemble. For suitable choices of local observables in the classical statistical system one can, in principle, compute all expectation values and correlations of observables in the quantum system from the local probabilistic information of the associated classical statistical system. Realizing a static memory material as a quantum simulator for a given quantum system is thus not a matter of principle, but rather of practical simplicity.

  10. Examining dynamic interactions among experimental factors influencing hydrologic data assimilation with the ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Cai, X. M.; Ancell, B. C.; Fan, Y. R.

    2017-11-01

    The ensemble Kalman filter (EnKF) is recognized as a powerful data assimilation technique that generates an ensemble of model variables through stochastic perturbations of forcing data and observations. However, relatively little guidance exists with regard to the proper specification of the magnitude of the perturbation and the ensemble size, posing a significant challenge in optimally implementing the EnKF. This paper presents a robust data assimilation system (RDAS), in which a multi-factorial design of the EnKF experiments is first proposed for hydrologic ensemble predictions. A multi-way analysis of variance is then used to examine potential interactions among factors affecting the EnKF experiments, achieving optimality of the RDAS with maximized performance of hydrologic predictions. The RDAS is applied to the Xiangxi River watershed, the most representative watershed in China's Three Gorges Reservoir region, to demonstrate its validity and applicability. Results reveal that the pairwise interaction between perturbed precipitation and streamflow observations has the most significant impact on the performance of the EnKF system, and their interactions vary dynamically across different settings of the ensemble size and the evapotranspiration perturbation. In addition, the interactions among experimental factors vary greatly in magnitude and direction depending on different statistical metrics for model evaluation, including the Nash-Sutcliffe efficiency and the Box-Cox transformed root-mean-square error. It is thus necessary to test various evaluation metrics in order to enhance the robustness of hydrologic prediction systems.
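
    The stochastic perturbation of observations that defines this flavor of the EnKF can be sketched in a few lines (illustrative only; the paper's system also perturbs forcing data and runs a full hydrologic model):

    ```python
    import numpy as np

    def enkf_perturbed_obs(Xf, y, H, R, rng):
        """Stochastic (perturbed-observation) EnKF update: each member
        assimilates the observation plus an independent draw from R.

        Xf : (n, k) forecast ensemble;  y : (p,) observation vector
        H  : (p, n) observation operator;  R : (p, p) obs-error covariance
        """
        n, k = Xf.shape
        Xp = Xf - Xf.mean(axis=1, keepdims=True)
        Pf = Xp @ Xp.T / (k - 1)                          # sample covariance
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=k).T
        return Xf + K @ (Y - H @ Xf)
    ```

    The magnitude of the perturbations enters through R (and, in the full system, through the forcing-data perturbations), which is precisely the design factor the multi-factorial experiments above examine.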

  11. Simultaneous calibration of ensemble river flow predictions over an entire range of lead times

    NASA Astrophysics Data System (ADS)

    Hemri, S.; Fundel, F.; Zappa, M.

    2013-10-01

    Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work Bayesian model averaging (BMA) is applied to statistically postprocess raw ensemble runoff forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble meteorological forcing models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
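
    The Box-Cox transformation used to bring skewed runoff close to normality, and its inverse for mapping postprocessed forecasts back to flow space, are one-liners:

    ```python
    import numpy as np

    def boxcox(x, lam):
        """Box-Cox transform: (x^lam - 1)/lam for lam != 0, ln(x) for lam == 0.
        Requires strictly positive x (runoff/discharge values)."""
        x = np.asarray(x, dtype=float)
        return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

    def inv_boxcox(y, lam):
        """Inverse Box-Cox transform, mapping back to the original scale."""
        y = np.asarray(y, dtype=float)
        return np.exp(y) if lam == 0 else (lam * y + 1.0) ** (1.0 / lam)
    ```

    In a BMA postprocessing chain the normal mixtures are fitted in the transformed space and forecast quantiles are back-transformed with `inv_boxcox`.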

  12. Rotating hairy black holes in arbitrary dimensions

    NASA Astrophysics Data System (ADS)

    Erices, Cristián; Martínez, Cristián

    2018-01-01

    A class of exact rotating black hole solutions of gravity nonminimally coupled to a self-interacting scalar field in arbitrary dimensions is presented. These spacetimes are asymptotically locally anti-de Sitter manifolds and have a Ricci-flat event horizon hiding a curvature singularity at the origin. The scalar field is real and regular everywhere, and its effective mass, coming from the nonminimal coupling with the scalar curvature, saturates the Breitenlohner-Freedman bound for the corresponding spacetime dimension. The rotating black hole is obtained by applying an improper coordinate transformation to the static one. Although both spacetimes are locally equivalent, they are globally different, as confirmed by the nonvanishing angular momentum of the rotating black hole. It is found that the mass is bounded from below by the angular momentum, in agreement with the existence of an event horizon. The thermodynamical analysis is carried out in the grand canonical ensemble. The first law is satisfied, and a Smarr formula is exhibited. The thermodynamical local stability of the rotating hairy black holes is established from their Gibbs free energy. However, the global stability analysis establishes that the vacuum spacetime is always preferred over the hairy black hole. Thus, the hairy black hole is likely to decay into the vacuum one for any temperature.

  13. Discriminating strength: a bona fide measure of non-classical correlations

    NASA Astrophysics Data System (ADS)

    Farace, A.; De Pasquale, A.; Rigovacca, L.; Giovannetti, V.

    2014-07-01

    A new measure of non-classical correlations is introduced and characterized. It tests the ability of using a state ρ of a composite system AB as a probe for a quantum illumination task (e.g. see Lloyd 2008 Science 321 1463), in which one is asked to remotely discriminate between the two following scenarios: (i) either nothing happens to the probe, or (ii) the subsystem A is transformed via a local unitary {{R}_{A}} whose properties are partially unspecified when producing ρ. This new measure can be seen as the discrete version of the recently introduced interferometric power measure (Girolami et al 2013 e-print arXiv:1309.1472) and, at least for the case in which A is a qubit, it is shown to coincide (up to an irrelevant scaling factor) with the local quantum uncertainty measure of Girolami, Tufarelli and Adesso (2013 Phys. Rev. Lett. 110 240402). Analytical expressions are derived which allow us to formally prove that, within the set of separable configurations, the maximum value of our non-classicality measure is achieved over the set of quantum-classical states (i.e. states ρ which admit a statistical unravelling where each element of the associated ensemble is distinguishable via local measurements on B).

  14. Typical event horizons in AdS/CFT

    NASA Astrophysics Data System (ADS)

    Avery, Steven G.; Lowe, David A.

    2016-01-01

    We consider the construction of local bulk operators in a black hole background dual to a pure state in conformal field theory. The properties of these operators in a microcanonical ensemble are studied. It has been argued in the literature that typical states in such an ensemble contain firewalls, or otherwise singular horizons. We argue this conclusion can be avoided with a proper definition of the interior operators.

  15. Impact of horizontal and vertical localization scales on microwave sounder SAPHIR radiance assimilation

    NASA Astrophysics Data System (ADS)

    Krishnamoorthy, C.; Balaji, C.

    2016-05-01

    In the present study, the effect of horizontal and vertical localization scales on the assimilation of direct SAPHIR radiances is studied. An Artificial Neural Network (ANN) has been used as a surrogate for the forward radiative transfer calculations. The training input dataset for the ANN consists of vertical layers of atmospheric pressure, temperature, relative humidity and other hydrometeor profiles, with the six channel Brightness Temperatures (BTs) as output. The best neural network architecture has been arrived at through a neuron-independence study. Since vertical localization of radiance data requires weighting functions, an ANN has been trained for this purpose as well. The radiances were ingested into the NWP model using the Ensemble Kalman Filter (EnKF) technique. Horizontal localization is handled by a Gaussian localization function centered on the observation coordinates. Similarly, vertical localization is accomplished with a function that depends on the weighting function of the channel to be assimilated. The effect of both horizontal and vertical localization has been studied in terms of the ensemble spread in precipitation. Additionally, improvements in the 24 h forecast due to assimilation are reported.
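
    The horizontal and vertical localization weights described above might be sketched as follows (the Gaussian length scale and the peak normalization are illustrative assumptions, not values taken from the paper):

    ```python
    import numpy as np

    def gaussian_localization(d, L):
        """Horizontal localization weight for an observation at distance d
        from a grid point, with Gaussian length scale L."""
        return np.exp(-0.5 * (np.asarray(d, dtype=float) / L) ** 2)

    def vertical_localization(weighting_fn):
        """Vertical localization profile proportional to a channel's
        weighting function, normalized to peak at 1 at its maximum level."""
        w = np.asarray(weighting_fn, dtype=float)
        return w / w.max()
    ```

    In the EnKF update these weights multiply (taper) the ensemble covariances between the observation and each model grid point and level.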

  16. Computational Amide I Spectroscopy for Refinement of Disordered Peptide Ensembles: Maximum Entropy and Related Approaches

    NASA Astrophysics Data System (ADS)

    Reppert, Michael; Tokmakoff, Andrei

    The structural characterization of intrinsically disordered peptides (IDPs) presents a challenging biophysical problem. Extreme heterogeneity and rapid conformational interconversion make traditional methods difficult to interpret. Due to its ultrafast (ps) shutter speed, Amide I vibrational spectroscopy has received considerable interest as a novel technique to probe IDP structure and dynamics. Historically, Amide I spectroscopy has been limited to delivering global secondary structural information. More recently, however, the method has been adapted to study structure at the local level through incorporation of isotope labels into the protein backbone at specific amide bonds. Thanks to the acute sensitivity of Amide I frequencies to local electrostatic interactions, particularly hydrogen bonds, spectroscopic data on isotope-labeled residues directly report on local peptide conformation. Quantitative information can be extracted using electrostatic frequency maps which translate molecular dynamics trajectories into Amide I spectra for comparison with experiment. Here we present our recent efforts in the development of a rigorous approach to incorporating Amide I spectroscopic restraints into refined molecular dynamics structural ensembles using maximum entropy and related approaches. By combining force field predictions with experimental spectroscopic data, we construct refined structural ensembles for a family of short, strongly disordered, elastin-like peptides in aqueous solution.
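
    A maximum-entropy reweighting with a single spectroscopic restraint reduces to solving for one Lagrange multiplier; a minimal sketch (bisection on a standardized observable; not the authors' implementation):

    ```python
    import numpy as np

    def maxent_reweight(f, target, tol=1e-12):
        """Maximum-entropy reweighting of an ensemble: find weights
        w_i ∝ exp(-lam * f_i) (minimal KL divergence from uniform) such that
        the weighted mean of the calculated observable f matches `target`.
        The single Lagrange multiplier lam is found by bisection.
        """
        f = np.asarray(f, dtype=float)
        assert f.min() < target < f.max(), "target must lie inside the sampled range"
        g = (f - f.mean()) / f.std()       # standardize for numerical stability
        def weights(lam):
            w = np.exp(-lam * g)
            return w / w.sum()
        lo, hi = -30.0, 30.0               # bracket for the multiplier
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if (weights(mid) * f).sum() > target:
                lo = mid                   # a larger multiplier lowers the mean
            else:
                hi = mid
        return weights(0.5 * (lo + hi))
    ```

    With several restraints (e.g. one per isotope label) the same idea generalizes to a vector of multipliers solved by Newton-type iteration.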

  17. Ensemble-based evaluation of extreme water levels for the eastern Baltic Sea

    NASA Astrophysics Data System (ADS)

    Eelsalu, Maris; Soomere, Tarmo

    2016-04-01

    The risks and damages associated with coastal flooding, which naturally grow with the magnitude of extreme storm surges, are among the largest concerns of countries with extensive low-lying nearshore areas. The relevant risks are even more pronounced for semi-enclosed water bodies such as the Baltic Sea, where subtidal (weekly-scale) variations in the water volume of the sea contribute substantially to the water level and lead to a large spread in projections of future extreme water levels. We explore the options for using large ensembles of projections to evaluate return periods of extreme water levels more reliably. Single projections of the ensemble are constructed by fitting several sets of block maxima with various extreme value distributions. The ensemble is based on two simulated data sets produced at the Swedish Meteorological and Hydrological Institute: a hindcast by the Rossby Centre Ocean model sampled with a resolution of 6 h, and a similar hindcast by the circulation model NEMO with a resolution of 1 h. As the annual maxima of water levels in the Baltic Sea are not always uncorrelated, we employ maxima both for calendar years and for stormy seasons. As the shape parameter of the Generalised Extreme Value distribution changes sign and varies substantially in magnitude along the eastern coast of the Baltic Sea, the use of a single distribution for the entire coast is inappropriate. The ensemble therefore involves projections based on the Generalised Extreme Value, Gumbel and Weibull distributions. The parameters of these distributions are evaluated in three different ways: by the maximum likelihood method and by the method of moments based on both biased and unbiased estimates. The total number of projections in the ensemble is 40. As some of the resulting estimates contain limited additional information, the members of pairs of projections that are highly correlated are assigned a weight of 0.6.
A comparison of the ensemble-based projections of extreme water levels and their return periods with similar estimates derived from local observations reveals an interesting pattern of match and mismatch. The match is almost perfect in measurement sites where local effects (e.g., wave-induced set-up or local surge in very shallow areas that are not resolved by circulation models) do not contribute to the observed water levels. There is, however, substantial mismatch between projected and observed extreme values for most of the Estonian coast. The mismatch is largest for sections that are open to high waves and for several bays that are deeply cut into the mainland but open to the predominant strong wind directions. Detailed quantification of this mismatch eventually makes it possible to develop substantially improved estimates of extreme water levels in sections where local effects contribute considerably to the total water level.
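Forming single members of such an ensemble amounts to fitting block maxima with an extreme value distribution and reading off a return level. A minimal sketch with SciPy, using maximum likelihood only (the study also uses the method of moments); the function and dictionary names are illustrative:

```python
import numpy as np
from scipy import stats

def return_levels(block_maxima, T=100.0):
    """Return-level estimates for return period T (in blocks, e.g. years).
    Each distribution fit contributes one 'member' of the ensemble."""
    p = 1.0 - 1.0 / T                      # non-exceedance probability
    levels = {}
    # Generalised Extreme Value fit (shape, location, scale by MLE)
    c, loc, scale = stats.genextreme.fit(block_maxima)
    levels["GEV"] = float(stats.genextreme.ppf(p, c, loc=loc, scale=scale))
    # Gumbel fit (a GEV with zero shape parameter)
    loc, scale = stats.gumbel_r.fit(block_maxima)
    levels["Gumbel"] = float(stats.gumbel_r.ppf(p, loc=loc, scale=scale))
    return levels
```

Repeating this over distributions, estimation methods and block definitions (calendar year vs. stormy season) yields the 40-member ensemble described above.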

  18. Ensemble Response in Mushroom Body Output Neurons of the Honey Bee Outpaces Spatiotemporal Odor Processing Two Synapses Earlier in the Antennal Lobe

    PubMed Central

    Strube-Bloss, Martin F.; Herrera-Valdez, Marco A.; Smith, Brian H.

    2012-01-01

    Neural representations of odors are subject to computations that involve sequentially convergent and divergent anatomical connections across different areas of the brain in both mammals and insects. Furthermore, in both groups higher-order brain areas are connected via feedback connections. To understand the transformations and interactions that this connectivity makes possible, an ideal experiment would compare neural responses across different, sequential processing levels. Here we present results of recordings from a first-order olfactory neuropil, the antennal lobe (AL), and a higher-order multimodal integration and learning center, the mushroom body (MB), in the honey bee brain. We recorded projection neurons (PN) of the AL and extrinsic neurons (EN) of the MB, which provide the outputs from the two neuropils. Recordings at each level were made in different animals in some experiments and simultaneously in the same animal in others. We presented two odors and their mixture to compare odor response dynamics as well as classification speed and accuracy at each neural processing level. Surprisingly, the EN ensemble begins to separate odor stimuli rapidly, before the PN ensemble has reached significant separation. Furthermore, the EN ensemble at the MB output reaches a maximum separation of odors 84–120 ms after odor onset, which is 26 to 133 ms faster than the maximum separation at the AL output ensemble two synapses earlier in processing. It is likely that a subset of very fast PNs, which respond before the ENs, initiates the rapid EN ensemble response. We therefore suggest that the timing of the EN ensemble activity would allow retroactive integration of its signal into the ongoing computation of the AL via centrifugal feedback. PMID:23209711

  19. Computer algorithms for automated detection and analysis of local Ca2+ releases in spontaneously beating cardiac pacemaker cells

    PubMed Central

    Kim, Mary S.; Tsutsui, Kenta; Stern, Michael D.; Lakatta, Edward G.; Maltsev, Victor A.

    2017-01-01

    Local Ca2+ releases (LCRs) are crucial events in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed for live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig, and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two dimensions and in time. Our algorithm tracks points along the midline of the contracting cell and uses these points as a coordinate system for an affine transform, producing a transformed image series in which the cell does not contract. Action potential-induced Ca2+ transients and LCRs were then isolated from recording noise by applying a series of spatial filters. LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR is detected when its signal changes sufficiently quickly within a sufficiently large area, and is considered to have died when its amplitude decays substantially or when it merges into the rising whole-cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced Ca2+ release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals.
While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves), sparks and embers in muscle cells and Ca2+ puffs and syntillas in neurons. PMID:28683095
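The frame-to-frame differential detection step can be illustrated in miniature: threshold the rise in signal between consecutive frames, then keep only connected regions above a minimum area. This toy omits the affine motion correction and spatial filtering stages; the thresholds and names are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def detect_birth_events(stack, rise_thresh, min_area):
    """Flag pixels whose fluorescence rises faster than rise_thresh between
    consecutive frames, then keep connected regions of at least min_area.
    stack: array of shape (n_frames, ny, nx). Returns (frame, area) tuples."""
    events = []
    for t, d in enumerate(np.diff(stack, axis=0)):
        labels, n = ndimage.label(d > rise_thresh)   # connected components
        for k in range(1, n + 1):
            area = int((labels == k).sum())
            if area >= min_area:
                events.append((t + 1, area))
    return events
```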

  20. Remarks on thermalization in 2D CFT

    NASA Astrophysics Data System (ADS)

    de Boer, Jan; Engelhardt, Dalit

    2016-12-01

    We revisit certain aspects of thermalization in 2D conformal field theory (CFT). In particular, we consider similarities and differences between the time dependence of correlation functions in various states in rational and non-rational CFTs. We also consider the distinction between global and local thermalization, explain how states obtained by acting with a diffeomorphism on the ground state can appear locally thermal, and review why the time-dependent expectation value of the energy-momentum tensor is generally a poor diagnostic of global thermalization. Since all 2D CFTs have an infinite set of commuting conserved charges, generic initial states might be expected to give rise to a generalized Gibbs ensemble rather than a pure thermal ensemble at late times. We construct the holographic dual of the generalized Gibbs ensemble and show that, to leading order, it is still described by a Banados-Teitelboim-Zanelli black hole. The extra conserved charges, while rendering c < 1 theories essentially integrable, therefore seem to have little effect on large-c conformal field theories.

  1. Measuring effective temperatures in a generalized Gibbs ensemble

    NASA Astrophysics Data System (ADS)

    Foini, Laura; Gambassi, Andrea; Konik, Robert; Cugliandolo, Leticia F.

    2017-05-01

    The local physical properties of an isolated quantum statistical system in the stationary state reached long after a quench are generically described by the Gibbs ensemble, which involves only its Hamiltonian and the temperature as a parameter. If the system is instead integrable, additional quantities conserved by the dynamics intervene in the description of the stationary state. The resulting generalized Gibbs ensemble involves a number of temperature-like parameters, the determination of which is practically difficult. Here we argue that in a number of simple models these parameters can be effectively determined by using fluctuation-dissipation relationships between response and correlation functions of natural observables, quantities which are accessible in experiments.
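The fluctuation-dissipation route sketched above can be summarized in generic notation (not taken from the paper): in equilibrium, the linear response of an observable is tied to its correlation function by the inverse temperature, and in a generalized Gibbs ensemble the same comparison, carried out mode by mode, reads off effective temperature-like parameters.

```latex
% Equilibrium FDT for an observable O, with correlation
% C(t,t') = \langle O(t)\,O(t')\rangle and linear response
% R(t,t') = \delta\langle O(t)\rangle/\delta h(t')\big|_{h=0}:
R(t,t') = \beta\,\partial_{t'} C(t,t'), \qquad t > t'.
% In a GGE, comparing response and correlation for each mode k defines
% mode-dependent effective inverse temperatures \beta_k:
R_k(t,t') = \beta_k\,\partial_{t'} C_k(t,t').
```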

  2. The Effects of Climate Model Similarity on Local, Risk-Based Adaptation Planning

    NASA Astrophysics Data System (ADS)

    Steinschneider, S.; Brown, C. M.

    2014-12-01

    The climate science community has recently proposed techniques to develop probabilistic projections of climate change from ensemble climate model output. These methods provide a means to incorporate the formal concept of risk, i.e., the product of impact and probability, into long-term planning assessments for local systems under climate change. However, approaches for pdf development often assume that different climate models provide independent information for the estimation of probabilities, despite model similarities that stem from a common genealogy. Here we utilize an ensemble of projections from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to develop probabilistic climate information, with and without an accounting of inter-model correlations, and use it to estimate climate-related risks to a local water utility in Colorado, U.S. We show that the tail risk of extreme climate changes in both mean precipitation and temperature is underestimated if model correlations are ignored. When coupled with impact models of the hydrology and infrastructure of the water utility, the underestimation of extreme climate changes substantially alters the quantification of risk for water supply shortages by mid-century. We argue that progress in climate change adaptation for local systems requires the recognition that there is less information in multi-model climate ensembles than previously thought. Importantly, adaptation decisions cannot be limited to the spread in one generation of climate models.

  3. Using Chou's pseudo amino acid composition based on approximate entropy and an ensemble of AdaBoost classifiers to predict protein subnuclear location.

    PubMed

    Jiang, Xiaoying; Wei, Rong; Zhao, Yanjun; Zhang, Tongliang

    2008-05-01

    Knowledge of subnuclear localization in eukaryotic cells is essential for understanding the function of the nucleus. Developing methods and tools to predict protein subnuclear localization has therefore become an important research field in protein science. In this study, a novel approach is proposed to predict protein subnuclear localization. Each protein sample is represented by a Pseudo Amino Acid (PseAA) composition based on the approximate entropy (ApEn) concept, which reflects the complexity of a time series. A novel ensemble classifier is designed incorporating three AdaBoost classifiers, whose base learners are decision stumps, a fuzzy K-nearest-neighbors classifier, and radial basis function support vector machines, respectively. Different PseAA compositions are used as input data for the different AdaBoost classifiers in the ensemble. A genetic algorithm is used to optimize the dimension and weight factors of the PseAA composition. Two datasets often used in published work are used to validate the performance of the proposed approach. The results of the jackknife cross-validation test are higher and more balanced than those of other methods on the same datasets. These promising results indicate that the proposed approach is effective and practical, and it might become a useful tool for protein subnuclear localization. The software in Matlab and supplementary materials are freely available by contacting the corresponding author.
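The ApEn statistic underlying the PseAA encoding is a standard regularity measure. A direct textbook implementation (the embedding length m and tolerance r below are illustrative defaults, and r is in the same units as the sequence):

```python
import numpy as np

def approximate_entropy(x, m=2, r=0.2):
    """Approximate entropy (ApEn): low for regular sequences, higher for
    irregular ones. m is the template length, r the match tolerance."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def phi(mm):
        templ = np.array([x[i:i + mm] for i in range(N - mm + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        C = (d <= r).mean(axis=1)   # fraction of matching templates
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)
```

A strictly periodic sequence gives ApEn near zero, while noise gives a clearly positive value, which is the property the encoding exploits.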

  4. Collective Awareness and the New Institution Science

    NASA Astrophysics Data System (ADS)

    Pitt, Jeremy; Nowak, Andrzej

    The following sections are included: * Introduction * Challenges for Institutions * Collective Awareness * A New Science of Institutions * Complex social ensembles * Interoceptive collective awareness * Planned emergence * Self-organising electronic institutions * Transformative Impact on Society * Social attitudes and processes * Innovative service creation and social innovation * Scientific impact * Big data * Self-regulation * Summary and Conclusions

  5. Typical event horizons in AdS/CFT

    DOE PAGES

    Avery, Steven G.; Lowe, David A.

    2016-01-14

    We consider the construction of local bulk operators in a black hole background dual to a pure state in conformal field theory. The properties of these operators in a microcanonical ensemble are studied. It has been argued in the literature that typical states in such an ensemble contain firewalls, or otherwise singular horizons. Here, we argue this conclusion can be avoided with a proper definition of the interior operators.

  6. Decoding of Human Movements Based on Deep Brain Local Field Potentials Using Ensemble Neural Networks

    PubMed Central

    2017-01-01

    Decoding neural activities related to voluntary and involuntary movements is fundamental to understanding human brain motor circuits and neuromotor disorders, and can lead to the development of neuromotor prosthetic devices for neurorehabilitation. This study explores the use of recorded deep brain local field potentials (LFPs) for robust movement decoding in patients with Parkinson's disease (PD) and dystonia. The LFP data from voluntary movement activities, such as left- and right-hand index finger clicking, were recorded from patients who underwent surgery for implantation of deep brain stimulation electrodes. Movement-related LFP signal features were extracted by computing the instantaneous power related to the motor response in different neural frequency bands. An innovative neural network ensemble classifier has been proposed and developed for accurate prediction of finger movement and its forthcoming laterality. The ensemble classifier contains three base neural network classifiers, namely feedforward, radial basis, and probabilistic neural networks, and uses a majority voting rule to fuse the decisions of the three base classifiers into the final decision of the ensemble. The overall decoding performance reaches a level of agreement (kappa value) of about 0.729 ± 0.16 for decoding movement from the resting state and about 0.671 ± 0.14 for decoding left and right visually cued movements. PMID:29201041
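The majority voting rule used to fuse the three base classifiers is simple to state in code. A generic sketch (array shapes and names are illustrative, and ties here go to the smallest label):

```python
import numpy as np

def majority_vote(predictions):
    """Fuse hard class labels from several base classifiers.
    predictions: shape (n_classifiers, n_samples)."""
    fused = []
    for votes in np.asarray(predictions).T:   # one column per sample
        vals, counts = np.unique(votes, return_counts=True)
        fused.append(vals[np.argmax(counts)])  # most frequent label wins
    return np.array(fused)
```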

  7. EL_PSSM-RT: DNA-binding residue prediction by integrating ensemble learning with PSSM Relation Transformation.

    PubMed

    Zhou, Jiyun; Lu, Qin; Xu, Ruifeng; He, Yulan; Wang, Hongpeng

    2017-08-29

    Prediction of DNA-binding residues is important for understanding the protein-DNA recognition mechanism. Many computational methods have been proposed, but most do not consider the relationships of evolutionary information between residues. In this paper, we first propose a novel residue encoding method, referred to as Position Specific Score Matrix (PSSM) Relation Transformation (PSSM-RT), which encodes residues by exploiting the relationships of evolutionary information between them. PDNA-62 and PDNA-224 are used to evaluate PSSM-RT and two existing PSSM encoding methods by five-fold cross-validation. Performance evaluations indicate that PSSM-RT is more effective than previous methods, validating the point that the relationship of evolutionary information between residues is indeed useful in DNA-binding residue prediction. An ensemble learning classifier (EL_PSSM-RT) is also proposed, combining an ensemble learning model with PSSM-RT to better handle the imbalance between binding and non-binding residues in the datasets. EL_PSSM-RT is evaluated by five-fold cross-validation on PDNA-62 and PDNA-224 as well as on two independent datasets, TS-72 and TS-61. Performance comparisons with existing predictors on the four datasets demonstrate that EL_PSSM-RT is the best-performing method, with improvements of 0.02-0.07 in MCC, 4.18-21.47% in ST and 0.013-0.131 in AUC. Furthermore, we analyze the importance of the pair-relationships extracted by PSSM-RT, and the results validate the usefulness of PSSM-RT for encoding DNA-binding residues. In summary, we propose a novel method for the prediction of DNA-binding residues that incorporates the relationship of evolutionary information and ensemble learning.
Performance evaluation shows that the relationship of evolutionary information between residues is indeed useful in DNA-binding residue prediction and ensemble learning can be used to address the data imbalance issue between binding and non-binding residues. A web service of EL_PSSM-RT ( http://hlt.hitsz.edu.cn:8080/PSSM-RT_SVM/ ) is provided for free access to the biological research community.

  8. Ensembles of physical states and random quantum circuits on graphs

    NASA Astrophysics Data System (ADS)

    Hamma, Alioscia; Santra, Siddhartha; Zanardi, Paolo

    2012-11-01

    In this paper we continue and extend the investigations of the ensembles of random physical states introduced in Hamma et al. [Phys. Rev. Lett. 109, 040502 (2012)]. These ensembles are constructed by finite-length random quantum circuits (RQC) acting on the (hyper)edges of an underlying (hyper)graph structure. The latter encodes the locality structure associated with finite-time quantum evolutions generated by physical, i.e., local, Hamiltonians. Our goal is to analyze physical properties of typical states in these ensembles; in particular, here we focus on proxies of quantum entanglement such as the purity and the α-Renyi entropies. The problem is formulated in terms of matrix elements of superoperators which depend on the graph structure, the choice of probability measure over the local unitaries, and the circuit length. In the α=2 case these superoperators act on a restricted multiqubit space generated by permutation operators associated with the subsets of vertices of the graph. For permutationally invariant interactions the dynamics can be further restricted to an exponentially smaller subspace. We consider different families of RQCs and study their typical entanglement properties for finite time as well as their asymptotic behavior. We find that the area law holds on average, and that the volume law is a typical property of physical states (that is, it holds on average and the fluctuations around the average vanish for large systems). The area law arises when the evolution time is O(1) with respect to the size L of the system, while the volume law arises, and is typical, when the evolution time scales like O(L).
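For reference, the entanglement proxies named above have the standard definitions, for a reduced density matrix ρ_A of a subsystem A:

```latex
\text{purity: } \operatorname{Tr}\rho_A^2, \qquad
S_\alpha(\rho_A) = \frac{1}{1-\alpha}\,\log\operatorname{Tr}\rho_A^{\alpha},
\qquad S_2(\rho_A) = -\log\operatorname{Tr}\rho_A^2 .
```

The α = 2 case singled out in the abstract is thus exactly the (log of the) purity, which is why it reduces to a computation over permutation operators.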

  9. Ensemble Clustering using Semidefinite Programming with Applications

    PubMed Central

    Singh, Vikas; Mukherjee, Lopamudra; Peng, Jiming; Xu, Jinhui

    2011-01-01

    In this paper, we study the ensemble clustering problem, where the input is in the form of multiple clustering solutions. The goal of ensemble clustering algorithms is to aggregate the solutions into one solution that maximizes the agreement in the input ensemble. We obtain several new results for this problem. Specifically, we show that the notion of agreement under such circumstances can be better captured using a 2D string encoding rather than a voting strategy, which is common among existing approaches. Our optimization proceeds by first constructing a non-linear objective function which is then transformed into a 0–1 Semidefinite program (SDP) using novel convexification techniques. This model can be subsequently relaxed to a polynomial time solvable SDP. In addition to the theoretical contributions, our experimental results on standard machine learning and synthetic datasets show that this approach leads to improvements not only in terms of the proposed agreement measure but also the existing agreement measures based on voting strategies. In addition, we identify several new application scenarios for this problem. These include combining multiple image segmentations and generating tissue maps from multiple-channel Diffusion Tensor brain images to identify the underlying structure of the brain. PMID:21927539

  10. Ensemble Clustering using Semidefinite Programming with Applications.

    PubMed

    Singh, Vikas; Mukherjee, Lopamudra; Peng, Jiming; Xu, Jinhui

    2010-05-01

    In this paper, we study the ensemble clustering problem, where the input is in the form of multiple clustering solutions. The goal of ensemble clustering algorithms is to aggregate the solutions into one solution that maximizes the agreement in the input ensemble. We obtain several new results for this problem. Specifically, we show that the notion of agreement under such circumstances can be better captured using a 2D string encoding rather than a voting strategy, which is common among existing approaches. Our optimization proceeds by first constructing a non-linear objective function which is then transformed into a 0-1 Semidefinite program (SDP) using novel convexification techniques. This model can be subsequently relaxed to a polynomial time solvable SDP. In addition to the theoretical contributions, our experimental results on standard machine learning and synthetic datasets show that this approach leads to improvements not only in terms of the proposed agreement measure but also the existing agreement measures based on voting strategies. In addition, we identify several new application scenarios for this problem. These include combining multiple image segmentations and generating tissue maps from multiple-channel Diffusion Tensor brain images to identify the underlying structure of the brain.

  11. A GLM Post-processor to Adjust Ensemble Forecast Traces

    NASA Astrophysics Data System (ADS)

    Thiemann, M.; Day, G. N.; Schaake, J. C.; Draijer, S.; Wang, L.

    2011-12-01

    The skill of hydrologic ensemble forecasts has improved in recent years through a better understanding of climate variability, better climate forecasts and new data assimilation techniques. Having been extensively utilized for probabilistic water supply forecasting, these forecasts are now of growing interest for operational decision making. Hydrologic ensemble forecast members typically have inherent biases in flow timing and volume caused by (1) structural errors in the models used, (2) systematic errors in the data used to calibrate those models, (3) uncertain initial hydrologic conditions, and (4) uncertainties in the forcing datasets. Furthermore, hydrologic models have often not been developed for operational decision points, so ensemble forecasts are not always available where needed. A statistical post-processor can be used to address these issues. The post-processor should (1) correct systematic biases in flow timing and volume, (2) preserve the skill of the available raw forecasts, (3) preserve spatial and temporal correlation as well as the uncertainty in the forecasted flow data, (4) produce adjusted forecast ensembles that represent the variability of the observed hydrograph to be predicted, and (5) preserve individual forecast traces as equally likely. The post-processor should also allow the translation of available ensemble forecasts to hydrologically similar locations where forecasts are not available. This paper introduces an ensemble post-processor (EPP) developed in support of New York City water supply operations. The EPP employs a general linear model (GLM) to (1) adjust available ensemble forecast traces and (2) create new ensembles for (nearby) locations where only historic flow observations are available. The EPP is calibrated by developing daily and aggregated statistical relationships from historical flow observations and model simulations.
These are then used in operation to obtain the conditional probability density function (PDF) of the observations to be predicted, thus jointly adjusting individual ensemble members. These steps are executed in a normalized transformed space ('z'-space) to account for the strong non-linearity in the flow observations involved. A data window centered on each calibration date is used to minimize impacts from sampling errors and data noise. Testing on datasets from California and New York suggests that the EPP can successfully minimize biases in ensemble forecasts, while preserving the raw forecast skill in a 'days to weeks' forecast horizon and reproducing the variability of climatology for 'weeks to years' forecast horizons.
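The normalized 'z'-space mentioned above is typically reached via a normal quantile transform: each flow value is mapped through the empirical CDF of a calibration sample and then through the inverse standard-normal CDF. A minimal sketch (the plotting-position convention and function names are assumptions, not taken from the EPP):

```python
import numpy as np
from scipy import stats

def to_z_space(x, sample):
    """Normal quantile transform: map x to standard-normal 'z'-space
    through the empirical CDF of `sample` (Weibull plotting positions),
    linearizing strongly skewed flow data before fitting a GLM."""
    s = np.sort(np.asarray(sample, dtype=float))
    p = np.searchsorted(s, x, side="right") / (len(s) + 1)
    p = np.clip(p, 1e-6, 1 - 1e-6)   # keep the inverse CDF finite
    return stats.norm.ppf(p)
```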

  12. The role of ensemble-based statistics in variational assimilation of cloud-affected observations from infrared imagers

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris

    2017-04-01

    Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to the static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and an operational variational data assimilation system to provide a basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions and to allow more observations to be assimilated without the strict background checks that would otherwise eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds.
In this presentation we provide details that explain the apparent benefit from using ensembles for cloudy radiance assimilation in an EnVar context.
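The Huber norm mentioned above blends a quadratic core with linear tails, so large innovations contribute less to the cost than under a pure quadratic but are not rejected outright. A generic sketch (the transition point delta is illustrative):

```python
import numpy as np

def huber_cost(r, delta):
    """Huber cost on an innovation r: quadratic for |r| <= delta, linear
    beyond, down-weighting fat-tailed innovations instead of discarding
    them via a strict background check."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))
```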

  13. Using random forests for assistance in the curation of G-protein coupled receptor databases.

    PubMed

    Shkurin, Aleksei; Vellido, Alfredo

    2017-08-18

    Biology is experiencing a gradual but fast transformation from a laboratory-centred science towards a data-centred one. As such, it requires robust data engineering and the use of quantitative data analysis methods as part of database curation. This paper focuses on G protein-coupled receptors, a large and heterogeneous super-family of cell membrane proteins of interest to biology in general. One of its families, Class C, is of particular interest to pharmacology and drug design. This family is quite heterogeneous on its own, and the discrimination of its several sub-families is a challenging problem. In the absence of a known crystal structure, such discrimination must rely on their primary amino acid sequences. We are interested not so much in achieving maximum sub-family discrimination accuracy using quantitative methods as in exploring sequence misclassification behavior. Specifically, we are interested in isolating those sequences showing consistent misclassification, that is, sequences that are very often misclassified and almost always to the same wrong sub-family. Random forests are used for this analysis due to their ensemble nature, which makes them naturally suited to gauge the consistency of misclassification; this consistency is defined here through the voting scheme of their base tree classifiers. Detailed consistency results for the random forest ensemble classification were obtained for all receptors and for all data transformations of their unaligned primary sequences. Shortlists of the most consistently misclassified receptors for each sub-family and transformation, as well as an overall shortlist including those cases that were consistently misclassified across transformations, were obtained. The latter should be referred to experts for further investigation as a data curation task. The automatic discrimination of the Class C sub-families of G protein-coupled receptors from their unaligned primary sequences shows clear limits. 
This study has investigated in some detail the consistency of their misclassification using random forest ensemble classifiers. Different sub-families display very different discrimination consistency behaviors. The individual identification of consistently misclassified sequences should provide a quality-control tool for GPCR database curators.
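The vote-based consistency measure can be sketched with scikit-learn: the fraction of (out-of-bag) tree votes behind each wrong prediction indicates how consistently a sample is misclassified. This is a sketch under assumptions, using a synthetic dataset as a stand-in for the sequence features, not the authors' pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the transformed sequence data.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                            random_state=0).fit(X, y)

oob = rf.oob_decision_function_           # per-sample OOB vote fractions
pred = rf.classes_[np.argmax(oob, axis=1)]
wrong = pred != y
# Vote share behind each wrong label: values near 1.0 mark samples that
# are consistently misclassified, the candidates for curator review.
consistency = np.max(oob, axis=1)[wrong]
```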

  14. Towards an Australian ensemble streamflow forecasting system for flood prediction and water management

    NASA Astrophysics Data System (ADS)

    Bennett, J.; David, R. E.; Wang, Q.; Li, M.; Shrestha, D. L.

    2016-12-01

    Flood forecasting in Australia has historically relied on deterministic forecasting models run only when floods are imminent, with considerable forecaster input and interpretation. These now co-exist with a continually available 7-day streamflow forecasting service (also deterministic) aimed at operational water management applications such as environmental flow releases; the 7-day service is not optimised for flood prediction. We describe progress on developing a system for ensemble streamflow forecasting that is suitable for both flood prediction and water management applications. Precipitation uncertainty is handled through post-processing of Numerical Weather Prediction (NWP) output with a Bayesian rainfall post-processor (RPP). The RPP corrects biases, downscales NWP output, and produces reliable ensemble spread. Ensemble precipitation forecasts are used to force a semi-distributed conceptual rainfall-runoff model. Uncertainty in precipitation forecasts is insufficient to reliably describe streamflow forecast uncertainty, particularly at shorter lead times, so we characterise hydrological prediction uncertainty separately with a 4-stage error model. The error model relies on data transformation to ensure residuals are homoscedastic and symmetrically distributed. To ensure streamflow forecasts are accurate and reliable, the residuals are modelled using a mixture-Gaussian distribution with distinct parameters for the rising and falling limbs of the forecast hydrograph. In a case study of the Murray River in south-eastern Australia, we show that ensemble predictions of floods generally have lower errors than deterministic forecasting methods. We also discuss some of the challenges in operationalising short-term ensemble streamflow forecasts in Australia, including meeting the need for accurate predictions across all flow ranges and comparing forecasts generated by event-based and continuous hydrological models.

  15. Evaluation of quantitative precipitation forecasts by TIGGE ensembles for south China during the presummer rainy season

    NASA Astrophysics Data System (ADS)

    Huang, Ling; Luo, Yali

    2017-08-01

    Based on The Observing System Research and Predictability Experiment Interactive Grand Global Ensemble (TIGGE) data set, this study evaluates the ability of global ensemble prediction systems (EPSs) from the European Centre for Medium-Range Weather Forecasts (ECMWF), U.S. National Centers for Environmental Prediction, Japan Meteorological Agency (JMA), Korean Meteorological Administration, and China Meteorological Administration (CMA) to predict presummer rainy season (April-June) precipitation in south China. Evaluation of 5 day forecasts in three seasons (2013-2015) demonstrates the higher skill of probability matching forecasts compared to simple ensemble mean forecasts and shows that the deterministic forecast is a close second. The EPSs overestimate light-to-heavy rainfall (0.1 to 30 mm/12 h) and underestimate heavier rainfall (>30 mm/12 h), with JMA being the worst. By analyzing the synoptic situations predicted by the identified more skillful (ECMWF) and less skillful (JMA and CMA) EPSs and the ensemble sensitivity for four representative cases of torrential rainfall, the transport of warm-moist air into south China by the low-level southwesterly flow, upstream of the torrential rainfall regions, is found to be a key synoptic factor that controls the quantitative precipitation forecast. The results also suggest that prediction of locally produced torrential rainfall is more challenging than prediction of more extensively distributed torrential rainfall. A slight improvement in the performance is obtained by shortening the forecast lead time from 30-36 h to 18-24 h to 6-12 h for the cases with large-scale forcing, but not for the locally produced cases.
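    The probability matching mentioned above can be sketched as follows. This is a minimal illustration with made-up rainfall fields, assuming the common formulation in which the ensemble-mean field supplies the spatial pattern and the pooled ensemble supplies the value distribution; it is not the TIGGE evaluation code.

    ```python
    # Hedged sketch of probability-matched (PM) ensemble precipitation:
    # the spatial pattern comes from the ensemble mean, while the value
    # distribution is taken from the pooled ensemble members.

    def probability_match(members):
        """members: list of equal-length lists (one rainfall field per member)."""
        n_mem, n_pts = len(members), len(members[0])
        mean_field = [sum(m[i] for m in members) / n_mem for i in range(n_pts)]
        pooled = sorted(v for m in members for v in m)
        # keep every n_mem-th pooled value: n_pts representative quantiles
        quantiles = pooled[n_mem - 1::n_mem]
        # rank the mean field, then assign the quantiles by rank
        order = sorted(range(n_pts), key=mean_field.__getitem__)
        pm = [0.0] * n_pts
        for rank, idx in enumerate(order):
            pm[idx] = quantiles[rank]
        return pm

    # illustrative 3-member ensemble over 4 grid points (mm / 12 h)
    members = [[0.0, 2.0, 10.0, 1.0],
               [0.1, 4.0, 30.0, 0.0],
               [0.0, 1.0, 20.0, 2.0]]
    pm = probability_match(members)
    ```

    Unlike the plain ensemble mean, the PM field retains the heavy-rainfall amounts present in individual members, which is why it scores better for intense precipitation.
    
    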

  16. Reflectance and fast polarization dynamics of GaN/Si nanowire ensemble.

    PubMed

    Korona, Krzysztof Piotr; Zytkiewicz, Zbigniew R; Sobanska, Marta; Sosada, Florentyna; Dróżdż, Piotr Andrzej; Klosek, Kamil; Tchutchulashvili, Giorgi

    2018-06-25

Optical phenomena in a high-quality ensemble of GaN nanowires (NWs) grown on a Si substrate have been studied by reflectance and time-resolved luminescence. Such NWs form a structure that acts as a virtual layer that specifically reflects and polarizes light and can be characterized by an effective refractive index. In fact, we have found that NW ensembles of high NW density (high filling fraction) behave rather like a layer of an effective medium described by the Maxwell Garnett approximation. Moreover, light extinction and strong depolarization are observed, which we assign to scattering and interference of light inside the NW ensemble. The wavelength range of high extinction and depolarization correlates well with the transverse localization wavelength estimated for such an ensemble of NWs, so we suppose that these effects are due to Anderson localization of light. We also report results of time-resolved measurements of the polarization of individual emission centers, including free and bound excitons (D0XA, 3.47 eV), inversion domain boundaries (IDB, 3.45 eV) and stacking faults (SF, 3.42 eV). The emission of the D0XA and SF lines is polarized perpendicular to the GaN c-axis, while the 3.45 eV line is polarized along the c-axis, which supports the hypothesis that this line is emitted from IDBs. Time-dependent depolarization of luminescence is observed during the first 0.1 ns after excitation and is interpreted as the result of interaction of the emission centers with hot particles that exist for a short time after excitation. © 2018 IOP Publishing Ltd.

  17. Global dynamics of oscillator populations under common noise

    NASA Astrophysics Data System (ADS)

    Braun, W.; Pikovsky, A.; Matias, M. A.; Colet, P.

    2012-07-01

Common noise acting on a population of identical oscillators can synchronize them. We develop a description of this process which is not limited to states close to synchrony, but provides a global picture of the evolution of the ensembles. The theory is based on the Watanabe-Strogatz transformation, allowing us to obtain closed stochastic equations for the global variables. We show that at the initial stage the order parameter grows linearly in time, while at later stages the convergence to synchrony is exponentially fast. Furthermore, we extend the theory to nonidentical ensembles with a Lorentzian distribution of natural frequencies and determine the stationary values of the order parameter as a function of driving noise and frequency mismatch.
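    Noise-induced synchronization can be demonstrated with a direct simulation (not the Watanabe-Strogatz reduction used in the paper); the phase-sensitivity function sin θ and all parameter values below are illustrative assumptions.

    ```python
    import cmath
    import math
    import random

    # Minimal sketch: identical phase oscillators driven by one COMMON noise
    # realization (Euler-Maruyama). The shared noise contracts phase
    # differences, so the Kuramoto order parameter r grows toward 1.
    random.seed(1)
    N, dt, steps = 20, 0.01, 5000
    sigma, omega = 0.8, 1.0          # illustrative noise strength and frequency
    theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]

    def order_parameter(phases):
        """r = |<exp(i*theta)>|, 0 for incoherence, 1 for full synchrony."""
        return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

    r_initial = order_parameter(theta)
    for _ in range(steps):
        dW = random.gauss(0.0, math.sqrt(dt))   # one sample shared by all
        theta = [t + omega * dt + sigma * math.sin(t) * dW for t in theta]
    r_final = order_parameter(theta)
    ```

    Because every oscillator receives the same noise sample, the update is a common random map; its negative Lyapunov exponent is what drives r toward 1.
    
    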

  18. Double-well chimeras in 2D lattice of chaotic bistable elements

    NASA Astrophysics Data System (ADS)

    Shepelev, I. A.; Bukh, A. V.; Vadivasova, T. E.; Anishchenko, V. S.; Zakharova, A.

    2018-01-01

We investigate the spatio-temporal dynamics of a 2D ensemble of nonlocally coupled chaotic cubic maps in a bistability regime. In particular, we perform a detailed study of the "coherence-incoherence" transition for varying coupling strength at a fixed interaction radius. For the 2D ensemble we show the appearance of amplitude and phase chimera states previously reported for 1D ensembles of nonlocally coupled chaotic systems. Moreover, we uncover a novel type of chimera state, the double-well chimera, which occurs due to the interplay of the bistability of the local dynamics and the 2D ensemble structure. Additionally, we find double-well chimera behavior for steady states, which we call double-well chimera death. A distinguishing feature of the chimera patterns observed in the lattice is that they mainly combine clusters of different chimera types: phase, amplitude and double-well chimeras.

  19. Controllable quantum dynamics of inhomogeneous nitrogen-vacancy center ensembles coupled to superconducting resonators

    PubMed Central

    Song, Wan-lu; Yang, Wan-li; Yin, Zhang-qi; Chen, Chang-yong; Feng, Mang

    2016-01-01

    We explore controllable quantum dynamics of a hybrid system, which consists of an array of mutually coupled superconducting resonators (SRs) with each containing a nitrogen-vacancy center spin ensemble (NVE) in the presence of inhomogeneous broadening. We focus on a three-site model, which compared with the two-site case, shows more complicated and richer dynamical behavior, and displays a series of damped oscillations under various experimental situations, reflecting the intricate balance and competition between the NVE-SR collective coupling and the adjacent-site photon hopping. Particularly, we find that the inhomogeneous broadening of the spin ensemble can suppress the population transfer between the SR and the local NVE. In this context, although the inhomogeneous broadening of the spin ensemble diminishes entanglement among the NVEs, optimal entanglement, characterized by averaging the lower bound of concurrence, could be achieved through accurately adjusting the tunable parameters. PMID:27627994

  20. Post-processing of multi-model ensemble river discharge forecasts using censored EMOS

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2014-05-01

    When forecasting water levels and river discharge, ensemble weather forecasts are used as meteorological input to hydrologic process models. As hydrologic models are imperfect and the input ensembles tend to be biased and underdispersed, the output ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, statistical post-processing is required in order to achieve calibrated and sharp predictions. Standard post-processing methods such as Ensemble Model Output Statistics (EMOS) that have their origins in meteorological forecasting are now increasingly being used in hydrologic applications. Here we consider two sub-catchments of River Rhine, for which the forecasting system of the Federal Institute of Hydrology (BfG) uses runoff data that are censored below predefined thresholds. To address this methodological challenge, we develop a censored EMOS method that is tailored to such data. The censored EMOS forecast distribution can be understood as a mixture of a point mass at the censoring threshold and a continuous part based on a truncated normal distribution. Parameter estimates of the censored EMOS model are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over the training dataset. Model fitting on Box-Cox transformed data allows us to take account of the positive skewness of river discharge distributions. In order to achieve realistic forecast scenarios over an entire range of lead-times, there is a need for multivariate extensions. To this end, we smooth the marginal parameter estimates over lead-times. In order to obtain realistic scenarios of discharge evolution over time, the marginal distributions have to be linked with each other. To this end, the multivariate dependence structure can either be adopted from the raw ensemble like in Ensemble Copula Coupling (ECC), or be estimated from observations in a training period. 
The censored EMOS model has been applied to multi-model ensemble forecasts issued on a daily basis over a period of three years. For the two catchments considered, this resulted in well calibrated and sharp forecast distributions over all lead-times from 1 to 114 h. Training observations tended to be better indicators for the dependence structure than the raw ensemble.
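    The censored forecast distribution described above (a point mass at the censoring threshold plus a truncated-normal continuous part) can be sketched as follows. Parameter values are illustrative, and the CRPS is estimated empirically from samples rather than by the closed-form expression minimized during model fitting.

    ```python
    import math
    import random

    # Hedged sketch of a censored-normal predictive distribution: all
    # probability mass below the censoring threshold c is collapsed onto a
    # point mass at c. mu, sig, c are illustrative, not fitted values.

    def phi_cdf(z):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def censored_cdf(y, mu, sig, c):
        """Forecast CDF: jump of size Phi((c-mu)/sig) at c, normal above."""
        if y < c:
            return 0.0
        return phi_cdf((y - mu) / sig)

    def censored_sample(mu, sig, c, rng):
        return max(c, rng.gauss(mu, sig))

    def crps_sample(samples, obs):
        """Empirical CRPS estimator: E|X-y| - 0.5 E|X-X'|."""
        n = len(samples)
        term1 = sum(abs(x - obs) for x in samples) / n
        term2 = sum(abs(a - b) for a in samples for b in samples) / (n * n)
        return term1 - 0.5 * term2

    rng = random.Random(0)
    mu, sig, c = 0.5, 1.0, 0.0          # illustrative discharge parameters
    draws = [censored_sample(mu, sig, c, rng) for _ in range(500)]
    score = crps_sample(draws, obs=1.2)
    ```

    In the actual method the CRPS of this censored distribution is minimized over a training set to estimate the EMOS coefficients; the sampling version above only illustrates how the score rewards calibrated, sharp forecasts.
    
    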

  1. Measuring effective temperatures in a generalized Gibbs ensemble

    DOE PAGES

    Foini, Laura; Gambassi, Andrea; Konik, Robert; ...

    2017-05-11

The local physical properties of an isolated quantum statistical system in the stationary state reached long after a quench are generically described by the Gibbs ensemble, which involves only its Hamiltonian and the temperature as a parameter. Additional quantities conserved by the dynamics intervene in the description of the stationary state, if the system is instead integrable. The resulting generalized Gibbs ensemble involves a number of temperature-like parameters, the determination of which is practically difficult. We argue that in a number of simple models these parameters can be effectively determined by using fluctuation-dissipation relationships between response and correlation functions of natural observables, quantities which are accessible in experiments.

  2. A Technical Analysis Information Fusion Approach for Stock Price Analysis and Modeling

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

In this paper, we address the problem of technical analysis information fusion in improving stock market index-level prediction. We present an approach for analyzing stock market price behavior based on different categories of technical analysis metrics and a multiple predictive system. Each category of technical analysis measures is used to characterize stock market price movements. The presented predictive system is based on an ensemble of neural networks (NN) coupled with particle swarm intelligence for parameter optimization, where each single neural network is trained with a specific category of technical analysis measures. The experimental evaluation on three international stock market indices and three individual stocks shows that the presented ensemble-based technical indicators fusion system significantly improves forecasting accuracy in comparison with a single NN. Also, it outperforms the classical neural network trained with index-level lagged values and the NN trained with stationary wavelet transform details and approximation coefficients. As a result, technical information fusion in an NN ensemble architecture helps improve prediction accuracy.

  3. High strength films from oriented, hydrogen-bonded "graphamid" 2D polymer molecular ensembles.

    PubMed

    Sandoz-Rosado, Emil; Beaudet, Todd D; Andzelm, Jan W; Wetzel, Eric D

    2018-02-27

    The linear polymer poly(p-phenylene terephthalamide), better known by its tradename Kevlar, is an icon of modern materials science due to its remarkable strength, stiffness, and environmental resistance. Here, we propose a new two-dimensional (2D) polymer, "graphamid", that closely resembles Kevlar in chemical structure, but is mechanically advantaged by virtue of its 2D structure. Using atomistic calculations, we show that graphamid comprises covalently-bonded sheets bridged by a high population of strong intermolecular hydrogen bonds. Molecular and micromechanical calculations predict that these strong intermolecular interactions allow stiff, high strength (6-8 GPa), and tough films from ensembles of finite graphamid molecules. In contrast, traditional 2D materials like graphene have weak intermolecular interactions, leading to ensembles of low strength (0.1-0.5 GPa) and brittle fracture behavior. These results suggest that hydrogen-bonded 2D polymers like graphamid would be transformative in enabling scalable, lightweight, high performance polymer films of unprecedented mechanical performance.

  4. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Hooper, Russell

    2016-11-01

    Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility.

  5. Teaching Poetry through Collaborative Art: An Analysis of Multimodal Ensembles for Transformative Learning

    ERIC Educational Resources Information Center

    Wandera, David B.

    2016-01-01

    This study is anchored on two positions: that every communication is multimodal and that different modalities within multimodal communication have particular affordances. Written and oral language and other modalities, such as body language and audio/visual media, are interwoven in classroom communication. What might it look like to strategically…

  6. Evaluating gambles using dynamics

    NASA Astrophysics Data System (ADS)

    Peters, O.; Gell-Mann, M.

    2016-02-01

Gambles are random variables that model possible changes in wealth. Classic decision theory transforms money into utility through a utility function and defines the value of a gamble as the expectation value of utility changes. Utility functions aim to capture individual psychological characteristics, but their generality limits predictive power. Expectation value maximizers are defined as rational in economics, but expectation values are only meaningful in the presence of ensembles or in systems with ergodic properties, whereas decision-makers have no access to ensembles, and the variables representing wealth in the usual growth models do not have the relevant ergodic properties. Simultaneously addressing the shortcomings of utility and those of expectations, we propose to evaluate gambles by averaging wealth growth over time. No utility function is needed, but a dynamic must be specified to compute time averages. Linear and logarithmic "utility functions" appear as transformations that generate ergodic observables for purely additive and purely multiplicative dynamics, respectively. We highlight inconsistencies throughout the development of decision theory, whose correction clarifies that our perspective is legitimate. These corrections also invalidate a commonly cited argument for bounded utility functions.
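    The gap between expectation values and time averages can be made concrete with the classic multiplicative coin-toss gamble (an illustration in the spirit of the argument, not taken from the paper): wealth rises 50% on heads and falls 40% on tails.

    ```python
    import math
    import random

    # Per-round factors: the EXPECTATION value grows (1.05 > 1), but the
    # TIME-average growth factor, the geometric mean, shrinks (~0.9487 < 1),
    # so almost every individual trajectory decays over time.
    up, down = 1.5, 0.6
    expected_factor = 0.5 * up + 0.5 * down      # ensemble average: 1.05
    time_avg_factor = math.sqrt(up * down)       # geometric mean: sqrt(0.9)

    random.seed(4)
    wealth = 1.0
    rounds = 10000
    for _ in range(rounds):
        wealth *= up if random.random() < 0.5 else down
    growth_rate = math.log(wealth) / rounds      # sample time-average log-growth
    ```

    The logarithm here plays exactly the role the abstract assigns to the "logarithmic utility function": it is the transformation that turns this multiplicative dynamic into an ergodic observable.
    
    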

  7. Classification of pulmonary pathology from breath sounds using the wavelet packet transform and an extreme learning machine.

    PubMed

    Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian; Huliraj, N; Revadi, S S

    2017-06-08

Auscultation is a medical procedure used for the initial diagnosis and assessment of lung and heart diseases. From this perspective, we propose assessing the performance of extreme learning machine (ELM) classifiers for the diagnosis of pulmonary pathology using breath sounds. Energy and entropy features were extracted from the breath sounds using the wavelet packet transform. The statistical significance of the extracted features was evaluated by one-way analysis of variance (ANOVA). The extracted features were then fed to the ELM classifier. The maximum classification accuracies obtained for the conventional validation (CV) of the energy and entropy features were 97.36% and 98.37%, respectively, whereas the accuracies obtained for the cross validation (CRV) of the energy and entropy features were 96.80% and 97.91%, respectively. In addition, maximum classification accuracies of 98.25% and 99.25% were obtained for the CV and CRV of the ensemble features, respectively. The results indicate that the classification accuracy obtained with the ensemble features was higher than those obtained with the energy and entropy features alone.
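    The feature-extraction step can be sketched with a Haar wavelet packet decomposition; the signal, the depth and the choice of Haar filters are illustrative assumptions, not the paper's configuration.

    ```python
    import math

    # Hedged sketch: depth-2 Haar wavelet packet decomposition, followed by
    # the energy and Shannon-entropy subband features described above.

    def haar_step(x):
        """One Haar analysis step: orthonormal approximation/detail halves."""
        s = 1.0 / math.sqrt(2.0)
        approx = [(x[i] + x[i + 1]) * s for i in range(0, len(x), 2)]
        detail = [(x[i] - x[i + 1]) * s for i in range(0, len(x), 2)]
        return approx, detail

    def wavelet_packet(x, depth):
        """Split EVERY node at each level: 2**depth subbands at the leaves."""
        nodes = [x]
        for _ in range(depth):
            nodes = [half for node in nodes for half in haar_step(node)]
        return nodes

    def energy(band):
        return sum(v * v for v in band)

    def entropy(band):
        """Shannon entropy of the normalized subband energy distribution."""
        e = energy(band) or 1.0
        probs = [v * v / e for v in band if v != 0.0]
        return -sum(p * math.log(p) for p in probs)

    signal = [1.0, 3.0, -2.0, 4.0, 0.5, -1.5, 2.0, 0.0]   # toy breath frame
    bands = wavelet_packet(signal, depth=2)
    energies = [energy(b) for b in bands]
    entropies = [entropy(b) for b in bands]
    ```

    Because each Haar step is orthonormal, the subband energies sum to the signal energy (Parseval), which makes the energy features directly comparable across subbands.
    
    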

  8. Three-model ensemble wind prediction in southern Italy

    NASA Astrophysics Data System (ADS)

    Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo

    2016-03-01

    Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30 %, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecast) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
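    The benefit of multi-model averaging can be illustrated on synthetic data; this sketch assumes three unbiased models with independent errors and is not the CNR-ISAC verification.

    ```python
    import math
    import random

    # Hedged sketch: averaging three unbiased models with independent errors
    # reduces RMSE relative to each single model (error variance / 3).
    random.seed(7)
    n = 20000
    truth = [10.0 * math.sin(0.01 * i) for i in range(n)]   # toy wind "truth"
    models = [[t + random.gauss(0.0, 1.0) for t in truth] for _ in range(3)]
    tme = [(a + b + c) / 3.0 for a, b, c in zip(*models)]   # three-model mean

    def rmse(pred, ref):
        return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref))

    single = [rmse(m, truth) for m in models]
    combined = rmse(tme, truth)
    ```

    With independent unit-variance errors the combined RMSE approaches 1/sqrt(3) of a single model's; correlated model errors, as in real NWP ensembles, shrink this gain, which is why the reported improvement is 22-30% rather than the idealized 42%.
    
    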

  9. "Big Data Assimilation" for 30-second-update 100-m-mesh Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Miyoshi, Takemasa; Lien, Guo-Yuan; Kunii, Masaru; Ruiz, Juan; Maejima, Yasumitsu; Otsuka, Shigenori; Kondo, Keiichi; Seko, Hiromu; Satoh, Shinsuke; Ushio, Tomoo; Bessho, Kotaro; Kamide, Kazumi; Tomita, Hirofumi; Nishizawa, Seiya; Yamaura, Tsuyoshi; Ishikawa, Yutaka

    2017-04-01

A typical lifetime of a single cumulonimbus is within an hour, and radar observations often show rapid changes in only a 5-minute period. For precise prediction of such rapidly-changing local severe storms, we have developed what we call a "Big Data Assimilation" (BDA) system that performs 30-second-update data assimilation cycles at 100-m grid spacing. The concept shares that of NOAA's Warn-on-Forecast (WoF), in which rapidly-updated high-resolution NWP will play a central role in issuing severe-storm warnings even just minutes in advance. The 100-m resolution and 30-second update frequency are a leap above typical recent research settings, and were made possible by the fortunate combination of Japan's most advanced supercomputing and sensing technologies: the 10-petaflops K computer and the Phased Array Weather Radar (PAWR). The X-band PAWR is capable of a dense three-dimensional volume scan at 100-m range resolution with 100 elevation angles and 300 azimuth angles, up to 60-km range within 30 seconds. The PAWR data show temporally-smooth evolution of convective rainstorms. This gives us hope that we may assume a Gaussian error distribution in 30-second forecasts before strong nonlinear dynamics distort the error distribution for rapidly-changing convective storms. With this in mind, we apply the Local Ensemble Transform Kalman Filter (LETKF), which considers flow-dependent error covariance explicitly under the Gaussian-error assumption. The flow dependence would be particularly important in rapidly-changing convective weather. Using a 100-member ensemble at 100-m resolution, we have tested the Big Data Assimilation system in real-world cases of sudden local rainstorms, and obtained promising results. However, the real-time application is a big challenge, and currently it takes 10 minutes for a cycle.
We explore approaches to accelerating the computations, such as using single-precision arrays in the model computation and developing an efficient I/O middleware for passing the large data between model and data assimilation as quickly as possible. In this presentation, we will present the most up-to-date progress of our Big Data Assimilation research.
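    As a greatly simplified sketch of the ensemble Kalman update at the core of filters such as the LETKF (which adds localization and a deterministic ensemble transform, both omitted here), consider a single-variable perturbed-observation update; all values are illustrative.

    ```python
    import random

    # Hedged scalar sketch: a forecast ensemble is pulled toward an
    # observation by the Kalman gain, and its spread (uncertainty) shrinks.
    random.seed(2)
    ens = [random.gauss(5.0, 2.0) for _ in range(100)]   # forecast ensemble
    obs, obs_err = 3.0, 1.0                              # observation and its error

    mean_f = sum(ens) / len(ens)
    var_f = sum((x - mean_f) ** 2 for x in ens) / (len(ens) - 1)
    gain = var_f / (var_f + obs_err ** 2)                # Kalman gain in [0, 1]

    # perturbed-observation update: each member assimilates a noisy obs copy
    analysis = [x + gain * (obs + random.gauss(0.0, obs_err) - x) for x in ens]
    mean_a = sum(analysis) / len(analysis)
    var_a = sum((x - mean_a) ** 2 for x in analysis) / (len(analysis) - 1)
    ```

    The gain weights the forecast against the observation by their error variances; in the full LETKF this scalar algebra becomes a small matrix problem solved independently in each local region, which is what makes the 100-m, 30-second cycling computationally feasible.
    
    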

  10. Modelling dynamics in protein crystal structures by ensemble refinement

    PubMed Central

    Burnley, B Tom; Afonine, Pavel V; Adams, Paul D; Gros, Piet

    2012-01-01

    Single-structure models derived from X-ray data do not adequately account for the inherent, functionally important dynamics of protein molecules. We generated ensembles of structures by time-averaged refinement, where local molecular vibrations were sampled by molecular-dynamics (MD) simulation whilst global disorder was partitioned into an underlying overall translation–libration–screw (TLS) model. Modeling of 20 protein datasets at 1.1–3.1 Å resolution reduced cross-validated Rfree values by 0.3–4.9%, indicating that ensemble models fit the X-ray data better than single structures. The ensembles revealed that, while most proteins display a well-ordered core, some proteins exhibit a ‘molten core’ likely supporting functionally important dynamics in ligand binding, enzyme activity and protomer assembly. Order–disorder changes in HIV protease indicate a mechanism of entropy compensation for ordering the catalytic residues upon ligand binding by disordering specific core residues. Thus, ensemble refinement extracts dynamical details from the X-ray data that allow a more comprehensive understanding of structure–dynamics–function relationships. DOI: http://dx.doi.org/10.7554/eLife.00311.001 PMID:23251785

  11. Precision bounds for gradient magnetometry with atomic ensembles

    NASA Astrophysics Data System (ADS)

    Apellaniz, Iagoba; Urizar-Lanz, Iñigo; Zimborás, Zoltán; Hyllus, Philipp; Tóth, Géza

    2018-05-01

    We study gradient magnetometry with an ensemble of atoms with arbitrary spin. We calculate precision bounds for estimating the gradient of the magnetic field based on the quantum Fisher information. For quantum states that are invariant under homogeneous magnetic fields, we need to measure a single observable to estimate the gradient. On the other hand, for states that are sensitive to homogeneous fields, a simultaneous measurement is needed, as the homogeneous field must also be estimated. We prove that for the cases studied in this paper, such a measurement is feasible. We present a method to calculate precision bounds for gradient estimation with a chain of atoms or with two spatially separated atomic ensembles. We also consider a single atomic ensemble with an arbitrary density profile, where the atoms cannot be addressed individually, and which is a very relevant case for experiments. Our model can take into account even correlations between particle positions. While in most of the discussion we consider an ensemble of localized particles that are classical with respect to their spatial degree of freedom, we also discuss the case of gradient metrology with a single Bose-Einstein condensate.

  12. Quantum entanglement at ambient conditions in a macroscopic solid-state spin ensemble.

    PubMed

    Klimov, Paul V; Falk, Abram L; Christle, David J; Dobrovitski, Viatcheslav V; Awschalom, David D

    2015-11-01

Entanglement is a key resource for quantum computers, quantum-communication networks, and high-precision sensors. Macroscopic spin ensembles have been historically important in the development of quantum algorithms for these prospective technologies and remain strong candidates for implementing them today. This strength derives from their long-lived quantum coherence, strong signal, and ability to couple collectively to external degrees of freedom. Nonetheless, preparing ensembles of genuinely entangled spin states has required high magnetic fields and cryogenic temperatures or photochemical reactions. We demonstrate that entanglement can be realized in solid-state spin ensembles at ambient conditions. We use hybrid registers comprising electron-nuclear spin pairs that are localized at color-center defects in a commercial SiC wafer. We optically initialize 10^3 identical registers in a 40-μm^3 volume (with [Formula: see text] fidelity) and deterministically prepare them into the maximally entangled Bell states (with 0.88 ± 0.07 fidelity). To verify entanglement, we develop a register-specific quantum-state tomography protocol. The entanglement of a macroscopic solid-state spin ensemble at ambient conditions represents an important step toward practical quantum technology.

  13. Application of the LEPS technique for Quantitative Precipitation Forecasting (QPF) in Southern Italy: a preliminary study

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Walko, R. L.

    2006-03-01

    This paper reports preliminary results for a Limited area model Ensemble Prediction System (LEPS), based on RAMS (Regional Atmospheric Modelling System), for eight case studies of moderate-intense precipitation over Calabria, the southernmost tip of the Italian peninsula. LEPS aims to transfer the benefits of a probabilistic forecast from global to regional scales in countries where local orographic forcing is a key factor to force convection. To accomplish this task and to limit computational time in an operational implementation of LEPS, we perform a cluster analysis of ECMWF-EPS runs. Starting from the 51 members that form the ECMWF-EPS we generate five clusters. For each cluster a representative member is selected and used to provide initial and dynamic boundary conditions to RAMS, whose integrations generate LEPS. RAMS runs have 12-km horizontal resolution. To analyze the impact of enhanced horizontal resolution on quantitative precipitation forecasts, LEPS forecasts are compared to a full Brute Force (BF) ensemble. This ensemble is based on RAMS, has 36 km horizontal resolution and is generated by 51 members, nested in each ECMWF-EPS member. LEPS and BF results are compared subjectively and by objective scores. Subjective analysis is based on precipitation and probability maps of case studies whereas objective analysis is made by deterministic and probabilistic scores. Scores and maps are calculated by comparing ensemble precipitation forecasts against reports from the Calabria regional raingauge network. Results show that LEPS provided better rainfall predictions than BF for all case studies selected. This strongly suggests the importance of the enhanced horizontal resolution, compared to ensemble population, for Calabria for these cases. To further explore the impact of local physiographic features on QPF (Quantitative Precipitation Forecasting), LEPS results are also compared with a 6-km horizontal resolution deterministic forecast. 
Due to local and mesoscale forcing, the high resolution forecast (Hi-Res) has better performance compared to the ensemble mean for rainfall thresholds larger than 10 mm, but it tends to overestimate precipitation for lower amounts. This yields larger false alarms that have a detrimental effect on objective scores for lower thresholds. To exploit the advantages of a probabilistic forecast compared to a deterministic one, the relation between the ECMWF-EPS 700 hPa geopotential height spread and LEPS performance is analyzed. Results are promising, although additional studies are required.

  14. Cloudy Windows: What GCM Ensembles, Reanalyses and Observations Tell Us About Uncertainty in Greenland's Future Climate and Surface Melting

    NASA Astrophysics Data System (ADS)

    Reusch, D. B.

    2016-12-01

Any analysis that wants to use a GCM-based scenario of future climate benefits from knowing how much uncertainty the GCM's inherent variability adds to the development of climate change predictions. This is especially relevant in the polar regions due to the potential for global impacts (e.g., sea level rise) from local (ice sheet) climate changes such as more frequent/intense surface melting. High-resolution, regional-scale models using GCMs for boundary/initial conditions in future scenarios inherit a measure of GCM-derived externally-driven uncertainty. We investigate these uncertainties for the Greenland ice sheet using the 30-member CESM1.0-CAM5-BGC Large Ensemble (CESMLE) for recent (1981-2000) and future (2081-2100, RCP 8.5) decades. Recent simulations are skill-tested against the ERA-Interim reanalysis and AWS observations, with results informing future scenarios. We focus on key variables influencing surface melting through decadal climatologies, nonlinear analysis of variability with self-organizing maps (SOMs), regional-scale modeling (Polar WRF), and simple melt models. Relative to the ensemble average, spatially averaged climatological July temperature anomalies over a Greenland ice-sheet/ocean domain are mostly between +/- 0.2 °C. The spatial average hides larger local anomalies of up to +/- 2 °C. The ensemble average itself is 2 °C cooler than ERA-Interim. SOMs extend our diagnostics by providing a concise, objective summary of model variability as a set of generalized patterns. For CESMLE, the SOM patterns summarize the variability of multiple realizations of climate. Changes in pattern frequency by ensemble member show the influence of initial conditions. For example, basic statistical analysis of pattern frequency yields interquartile ranges of 2-4% for individual patterns across the ensemble.
In climate terms, this tells us about climate state variability through the range of the ensemble, a potentially significant source of melt-prediction uncertainty. SOMs can also capture the different trajectories of climate due to intramodel variability over time. Polar WRF provides higher resolution regional modeling with improved, polar-centric model physics. Simple melt models allow us to characterize impacts of the upstream uncertainties on estimates of surface melting.

  15. Anomaly transform methods based on total energy and ocean heat content norms for generating ocean dynamic disturbances for ensemble climate forecasts

    NASA Astrophysics Data System (ADS)

    Romanova, Vanya; Hense, Andreas

    2017-08-01

In our study we use the anomaly transform (AT), a special case of the ensemble transform method, in which a selected set of initial oceanic anomalies in space, time and variables is defined and orthogonalized. The resulting orthogonal perturbation patterns are designed such that they pick up typical balanced anomaly structures in space and time and between variables. The metric used to set up the eigenproblem is taken either as the weighted total energy, with its zonal kinetic, meridional kinetic and available potential energy terms having equal contributions, or as the weighted ocean heat content, in which a disturbance is applied only to the initial temperature fields. The reference states for defining the initial anomalies are chosen such that perturbations on either seasonal or interannual timescales are constructed. These a priori project only onto the slow modes of the ocean physical processes, such that the disturbances grow mainly in the Western Boundary Currents, in the Antarctic Circumpolar Current and in the El Niño Southern Oscillation regions. An additional set of initial conditions is designed to fit, in a least-squares sense, data from a global ocean reanalysis. Applying the AT-produced sets of disturbances to observation-initialized oceanic initial conditions of the MPIOM-ESM coupled model at T63L47/GR15 resolution, four ensemble experiments and one hindcast experiment were performed. The weighted total energy norm is used to monitor the amplitudes and growth rates of the fastest-growing error modes. The results showed minor dependence of the instabilities or error growth on the selected metric, but considerable change due to the magnitude of the scaling amplitudes of the perturbation patterns. In contrast to similar atmospheric applications, we find an energy conversion from kinetic to available potential energy, which suggests a different source of uncertainty generation in the ocean than in the atmosphere, mainly associated with changes in the density field.
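    The orthogonalization at the heart of an anomaly transform can be sketched as Gram-Schmidt under a weighted energy metric; the patterns, the weights and the diagonal form of the metric are illustrative assumptions, not the MPIOM-ESM configuration.

    ```python
    import math

    # Hedged sketch: orthonormalize anomaly patterns under a diagonal
    # "total energy" metric W, so that <u, v>_W = sum_i w_i * u_i * v_i.

    def wdot(u, v, w):
        """Weighted inner product defining the energy norm."""
        return sum(wi * ui * vi for wi, ui, vi in zip(w, u, v))

    def orthonormalize(patterns, w):
        """Gram-Schmidt in the W-metric; drops linearly dependent patterns."""
        basis = []
        for p in patterns:
            q = list(p)
            for b in basis:
                c = wdot(q, b, w)
                q = [qi - c * bi for qi, bi in zip(q, b)]
            norm = math.sqrt(wdot(q, q, w))
            if norm > 1e-12:
                basis.append([qi / norm for qi in q])
        return basis

    w = [2.0, 1.0, 0.5, 1.5]                 # toy energy weights per grid point
    anoms = [[1.0, 0.0, 2.0, -1.0],          # toy anomaly patterns
             [0.5, 1.0, 0.0, 0.5],
             [2.0, 1.0, 4.0, -1.5]]
    basis = orthonormalize(anoms, w)
    ```

    Scaling these orthonormal patterns and adding them to the initial state yields perturbations whose initial amplitude is controlled exactly in the chosen energy norm, which is the property the abstract uses to monitor error growth.
    
    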

  16. Thermal density functional theory, ensemble density functional theory, and potential functional theory for warm dense matter

    NASA Astrophysics Data System (ADS)

    Pribram-Jones, Aurora

    Warm dense matter (WDM) is a high-energy phase between solids and plasmas, with characteristics of both. It is present in the centers of giant planets, within the Earth's core, and on the path to ignition of inertial confinement fusion. The high temperatures and pressures of warm dense matter lead to complications in its simulation, as both classical and quantum effects must be included. One of the most successful simulation methods is density functional theory-molecular dynamics (DFT-MD). Despite great success in a diverse array of applications, DFT-MD remains computationally expensive, and it neglects the explicit temperature dependence of electron-electron interactions known to exist within exact DFT. Finite-temperature density functional theory (FT DFT) is an extension of the wildly successful ground-state DFT formalism via thermal ensembles, broadening its quantum mechanical treatment of electrons to include systems at non-zero temperatures. Exact mathematical conditions have been used to predict the behavior of approximations in limiting conditions and to connect FT DFT to the ground-state theory. An introduction to FT DFT is given within the context of ensemble DFT, and the larger field of DFT is surveyed for background. Ensemble DFT is used to describe ensembles of ground-state and excited systems. Exact conditions in ensemble DFT and the performance of approximations depend on ensemble weights. Using an inversion method, exact Kohn-Sham ensemble potentials are found and compared to approximations. The symmetry eigenstate Hartree-exchange approximation is in good agreement with exact calculations because of its inclusion of an ensemble derivative discontinuity. Since ensemble weights in FT DFT are temperature-dependent Fermi weights, this insight may help develop approximations well-suited to both ground-state and FT DFT.
A novel, highly efficient approach to free energy calculations, finite-temperature potential functional theory, is derived, which has the potential to transform the simulation of warm dense matter. As a semiclassical method, it connects the normally disparate regimes of cold condensed matter physics and hot plasma physics. This orbital-free approach captures the smooth classical density envelope and quantum density oscillations that are both crucial to accurate modeling of materials where temperature and pressure effects are influential.

  17. Ensemble theory for slightly deformable granular matter.

    PubMed

    Tejada, Ignacio G

    2014-09-01

    Given a granular system of slightly deformable particles, it is possible to obtain different static and jammed packings subjected to the same macroscopic constraints. These microstates can be compared in a mathematical space defined by the components of the force-moment tensor (i.e. the product of the equivalent stress and the volume of the Voronoi cell). In order to explain the statistical distributions observed there, an athermal ensemble theory can be used. This work proposes a formalism (based on developments of the original theory of Edwards and collaborators) that considers both the internal and the external constraints of the problem. The former give the density of states of the points of this space, and the latter give their statistical weight. The internal constraints are those caused by the intrinsic features of the system (e.g. size distribution, friction, cohesion). Together with the force-balance condition, they determine the possible local states of equilibrium of a particle. Under the principle of equal a priori probabilities, and when no other constraints are imposed, it can be assumed that particles are equally likely to be found in any one of these local states of equilibrium. A flat sampling over all these local states then turns into a non-uniform distribution in the force-moment space that can be represented with density-of-states functions. Although these functions can be measured, some of their features are explored in this paper. The external constraints are those macroscopic quantities that define the ensemble and are fixed by the protocol. The force-moment, the volume, the elastic potential energy and the stress are examples of quantities that can be expressed as functions of the force-moment. The associated ensembles are included in the formalism presented here.

  18. An ensemble and single-molecule fluorescence microscopy investigation of phase-separated monolayer films stained with Nile Red.

    PubMed

    Lu, Yin; Porterfield, Robyn; Thunder, Terri; Paige, Matthew F

    2011-01-01

    Phase-separated Langmuir-Blodgett monolayer films prepared from mixtures of arachidic acid (C19H39COOH) and perfluorotetradecanoic acid (C13F27COOH) were stained via spin-casting with the polarity-sensitive phenoxazine dye Nile Red, and characterized using a combination of ensemble and single-molecule fluorescence microscopy measurements. Ensemble fluorescence microscopy and spectromicroscopy showed that Nile Red preferentially associated with the hydrogenated domains of the phase-separated films, and was strongly fluorescent in these areas of the film. These measurements, in conjunction with single-molecule fluorescence imaging experiments, also indicated that a small sub-population of dye molecules localizes on the perfluorinated regions of the sample, but that this sub-population is spectroscopically indistinguishable from that associated with the hydrogenated domains. The relative importance of selective dye adsorption and local polarity sensitivity of Nile Red for staining applications in phase-separated LB films, as well as in cellular environments, is discussed in the context of the experimental results. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. Generalized Gibbs distribution and energy localization in the semiclassical FPU problem

    NASA Astrophysics Data System (ADS)

    Hipolito, Rafael; Danshita, Ippei; Oganesyan, Vadim; Polkovnikov, Anatoli

    2011-03-01

    We investigate dynamics of the weakly interacting quantum mechanical Fermi-Pasta-Ulam (qFPU) model in the semiclassical limit below the stochasticity threshold. Within this limit we find that initial quantum fluctuations lead to the damping of FPU oscillations and relaxation of the system to a slowly evolving steady state with energy localized within a few momentum modes. We find that in large systems this state can be described by the generalized Gibbs ensemble (GGE), with the Lagrange multipliers being very weak functions of time. This ensemble gives an accurate description of the instantaneous correlation functions, both quadratic and quartic. Based on these results we conjecture that the GGE generically appears as a prethermalized state in weakly non-integrable systems.

  20. Design and experimentation of an empirical multistructure framework for accurate, sharp and reliable hydrological ensembles

    NASA Astrophysics Data System (ADS)

    Seiller, G.; Anctil, F.; Roy, R.

    2017-09-01

    This paper outlines the design and experimentation of an Empirical Multistructure Framework (EMF) for lumped conceptual hydrological modeling. This concept is inspired by modular frameworks, empirical model development, and multimodel applications, and encompasses the overproduce-and-select paradigm. The EMF concept aims to reduce subjectivity in conceptual hydrological modeling practice and includes model selection in the optimisation steps, reducing initial assumptions on the prior perception of the dominant rainfall-runoff transformation processes. EMF generates thousands of new modeling options from, for now, twelve parent models that share their functional components and parameters. Optimisation resorts to ensemble calibration, ranking and selection of individual child time series based on optimal bias and reliability trade-offs, as well as accuracy and sharpness improvement of the ensemble. Results on 37 snow-dominated Canadian catchments and 20 climatically diversified American catchments reveal the excellent potential of the EMF for generating new individual model alternatives, with high individual performance values, that may be pooled efficiently into ensembles of seven to sixty constitutive members with low bias and high accuracy, sharpness, and reliability. A group of 1446 new models is highlighted as offering good potential on other catchments or applications, based on their individual and collective interest. An analysis of the preferred functional components reveals the importance of the production and total flow elements. Overall, results from this research confirm the added value of ensemble and flexible approaches for hydrological applications, especially in uncertain contexts, and open up new modeling possibilities.

  1. Identification of DNA-binding proteins by combining auto-cross covariance transformation and ensemble learning.

    PubMed

    Liu, Bin; Wang, Shanyi; Dong, Qiwen; Li, Shumin; Liu, Xuan

    2016-04-20

    DNA-binding proteins play a pivotal role in various intra- and extra-cellular activities ranging from DNA replication to gene expression control. With the rapid development of next-generation sequencing techniques, the number of protein sequences is increasing at an unprecedented rate. It is therefore necessary to develop computational methods that identify DNA-binding proteins based on protein sequence information alone. In this study, a novel method called iDNA-KACC is presented, which combines the Support Vector Machine (SVM) and the auto-cross covariance transformation. The protein sequences are first converted into a profile-based representation, and then into a series of fixed-length vectors by the auto-cross covariance transformation with Kmer composition. The sequence-order effect can be effectively captured by this scheme. These vectors are then fed into the SVM to discriminate DNA-binding proteins from non-DNA-binding ones. iDNA-KACC achieves an overall accuracy of 75.16% and a Matthews correlation coefficient of 0.5 under a rigorous jackknife test. Its performance is further improved by employing an ensemble learning approach; the improved predictor is called iDNA-KACC-EL. Experimental results on an independent dataset show that iDNA-KACC-EL outperforms all other state-of-the-art predictors, indicating that it would be a useful computational tool for DNA-binding protein identification.
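
    The auto-cross covariance idea can be sketched as follows. This is a generic illustration, not the iDNA-KACC implementation; the function name, lag range, and profile dimensions are assumptions. The key property is that a variable-length L x D profile matrix maps to a vector whose length depends only on D and the maximum lag, so sequences of any length become comparable.

```python
import numpy as np

def acc_features(profile, max_lag=2):
    # Auto-cross covariance sketch: for each lag, compute the D x D matrix
    # of covariances between profile columns at positions i and i + lag.
    # Diagonal entries are auto-covariances, off-diagonals cross-covariances.
    L, D = profile.shape
    mean = profile.mean(axis=0)
    feats = []
    for lag in range(1, max_lag + 1):
        a = profile[:-lag] - mean   # positions i
        b = profile[lag:] - mean    # positions i + lag
        feats.append((a.T @ b) / (L - lag))
    return np.concatenate([f.ravel() for f in feats])

rng = np.random.default_rng(1)
# Two stand-in profiles of different lengths map to same-length vectors.
v_short = acc_features(rng.standard_normal((30, 4)))
v_long = acc_features(rng.standard_normal((80, 4)))
```

    Vectors of this fixed length can then be fed directly to an SVM or other classifier.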

  2. Simulating adsorptive expansion of zeolites: application to biomass-derived solutions in contact with silicalite.

    PubMed

    Santander, Julian E; Tsapatsis, Michael; Auerbach, Scott M

    2013-04-16

    We have constructed and applied an algorithm to simulate the behavior of zeolite frameworks during liquid adsorption. We applied this approach to compute the adsorption isotherms of furfural-water and hydroxymethyl furfural (HMF)-water mixtures adsorbing in silicalite zeolite at 300 K for comparison with experimental data. We modeled these adsorption processes under two different statistical mechanical ensembles: the grand canonical (V-Nz-μg-T or GC) ensemble keeping volume fixed, and the P-Nz-μg-T (osmotic) ensemble allowing volume to fluctuate. To optimize accuracy and efficiency, we compared pure Monte Carlo (MC) sampling to hybrid MC-molecular dynamics (MD) simulations. For the external furfural-water and HMF-water phases, we assumed the ideal solution approximation and employed a combination of tabulated data and extended ensemble simulations for computing solvation free energies. We found that MC sampling in the V-Nz-μg-T ensemble (i.e., standard GCMC) does a poor job of reproducing both the Henry's law regime and the saturation loadings of these systems. Hybrid MC-MD sampling of the V-Nz-μg-T ensemble, which includes framework vibrations at fixed total volume, provides better results in the Henry's law region, but this approach still does not reproduce experimental saturation loadings. Pure MC sampling of the osmotic ensemble was found to approach experimental saturation loadings more closely, whereas hybrid MC-MD sampling of the osmotic ensemble quantitatively reproduces such loadings because the MC-MD approach naturally allows for locally anisotropic volume changes wherein some pores expand whereas others contract.
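
    As a hedged illustration of the volume fluctuations that distinguish the osmotic ensemble from the fixed-volume GC ensemble, a standard Metropolis acceptance rule for an isotropic volume move at constant pressure can be written as below. This is a textbook constant-pressure form, not the authors' code, and the osmotic ensemble additionally exchanges adsorbates with the reservoir at fixed gas chemical potential, which is omitted here; all units are illustrative (reduced units with k_B = 1).

```python
import math

def accept_volume_move(dU, V_old, V_new, N, P, beta):
    # Metropolis acceptance probability for a trial volume change at
    # constant pressure: min(1, exp(-beta*(dU + P*dV) + N*ln(V_new/V_old))).
    # dU is the framework/adsorbate energy change on rescaling the box.
    dV = V_new - V_old
    arg = -beta * (dU + P * dV) + N * math.log(V_new / V_old)
    # min(1, exp(arg)) written to avoid overflow for large positive arg
    return math.exp(min(arg, 0.0))
```

    A move that costs energy and compresses against the pressure term is accepted only occasionally, which is what lets some pores expand while others contract over a long run.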

  3. Computer-aided classification of breast microcalcification clusters: merging of features from image processing and radiologists

    NASA Astrophysics Data System (ADS)

    Lo, Joseph Y.; Gavrielides, Marios A.; Markey, Mia K.; Jesneck, Jonathan L.

    2003-05-01

    We developed an ensemble classifier for the task of computer-aided diagnosis of breast microcalcification clusters, which are very challenging to characterize for radiologists and computer models alike. The purpose of this study is to help radiologists identify whether suspicious calcification clusters are benign or malignant, so that they may potentially recommend fewer unnecessary biopsies for actually benign lesions. The data consist of mammographic features extracted by automated image processing algorithms as well as features interpreted manually by radiologists according to a standardized lexicon. We used 292 cases from a publicly available mammography database. From each case, we extracted 22 image processing features pertaining to lesion morphology, 5 radiologist features also pertaining to morphology, and the patient age. Linear discriminant analysis (LDA) models were designed using each of the three data types. Each local model performed poorly; the best was one based upon image processing features, which yielded an ROC area index AZ of 0.59 +/- 0.03 and partial AZ above 90% sensitivity of 0.08 +/- 0.03. We then developed ensemble models using different combinations of those data types, and these models all improved performance compared to the local models. The final ensemble model was based upon 5 features selected by stepwise LDA from all 28 available features. This ensemble performed with AZ of 0.69 +/- 0.03 and partial AZ of 0.21 +/- 0.04, which was statistically significantly better than the model based on the image processing features alone (p<0.001 and p=0.01 for full and partial AZ, respectively). This demonstrates the value of the radiologist-extracted features as a source of information for this task. It also suggests there is potential for improved performance using this ensemble classifier approach to combine different sources of currently available data.

  4. Three-dimensional localization of nanoscale battery reactions using soft X-ray tomography

    DOE PAGES

    Yu, Young-Sang; Farmand, Maryam; Kim, Chunjoong; ...

    2018-03-02

    Battery function is determined by the efficiency and reversibility of the electrochemical phase transformations at solid electrodes. The microscopic tools available to study the chemical states of matter with the required spatial resolution and chemical specificity are intrinsically limited, when studying complex architectures, by their reliance on two-dimensional projections of thick material. Here we report the development of soft X-ray ptychographic tomography, which resolves chemical states in three dimensions at 11 nm spatial resolution. We study an ensemble of nano-plates of lithium iron phosphate extracted from a battery electrode at 50% state of charge. Using a set of nanoscale tomograms, we quantify the electrochemical state and resolve phase boundaries throughout the volume of individual nanoparticles. These observations reveal multiple reaction points, intra-particle heterogeneity, and size effects that highlight the importance of multi-dimensional analytical tools in providing novel insight into the design of the next generation of high-performance devices.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maughan, Bret; Zahl, Percy; Sutter, Peter

    Switching the magnetic properties of organic semiconductors on a metal surface has thus far largely been limited to molecule-by-molecule tip-induced transformations in scanned probe experiments. Here we demonstrate with molecular resolution that collective control of activated Kondo screening can be achieved in thin films of the organic semiconductor titanyl phthalocyanine on Cu(110) to obtain tunable concentrations of Kondo impurities. Using low-temperature scanning tunneling microscopy and spectroscopy, we show that a thermally activated molecular distortion dramatically shifts surface-molecule coupling and enables ensemble-level control of Kondo screening in the interfacial spin system. This is accompanied by the formation of a temperature-dependent Abrikosov-Suhl-Kondo resonance in the local density of states of the activated molecules, enabling coverage-dependent control over activation to the Kondo screening state. Our study thus advances the versatility of molecular switching for Kondo physics and opens new avenues for scalable bottom-up tailoring of the electronic structure and magnetic texture of organic semiconductor interfaces at the nanoscale.

  7. Energy-dependent topological anti-de Sitter black holes in Gauss-Bonnet Born-Infeld gravity

    NASA Astrophysics Data System (ADS)

    Hendi, S. H.; Behnamifard, H.; Bahrami-Asl, B.

    2018-03-01

    Employing higher-curvature corrections to Einstein-Maxwell gravity has garnered a great deal of attention motivated by the high-energy regime in the quantum nature of black hole physics. In addition, one may employ gravity's rainbow to encode quantum gravity effects into black hole solutions. In this paper, we regard an energy-dependent static spacetime with various topologies and study its black hole solutions in the context of Gauss-Bonnet Born-Infeld (GB-BI) gravity. We study the thermodynamic properties and examine the first law of thermodynamics. Using a suitable local transformation, we endow the Ricci-flat black hole solutions with a global rotation and study the effects of rotation on thermodynamic quantities. We also investigate thermal stability in a canonical ensemble by calculating the heat capacity. We obtain the effects of various parameters on the horizon radius of stable black holes. Finally, we discuss a second-order phase transition in the extended phase space thermodynamics and investigate the critical behavior.

  8. Distinct Mechanisms for Synchronization and Temporal Patterning of Odor-Encoding Neural Assemblies

    NASA Astrophysics Data System (ADS)

    MacLeod, Katrina; Laurent, Gilles

    1996-11-01

    Stimulus-evoked oscillatory synchronization of neural assemblies and temporal patterns of neuronal activity have been observed in many sensory systems, such as the visual and auditory cortices of mammals or the olfactory system of insects. In the locust olfactory system, single odor puffs cause the immediate formation of odor-specific neural assemblies, defined both by their transient synchronized firing and their progressive transformation over the course of a response. The application of an antagonist of ionotropic γ-aminobutyric acid (GABA) receptors to the first olfactory relay neuropil selectively blocked the fast inhibitory synapse between local and projection neurons. This manipulation abolished the synchronization of the odor-coding neural ensembles but did not affect each neuron's temporal response patterns to odors, even when these patterns contained periods of inhibition. Fast GABA-mediated inhibition, therefore, appears to underlie neuronal synchronization but not response tuning in this olfactory system. The selective desynchronization of stimulus-evoked oscillating neural assemblies in vivo is now possible, enabling direct functional tests of their significance for sensation and perception.

  9. Can Atmospheric Reanalysis Data Sets Be Used to Reproduce Flooding Over Large Scales?

    NASA Astrophysics Data System (ADS)

    Andreadis, Konstantinos M.; Schumann, Guy J.-P.; Stampoulis, Dimitrios; Bates, Paul D.; Brakenridge, G. Robert; Kettner, Albert J.

    2017-10-01

    Floods are costly to global economies and can be exceptionally lethal. The ability to produce consistent flood hazard maps over large areas could contribute significantly to reducing such losses, as the lack of knowledge concerning flood risk is a major factor in the transformation of river floods into flood disasters. In order to accurately reproduce flooding in river channels and floodplains, high spatial resolution hydrodynamic models are needed. Despite being computationally expensive, recent advances have made their continental to global implementation feasible, although inputs for long-term simulations may require the use of reanalysis meteorological products, especially in data-poor regions. We employ a coupled hydrologic/hydrodynamic model cascade forced by the 20CRv2 reanalysis data set and evaluate its ability to reproduce flood inundation area and volume for Australia during the 1973-2012 period. Ensemble simulations using the reanalysis data were performed to account for uncertainty in the meteorology and compared with a validated benchmark simulation. Results show that the reanalysis ensemble captures the inundated areas and volumes relatively well, with correlations for the ensemble mean of 0.82 and 0.85 for area and volume, respectively, although the meteorological ensemble spread propagates into large uncertainty in the simulated flood characteristics.
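
    The ensemble-versus-benchmark comparison reduces to familiar skill statistics; a toy sketch with synthetic stand-in data (member count, series length, and noise levels are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in data: a "benchmark" flooded-area series and a 10-member
# ensemble of noisier reanalysis-driven simulations around it.
t = np.arange(480)
benchmark = 10 + 3 * np.sin(2 * np.pi * t / 120) + rng.normal(0, 0.5, t.size)
members = benchmark + rng.normal(0, 2.0, (10, t.size))

# Pearson correlation of the ensemble mean against the benchmark,
# the kind of statistic used to summarize inundation skill.
ens_mean = members.mean(axis=0)
r = np.corrcoef(ens_mean, benchmark)[0, 1]

# Ensemble spread: the meteorological uncertainty propagated into
# the simulated flood characteristics.
spread = members.std(axis=0).mean()
```

    Averaging over members suppresses the member-level noise, which is why the ensemble-mean correlation can be high even when the spread is large.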

  10. Reflections on Freirean Pedagogy in a Jazz Combo Lab

    ERIC Educational Resources Information Center

    Shevock, Daniel J.

    2015-01-01

    Paulo Freire was an important figure in adult education whose pedagogy has been used in music education. In this act of praxis (reflection and action upon the world in order to transform it), I share an autoethnography of my teaching of a university-level small ensemble jazz class. The purpose of this autoethnography was to examine my teaching…

  11. Stable discrete representation of relativistically drifting plasmas

    DOE PAGES

    Kirchen, M.; Lehe, R.; Godfrey, B. B.; ...

    2016-10-10

    Representing the electrodynamics of relativistically drifting particle ensembles in discrete, co-propagating Galilean coordinates enables the derivation of a Particle-In-Cell algorithm that is intrinsically free of the numerical Cherenkov instability for plasmas flowing at a uniform velocity. Application of the method is shown by modeling plasma accelerators in a Lorentz-transformed optimal frame of reference.

  12. Validation Test Report for a Genetic Algorithm in the Glider Observation STrategies (GOST 1.0) Project: Sensitivity Studies

    DTIC Science & Technology

    2012-08-15

    ...Generalized Digital Environmental Model (GDEM, 72 levels) was conserved in the interpolated profiles, and small variations in the vertical field may have led to large... Acronyms: ETKF, Ensemble Transform Kalman Filter; G8NCOM, 1/8° Global NCOM; GA, Genetic Algorithm; GDEM, Generalized Digital Environmental Model; GOST, Glider Observation STrategies.

  14. Displacement data assimilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenthal, W. Steven; Venkataramani, Shankar; Mariano, Arthur J.

    We show that modifying a Bayesian data assimilation scheme by incorporating kinematically-consistent displacement corrections produces a scheme that is demonstrably better at estimating partially observed state vectors in a setting where feature information is important. While the displacement transformation is generic, here we implement it within an ensemble Kalman Filter framework and demonstrate its effectiveness in tracking stochastically perturbed vortices.
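
    For context, a generic stochastic ensemble Kalman filter analysis step is sketched below; the paper's contribution is the kinematically-consistent displacement correction applied to the state before such an amplitude update, which is not reproduced here. All sizes, names, and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(ensemble, y_obs, H, obs_var):
    # Stochastic EnKF analysis step with perturbed observations.
    # ensemble: (n_state, n_ens); H: (n_obs, n_state) linear observation operator.
    n_ens = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)
    # Sample cross- and observation-space covariances, then the Kalman gain.
    Pxy = X @ HXp.T / (n_ens - 1)
    Pyy = HXp @ HXp.T / (n_ens - 1) + obs_var * np.eye(H.shape[0])
    K = Pxy @ np.linalg.inv(Pyy)
    # One perturbed-observation draw per member.
    y_pert = y_obs[:, None] + rng.normal(0, np.sqrt(obs_var), (H.shape[0], n_ens))
    return ensemble + K @ (y_pert - HX)

# Toy example: 4-dimensional state, first two components observed.
H = np.eye(2, 4)
prior = rng.normal(0, 1, (4, 50))
y = np.array([1.0, -1.0])
posterior = enkf_update(prior, y, H, obs_var=0.1)
```

    Applying a displacement (position) correction first, as the paper proposes, keeps coherent features from being smeared by this purely amplitude-based update.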

  15. Detection and Attribution of Temperature Trends in the Presence of Natural Variability

    NASA Astrophysics Data System (ADS)

    Wallace, J. M.

    2014-12-01

    The fingerprint of human-induced global warming stands out clearly above the noise in the time series of global-mean temperature, but not local temperature. At extratropical latitudes over land, the standard error of 50-year linear temperature trends at a fixed point is as large as the cumulative rise in global-mean temperature over the past century. Much of the sampling variability in local temperature trends is "dynamically induced", i.e., attributable to the fact that the seasonally-varying mean circulation varies substantially from one year to the next and anomalous circulation patterns are generally accompanied by anomalous temperature patterns. In the presence of such large sampling variability it is virtually impossible to identify the spatial signature of greenhouse warming based on observational data, or to partition observed local temperature trends into natural and human-induced components. It follows that previous IPCC assessments, which have focused on the deterministic signature of human-induced climate change, are inherently limited as to what they can tell us about the attribution of the past record of local temperature change or about how much the temperature at a particular place is likely to rise in the next few decades in response to global warming. To obtain more informative assessments of regional and local climate variability and change it will be necessary to take a probabilistic approach. Just as the use of ensembles has contributed to more informative extended-range weather predictions, large ensembles of climate model simulations can provide a statistical context for interpreting observed climate change and for framing projections of future climate. For some purposes, statistics relating to the interannual variability in the historical record can serve as a surrogate for statistics relating to the diversity of climate change scenarios in large ensembles.
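
    The sampling-variability argument can be made concrete with a toy calculation: for a 50-year record with roughly 1 K of interannual noise at a land point, the standard error of the least-squares trend is comparable to a forced trend of order 1 K per century (all numbers illustrative, not from the abstract).

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative numbers: a 0.8 K/century forced trend plus 1.0 K of
# interannual circulation noise, sampled over many 50-year realizations.
years = np.arange(50)
true_trend = 0.008                  # K per year
n_realizations = 2000
noise = rng.normal(0, 1.0, (n_realizations, years.size))
series = true_trend * years + noise

# Least-squares trend for each realization (centered predictor).
x = years - years.mean()
trends = series @ x / (x @ x)

# Spread of the 50-year trend estimates around the forced value:
# this sampling standard error rivals the century-scale forced signal.
stderr = trends.std()
```

    With these settings the spread of fitted trends is close to 0.01 K/yr, i.e. about 1 K/century, so an individual 50-year local trend tells us little about the forced component by itself.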

  16. Global ensemble texture representations are critical to rapid scene perception.

    PubMed

    Brady, Timothy F; Shafer-Skelton, Anna; Alvarez, George A

    2017-06-01

    Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: that scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics. In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
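
    One simple way to realize a "global ensemble texture" descriptor, offered as an assumption-laden sketch rather than the authors' stimulus pipeline: histogram the Fourier power of each cell of a coarse spatial grid by orientation, so the feature keeps where in the scene each orientation pattern occurs without representing any objects.

```python
import numpy as np

rng = np.random.default_rng(6)

def ensemble_texture(img, grid=4, nbins=8):
    # For each cell of a grid x grid spatial layout, bin 2D FFT power
    # by orientation angle; concatenating cells preserves the spatial
    # arrangement of orientation energy (illustrative parameters).
    h, w = img.shape
    feats = []
    for gy in range(grid):
        for gx in range(grid):
            cell = img[gy*h//grid:(gy+1)*h//grid, gx*w//grid:(gx+1)*w//grid]
            F = np.fft.fftshift(np.abs(np.fft.fft2(cell - cell.mean())) ** 2)
            cy, cx = (n // 2 for n in F.shape)
            yy, xx = np.mgrid[:F.shape[0], :F.shape[1]]
            theta = np.arctan2(yy - cy, xx - cx) % np.pi
            bins = np.minimum((theta / np.pi * nbins).astype(int), nbins - 1)
            feats.append(np.bincount(bins.ravel(), weights=F.ravel(),
                                     minlength=nbins))
    return np.concatenate(feats)

v = ensemble_texture(rng.standard_normal((64, 64)))
```

    Descriptors of this form discard object identity entirely, which is the sense in which the texture view contrasts with object-based scene recognition.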

  17. Dynamics of interacting Dicke model in a coupled-cavity array

    NASA Astrophysics Data System (ADS)

    Badshah, Fazal; Qamar, Shahid; Paternostro, Mauro

    2014-09-01

    We consider the dynamics of an array of mutually interacting cavities, each containing an ensemble of N two-level atoms. By exploring the possibilities offered by ensembles of various dimensions and a range of atom-light and photon-hopping values, we investigate the generation of multisite entanglement, as well as the performance of excitation transfer across the array, resulting from the competition between on-site nonlinearities of the matter-light interaction and intersite photon hopping. In particular, for a three-cavity interacting system it is observed that the initial excitation in the first cavity completely transfers to the ensemble in the third cavity through the hopping of photons between the adjacent cavities. Probabilities of the transfer of excitation of the cavity modes and ensembles exhibit characteristics of fast and slow oscillations governed by coupling and hopping parameters, respectively. In the large-hopping case, by seeding an initial excitation in the cavity at the center of the array, a tripartite W state, as well as a bipartite maximally entangled state, is obtained, depending on the interaction time. Population of the ensemble in a cavity has a positive impact on the rate of excitation transfer between the ensembles and their local cavity modes. In particular, for ensembles of five to seven atoms, tripartite W states can be produced even when the hopping rate is comparable to the cavity-atom coupling rate. A similar behavior of the transfer of excitation is observed for a four-coupled-cavity system with two initial excitations.

  18. An interplanetary magnetic field ensemble at 1 AU

    NASA Technical Reports Server (NTRS)

    Matthaeus, W. H.; Goldstein, M. L.; King, J. H.

    1985-01-01

    A method for calculating ensemble averages from magnetic field data is described. A data set comprising approximately 16 months of nearly continuous ISEE-3 magnetic field data is used in this study. Individual subintervals of these data, ranging from 15 hours to 15.6 days, comprise the ensemble. The sole condition for including each subinterval in the averages is the degree to which it represents a weakly time-stationary process. Averages obtained by this method are appropriate for a turbulence description of the interplanetary medium. The ensemble average correlation length obtained from all subintervals is found to be 4.9 x 10 to the 11th cm. The average values of the variances of the magnetic field components are in the approximate ratio 8:9:10, where the third component is along the local mean field direction. The correlation lengths and variances are found to have a systematic variation with subinterval duration, reflecting the important role of low-frequency fluctuations in the interplanetary medium.
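
    A sketch of how a correlation length can be estimated from a weakly stationary series, using a synthetic AR(1) stand-in for one magnetic field component (the e-folding scale and series length are illustrative, not the ISEE-3 values): compute the two-point correlation function and sum it out to its first zero crossing.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in series: AR(1) with decorrelation scale ~1/(1-phi) = 50 lags.
n, phi = 20000, 0.98
b = np.empty(n)
b[0] = 0.0
for i in range(1, n):
    b[i] = phi * b[i - 1] + rng.normal()

bp = b - b.mean()
var = bp @ bp / n

# Normalized two-point correlation function R(lag), lag in sample units.
max_lag = 500
R = np.array([bp[:n - k] @ bp[k:] / ((n - k) * var) for k in range(max_lag)])

# Correlation length: integral of R out to its first zero crossing
# (or over the full window if it never crosses zero).
corr_length = R[:np.argmax(R <= 0) or max_lag].sum()
```

    Multiplying a correlation length in lag units by the sampling cadence and the solar wind speed converts it to a physical length, which is how values like 4.9 x 10^11 cm arise.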

  19. Fracture of disordered solids in compression as a critical phenomenon. I. Statistical mechanics formalism.

    PubMed

    Toussaint, Renaud; Pride, Steven R

    2002-09-01

    This is the first of a series of three articles that treats fracture localization as a critical phenomenon. This first article establishes a statistical mechanics based on ensemble averages when fluctuations through time play no role in defining the ensemble. Ensembles are obtained by dividing a huge rock sample into many mesoscopic volumes. Because rocks are a disordered collection of grains in cohesive contact, we expect that once shear strain is applied and cracks begin to arrive in the system, the mesoscopic volumes will have a wide distribution of different crack states. These mesoscopic volumes are the members of our ensembles. We determine the probability of observing a mesoscopic volume to be in a given crack state by maximizing Shannon's measure of the emergent-crack disorder subject to constraints coming from the energy balance of brittle fracture. The laws of thermodynamics, the partition function, and the quantification of temperature are obtained for such cracking systems.

  20. Effect of pellet-cladding interaction (PCI) and degradation mechanisms on spent nuclear fuel rod mechanical performance during transportation

    NASA Astrophysics Data System (ADS)

    Peterson, Brittany Ann

Winter storms can affect millions of people, with impacts such as disruptions to transportation, hazards to human health, reductions in retail sales, and structural damage. Blizzard forecasts for Alberta Clippers can be a particular challenge in the Northern Plains, as these systems typically depart from the Canadian Rockies, intensify, and impact the Northern Plains all within 24 hours. The purpose of this study is to determine whether probabilistic forecasts derived from a local physics-based ensemble can improve specific aspects of winter storm forecasts for three Alberta Clipper cases. Verification is performed on the ensemble members and the ensemble mean, with a focus on quantifying uncertainty in the storm track, two-meter winds, and precipitation using the MERRA and NOHRSC SNODAS datasets. This study finds that additional improvements are needed before operational use of the ensemble blizzard products, but the use of a proxy for blizzard conditions yields promising results.

  1. Ensemble Grouping Strategies for Embedded Stochastic Collocation Methods Applied to Anisotropic Diffusion Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Elia, M.; Edwards, H. C.; Hu, J.

Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162--C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen--Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.

  2. Ensemble Grouping Strategies for Embedded Stochastic Collocation Methods Applied to Anisotropic Diffusion Problems

    DOE PAGES

    D'Elia, M.; Edwards, H. C.; Hu, J.; ...

    2018-01-18

Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162--C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen--Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.
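The grouping idea described above, ordering samples by a cheap scalar surrogate so that the members of each ensemble need similar solver effort, can be sketched as follows (an illustrative toy, not the authors' implementation; the lognormal "anisotropy" metric is a synthetic stand-in):

```python
import numpy as np

def group_by_surrogate(surrogate, ensemble_size):
    """Group sample indices into ensembles of fixed size, ordered by a
    scalar surrogate (here standing in for the total-anisotropy measure),
    so that samples expected to need similar solver effort run together."""
    order = np.argsort(surrogate)
    return [order[i:i + ensemble_size]
            for i in range(0, len(order), ensemble_size)]

rng = np.random.default_rng(1)
anisotropy = rng.lognormal(mean=0.0, sigma=1.0, size=64)  # synthetic per-sample metric
groups = group_by_surrogate(anisotropy, ensemble_size=8)

# Within-group spread of the metric is much smaller than for random grouping.
sorted_spread = np.mean([np.ptp(anisotropy[g]) for g in groups])
random_groups = np.array_split(rng.permutation(64), 8)
random_spread = np.mean([np.ptp(anisotropy[g]) for g in random_groups])
```

A real implementation would propagate each group through the forward solve together; the point of the sketch is only that surrogate-sorted groups are far more homogeneous than arbitrary ones.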

  3. Quantum entanglement at ambient conditions in a macroscopic solid-state spin ensemble

    PubMed Central

    Klimov, Paul V.; Falk, Abram L.; Christle, David J.; Dobrovitski, Viatcheslav V.; Awschalom, David D.

    2015-01-01

Entanglement is a key resource for quantum computers, quantum-communication networks, and high-precision sensors. Macroscopic spin ensembles have been historically important in the development of quantum algorithms for these prospective technologies and remain strong candidates for implementing them today. This strength derives from their long-lived quantum coherence, strong signal, and ability to couple collectively to external degrees of freedom. Nonetheless, preparing ensembles of genuinely entangled spin states has required high magnetic fields and cryogenic temperatures or photochemical reactions. We demonstrate that entanglement can be realized in solid-state spin ensembles at ambient conditions. We use hybrid registers comprising electron-nuclear spin pairs that are localized at color-center defects in a commercial SiC wafer. We optically initialize 10³ identical registers in a 40-μm³ volume (with 0.95 (+0.05/−0.07) fidelity) and deterministically prepare them into maximally entangled Bell states (with 0.88 ± 0.07 fidelity). To verify entanglement, we develop a register-specific quantum-state tomography protocol. The entanglement of a macroscopic solid-state spin ensemble at ambient conditions represents an important step toward practical quantum technology. PMID:26702444

  4. Sensory processing patterns predict the integration of information held in visual working memory.

    PubMed

    Lowe, Matthew X; Stevenson, Ryan A; Wilson, Kristin E; Ouslis, Natasha E; Barense, Morgan D; Cant, Jonathan S; Ferber, Susanne

    2016-02-01

Given the limited resources of visual working memory, multiple items may be remembered as an averaged group or ensemble. As a result, local information may be ill-defined, but these ensemble representations provide accurate diagnostics of the natural world by combining gist information with item-level information held in visual working memory. Some neurodevelopmental disorders are characterized by sensory processing profiles that predispose individuals to avoid or seek out sensory stimulation, fundamentally altering their perceptual experience. Here, we report that such processing styles affect the computation of ensemble statistics in the general population. We identified stable adult sensory processing patterns to demonstrate that individuals with low sensory thresholds, who show a greater proclivity to engage in active response strategies to prevent sensory overstimulation, are less likely to integrate mean size information across a set of similar items and are therefore more likely to be biased away from the mean size representation of an ensemble display. We therefore propose that the study of ensemble processing should extend beyond the statistics of the display to also consider the statistics of the observer.

  5. A coupled-oscillator model of olfactory bulb gamma oscillations

    PubMed Central

    2017-01-01

    The olfactory bulb transforms not only the information content of the primary sensory representation, but also its underlying coding metric. High-variance, slow-timescale primary odor representations are transformed by bulbar circuitry into secondary representations based on principal neuron spike patterns that are tightly regulated in time. This emergent fast timescale for signaling is reflected in gamma-band local field potentials, presumably serving to efficiently integrate olfactory sensory information into the temporally regulated information networks of the central nervous system. To understand this transformation and its integration with interareal coordination mechanisms requires that we understand its fundamental dynamical principles. Using a biophysically explicit, multiscale model of olfactory bulb circuitry, we here demonstrate that an inhibition-coupled intrinsic oscillator framework, pyramidal resonance interneuron network gamma (PRING), best captures the diversity of physiological properties exhibited by the olfactory bulb. Most importantly, these properties include global zero-phase synchronization in the gamma band, the phase-restriction of informative spikes in principal neurons with respect to this common clock, and the robustness of this synchronous oscillatory regime to multiple challenging conditions observed in the biological system. These conditions include substantial heterogeneities in afferent activation levels and excitatory synaptic weights, high levels of uncorrelated background activity among principal neurons, and spike frequencies in both principal neurons and interneurons that are irregular in time and much lower than the gamma frequency. This coupled cellular oscillator architecture permits stable and replicable ensemble responses to diverse sensory stimuli under various external conditions as well as to changes in network parameters arising from learning-dependent synaptic plasticity. PMID:29140973

  6. The FLAME-slab method for electromagnetic wave scattering in aperiodic slabs

    NASA Astrophysics Data System (ADS)

    Mansha, Shampy; Tsukerman, Igor; Chong, Y. D.

    2017-12-01

    The proposed numerical method, "FLAME-slab," solves electromagnetic wave scattering problems for aperiodic slab structures by exploiting short-range regularities in these structures. The computational procedure involves special difference schemes with high accuracy even on coarse grids. These schemes are based on Trefftz approximations, utilizing functions that locally satisfy the governing differential equations, as is done in the Flexible Local Approximation Method (FLAME). Radiation boundary conditions are implemented via Fourier expansions in the air surrounding the slab. When applied to ensembles of slab structures with identical short-range features, such as amorphous or quasicrystalline lattices, the method is significantly more efficient, both in runtime and in memory consumption, than traditional approaches. This efficiency is due to the fact that the Trefftz functions need to be computed only once for the whole ensemble.

  7. An inquiry into the cirrus-cloud thermostat effect for tropical sea surface temperature

    NASA Technical Reports Server (NTRS)

    Lau, K.-M.; Sui, C.-H.; Chou, M.-D.; Tao, W.-K.

    1994-01-01

In this paper, we investigate the relative importance of local vs. remote control on cloud radiative forcing using a cumulus ensemble model. It is found that cloud and surface radiation forcings are much more sensitive to the mean vertical motion associated with the large-scale tropical circulation than to the local SST (sea surface temperature). When the local SST is increased with the mean vertical motion held constant, increased surface latent and sensible heat flux associated with enhanced moisture recycling is found to be the primary mechanism for cooling the ocean surface. Large changes in surface shortwave fluxes are related to changes in cloudiness induced by changes in the large-scale circulation. These results are consistent with a number of earlier empirical studies that raised concerns regarding the validity of the cirrus-thermostat hypothesis (Ramanathan and Collins, 1991). It is argued that for a better understanding of cloud feedback, both local and remote controls need to be considered, and that a cumulus ensemble model is a powerful tool that should be explored for this purpose.

  8. Constraining a Coastal Ocean Model by Surface Observations Using an Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    De Mey, P. J.; Ayoub, N. K.

    2016-02-01

We explore the impact of assimilating sea surface temperature (SST) and sea surface height (SSH) observations in the Bay of Biscay (North-East Atlantic). The study is conducted with the SYMPHONIE coastal circulation model (Marsaleix et al., 2009) on a 3 km × 3 km grid with 43 sigma levels. Ensembles are generated by perturbing the wind forcing to analyze the model error subspace spanned by its response to wind forcing uncertainties. The assimilation method is a 4D Ensemble Kalman Filter algorithm with localization. We use the SDAP code developed in the team (https://sourceforge.net/projects/sequoia-dap/). In a first step before the assimilation of real observations, we set up an ensemble twin-experiment protocol in which a nature run as well as noisy pseudo-observations of SST and SSH are generated from an ensemble member (later discarded from the assimilative ensemble). Our objectives are to assess (1) the adequacy of the choice of error source and perturbation strategy and (2) how effective the surface observational constraint is at constraining the surface and subsurface fields. We first illustrate characteristics of the error subspace generated by the perturbation strategy. We then show that, while the EnKF solves a single seamless problem regardless of the region within our domain, the nature and effectiveness of the data constraint over the shelf differ from those over the abyssal plain.

  9. Considerations for Using Hybrid Ensemble-Variational Data Assimilation in NASA-GMAO's Next Reanalysis System

    NASA Technical Reports Server (NTRS)

    El Akkraoui, Amal; Todling, Ricardo

    2017-01-01

The Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2) is the latest reanalysis produced by GMAO and provides global data spanning the period 1980-present. The atmospheric data assimilation component of MERRA-2 used a 3D-Var scheme, which was operational at the time of its design. Since then, a Hybrid 3D-Var and then a Hybrid 4D-EnVar were implemented, adding an ensemble component to the data assimilation scheme. In this work, we examine the benefits of using hybrid ensemble flow-dependent covariances to represent errors and uncertainties in historical periods, specifically pre- and post-satellite periods as well as periods of active tropical cyclone seasons. Finally, we also explore the use of adaptive localization scales.

  10. WONKA: objective novel complex analysis for ensembles of protein-ligand structures.

    PubMed

    Bradley, A R; Wall, I D; von Delft, F; Green, D V S; Deane, C M; Marsden, B D

    2015-10-01

    WONKA is a tool for the systematic analysis of an ensemble of protein-ligand structures. It makes the identification of conserved and unusual features within such an ensemble straightforward. WONKA uses an intuitive workflow to process structural co-ordinates. Ligand and protein features are summarised and then presented within an interactive web application. WONKA's power in consolidating and summarising large amounts of data is described through the analysis of three bromodomain datasets. Furthermore, and in contrast to many current methods, WONKA relates analysis to individual ligands, from which we find unusual and erroneous binding modes. Finally the use of WONKA as an annotation tool to share observations about structures is demonstrated. WONKA is freely available to download and install locally or can be used online at http://wonka.sgc.ox.ac.uk.

  11. Land-total and Ocean-total Precipitation and Evaporation from a Community Atmosphere Model version 5 Perturbed Parameter Ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Covey, Curt; Lucas, Donald D.; Trenberth, Kevin E.

    2016-03-02

This document presents the large-scale water budget statistics of a perturbed input-parameter ensemble of atmospheric model runs. The model is Version 5.1.02 of the Community Atmosphere Model (CAM). These runs are the "C-Ensemble" described by Qian et al., "Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5" (Journal of Advances in Modeling Earth Systems, 2015). As noted by Qian et al., the simulations are "AMIP type," with temperature and sea ice boundary conditions chosen to match surface observations for the five-year period 2000-2004. There are 1100 ensemble members in addition to one run with default input-parameter values.

  12. Instanton-dyon ensembles reproduce deconfinement and chiral restoration phase transitions

    NASA Astrophysics Data System (ADS)

    Shuryak, Edward

    2018-03-01

A paradigm shift in gauge topology at finite temperatures, from instantons to their constituents, the instanton-dyons, has recently led to studies of their ensembles and to very significant advances. Like instantons, they have fermionic zero modes, and their collectivization at sufficiently high density explains the chiral symmetry breaking transition. Unlike instantons, these objects carry electric and magnetic charges. Simulations of instanton-dyon ensembles have demonstrated that their back reaction on the Polyakov line modifies its potential and generates the deconfinement phase transition. For the Nc = 2 gauge theory the transition is second order; for the QCD-like theory with Nc = 2 and two light quark flavors (Nf = 2), both transitions are weak crossovers happening at about the same condition. Introduction of quark-flavor-dependent periodicity phases (imaginary chemical potentials) leads to drastic changes in both transitions. In particular, in the so-called Z(Nc)-QCD model the deconfinement transition becomes a strong first-order transition, while the chiral condensate does not disappear at all. The talk will also cover more detailed studies of correlations between the dyons, the effective eta' mass, and other screening masses.

  13. Application of a Split-Fiber Probe to Velocity Measurement in the NASA Research Compressor

    NASA Technical Reports Server (NTRS)

    Lepicovsky, Jan

    2003-01-01

A split-fiber probe was used to acquire unsteady data in a research compressor. The probe has two thin films deposited on a quartz cylinder 200 microns in diameter. A split-fiber probe allows simultaneous measurement of velocity magnitude and direction in a plane that is perpendicular to the sensing cylinder, because it has its circumference divided into two independent parts. Local heat transfer considerations indicated that the probe direction characteristic is linear in the range of flow incidence angles of ±35°. Calibration tests confirmed this assumption. Of course, the velocity characteristic is nonlinear, as is typical in thermal anemometry. The probe was used extensively in the NASA Glenn Research Center (GRC) low-speed, multistage axial compressor and worked reliably during a test program of several months' duration. The velocity and direction characteristics of the probe showed only minute changes during the entire test program. An algorithm was developed to decompose the probe signals into velocity magnitude and velocity direction. The averaged unsteady data were compared with data acquired by pneumatic probes. Overall excellent agreement between the averaged data acquired by the split-fiber probe and a pneumatic probe boosts confidence in the reliability of the unsteady content of the split-fiber probe data. To investigate the features of the unsteady data, two methods were used: ensemble averaging and frequency analysis. The velocity distribution in a rotor blade passage was retrieved using the ensemble averaging method. Frequencies of excitation forces that may contribute to high-cycle fatigue problems were identified by applying a fast Fourier transform to the absolute velocity data.
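The ensemble-averaging step used to retrieve the periodic blade-passage velocity pattern can be sketched on a synthetic signal (a minimal illustration under our own assumptions, not the instrument code; the 4-wake pattern and sample counts are invented):

```python
import numpy as np

def ensemble_average(signal, samples_per_rev):
    """Fold a once-per-revolution periodic signal and average across
    revolutions: the deterministic blade-passage pattern survives while
    uncorrelated turbulence averages toward zero."""
    n_rev = len(signal) // samples_per_rev
    folded = signal[:n_rev * samples_per_rev].reshape(n_rev, samples_per_rev)
    return folded.mean(axis=0)

rng = np.random.default_rng(2)
spr, n_rev = 128, 400                      # samples per revolution, revolutions
phase = 2 * np.pi * np.arange(spr) / spr
pattern = 10.0 + 2.0 * np.sin(4 * phase)   # four synthetic "blade wakes" per rev
signal = np.tile(pattern, n_rev) + rng.standard_normal(spr * n_rev)

avg = ensemble_average(signal, spr)        # recovers the blade-passage pattern
```

Averaging over 400 revolutions reduces the turbulence contribution by a factor of 20 (√400), so the recovered profile tracks the underlying pattern closely.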

  14. Learners' Ensemble Based Security Conceptual Model for M-Learning System in Malaysian Higher Learning Institution

    ERIC Educational Resources Information Center

    Mahalingam, Sheila; Abdollah, Faizal Mohd; Sahib, Shahrin

    2014-01-01

M-Learning has the potential to improve efficiency in the education sector and to advance and transform the learning environment in the future. Yet challenges are faced in many areas when introducing and implementing m-learning. The learner-centered attribute of mobile learning implies deployment in untrustworthy learning…

  15. Determination of knock characteristics in spark ignition engines: an approach based on ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Li, Ning; Yang, Jianguo; Zhou, Rui; Liang, Caiping

    2016-04-01

Knock is one of the major constraints on improving the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determining knock characteristics in SI engines. By adding uniformly distributed, finite-amplitude white Gaussian noise, the EEMD preserves signal continuity across scales and thereby alleviates the mode-mixing problem that occurs in classic empirical mode decomposition (EMD). The feasibility of applying the EEMD to detect the knock signatures of a test SI engine via the pressure signal measured in the combustion chamber and the vibration signal measured on the cylinder head is investigated. Experimental results show that the EEMD-based method is able to detect knock signatures from both the pressure signal and the vibration signal, even in the initial stage of knock. Finally, by comparing the application results with those obtained by the short-time Fourier transform (STFT), the Wigner-Ville distribution (WVD), and the discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated.

  16. Interpolation and analyses of EURO-Cordex data for the characterization of local and regional climate change impact

    NASA Astrophysics Data System (ADS)

    Fink, Manfred; Pfannschmidt, Kai; Knevels, Raphel; Fischer, Christian; Brenning, Alexander

    2017-04-01

Decisions on measures for adapting to possible climate impacts are critical at both regional and local levels of authority. Currently, EURO-CORDEX data are only provided at resolutions (0.11 and 0.44 degrees) that are sufficient for climate analysis in larger-scale regions. Therefore, there is a need for more detailed climate information that can assist decision making at the county and town levels. To tackle this challenge, we have developed a tool for the Just Another Modelling System (JAMS; Kralisch et al. 2007) that produces approximately 50 climate-characterizing parameters (e.g. average temperature, ice days, and climatic water balance, among others) for different time intervals. Within the JAMS environment, this tool is combined with the J2000g distributed conceptual hydrological model (Krause and Hanisch 2009) to additionally calculate hydro-meteorological parameters such as actual evapotranspiration, groundwater recharge, and runoff generation. The data were transformed to a higher resolution (250 m) by applying an inverse distance weighting (IDW) interpolation, combined with an altitude-regression approach using digital elevation model data to represent more detailed information about the land surface. We applied this downscaling approach to the federal state of Thuringia, Germany, which is represented by 371206 model units. An ensemble of 10 different EURO-CORDEX models (0.11 degree resolution) over the period 1961 to 2100 and measured data from 1960 to 1990 were analyzed. Climate change impacts were estimated by analyzing the changes between a historical period (1960-1990) and future periods (2020-2050, 2070-2100) within the modeled EURO-CORDEX ensemble members. We also improved our interpolation approach by replacing IDW with kriging; this was especially advantageous for the interpolation of irregularly distributed measurement stations.
The results were used to estimate the effects of climate change for the federal state of Thuringia and to support Thuringian climate-change mitigation and adaptation strategies. Future work will concentrate on bias correction of the ensemble members using the measured data. References: Kralisch, S., Krause, P., Fink, M., Fischer, C., and Flügel, W. (2007): Component based environmental modelling using the JAMS framework, in Proceedings of the MODSIM 2007 International Congress on Modelling and Simulation, edited by D. Kulasiri and L. Oxley, Christchurch, New Zealand. Krause, P., and Hanisch, S. (2009): Simulation and analysis of the impact of projected climate change on the spatially distributed water balance in Thuringia, Germany. Adv. Geosci. 21:33-48. doi:10.5194/adgeo-21-33-2009.
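The IDW-plus-altitude-regression downscaling described above can be sketched as follows (a minimal illustration with synthetic stations; the lapse-rate setup and all names are ours, not from the study): fit a linear regression of the station values on elevation, interpolate the residuals by inverse distance weighting, then add the fitted trend back at the target elevation.

```python
import numpy as np

def idw_elevation(xy_st, z_st, val_st, xy_t, z_t, power=2.0):
    """IDW interpolation combined with a linear elevation regression:
    interpolate the residuals of a value-vs-altitude fit, then add the
    fitted trend back at the target elevations."""
    slope, intercept = np.polyfit(z_st, val_st, 1)   # e.g. a temperature lapse rate
    resid = val_st - (slope * z_st + intercept)
    d = np.linalg.norm(xy_t[:, None, :] - xy_st[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    w /= w.sum(axis=1, keepdims=True)
    return w @ resid + slope * z_t + intercept

# Synthetic stations: temperature decreasing 6.5 K per km of elevation.
rng = np.random.default_rng(3)
xy_st = rng.uniform(0, 100, size=(30, 2))        # station coordinates, km
z_st = rng.uniform(200, 1500, size=30)           # station elevations, m
temp_st = 15.0 - 0.0065 * z_st + rng.normal(0, 0.1, 30)

xy_t = np.array([[50.0, 50.0]])                  # one target cell
z_t = np.array([1000.0])                         # target elevation, m
t_est = idw_elevation(xy_st, z_st, temp_st, xy_t, z_t)
```

Because the lapse rate is removed before interpolation, the target estimate reflects its own elevation (about 8.5 °C at 1000 m here) rather than the elevations of the nearest stations.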

  17. Automatic categorization of anatomical landmark-local appearances based on diffeomorphic demons and spectral clustering for constructing detector ensembles.

    PubMed

    Hanaoka, Shouhei; Masutani, Yoshitaka; Nemoto, Mitsutaka; Nomura, Yukihiro; Yoshikawa, Takeharu; Hayashi, Naoto; Ohtomo, Kuni

    2012-01-01

    A method for categorizing landmark-local appearances extracted from computed tomography (CT) datasets is presented. Anatomical landmarks in the human body inevitably have inter-individual variations that cause difficulty in automatic landmark detection processes. The goal of this study is to categorize subjects (i.e., training datasets) according to local shape variations of such a landmark so that each subgroup has less shape variation and thus the machine learning of each landmark detector is much easier. The similarity between each subject pair is measured based on the non-rigid registration result between them. These similarities are used by the spectral clustering process. After the clustering, all training datasets in each cluster, as well as synthesized intermediate images calculated from all subject-pairs in the cluster, are used to train the corresponding subgroup detector. All of these trained detectors compose a detector ensemble to detect the target landmark. Evaluation with clinical CT datasets showed great improvement in the detection performance.

  18. Decoding complete reach and grasp actions from local primary motor cortex populations.

    PubMed

    Vargas-Irwin, Carlos E; Shakhnarovich, Gregory; Yadollahpour, Payman; Mislow, John M K; Black, Michael J; Donoghue, John P

    2010-07-21

    How the activity of populations of cortical neurons generates coordinated multijoint actions of the arm, wrist, and hand is poorly understood. This study combined multielectrode recording techniques with full arm motion capture to relate neural activity in primary motor cortex (M1) of macaques (Macaca mulatta) to arm, wrist, and hand postures during movement. We find that the firing rate of individual M1 neurons is typically modulated by the kinematics of multiple joints and that small, local ensembles of M1 neurons contain sufficient information to reconstruct 25 measured joint angles (representing an estimated 10 functionally independent degrees of freedom). Beyond showing that the spiking patterns of local M1 ensembles represent a rich set of naturalistic movements involving the entire upper limb, the results also suggest that achieving high-dimensional reach and grasp actions with neuroprosthetic devices may be possible using small intracortical arrays like those already being tested in human pilot clinical trials.

  19. Weak constrained localized ensemble transform Kalman filter for radar data assimilation

    NASA Astrophysics Data System (ADS)

    Janjic, Tijana; Lange, Heiner

    2015-04-01

Convective-scale applications require data assimilation with a numerical model of single-digit horizontal resolution in km and time-evolving error covariances. The ensemble Kalman filter (EnKF) algorithm incorporates both requirements. However, some challenges for convective-scale applications remain unresolved in the EnKF approach. These include the need on the convective scale to estimate fields that are nonnegative (such as rain, graupel, and snow) and to use data sets, such as radar reflectivity or cloud products, that have the same property. What underlies these examples are errors that are non-Gaussian in nature, causing a problem for the EnKF, which uses Gaussian error assumptions to produce estimates from the previous forecast and the incoming data. Since proper estimates of hydrometeors are crucial for prediction on convective scales, the question arises whether the EnKF method can be modified to improve these estimates, and whether there is a way of optimizing the use of radar observations to initialize NWP models, given the importance of this data set for the prediction of convective storms. To deal with non-Gaussian errors, different approaches can be taken in the EnKF framework. For example, variables can be transformed by assuming that the relevant state variables follow an appropriate pre-specified non-Gaussian distribution, such as the lognormal or truncated Gaussian distribution, or, more generally, by carrying out a parameterized change of state variables known as Gaussian anamorphosis. In recent work by Janjic et al. (2014), it was shown on a simple example how conservation of mass can be beneficial for the assimilation of positive variables. The method developed in that paper outperformed the EnKF as well as the EnKF with the lognormal change of variables. As argued in the paper, the reason is that each of these methods preserves mass (EnKF) or positivity (lognormal EnKF), but not both. Only once both positivity and mass were preserved in a new algorithm were good estimates of the fields obtained. The alternative to the strong-constraint formulation of Janjic et al. (2014) is to modify the LETKF algorithm to take physical properties into account only approximately. In this work we include weak constraints in the LETKF algorithm for the estimation of hydrometeors. The benefit for prediction is illustrated in an idealized setup (Lange and Craig, 2013). This setup uses the nonhydrostatic COSMO model with 2 km horizontal resolution and the LETKF as implemented in the KENDA (Km-scale Ensemble Data Assimilation) system of the German Weather Service (Reich et al., 2011). Due to the Gaussian assumptions that underlie the LETKF algorithm, the analyses of water species become negative at some grid points of the COSMO model. These values are currently set to zero in KENDA after the LETKF analysis step. Tests done within this setup show that such a procedure introduces a bias in the analysis ensemble with respect to the truth, which increases in time due to the cycled data assimilation. The benefits of including the constraints in the LETKF are illustrated by the bias values during assimilation and prediction.
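The bias introduced by zeroing negative analysis values can be illustrated in a few lines (a toy demonstration under our own assumptions, not the KENDA code): for a Gaussian analysis ensemble of a nonnegative quantity whose true value is zero, clipping shifts the ensemble mean strictly upward.

```python
import numpy as np

rng = np.random.default_rng(4)
# Analysis ensemble of a hydrometeor mixing ratio whose true value is zero:
# a Gaussian update can place members below zero.
ensemble = rng.normal(loc=0.0, scale=1e-4, size=1000)

clipped = np.clip(ensemble, 0.0, None)   # the "set negatives to zero" step

raw_mean = ensemble.mean()      # unbiased, close to the true value (0)
clipped_mean = clipped.mean()   # positively biased: E[max(X,0)] = sigma/sqrt(2*pi)
```

Under cycling, this one-sided shift is reintroduced at every analysis step, which is why the abstract reports a bias that grows in time.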

  20. HEPEX - achievements and challenges!

    NASA Astrophysics Data System (ADS)

    Pappenberger, Florian; Ramos, Maria-Helena; Thielen, Jutta; Wood, Andy; Wang, Qj; Duan, Qingyun; Collischonn, Walter; Verkade, Jan; Voisin, Nathalie; Wetterhall, Fredrik; Vuillaume, Jean-Francois Emmanuel; Lucatero Villasenor, Diana; Cloke, Hannah L.; Schaake, John; van Andel, Schalk-Jan

    2014-05-01

    HEPEX is an international initiative bringing together hydrologists, meteorologists, researchers and end-users to develop advanced probabilistic hydrological forecast techniques for improved flood, drought and water management. HEPEX was launched in 2004 as an independent, cooperative international scientific activity. During the first meeting, the overarching goal was defined as: "to develop and test procedures to produce reliable hydrological ensemble forecasts, and to demonstrate their utility in decision making related to the water, environmental and emergency management sectors." The applications of hydrological ensemble predictions span large spatio-temporal scales, ranging from short-term and localized predictions to global climate change and regional modeling. Within the HEPEX community, information is shared through its blog (www.hepex.org), meetings, testbeds and intercomparison experiments, as well as project reports. Key questions of HEPEX are: * What adaptations are required for meteorological ensemble systems to be coupled with hydrological ensemble systems? * How should existing hydrological ensemble prediction systems be modified to account for all sources of uncertainty within a forecast? * What is the best way for the user community to take advantage of ensemble forecasts and to make better decisions based on them? This year HEPEX celebrates its 10th anniversary, and this poster presents a review of the main operational and research achievements and challenges, prepared by HEPEX contributors, on data assimilation, post-processing of hydrologic predictions, forecast verification, and the communication and use of probabilistic forecasts in decision-making. Additionally, we present the most recent activities implemented by HEPEX and illustrate how everyone can join the community and participate in the development of new approaches in hydrologic ensemble prediction.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ullrich, C. A.; Kohn, W.

    An electron density distribution n(r) which can be represented by that of a single-determinant ground state of noninteracting electrons in an external potential v(r) is called pure-state v -representable (P-VR). Most physical electronic systems are P-VR. Systems which require a weighted sum of several such determinants to represent their density are called ensemble v -representable (E-VR). This paper develops formal Kohn-Sham equations for E-VR physical systems, using the appropriate coupling constant integration. It also derives local density- and generalized gradient approximations, and conditions and corrections specific to ensembles.

  2. Radar Polarimetry: Theory, Analysis, and Applications

    NASA Astrophysics Data System (ADS)

    Hubbert, John Clark

    The fields of radar polarimetry and optical polarimetry are compared. The mathematics of optical polarimetry is formulated such that a local right-handed coordinate system is always used to describe the polarization states. This is not done in radar polarimetry. Radar optimum polarization theory is redeveloped within the framework of optical polarimetry. The radar optimum polarizations and optic eigenvalues of common scatterers are compared. In addition, a novel definition of an eigenpolarization state is given and the accompanying mathematics is developed. The polarization response calculated using the optic, radar and novel definitions is presented for a variety of scatterers. Polarimetric transformation provides a means to characterize scatterers in more than one polarization basis. Polarimetric transformation for an ensemble of scatterers is obtained via two methods: (1) the covariance method and (2) the instantaneous scattering matrix (ISM) method. The covariance method is used to relate the mean radar parameters of a +/-45^circ linear polarization basis to those of a horizontal and vertical polarization basis. In contrast, the ISM method transforms the individual time samples. Algorithms are developed for transforming the time series from fully polarimetric radars that switch between orthogonal states. The transformed time series are then used to calculate the mean radar parameters of interest. It is also shown that propagation effects do not need to be removed from the ISMs before transformation. The techniques are demonstrated using data collected by POLDIRAD, the German Aerospace Research Establishment's fully polarimetric C-band radar. The differential phase observed between two copolar states, Psi_{CO}, is composed of two phases: (1) differential propagation phase, phi_{DP}, and (2) differential backscatter phase, delta. The slope of phi_{DP} with range is an estimate of the specific differential phase, K_{DP}. 
The process of estimating K_{DP} is complicated when delta is present. Algorithms are presented for estimating delta and K_{DP} from range profiles of Psi_{CO}. Also discussed are procedures for the estimation and interpretation of other radar measurables such as reflectivity, Z_{HH}, differential reflectivity, Z_{DR}, the magnitude of the copolar correlation coefficient, rho_{HV}(0), and Doppler spectrum width, sigma_{v}. The techniques are again illustrated with data collected by POLDIRAD.
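    The slope-based K_{DP} estimate lends itself to a short sketch. The factor of one half reflects the usual two-way propagation convention for phi_{DP}; the window length and the synthetic profile below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def kdp_from_phidp(r_km, phidp_deg, win=9):
    """Estimate K_DP (deg/km) as half the least-squares slope of a
    range profile of differential propagation phase, phi_DP, inside
    a sliding window that smooths the noisy phase."""
    half = win // 2
    kdp = np.full_like(phidp_deg, np.nan, dtype=float)
    for i in range(half, len(r_km) - half):
        sl = slice(i - half, i + half + 1)
        slope = np.polyfit(r_km[sl], phidp_deg[sl], 1)[0]  # deg/km, two-way
        kdp[i] = 0.5 * slope                                # one-way K_DP
    return kdp

r = np.linspace(0.0, 30.0, 121)      # range gates every 0.25 km
phidp = 2.0 * r                      # synthetic profile: 2 deg/km slope
kdp = kdp_from_phidp(r, phidp)       # interior gates -> 1.0 deg/km
```

A real estimator must additionally separate delta from phi_{DP}, which is exactly the complication the dissertation addresses.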

  3. Recognition of medication information from discharge summaries using ensembles of classifiers.

    PubMed

    Doan, Son; Collier, Nigel; Xu, Hua; Pham, Hoang Duy; Tu, Minh Phuong

    2012-05-07

    Extraction of clinical information such as medications or problems from clinical text is an important task of clinical natural language processing (NLP). Rule-based methods are often used in clinical NLP systems because they are easy to adapt and customize. Recently, supervised machine learning methods have proven to be effective in clinical NLP as well. However, combining different classifiers to further improve the performance of clinical entity recognition systems has not been investigated extensively. Combining classifiers into an ensemble classifier presents both challenges and opportunities to improve performance in such NLP tasks. We investigated ensemble classifiers that used different voting strategies to combine outputs from three individual classifiers: a rule-based system, a support vector machine (SVM) based system, and a conditional random field (CRF) based system. Three voting methods were proposed and evaluated using the annotated data sets from the 2009 i2b2 NLP challenge: simple majority, local SVM-based voting, and local CRF-based voting. Evaluation on 268 manually annotated discharge summaries from the i2b2 challenge showed that the local CRF-based voting method achieved the best F-score of 90.84% (94.11% Precision, 87.81% Recall) for 10-fold cross-validation. We then compared our systems with the first-ranked system in the challenge by using the same training and test sets. Our system based on majority voting achieved a better F-score of 89.65% (93.91% Precision, 85.76% Recall) than the previously reported F-score of 89.19% (93.78% Precision, 85.03% Recall) by the first-ranked system in the challenge. Our experimental results using the 2009 i2b2 challenge datasets showed that ensemble classifiers that combine individual classifiers into a voting system could achieve better performance than a single classifier in recognizing medication information from clinical text. 
This suggests that simple, easily implemented strategies such as majority voting have the potential to significantly improve clinical entity recognition.
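    The simple majority-voting scheme is straightforward to sketch. The BIO-style labels below are illustrative rather than the i2b2 annotation scheme, and the tie-break rule (falling back to the first classifier, here the rule-based system) is an assumption.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-token labels from several classifiers by simple
    majority vote; ties fall back to the first classifier's label."""
    merged = []
    for labels in zip(*predictions):
        top, n = Counter(labels).most_common(1)[0]
        merged.append(top if n > 1 else labels[0])
    return merged

# One label sequence per classifier for the same four tokens.
rule = ["B-MED", "I-MED", "O", "O"]
svm  = ["B-MED", "O",     "O", "B-MED"]
crf  = ["B-MED", "I-MED", "B-MED", "O"]
print(majority_vote([rule, svm, crf]))  # ['B-MED', 'I-MED', 'O', 'O']
```

The local SVM- and CRF-based voting variants described in the abstract replace this fixed rule with a learned arbiter, at the cost of extra training.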

  4. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistent with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web-based platform for remote processing of medical images. PMID:26679328
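    The locally optimal atlas selection can be sketched as a voxel-wise, similarity-weighted vote. This is a toy stand-in for MUSE's local ranking: the boundary-modulation term and the registration ensemble are omitted, and the data and `k` below are illustrative assumptions.

```python
import numpy as np

def fuse_labels(atlas_labels, atlas_similarity, k=2):
    """Voxel-wise label fusion: at each voxel keep the k atlases with
    the highest local similarity to the target, then take a
    similarity-weighted vote among their labels."""
    n_atlas, n_vox = atlas_labels.shape
    fused = np.empty(n_vox, dtype=atlas_labels.dtype)
    for v in range(n_vox):
        top = np.argsort(atlas_similarity[:, v])[-k:]   # locally best atlases
        votes = {}
        for a in top:
            lab = atlas_labels[a, v]
            votes[lab] = votes.get(lab, 0.0) + atlas_similarity[a, v]
        fused[v] = max(votes, key=votes.get)
    return fused

labels = np.array([[1, 1, 2],
                   [1, 2, 2],
                   [2, 2, 2],
                   [1, 1, 1]])            # 4 warped atlases, 3 voxels
sim = np.array([[0.9, 0.1, 0.5],
                [0.8, 0.9, 0.5],
                [0.1, 0.8, 0.9],
                [0.2, 0.7, 0.1]])         # local similarity to the target
fused = fuse_labels(labels, sim)
```

Because the ranking is recomputed per voxel, a given atlas can dominate in one region and be ignored in another, which is the core idea behind locally optimal selection.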

  5. Comparison of surface freshwater fluxes from different climate forecasts produced through different ensemble generation schemes.

    NASA Astrophysics Data System (ADS)

    Romanova, Vanya; Hense, Andreas; Wahl, Sabrina; Brune, Sebastian; Baehr, Johanna

    2016-04-01

    The decadal variability and predictability of surface net freshwater fluxes are compared in a set of retrospective predictions, all using the same model setup and differing only in the implemented ocean initialisation and ensemble generation method. The basic aim is to deduce the differences between the initialization/ensemble generation methods in view of the uncertainty of the verifying observational data sets. The analysis gives an approximation of the uncertainties of the net freshwater fluxes, which up to now appear to be one of the most uncertain products in observational data and model output. All ensemble generation methods are implemented in the MPI-ESM earth system model in the framework of the ongoing MiKlip project (www.fona-miklip.de). Hindcast experiments are initialised annually between 2000 and 2004, and from each start year 10 ensemble members are initialized and run for 5 years each. Four different ensemble generation methods are compared: (i) a method based on the Anomaly Transform (Romanova and Hense, 2015), in which the initial oceanic perturbations represent orthogonal and balanced anomaly structures, in space, time, and between variables, taken from a control run; (ii) one-day-lagged ocean states from the MPI-ESM-LR baseline system; (iii) one-day-lagged ocean and atmospheric states with preceding full-field nudging to re-analysis in both the atmospheric and oceanic components, i.e. the baseline MPI-ESM-LR system; (iv) an Ensemble Kalman Filter (EnKF) implemented in the oceanic part of MPI-ESM (Brune et al. 2015), assimilating monthly subsurface oceanic temperature and salinity (EN3) using the Parallel Data Assimilation Framework (PDAF). The hindcasts are evaluated probabilistically using freshwater flux data from four reanalysis data sets: MERRA, NCEP-R1, the GFDL ocean reanalysis, and GECCO2. The assessments show no clear differences in the evaluation scores on regional scales. 
However, on the global scale the physically motivated methods (i) and (iv) provide probabilistic hindcasts with a consistently higher reliability than the lagged initialization methods (ii)/(iii) despite the large uncertainties in the verifying observations and in the simulations.

  6. Significance of model credibility in estimating climate projection distributions for regional hydroclimatological risk assessments

    USGS Publications Warehouse

    Brekke, L.D.; Dettinger, M.D.; Maurer, E.P.; Anderson, M.

    2008-01-01

    Ensembles of historical climate simulations and climate projections from the World Climate Research Programme's (WCRP's) Coupled Model Intercomparison Project phase 3 (CMIP3) multi-model dataset were investigated to determine how model credibility affects apparent relative scenario likelihoods in regional risk assessments. Methods were developed and applied in a Northern California case study. An ensemble of 59 twentieth-century climate simulations from 17 WCRP CMIP3 models was analyzed to evaluate relative model credibility associated with a 75-member projection ensemble from the same 17 models. Credibility was assessed based on how realistically models reproduced selected statistics of historical climate relevant to California climatology. Metrics of this credibility were used to derive relative model weights, leading to weight-threshold culling of models contributing to the projection ensemble. Density functions were then estimated for two projected quantities (temperature and precipitation), with and without considering credibility-based ensemble reductions. An analysis for Northern California showed that, while some models seem more capable at recreating limited aspects of twentieth-century climate, the overall tendency is toward comparable model performance when several credibility measures are combined. Use of these metrics to decide which models to include in density function development led to local adjustments to function shapes, but had limited effect on breadth and central tendency, which were found to be more influenced by 'completeness' of the original ensemble in terms of models and emissions pathways. © 2007 Springer Science+Business Media B.V.
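    The credibility-weighting and weight-threshold culling step can be sketched as follows. The inverse-error scoring and the threshold value are illustrative choices, not the paper's exact metrics.

```python
import numpy as np

def credibility_weights(errors, threshold=0.5):
    """Turn each model's historical-climate error into a relative
    weight (inverse-error, normalized to sum to one), then cull models
    whose weight falls below `threshold` times the mean weight."""
    w = 1.0 / np.asarray(errors, dtype=float)
    w /= w.sum()
    keep = w >= threshold * w.mean()
    w_culled = np.where(keep, w, 0.0)
    return w_culled / w_culled.sum(), keep

# Hypothetical RMSE of four models against historical observations:
errors = [0.8, 1.0, 1.2, 5.0]
weights, kept = credibility_weights(errors)   # the outlier model is culled
```

Projections from the surviving models would then enter the density-function estimation with these weights, mirroring the with/without-culling comparison in the study.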

  7. Robustness of the far-field response of nonlocal plasmonic ensembles.

    PubMed

    Tserkezis, Christos; Maack, Johan R; Liu, Zhaowei; Wubs, Martijn; Mortensen, N Asger

    2016-06-22

    Contrary to classical predictions, the optical response of few-nm plasmonic particles depends on particle size due to effects such as nonlocality and electron spill-out. Ensembles of such nanoparticles are therefore expected to exhibit a nonclassical inhomogeneous spectral broadening due to their size distribution. For a normal size distribution of free-electron nanoparticles, and within the simple nonlocal hydrodynamic Drude model, both the nonlocal blueshift and the plasmon linewidth are shown to be considerably affected by ensemble averaging. Size-variance effects, however, tend to conceal nonlocality to a lesser extent when the homogeneous size-dependent broadening of individual nanoparticles is taken into account, either through a local size-dependent damping model or through the Generalized Nonlocal Optical Response theory. The role of ensemble averaging is further explored in realistic distributions of isolated or weakly interacting noble-metal nanoparticles, as encountered in experiments, and an analytical expression is developed to evaluate the importance of inhomogeneous broadening through measurable quantities. Our findings are independent of the specific nonclassical theory used, thus providing important insight into a large range of experiments on nanoscale and quantum plasmonics.
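    The ensemble-averaging effect can be illustrated with a minimal sketch, assuming a schematic 1/R blueshift and Lorentzian lineshapes in arbitrary units; this is a stand-in for, not an implementation of, the hydrodynamic Drude model.

```python
import numpy as np

def ensemble_spectrum(omega, radii, weights, gamma=0.05):
    """Ensemble-averaged response: each particle contributes a
    Lorentzian whose resonance blueshifts as 1/R (schematic stand-in
    for the nonlocal shift), weighted by the size distribution.
    All parameters are illustrative and in arbitrary units."""
    omega0 = 1.0 + 0.05 / radii                          # size-dependent resonance
    lines = gamma**2 / ((omega[:, None] - omega0[None, :])**2 + gamma**2)
    return lines @ weights

omega = np.linspace(0.8, 1.4, 601)
radii = np.linspace(2.0, 6.0, 41)                        # nm
w = np.exp(-0.5 * ((radii - 4.0) / 0.8)**2)              # normal size distribution
w /= w.sum()
spec_ensemble = ensemble_spectrum(omega, radii, w)       # inhomogeneously broadened
spec_single = ensemble_spectrum(omega, np.array([4.0]), np.array([1.0]))
```

Averaging over the size distribution lowers and widens the peak relative to a single mean-size particle, which is the inhomogeneous broadening the abstract discusses.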

  8. Peculiar spectral statistics of ensembles of trees and star-like graphs

    NASA Astrophysics Data System (ADS)

    Kovaleva, V.; Maximov, Yu; Nechaev, S.; Valba, O.

    2017-07-01

    In this paper we investigate the eigenvalue statistics of exponentially weighted ensembles of full binary trees and p-branching star graphs. We show that spectral densities of the corresponding adjacency matrices demonstrate a peculiar ultrametric structure inherent to sparse systems. In particular, the tails of the distribution for binary trees share the ‘Lifshitz singularity’ emerging in one-dimensional localization, while the spectral statistics of p-branching star-like graphs is less universal, being strongly dependent on p. The hierarchical structure of spectra of adjacency matrices is interpreted as sets of resonance frequencies that emerge in ensembles of fully branched tree-like systems, known as dendrimers. However, the relaxational spectrum is not determined by the cluster topology, but has rather a number-theoretic origin, reflecting the peculiarities of the rare-event statistics typical for one-dimensional systems with a quenched structural disorder. The similarity of the spectral densities of an individual dendrimer and of an ensemble of linear chains with exponentially distributed lengths demonstrates that dendrimers can serve as simple disorder-less toy models of one-dimensional systems with quenched disorder.

  9. Impact of a regional drought on terrestrial carbon fluxes and atmospheric carbon: results from a coupled carbon cycle model

    NASA Astrophysics Data System (ADS)

    Lee, E.; Koster, R. D.; Ott, L. E.; Weir, B.; Mahanama, S. P. P.; Chang, Y.; Zeng, F.

    2017-12-01

    Understanding the underlying processes that control the carbon cycle is key to predicting future global change. Much of the uncertainty in the magnitude and variability of the atmospheric carbon dioxide (CO2) stems from uncertainty in terrestrial carbon fluxes. Budget-based analyses show that such fluxes exhibit substantial interannual variability, but the relative impacts of temperature and moisture variations on regional and global scales are poorly understood. Here we investigate the impact of a regional drought on terrestrial carbon fluxes and CO2 mixing ratios over North America using the NASA Goddard Earth Observing System (GEOS) Model. Two 48-member ensembles of NASA GEOS-5 simulations with fully coupled land and atmosphere carbon components are performed - a control ensemble and an ensemble with an artificially imposed dry land surface anomaly for three months (April-June) over the lower Mississippi River Valley. Comparison of the results using the ensemble approach allows a direct quantification of the impact of the regional drought on local and proximate carbon exchange at the land surface via the carbon-water feedback processes.

  10. Peculiar spectral statistics of ensembles of trees and star-like graphs

    DOE PAGES

    Kovaleva, V.; Maximov, Yu; Nechaev, S.; ...

    2017-07-11

    In this paper we investigate the eigenvalue statistics of exponentially weighted ensembles of full binary trees and p-branching star graphs. We show that spectral densities of the corresponding adjacency matrices demonstrate a peculiar ultrametric structure inherent to sparse systems. In particular, the tails of the distribution for binary trees share the 'Lifshitz singularity' emerging in one-dimensional localization, while the spectral statistics of p-branching star-like graphs is less universal, being strongly dependent on p. The hierarchical structure of spectra of adjacency matrices is interpreted as sets of resonance frequencies that emerge in ensembles of fully branched tree-like systems, known as dendrimers. However, the relaxational spectrum is not determined by the cluster topology, but has rather a number-theoretic origin, reflecting the peculiarities of the rare-event statistics typical for one-dimensional systems with a quenched structural disorder. The similarity of the spectral densities of an individual dendrimer and of an ensemble of linear chains with exponentially distributed lengths demonstrates that dendrimers can serve as simple disorder-less toy models of one-dimensional systems with quenched disorder.

  11. Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction

    NASA Astrophysics Data System (ADS)

    Mons, Vincent; Wang, Qi; Zaki, Tamer

    2017-11-01

    Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging for several reasons. Firstly, the numerical estimation of scalar dispersion in a turbulent flow requires significant computational resources. Secondly, in actual practice, only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180. This approach combines components of variational data assimilation and ensemble Kalman filtering, inheriting robustness from the former and ease of implementation from the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the conditioning of the inverse problem, which enhances the performance of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).

  12. Peculiar spectral statistics of ensembles of trees and star-like graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovaleva, V.; Maximov, Yu; Nechaev, S.

    In this paper we investigate the eigenvalue statistics of exponentially weighted ensembles of full binary trees and p-branching star graphs. We show that spectral densities of the corresponding adjacency matrices demonstrate a peculiar ultrametric structure inherent to sparse systems. In particular, the tails of the distribution for binary trees share the 'Lifshitz singularity' emerging in one-dimensional localization, while the spectral statistics of p-branching star-like graphs is less universal, being strongly dependent on p. The hierarchical structure of spectra of adjacency matrices is interpreted as sets of resonance frequencies that emerge in ensembles of fully branched tree-like systems, known as dendrimers. However, the relaxational spectrum is not determined by the cluster topology, but has rather a number-theoretic origin, reflecting the peculiarities of the rare-event statistics typical for one-dimensional systems with a quenched structural disorder. The similarity of the spectral densities of an individual dendrimer and of an ensemble of linear chains with exponentially distributed lengths demonstrates that dendrimers can serve as simple disorder-less toy models of one-dimensional systems with quenched disorder.

  13. Local-scale changes in mean and heavy precipitation in Western Europe, climate change or internal variability?

    NASA Astrophysics Data System (ADS)

    Aalbers, Emma E.; Lenderink, Geert; van Meijgaard, Erik; van den Hurk, Bart J. J. M.

    2018-06-01

    High-resolution climate information provided by e.g. regional climate models (RCMs) is valuable for exploring the changing weather under global warming, and assessing the local impact of climate change. While there is generally more confidence in the representativeness of simulated processes at higher resolutions, internal variability of the climate system—`noise', intrinsic to the chaotic nature of atmospheric and oceanic processes—is larger at smaller spatial scales as well, limiting the predictability of the climate signal. To quantify the internal variability and robustly estimate the climate signal, large initial-condition ensembles of climate simulations conducted with a single model provide essential information. We analyze a regional downscaling of a 16-member initial-condition ensemble over western Europe and the Alps at 0.11° resolution, similar to the highest resolution EURO-CORDEX simulations. We examine the strength of the forced climate response (signal) in mean and extreme daily precipitation with respect to noise due to internal variability, and find robust small-scale geographical features in the forced response, indicating regional differences in changes in the probability of events. However, individual ensemble members provide only limited information on the forced climate response, even for high levels of global warming. Although the results are based on a single RCM-GCM chain, we believe that they have general value in providing insight in the fraction of the uncertainty in high-resolution climate information that is irreducible, and can assist in the correct interpretation of fine-scale information in multi-model ensembles in terms of a forced response and noise due to internal variability.

  14. Local-scale changes in mean and heavy precipitation in Western Europe, climate change or internal variability?

    NASA Astrophysics Data System (ADS)

    Aalbers, Emma E.; Lenderink, Geert; van Meijgaard, Erik; van den Hurk, Bart J. J. M.

    2017-09-01

    High-resolution climate information provided by e.g. regional climate models (RCMs) is valuable for exploring the changing weather under global warming, and assessing the local impact of climate change. While there is generally more confidence in the representativeness of simulated processes at higher resolutions, internal variability of the climate system—`noise', intrinsic to the chaotic nature of atmospheric and oceanic processes—is larger at smaller spatial scales as well, limiting the predictability of the climate signal. To quantify the internal variability and robustly estimate the climate signal, large initial-condition ensembles of climate simulations conducted with a single model provide essential information. We analyze a regional downscaling of a 16-member initial-condition ensemble over western Europe and the Alps at 0.11° resolution, similar to the highest resolution EURO-CORDEX simulations. We examine the strength of the forced climate response (signal) in mean and extreme daily precipitation with respect to noise due to internal variability, and find robust small-scale geographical features in the forced response, indicating regional differences in changes in the probability of events. However, individual ensemble members provide only limited information on the forced climate response, even for high levels of global warming. Although the results are based on a single RCM-GCM chain, we believe that they have general value in providing insight in the fraction of the uncertainty in high-resolution climate information that is irreducible, and can assist in the correct interpretation of fine-scale information in multi-model ensembles in terms of a forced response and noise due to internal variability.

  15. How uncertain are climate model projections of water availability indicators across the Middle East?

    PubMed

    Hemming, Debbie; Buontempo, Carlo; Burke, Eleanor; Collins, Mat; Kaye, Neil

    2010-11-28

    The projection of robust regional climate changes over the next 50 years presents a considerable challenge for the current generation of climate models. Water cycle changes are particularly difficult to model in this area because major uncertainties exist in the representation of processes such as large-scale and convective rainfall and their feedback with surface conditions. We present climate model projections and uncertainties in water availability indicators (precipitation, run-off and drought index) for the 1961-1990 and 2021-2050 periods. Ensembles from two global climate models (GCMs) and one regional climate model (RCM) are used to examine different elements of uncertainty. Although all three ensembles capture the general distribution of observed annual precipitation across the Middle East, the RCM is consistently wetter than observations, especially over the mountainous areas. All future projections show decreasing precipitation (ensemble median between -5 and -25%) in coastal Turkey and parts of Lebanon, Syria and Israel, and consistent run-off and drought index changes. The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) GCM ensemble exhibits drying across the north of the region, whereas the Met Office Hadley Centre Quantifying Uncertainties in Model Projections-Atmospheric (QUMP-A) GCM and RCM ensembles show slight drying in the north and significant wetting in the south. RCM projections also show greater sensitivity (both wetter and drier) and a wider uncertainty range than QUMP-A. The nature of these uncertainties suggests that both large-scale circulation patterns, which influence region-wide drying/wetting patterns, and regional-scale processes, which affect localized water availability, are important sources of uncertainty in these projections. 
To reduce large uncertainties in water availability projections, it is suggested that efforts would be well placed to focus on the understanding and modelling of both large-scale processes and their teleconnections with Middle East climate and localized processes involved in orographic precipitation.

  16. Simulation of an ensemble of future climate time series with an hourly weather generator

    NASA Astrophysics Data System (ADS)

    Caporali, E.; Fatichi, S.; Ivanov, V. Y.; Kim, J.

    2010-12-01

    There is evidence that climate change is occurring in many regions of the world. Climate change predictions at the local scale and at fine temporal resolution are thus needed for hydrological, ecological, geomorphological, and agricultural applications, where they can provide thematic insights into the corresponding impacts. Numerous downscaling techniques have been proposed to bridge the gap between the spatial scales adopted in General Circulation Models (GCMs) and regional analyses. Nevertheless, the resulting temporal and spatial resolutions, as well as the types of meteorological variables obtained, may not be sufficient for detailed studies of climate change effects at the local scale. In this context, this study presents a stochastic downscaling technique that makes use of an hourly weather generator to simulate time series of predicted future climate. Using a Bayesian approach, the downscaling procedure derives distributions of factors of change for several climate statistics from a multi-model ensemble of GCMs. Factors of change are sampled from their distributions using a Monte Carlo technique to fully account for the probabilistic information obtained with the Bayesian multi-model ensemble. The factors of change are subsequently applied to the statistics derived from observations to re-evaluate the parameters of the weather generator. The weather generator can reproduce a wide set of climate variables and statistics over a range of temporal scales, from extremes to low-frequency inter-annual variability. The final result of such a procedure is the generation of an ensemble of hourly time series of meteorological variables that can be considered representative of future climate, as inferred from GCMs. The generated ensemble of scenarios also accounts for the uncertainty derived from the multiple GCMs used in downscaling. 
Applications of the procedure in reproducing present and future climates are presented for different locations worldwide: Tucson (AZ), Detroit (MI), and Firenze (Italy). The stochastic downscaling is carried out with eight GCMs from the CMIP3 multi-model dataset (IPCC AR4, A1B scenario).
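    The Monte Carlo step, sampling factors of change and applying them to observed statistics, can be sketched as follows. The normal form of the factor-of-change distributions and all numbers are illustrative assumptions; the study derives these distributions with a Bayesian multi-model approach.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_future_stats(obs_stats, foc_mean, foc_std, n=1000):
    """Monte Carlo step of stochastic downscaling: draw factors of
    change from their (here assumed normal) distributions and apply
    them multiplicatively to observed climate statistics, yielding an
    ensemble of perturbed statistics for the weather generator."""
    foc = rng.normal(foc_mean, foc_std, size=(n, len(obs_stats)))
    return np.asarray(obs_stats) * foc

# Hypothetical observed statistics: mean temperature (degC), annual precip (mm).
obs = [15.0, 800.0]
future = sample_future_stats(obs, foc_mean=[1.10, 0.95], foc_std=[0.02, 0.05])
```

Each row of `future` would re-parameterize one realization of the hourly weather generator, so the final ensemble of time series carries the GCM-derived uncertainty forward.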

  17. Dilatancy of Shear Transformations in a Colloidal Glass

    NASA Astrophysics Data System (ADS)

    Lu, Y. Z.; Jiang, M. Q.; Lu, X.; Qin, Z. X.; Huang, Y. J.; Shen, J.

    2018-01-01

    Shear transformations, as fundamental rearrangement events operating in local regions, hold the key to the plastic flow of amorphous solids. Despite their importance, the dynamic features of shear transformations are far from clear, and they are the focus of the present study. Here, we use a colloidal glass under shear as a prototype to directly observe shear-transformation events in real space. By tracing the colloidal-particle rearrangements, we quantitatively determine two basic properties of shear transformations: local shear strain and dilatation (or free volume). It is revealed that the local free volume undergoes a significant temporary increase prior to shear transformations, eventually leading to a jump of local shear strain. We clearly demonstrate that shear transformations have no memory of the initial free volume of local regions. Instead, their emergence strongly depends on the dilatancy ability of these local regions, i.e., the dynamic creation of free volume. More specifically, the particles possessing high dilatancy ability directly participate in subsequent shear transformations. These results experimentally enrich Argon's statement about the dilatancy nature of shear transformations and also shed light on the structural origin of amorphous plasticity.

  18. Ensemble predictive model for more accurate soil organic carbon spectroscopic estimation

    NASA Astrophysics Data System (ADS)

    Vašát, Radim; Kodešová, Radka; Borůvka, Luboš

    2017-07-01

    A myriad of signal pre-processing strategies and multivariate calibration techniques have been explored over the last few decades in an attempt to improve the spectroscopic prediction of soil organic carbon (SOC). Coming up with a novel, more powerful, and more accurate predictive approach that outperforms the existing ones has therefore become a challenging task. One possible way forward is to combine several individual predictions into a single final one, following ensemble learning theory. As this approach performs best when it combines inherently different predictive algorithms calibrated with structurally different predictor variables, we tested predictors of two different kinds: 1) reflectance values (or transforms thereof) at each wavelength and 2) absorption feature parameters. Accordingly, we applied four different calibration techniques, two per type of predictor: a) partial least squares regression and support vector machines for type 1, and b) multiple linear regression and random forest for type 2. The weights assigned to the individual predictions within the ensemble model (constructed as a weighted average) were determined by an automated procedure that ensured the best solution among all possible ones was selected. The approach was tested on soil samples taken from the surface horizon at four sites differing in the prevailing soil units. Employing the ensemble predictive model improved the prediction accuracy of SOC at all four sites. The coefficient of determination in cross-validation (R2cv) increased from 0.849, 0.611, 0.811 and 0.644 (the best individual predictions) to 0.864, 0.650, 0.824 and 0.698 for Sites 1, 2, 3 and 4, respectively. In general, the ensemble model reduced the maximal deviations of predicted vs. observed values relative to the individual predictions, so the correlation cloud became thinner, as desired.
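    The weighted-average combination can be illustrated with a small grid search over weights. The data below are invented, and the helper names (`rmse`, `best_weighted_average`) are ours, not the paper's; the two prediction vectors merely stand in for structurally different models.

```python
import itertools
import math

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def best_weighted_average(predictions, obs, step=0.1):
    """Exhaustive search over weight vectors (non-negative, summing to 1,
    on a grid of the given step) for the weighted-average ensemble with
    the lowest RMSE. Returns (best_rmse, best_weights)."""
    n_steps = round(1 / step)
    best = None
    for combo in itertools.product(range(n_steps + 1), repeat=len(predictions)):
        if sum(combo) != n_steps:
            continue
        w = [c * step for c in combo]
        ens = [sum(wi * p[i] for wi, p in zip(w, predictions))
               for i in range(len(obs))]
        err = rmse(ens, obs)
        if best is None or err < best[0]:
            best = (err, w)
    return best

# Invented SOC observations and two individual predictions, standing in for
# e.g. a PLSR model on reflectance spectra and a random forest on
# absorption-feature parameters:
obs = [1.0, 2.0, 3.0, 4.0]
m1 = [1.2, 2.1, 2.8, 4.3]
m2 = [0.9, 1.8, 3.2, 3.9]
err, weights = best_weighted_average([m1, m2], obs)
```

    Because the search includes the degenerate weightings (1, 0) and (0, 1), the ensemble can never do worse in calibration than the best individual model.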

  19. Ensemble perception in autism spectrum disorder: Member-identification versus mean-discrimination.

    PubMed

    Van der Hallen, Ruth; Lemmens, Lisa; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2017-07-01

    To efficiently represent the outside world, our brain compresses sets of similar items into a summarized representation, a phenomenon known as ensemble perception. While most studies on ensemble perception investigate this perceptual mechanism in typically developing (TD) adults, researchers studying perceptual organization in individuals with autism spectrum disorder (ASD) have more recently turned their attention toward ensemble perception. The current study is the first to investigate the use of ensemble perception for size in children with and without ASD (N = 42, 8-16 years). We administered a pair of tasks pioneered by Ariely [2001] evaluating both member-identification and mean-discrimination. In addition, we varied the distribution types of our sets to allow a more detailed evaluation of task performance. Results show that, overall, both groups performed similarly in the member-identification task, a test of "local perception," and similarly in the mean-discrimination task, a test of "gist perception." However, in both tasks, performance of the TD group was affected more strongly by the degree of stimulus variability in the set than performance of the ASD group. These findings indicate that both TD children and children with ASD use ensemble statistics to represent a set of similar items, illustrating the fundamental nature of ensemble coding in visual perception. Differences in sensitivity to stimulus variability between the groups are discussed in relation to recent theories of information processing in ASD (e.g., increased sampling, decreased priors, increased precision). Autism Res 2017, 10: 1291-1299. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  20. An operational mesoscale ensemble data assimilation and prediction system: E-RTFDDA

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Hopson, T.; Roux, G.; Hacker, J.; Xu, M.; Warner, T.; Swerdlin, S.

    2009-04-01

    Mesoscale (2-2000 km) meteorological processes differ from synoptic circulations in that mesoscale weather changes rapidly in space and time, and physics processes that are parameterized in NWP models play a major role. Complex interactions of synoptic circulations, regional and local terrain, land-surface heterogeneity and associated physical properties, and the physical processes of radiative transfer, cloud and precipitation, and boundary-layer mixing are crucial in shaping regional weather and climate. Mesoscale ensemble analysis and prediction should sample the uncertainties of mesoscale modeling systems in representing these factors. An innovative mesoscale Ensemble Real-Time Four-Dimensional Data Assimilation (E-RTFDDA) and forecasting system has been developed at NCAR. E-RTFDDA contains diverse ensemble perturbation approaches that consider uncertainties in all major system components to produce multi-scale, continuously cycling probabilistic data assimilation and forecasting. A 30-member E-RTFDDA system with three nested domains with grid sizes of 30, 10 and 3.33 km has been running on a Department of Defense high-performance computing platform since September 2007. It has been applied at two very different US geographical locations, one in the western inter-mountain area and the other in the northeastern states, producing 6-hour analyses and 48-hour forecasts, with 4 forecast cycles a day. The operational model outputs are analyzed to a) assess overall ensemble performance and properties, b) study terrain effects on mesoscale predictability, c) quantify the contribution of different ensemble perturbation approaches to the overall forecast skill, and d) assess the additional skill contributed by an ensemble calibration process based on a quantile-regression algorithm. The system and the results will be reported at the meeting.

  1. Impact assessment of climate change on tourism in the Pacific small islands based on the database of long-term high-resolution climate ensemble experiments

    NASA Astrophysics Data System (ADS)

    Watanabe, S.; Utsumi, N.; Take, M.; Iida, A.

    2016-12-01

    This study aims to develop a new approach to assess the impact of climate change on small oceanic islands in the Pacific. Instead of projecting the single most probable situation, the new approach projects changes in the probabilities of various situations, taking into account the spread of projections derived from ensemble simulations. We utilized the database for Policy Decision making for Future climate change (d4PDF), a long-term, high-resolution climate database composed of the results of 100 ensemble experiments. A new methodology, Multi Threshold Ensemble Assessment (MTEA), was developed using d4PDF in order to assess the impact of climate change. We focused on the impact of climate change on tourism because tourism plays an important role in the economy of the Pacific Islands. The Yaeyama Region, one of the tourist destinations in Okinawa, Japan, was selected as the case study site. Two kinds of impact were assessed: the change in probability of extreme climate phenomena, and tourist satisfaction associated with weather. The d4PDF ensemble experiments and a questionnaire survey conducted by a local government were used for the assessment. The results indicated that the strength of extreme events would increase, whereas their probability of occurrence would decrease. This change should result in an increase in the number of clear days, which could help improve tourist satisfaction.
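    The core of a multi-threshold assessment, the change in exceedance probability estimated across ensemble members, can be sketched as follows. The temperatures and the threshold are hypothetical, not values from d4PDF.

```python
def exceedance_probability(values, threshold):
    """Fraction of ensemble members whose value exceeds the threshold."""
    return sum(v > threshold for v in values) / len(values)

# Hypothetical daily-maximum temperatures (deg C) from eight ensemble members,
# for a historical and a future experiment, with a 32 deg C threshold:
hist = [28, 30, 31, 29, 33, 27, 32, 30]
future = [30, 33, 34, 31, 35, 29, 36, 32]
change = exceedance_probability(future, 32) - exceedance_probability(hist, 32)
# change == 0.375: the exceedance probability rises by 37.5 percentage points
```

    Repeating this over many thresholds gives the multi-threshold picture of how the probability of each situation shifts between climates.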

  2. Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo

    By combining classical Monte Carlo and Bethe ansatz techniques, we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix (QTM) approach. Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) root distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE in which, besides the local integrals of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.
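    The sampling idea, weighting basis states by a GGE measure, can be illustrated with a generic Metropolis sketch on a toy state space. This stands in for, and is far simpler than, the Bethe-ansatz machinery of the work; the states, charge function, and Lagrange multipliers below are all invented.

```python
import math
import random

def metropolis_gge(states, charges, betas, n_steps=20000, seed=1):
    """Metropolis sampling of discrete basis states with GGE-like weights
    exp(-sum_k beta_k * Q_k(state)). `charges` maps a state to the tuple of
    its conserved-charge values; `betas` are the Lagrange multipliers."""
    rng = random.Random(seed)

    def log_w(s):
        return -sum(b * q for b, q in zip(betas, charges(s)))

    current = rng.choice(states)
    counts = {s: 0 for s in states}
    for _ in range(n_steps):
        proposal = rng.choice(states)  # symmetric independent proposal
        if math.log(rng.random()) < log_w(proposal) - log_w(current):
            current = proposal
        counts[current] += 1
    return counts

# Toy two-state space with a single charge Q(s) = s and beta = 1; the
# occupation ratio counts[1]/counts[0] should approach exp(-1) ~ 0.37.
counts = metropolis_gge([0, 1], charges=lambda s: (s,), betas=(1.0,))
```

    GGE expectation values of an observable are then simple averages of that observable over the visited states.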

  3. Complex structural dynamics of nanocatalysts revealed in Operando conditions by correlated imaging and spectroscopy probes

    DOE PAGES

    Li, Y.; Zakharov, D.; Zhao, S.; ...

    2015-06-29

    Understanding how heterogeneous catalysts change size, shape and structure during chemical reactions is limited by the paucity of methods for studying catalytic ensembles in working state, that is, in operando conditions. Here, by a correlated use of synchrotron X-ray absorption spectroscopy and scanning transmission electron microscopy in operando conditions, we quantitatively describe the complex structural dynamics of supported Pt catalysts exhibited during an exemplary catalytic reaction: ethylene hydrogenation. This work exploits a microfabricated catalytic reactor compatible with both probes. The results demonstrate dynamic transformations of the ensemble of Pt clusters that spans a broad size range throughout changing reaction conditions. Lastly, this method is generalizable to quantitative operando studies of complex systems using a wide variety of X-ray and electron-based experimental probes.

  4. Stability evaluation of short-circuiting gas metal arc welding based on ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Wang, Kehong; Zhou, Zhilan; Zhou, Xiaoxiao; Fang, Jimi

    2017-03-01

    The arc of gas metal arc welding (GMAW) contains abundant information about its stability and droplet transfer, which can be effectively characterized by analyzing the arc electrical signals. In this study, ensemble empirical mode decomposition (EEMD) was used to evaluate the stability of the welding current signals. The welding electrical signals were first decomposed by EEMD and then transformed into a Hilbert-Huang spectrum and a marginal spectrum. The marginal spectrum is an approximate distribution of signal amplitude with frequency and can be summarized by a marginal index. Analysis of various welding process parameters showed that the marginal index of the current signals increased when the welding process was more stable, and vice versa. Thus, EEMD combined with the marginal index can effectively reveal the stability and droplet transfer behavior of GMAW.

  5. Relaxation in a two-body Fermi-Pasta-Ulam system in the canonical ensemble

    NASA Astrophysics Data System (ADS)

    Sen, Surajit; Barrett, Tyler

    The study of the dynamics of the Fermi-Pasta-Ulam (FPU) chain remains a challenging problem. Inspired by the recent work of Onorato et al. on thermalization in the FPU system, we report a study of relaxation processes in a two-body FPU system in the canonical ensemble. The studies have been carried out using the Recurrence Relations Method introduced by Zwanzig, Mori, Lee and others. We have obtained exact analytical expressions for the first thirteen levels of the continued fraction representation of the Laplace transformed velocity autocorrelation function of the system. Using simple and reasonable extrapolation schemes and known limits we are able to estimate the relaxation behavior of the oscillators in the two-body FPU system and recover the expected behavior in the harmonic limit. Generalizations of the calculations to larger systems will be discussed.

  6. DART: Tools and Support for Ensemble Data Assimilation Research, Operations, and Education

    NASA Astrophysics Data System (ADS)

    Hoar, T. J.; Anderson, J. L.; Collins, N.; Raeder, K.; Kershaw, H.; Romine, G. S.; Mizzi, A. P.; Chatterjee, A.; Karspeck, A. R.; Zarzycki, C. M.; Ha, S. Y.; Barre, J.; Gaubert, B.

    2014-12-01

    The Data Assimilation Research Testbed (DART) is a community facility for ensemble data assimilation developed and supported by the National Center for Atmospheric Research. DART provides a comprehensive suite of software, documentation, examples and tutorials that can be used for ensemble data assimilation research, operations, and education. Scientists and software engineers from the Data Assimilation Research Section at NCAR are available to actively support DART users who want to use existing DART products or develop their own new applications. Current DART users range from university professors teaching data assimilation, to individual graduate students working with simple models, through national laboratories doing operational prediction with large state-of-the-art models. DART runs efficiently on many computational platforms ranging from laptops through thousands of cores on the newest supercomputers. This poster focuses on several recent research activities using DART with geophysical models. First, DART is being used with the Community Atmosphere Model Spectral Element (CAM-SE) and Model for Prediction Across Scales (MPAS) global atmospheric models that support locally enhanced grid resolution. Initial results from ensemble assimilation with both models are presented. DART is also being used to produce ensemble analyses of atmospheric tracers, in particular CO, in both the global CAM-Chem model and the regional Weather Research and Forecast with chemistry (WRF-Chem) model by assimilating observations from the Measurements of Pollution in the Troposphere (MOPITT) and Infrared Atmospheric Sounding Interferometer (IASI) instruments. Results from ensemble analyses in both models are presented. An interface between DART and the Community Atmosphere Biosphere Land Exchange (CABLE) model has been completed and ensemble land surface analyses with DART/CABLE will be discussed. 
Finally, an update on ensemble analyses in the fully-coupled Community Earth System Model (CESM) is presented. The poster includes instructions on how to get started using DART for research or educational applications.

  7. An evaluation of soil water outlooks for winter wheat in south-eastern Australia

    NASA Astrophysics Data System (ADS)

    Western, A. W.; Dassanayake, K. B.; Perera, K. C.; Alves, O.; Young, G.; Argent, R.

    2015-12-01

    Soil moisture is a key limiting resource for rain-fed cropping in Australian broad-acre cropping zones. Seasonal rainfall and temperature outlooks are standard operational services offered by the Australian Bureau of Meteorology and are routinely used to support agricultural decisions. This presentation examines the performance of proposed seasonal soil water outlooks in the context of wheat cropping in south-eastern Australia (autumn planting, late spring harvest). We used weather ensembles simulated by the Predictive Ocean-Atmosphere Model for Australia (POAMA) as input to the Agricultural Production Simulator (APSIM) to construct ensemble soil water "outlooks" at twenty sites. Hindcasts were made over a 33-year period using the 33 POAMA ensemble members. The overall modelling flow involved: 1. Downscaling the daily weather series (rainfall, minimum and maximum temperature, humidity, radiation) from the ~250 km POAMA grid scale to a local weather station using quantile-quantile correction, based on a 33-year observation record extracted from the SILO data drill product. 2. Using APSIM to produce soil water ensembles from the downscaled weather ensembles; a warm-up period of 5 years of observed weather was followed by a 9-month hindcast period based on each ensemble member. 3. Summarizing the soil water ensembles by estimating the proportion of outlook ensembles in each climatological tercile, where the climatology was constructed using APSIM and observed weather from the 33 years of hindcasts at the relevant site. 4. Evaluating the soil water outlooks for different lead times and months against a "truth" run of APSIM based on observed weather. Outlooks generally have some useful forecast skill for lead times of up to two to three months, except in late spring, in line with current useful lead times for rainfall outlooks. Better performance was found in summer and autumn, when vegetation cover and water use are low.
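    Step 1, the quantile-quantile correction, can be sketched with a simple empirical quantile mapping. The climatologies below are invented five-value samples, and the function name is ours.

```python
from bisect import bisect_left

def quantile_map(value, model_clim, obs_clim):
    """Empirical quantile-quantile correction: locate the quantile of `value`
    within the sorted model climatology, then return the observed-climatology
    value at the same quantile."""
    model_clim = sorted(model_clim)
    obs_clim = sorted(obs_clim)
    q = min(bisect_left(model_clim, value) / len(model_clim), 1 - 1e-9)
    return obs_clim[int(q * len(obs_clim))]

# Invented climatologies in which the model is systematically ~2 mm too wet:
model = [2, 4, 6, 8, 10]
obs = [0, 2, 4, 6, 8]
corrected = quantile_map(6, model, obs)
# corrected == 4 (the model's 2 mm wet bias is removed)
```

    Operational implementations typically interpolate between empirical quantiles and handle out-of-range values separately; this sketch keeps only the core idea.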

  8. Ensemble averaging and stacking of ARIMA and GSTAR model for rainfall forecasting

    NASA Astrophysics Data System (ADS)

    Anggraeni, D.; Kurnia, I. F.; Hadi, A. F.

    2018-04-01

    Unpredictable changes in rainfall can affect human activities such as agriculture, aviation, and shipping, which depend on weather forecasts. We therefore need forecasting tools with high accuracy in predicting future rainfall. This research focuses on local forecasting of rainfall in Jember from 2005 to 2016, using data from 77 rainfall stations. Rainfall at a station is related not only to that station's own past occurrences but also to those of other stations; this is called the spatial effect. The aim of this research is to apply the GSTAR model to determine whether there are correlations in the spatial effect between stations. The GSTAR model is an expansion of the space-time model that combines time-related effects, time series effects at the locations (stations), and the locations themselves. The GSTAR model is also compared to the ARIMA model, which completely ignores the other stations as independent variables. The forecasted values of the ARIMA and GSTAR models are then combined using ensemble forecasting techniques. The averaging and stacking methods of ensemble forecasting provide the best model, with higher accuracy and a smaller RMSE (Root Mean Square Error) value. With the best model, we can offer better local rainfall forecasting in Jember for the future.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou Fengji; Hogg, David W.; Goodman, Jonathan

    Markov chain Monte Carlo (MCMC) proves to be powerful for Bayesian inference, and in particular for exoplanet radial velocity fitting, because MCMC provides more statistical information and makes better use of data than common approaches like chi-square fitting. However, the nonlinear density functions encountered in these problems can make MCMC time-consuming. In this paper, we apply an ensemble sampler respecting affine invariance to orbital parameter extraction from radial velocity data. This sampler has only one free parameter and does not require much tuning for good performance, which is important for automatization. The autocorrelation time of this sampler is approximately the same for all parameters and far smaller than for Metropolis-Hastings, which means it requires many fewer function calls to produce the same number of independent samples. The affine-invariant sampler speeds up MCMC by hundreds of times compared with Metropolis-Hastings in the same computing situation. This sampler would be ideal for projects involving large data sets, such as statistical investigations of planet distribution. The biggest obstacle to ensemble samplers is the existence of multiple local optima; we present a technique that deals with them by clustering the walkers in the ensemble based on their likelihood. We demonstrate the effectiveness of the sampler on real radial velocity data.
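    The affine-invariant "stretch move" at the heart of such samplers (Goodman & Weare 2010, the algorithm behind emcee) is compact enough to sketch. The target below is a toy one-dimensional Gaussian rather than a radial-velocity likelihood, and the function name is ours.

```python
import math
import random

def stretch_move_sweep(walkers, log_prob, a=2.0, rng=random):
    """One sweep of the affine-invariant stretch move: each walker proposes a
    point on the line through itself and a randomly chosen complementary
    walker, scaled by z drawn from g(z) ~ 1/sqrt(z) on [1/a, a]."""
    n, dim = len(walkers), len(walkers[0])
    for i in range(n):
        j = rng.randrange(n - 1)
        if j >= i:          # pick a complementary walker j != i uniformly
            j += 1
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a   # inverse-CDF draw
        proposal = [walkers[j][k] + z * (walkers[i][k] - walkers[j][k])
                    for k in range(dim)]
        log_accept = (dim - 1) * math.log(z) + log_prob(proposal) - log_prob(walkers[i])
        if math.log(rng.random()) < log_accept:
            walkers[i] = proposal

# Toy target: a 1-D standard normal (log-density up to a constant).
rng = random.Random(0)
walkers = [[rng.gauss(0.0, 3.0)] for _ in range(20)]
samples = []
for sweep in range(600):
    stretch_move_sweep(walkers, lambda x: -0.5 * x[0] ** 2, rng=rng)
    if sweep >= 300:
        samples.extend(w[0] for w in walkers)
```

    The single free parameter mentioned in the abstract is the stretch scale a; a = 2 is the conventional default.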

  10. Brain science: from the very small to the very large.

    PubMed

    Kreiman, Gabriel

    2007-09-04

    We still lack a clear understanding of how brain imaging signals relate to neuronal activity. Recent work shows that the simultaneous activity of neuronal ensembles strongly correlates with local field potentials and imaging measurements.

  11. Forecasting seasonal outbreaks of influenza.

    PubMed

    Shaman, Jeffrey; Karspeck, Alicia

    2012-12-11

    Influenza recurs seasonally in temperate regions of the world; however, our ability to predict the timing, duration, and magnitude of local seasonal outbreaks of influenza remains limited. Here we develop a framework for initializing real-time forecasts of seasonal influenza outbreaks, using a data assimilation technique commonly applied in numerical weather prediction. The availability of real-time, web-based estimates of local influenza infection rates makes this type of quantitative forecasting possible. Retrospective ensemble forecasts are generated on a weekly basis following assimilation of these web-based estimates for the 2003-2008 influenza seasons in New York City. The findings indicate that real-time skillful predictions of peak timing can be made more than 7 wk in advance of the actual peak. In addition, confidence in those predictions can be inferred from the spread of the forecast ensemble. This work represents an initial step in the development of a statistically rigorous system for real-time forecast of seasonal influenza.
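    The assimilation step can be illustrated with a simple scalar ensemble Kalman update. This deterministic shift-and-shrink form is a stand-in for the filter actually used in the study, and the infection-rate numbers are hypothetical.

```python
import statistics

def enkf_update(ensemble, observation, obs_var):
    """Deterministic scalar ensemble Kalman update: shift the ensemble mean
    toward the observation by the Kalman gain, and shrink the deviations so
    the posterior variance equals var*obs_var/(var+obs_var)."""
    mean = statistics.mean(ensemble)
    var = statistics.variance(ensemble)
    gain = var / (var + obs_var)
    new_mean = mean + gain * (observation - mean)
    shrink = (obs_var / (var + obs_var)) ** 0.5
    return [new_mean + shrink * (x - mean) for x in ensemble]

# Hypothetical prior ensemble of weekly infection-rate estimates and a
# web-based observation of 0.12 with small error variance:
prior = [0.05, 0.08, 0.10, 0.15, 0.20]
posterior = enkf_update(prior, 0.12, obs_var=1e-4)
```

    The post-update ensemble spread is what supplies the confidence information mentioned in the abstract: a tight posterior ensemble signals a confident forecast.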

  12. Forecasting seasonal outbreaks of influenza

    PubMed Central

    Shaman, Jeffrey; Karspeck, Alicia

    2012-01-01

    Influenza recurs seasonally in temperate regions of the world; however, our ability to predict the timing, duration, and magnitude of local seasonal outbreaks of influenza remains limited. Here we develop a framework for initializing real-time forecasts of seasonal influenza outbreaks, using a data assimilation technique commonly applied in numerical weather prediction. The availability of real-time, web-based estimates of local influenza infection rates makes this type of quantitative forecasting possible. Retrospective ensemble forecasts are generated on a weekly basis following assimilation of these web-based estimates for the 2003–2008 influenza seasons in New York City. The findings indicate that real-time skillful predictions of peak timing can be made more than 7 wk in advance of the actual peak. In addition, confidence in those predictions can be inferred from the spread of the forecast ensemble. This work represents an initial step in the development of a statistically rigorous system for real-time forecast of seasonal influenza. PMID:23184969

  13. Inertial electrostatic confinement and nuclear fusion in the interelectrode plasma of a nanosecond vacuum discharge. I: Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurilenkov, Yu. K.; Skowronek, M.

    2010-12-15

    Properties of an aerosol substance with a high power density in the interelectrode space of a nanosecond vacuum discharge are studied. The possibilities of emission and/or trapping of fast ions and hard X-rays by ensembles of clusters and microparticles are analyzed. The possibility of simultaneous partial trapping (diffusion) of X-rays and complete trapping of fast ions by a cluster ensemble is demonstrated experimentally. Due to such trapping, the aerosol ensemble transforms into a 'dusty' microreactor that can be used to investigate a certain class of nuclear processes, including collisional DD microfusion. Operating regimes of such a microreactor and their reproducibility were studied. On the whole, the generation efficiency of hard X-rays and neutrons in the proposed vacuum discharge with a hollow cathode can be higher by two orders of magnitude than that in a 'high-power laser pulse-cluster cloud' system. Multiply repeated nuclear fusion accompanied by pulsating DD neutron emission was reproducibly detected in experiment. Ion acceleration mechanisms in the interelectrode space and the fundamental role of the virtual cathode in the observed nuclear fusion processes are discussed.

  14. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated in more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  15. Mechanochemical formation of heterogeneous diamond structures during rapid uniaxial compression in graphite

    NASA Astrophysics Data System (ADS)

    Kroonblawd, Matthew P.; Goldman, Nir

    2018-05-01

    We predict mechanochemical formation of heterogeneous diamond structures from rapid uniaxial compression in graphite using quantum molecular dynamics simulations. Ensembles of simulations reveal the formation of different diamondlike products starting from thermal graphite crystal configurations. We identify distinct classes of final products with characteristic probabilities of formation, stress states, and electrical properties and show through simulations of rapid quenching that these products are nominally stable and can be recovered at room temperature and pressure. Some of the diamond products exhibit significant disorder and partial closure of the energy gap between the highest-occupied and lowest-unoccupied molecular orbitals (i.e., the HOMO-LUMO gap). Seeding atomic vacancies in graphite significantly biases toward forming products with a small HOMO-LUMO gap. We show that a strong correlation between the HOMO-LUMO gap and disorder in tetrahedral bonding configurations informs which kinds of structural defects are associated with gap closure. The rapid diffusionless transformation of graphite is found to lock vacancy defects into the final diamond structure, resulting in configurations that prevent sp3 bonding and lead to localized HOMO and LUMO states with a small gap.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William D; Johansen, Hans; Evans, Katherine J

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated in more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  17. Optical and structural properties of ensembles of colloidal Ag{sub 2}S quantum dots in gelatin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovchinnikov, O. V., E-mail: Ovchinnikov-O-V@rambler.ru; Smirnov, M. S.; Shapiro, B. I.

    2015-03-15

    The size dependences of the absorption and luminescence spectra of ensembles of hydrophilic colloidal Ag{sub 2}S quantum dots produced by the sol-gel method and dispersed in gelatin are analyzed. By X-ray diffraction analysis and transmission electron microscopy, the formation of core/shell nanoparticles is detected. The characteristic feature of the nanoparticles is the formation of crystalline cores, 1.5–2.0 nm in size, and shells of gelatin and its complexes with the components of synthesis. The observed slight size dependence of the position of the infrared photoluminescence bands (in the range 1000–1400 nm) in the ensembles of hydrophilic colloidal Ag{sub 2}S quantum dots is explained within the context of a model of the radiative recombination of electrons localized at structural and impurity defects with free holes.

  18. The Copernicus Atmosphere Monitoring Service: facilitating the prediction of air quality from global to local scales

    NASA Astrophysics Data System (ADS)

    Engelen, R. J.; Peuch, V. H.

    2017-12-01

    The European Copernicus Atmosphere Monitoring Service (CAMS) operationally provides daily forecasts of global atmospheric composition and regional air quality. The global forecasting system is using ECMWF's Integrated Forecasting System (IFS), which is used for numerical weather prediction and which has been extended with modules for atmospheric chemistry, aerosols and greenhouse gases. The regional forecasts are produced by an ensemble of seven operational European air quality models that take their boundary conditions from the global system and provide an ensemble median with ensemble spread as their main output. Both the global and regional forecasting systems are feeding their output into air quality models on a variety of scales in various parts of the world. We will introduce the CAMS service chain and provide illustrations of its use in downstream applications. Both the usage of the daily forecasts and the usage of global and regional reanalyses will be addressed.
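    The regional product's ensemble median and spread can be computed per grid point in a few lines; the sketch below uses standard deviation as the spread measure, and the ozone values are invented.

```python
import statistics

def ensemble_summary(member_forecasts):
    """Per-point ensemble median and spread (standard deviation here) across
    model members, mimicking a multi-model air-quality product."""
    n_points = len(member_forecasts[0])
    medians, spreads = [], []
    for i in range(n_points):
        vals = [m[i] for m in member_forecasts]
        medians.append(statistics.median(vals))
        spreads.append(statistics.stdev(vals))
    return medians, spreads

# Invented ozone forecasts (ug/m3) from three regional models at two points:
members = [[60, 80], [64, 90], [58, 100]]
med, spread = ensemble_summary(members)
# med == [60, 90]
```

    The median damps the influence of any single outlier model, while the spread flags locations where the seven operational models disagree.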

  19. Large-scale recording of neuronal ensembles.

    PubMed

    Buzsáki, György

    2004-05-01

    How does the brain orchestrate perceptions, thoughts and actions from the spiking activity of its neurons? Early single-neuron recording research treated spike pattern variability as noise that needed to be averaged out to reveal the brain's representation of invariant input. Another view is that variability of spikes is centrally coordinated and that this brain-generated ensemble pattern in cortical structures is itself a potential source of cognition. Large-scale recordings from neuronal ensembles now offer the opportunity to test these competing theoretical frameworks. Currently, wire and micro-machined silicon electrode arrays can record from large numbers of neurons and monitor local neural circuits at work. Achieving the full potential of massively parallel neuronal recordings, however, will require further development of the neuron-electrode interface, automated and efficient spike-sorting algorithms for effective isolation and identification of single neurons, and new mathematical insights for the analysis of network properties.

  20. Delineating Facies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics with Level Set Transformation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Glenn Edward; Song, Xuehang; Ye, Ming

    A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of level set is introduced to build a shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in the hydraulic head field with better accuracy compared to data assimilation with no constraints on the spatial continuity of facies.
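
    The level-set shape parameterization described above can be illustrated with a minimal sketch (all fields and values here are synthetic stand-ins, not the paper's implementation): a continuous field is updated smoothly by data assimilation, while the discrete facies indicator is recovered by thresholding at the zero level set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Continuous level-set field on a small 2D grid (a stand-in for a smooth
# random field; the grid size and values are illustrative).
z = rng.standard_normal((4, 4))

# Discrete two-facies indicator: facies 1 where z > 0, facies 0 elsewhere.
facies = (z > 0).astype(int)

# A data-assimilation update perturbs z continuously; the indicator changes
# only where the perturbation moves z across the zero level set.
z_updated = z + 0.1
facies_updated = (z_updated > 0).astype(int)

print(facies.sum(), "->", facies_updated.sum())
```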

  1. Explosion localization via infrasound.

    PubMed

    Szuberla, Curt A L; Olson, John V; Arnoult, Kenneth M

    2009-11-01

    Two acoustic source localization techniques were applied to infrasonic data and their relative performance was assessed. The standard approach for low-frequency localization uses an ensemble of small arrays to separately estimate far-field source bearings, with the source location then triangulated from the various back azimuths. This method was compared to one developed by the authors that treats the smaller subarrays as a single, meta-array. In numerical simulation and a field experiment, the latter technique was found to provide improved localization precision everywhere in the vicinity of a 3-km-aperture meta-array, often by an order of magnitude.
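
    The standard multi-array approach can be sketched as a least-squares intersection of bearing lines; this is a minimal illustration with noise-free, hypothetical geometry, not the authors' processing chain.

```python
import numpy as np

# Hypothetical array positions (km) and a true source location (noise-free).
arrays = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
source_true = np.array([1.0, 2.0])

# Back azimuths (radians clockwise from north) from each array to the source.
d = source_true - arrays
bearings = np.arctan2(d[:, 0], d[:, 1])

# Each bearing defines a line through its array; stacking the perpendicular
# constraints n_i . s = n_i . p_i gives a linear least-squares problem.
n = np.stack([np.cos(bearings), -np.sin(bearings)], axis=1)
b = np.einsum("ij,ij->i", n, arrays)
source_est, *_ = np.linalg.lstsq(n, b, rcond=None)

print(source_est)   # recovers the true source when bearings are exact
```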

  2. The Rise of Complexity in Flood Forecasting: Opportunities, Challenges and Tradeoffs

    NASA Astrophysics Data System (ADS)

    Wood, A. W.; Clark, M. P.; Nijssen, B.

    2017-12-01

    Operational flood forecasting is currently undergoing a major transformation. Most national flood forecasting services have relied for decades on lumped, highly calibrated conceptual hydrological models running on local office computing resources, providing deterministic streamflow predictions at gauged river locations that are important to stakeholders and emergency managers. A variety of recent technological advances now make it possible to run complex, high-to-hyper-resolution models for operational hydrologic prediction over large domains, and the US National Weather Service is now attempting to use hyper-resolution models to create new forecast services and products. Yet other `increased-complexity' forecasting strategies also exist that pursue different tradeoffs between model complexity (i.e., spatial resolution, physics) and streamflow forecast system objectives. There is currently a pressing need for a greater understanding in the hydrology community of the opportunities, challenges and tradeoffs associated with these different forecasting approaches, and for a greater participation by the hydrology community in evaluating, guiding and implementing these approaches. Intermediate-resolution forecast systems, for instance, use distributed land surface model (LSM) physics but retain the agility to deploy ensemble methods (including hydrologic data assimilation and hindcast-based post-processing). Fully coupled numerical weather prediction (NWP) systems, another example, use still coarser LSMs to produce ensemble streamflow predictions either at the model scale or after sub-grid scale runoff routing. Based on the direct experience of the authors and colleagues in research and operational forecasting, this presentation describes examples of different streamflow forecast paradigms, from the traditional to the recent hyper-resolution, to illustrate the range of choices facing forecast system developers. 
We also discuss the degree to which the strengths and weaknesses of each strategy map onto the requirements for different types of forecasting services (e.g., flash flooding, river flooding, seasonal water supply prediction).

  3. Local chemical potential, local hardness, and dual descriptors in temperature dependent chemical reactivity theory.

    PubMed

    Franco-Pérez, Marco; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2017-05-31

    In this work we establish a new temperature dependent procedure within the grand canonical ensemble, to avoid the Dirac delta function exhibited by some of the second order chemical reactivity descriptors based on density functional theory, at a temperature of 0 K. Through the definition of a local chemical potential designed to integrate to the global temperature dependent electronic chemical potential, the local chemical hardness is expressed in terms of the derivative of this local chemical potential with respect to the average number of electrons. For the three-ground-states ensemble model, this local hardness contains a term that is equal to the one intuitively proposed by Meneses, Tiznado, Contreras and Fuentealba, which integrates to the global hardness given by the difference between the first ionization potential, I, and the electron affinity, A, at any temperature. However, in the present approach one finds an additional temperature-dependent term that introduces changes at the local level and integrates to zero. Additionally, a τ-hard dual descriptor and a τ-soft dual descriptor are derived, given by the product of the dual descriptor with the global hardness and the global softness, respectively. Since all these reactivity indices are given by expressions composed of terms that correspond to products of the global properties multiplied by the electrophilic or nucleophilic Fukui functions, they may be useful for studying and comparing equivalent sites in different chemical environments.

  4. Fast multidimensional ensemble empirical mode decomposition for the analysis of big spatio-temporal datasets.

    PubMed

    Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min

    2016-04-13

    In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information from the available data for particular purposes. Although approaches in different fields to address these two questions may differ significantly, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of a large spatio-temporal dataset. The original MEEMD uses ensemble empirical mode decomposition to decompose time series at each spatial grid and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the expression using principal component analysis/empirical orthogonal function analysis for spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principles behind the fast MEEMD, which decomposes principal components instead of the original grid-wise time series to speed up the computation. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data by one to two orders of magnitude; and (ii) speed up the MEEMD algorithm by one to two orders of magnitude. © 2016 The Authors.
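
    The PCA/EOF-based lossy compression underlying the fast MEEMD can be sketched with a truncated SVD; the synthetic field and truncation rank below are illustrative assumptions, not the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic space-time field: 200 time steps x 500 grid points built from
# five coherent spatial patterns plus weak noise (a stand-in for climate data).
pcs = rng.standard_normal((200, 5))
eofs = rng.standard_normal((5, 500))
field = pcs @ eofs + 0.01 * rng.standard_normal((200, 500))

# Truncated SVD keeps only the leading components for transmission.
u, sv, vt = np.linalg.svd(field, full_matrices=False)
rank = 5
compressed = (u[:, :rank], sv[:rank], vt[:rank])
reconstructed = (u[:, :rank] * sv[:rank]) @ vt[:rank]

stored = sum(part.size for part in compressed)
ratio = field.size / stored
err = np.linalg.norm(field - reconstructed) / np.linalg.norm(field)
print(f"compression ratio ~{ratio:.0f}x, relative error {err:.4f}")
```

    The fast MEEMD then applies the mode decomposition to the retained principal components rather than to every grid-point time series.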

  5. Improvement of Disease Prediction and Modeling through the Use of Meteorological Ensembles: Human Plague in Uganda

    PubMed Central

    Moore, Sean M.; Monaghan, Andrew; Griffith, Kevin S.; Apangu, Titus; Mead, Paul S.; Eisen, Rebecca J.

    2012-01-01

    Climate and weather influence the occurrence, distribution, and incidence of infectious diseases, particularly those caused by vector-borne or zoonotic pathogens. Thus, models based on meteorological data have helped predict when and where human cases are most likely to occur. Such knowledge aids in targeting limited prevention and control resources and may ultimately reduce the burden of diseases. Paradoxically, localities where such models could yield the greatest benefits, such as tropical regions where morbidity and mortality caused by vector-borne diseases is greatest, often lack high-quality in situ local meteorological data. Satellite- and model-based gridded climate datasets can be used to approximate local meteorological conditions in data-sparse regions, however their accuracy varies. Here we investigate how the selection of a particular dataset can influence the outcomes of disease forecasting models. Our model system focuses on plague (Yersinia pestis infection) in the West Nile region of Uganda. The majority of recent human cases have been reported from East Africa and Madagascar, where meteorological observations are sparse and topography yields complex weather patterns. Using an ensemble of meteorological datasets and model-averaging techniques we find that the number of suspected cases in the West Nile region was negatively associated with dry season rainfall (December-February) and positively with rainfall prior to the plague season. We demonstrate that ensembles of available meteorological datasets can be used to quantify climatic uncertainty and minimize its impacts on infectious disease models. These methods are particularly valuable in regions with sparse observational networks and high morbidity and mortality from vector-borne diseases. PMID:23024750

  6. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    NASA Astrophysics Data System (ADS)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
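
    The occupation-angle idea can be shown in a few lines: writing each occupation number as n_i = 2 sin²(θ_i) keeps it between zero and two for any unconstrained angle (a sketch of the parametrization described above; the angle values are arbitrary examples).

```python
import numpy as np

def occupations(thetas):
    """Occupation numbers n_i = 2*sin(theta_i)**2, confined to [0, 2]
    for any unconstrained angles theta_i."""
    return 2.0 * np.sin(thetas) ** 2

# Arbitrary example: two doubly occupied, two half-occupied, one empty orbital.
thetas = np.array([np.pi / 2, np.pi / 2, np.pi / 4, np.pi / 4, 0.0])
n = occupations(thetas)

print(n, n.sum())   # the total electron count is constrained separately in SOEO
```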

  7. Stimuli Reduce the Dimensionality of Cortical Activity

    PubMed Central

    Mazzucato, Luca; Fontanini, Alfredo; La Camera, Giancarlo

    2016-01-01

    The activity of ensembles of simultaneously recorded neurons can be represented as a set of points in the space of firing rates. Even though the dimension of this space is equal to the ensemble size, neural activity can be effectively localized on smaller subspaces. The dimensionality of the neural space is an important determinant of the computational tasks supported by the neural activity. Here, we investigate the dimensionality of neural ensembles from the sensory cortex of alert rats during periods of ongoing (inter-trial) and stimulus-evoked activity. We find that dimensionality grows linearly with ensemble size, and grows significantly faster during ongoing activity compared to evoked activity. We explain these results using a spiking network model based on a clustered architecture. The model captures the difference in growth rate between ongoing and evoked activity and predicts a characteristic scaling with ensemble size that could be tested in high-density multi-electrode recordings. Moreover, we present a simple theory that predicts the existence of an upper bound on dimensionality. This upper bound is inversely proportional to the amount of pair-wise correlations and, compared to a homogeneous network without clusters, it is larger by a factor equal to the number of clusters. The empirical estimation of such bounds depends on the number and duration of trials and is well predicted by the theory. Together, these results provide a framework to analyze neural dimensionality in alert animals, its behavior under stimulus presentation, and its theoretical dependence on ensemble size, number of clusters, and correlations in spiking network models. PMID:26924968
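
    One common estimator of such an effective dimensionality is the participation ratio of the covariance eigenvalues; the sketch below uses synthetic rate data and illustrates the general idea that pairwise correlations bound dimensionality, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

def participation_ratio(rates):
    """Effective dimensionality via the participation ratio of covariance
    eigenvalues: (sum lam)^2 / sum(lam^2)."""
    lam = np.linalg.eigvalsh(np.cov(rates, rowvar=False))
    return lam.sum() ** 2 / (lam ** 2).sum()

# Synthetic "firing rates": 2000 samples from 20 units.
uncorrelated = rng.standard_normal((2000, 20))
# A shared fluctuation raises pairwise correlations, lowering dimensionality.
shared = rng.standard_normal((2000, 1))
correlated = 0.9 * shared + 0.4 * rng.standard_normal((2000, 20))

print(participation_ratio(uncorrelated), participation_ratio(correlated))
```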

  8. Flood susceptibility mapping using a novel ensemble weights-of-evidence and support vector machine models in GIS

    NASA Astrophysics Data System (ADS)

    Tehrany, Mahyat Shafapour; Pradhan, Biswajeet; Jebur, Mustafa Neamah

    2014-05-01

    Flood is one of the most devastating natural disasters and occurs frequently in Terengganu, Malaysia. Recently, ensemble-based techniques have become extremely popular in flood modeling. In this paper, the weights-of-evidence (WoE) model was applied first, to assess the impact of the classes of each conditioning factor on flooding through bivariate statistical analysis (BSA). Then, these factors were reclassified using the acquired weights and entered into the support vector machine (SVM) model to evaluate the correlation between flood occurrence and each conditioning factor. Through this integration, the weak point of WoE can be overcome and the performance of the SVM enhanced. The spatial database included flood inventory, slope, stream power index (SPI), topographic wetness index (TWI), altitude, curvature, distance from the river, geology, rainfall, land use/cover (LULC), and soil type. Four kernel types of SVM (linear kernel (LN), polynomial kernel (PL), radial basis function kernel (RBF), and sigmoid kernel (SIG)) were used to investigate the performance of each kernel type. The efficiency of the new ensemble WoE and SVM method was tested using the area under the curve (AUC), which measured the prediction and success rates. The validation results proved the strength and efficiency of the ensemble method over the individual methods. The best results were obtained from the RBF kernel when compared with the other kernel types. The success rate and prediction rate for the ensemble WoE and RBF-SVM method were 96.48% and 95.67%, respectively. The proposed ensemble flood susceptibility mapping method could assist researchers and local governments in flood mitigation strategies.
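
    The WoE step can be sketched for a single binary conditioning factor: the positive and negative weights compare the conditional probabilities of the factor class in flooded and non-flooded cells (the counts below are invented for illustration).

```python
import numpy as np

# Invented 2x2 counts for one binary conditioning factor: 100 flooded and
# 900 non-flooded inventory cells, split by factor-class presence.
flood_with, flood_without = 80, 20      # flooded cells with/without the class
dry_with, dry_without = 200, 700        # non-flooded cells with/without it

# Positive and negative weights of evidence and their contrast.
w_plus = np.log((flood_with / 100) / (dry_with / 900))
w_minus = np.log((flood_without / 100) / (dry_without / 900))
contrast = w_plus - w_minus

print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, C = {contrast:.2f}")
```

    In the paper's workflow, weights of this kind reclassify each conditioning factor before it enters the SVM.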

  9. Stimuli Reduce the Dimensionality of Cortical Activity.

    PubMed

    Mazzucato, Luca; Fontanini, Alfredo; La Camera, Giancarlo

    2016-01-01

    The activity of ensembles of simultaneously recorded neurons can be represented as a set of points in the space of firing rates. Even though the dimension of this space is equal to the ensemble size, neural activity can be effectively localized on smaller subspaces. The dimensionality of the neural space is an important determinant of the computational tasks supported by the neural activity. Here, we investigate the dimensionality of neural ensembles from the sensory cortex of alert rats during periods of ongoing (inter-trial) and stimulus-evoked activity. We find that dimensionality grows linearly with ensemble size, and grows significantly faster during ongoing activity compared to evoked activity. We explain these results using a spiking network model based on a clustered architecture. The model captures the difference in growth rate between ongoing and evoked activity and predicts a characteristic scaling with ensemble size that could be tested in high-density multi-electrode recordings. Moreover, we present a simple theory that predicts the existence of an upper bound on dimensionality. This upper bound is inversely proportional to the amount of pair-wise correlations and, compared to a homogeneous network without clusters, it is larger by a factor equal to the number of clusters. The empirical estimation of such bounds depends on the number and duration of trials and is well predicted by the theory. Together, these results provide a framework to analyze neural dimensionality in alert animals, its behavior under stimulus presentation, and its theoretical dependence on ensemble size, number of clusters, and correlations in spiking network models.

  10. Assessment of Mars Atmospheric Temperature Retrievals from the Thermal Emission Spectrometer Radiances

    NASA Technical Reports Server (NTRS)

    Hoffman, Matthew J.; Eluszkiewicz, Janusz; Weisenstein, Deborah; Uymin, Gennady; Moncet, Jean-Luc

    2012-01-01

    Motivated by the needs of Mars data assimilation, particularly quantification of measurement errors and generation of averaging kernels, we have evaluated atmospheric temperature retrievals from Mars Global Surveyor (MGS) Thermal Emission Spectrometer (TES) radiances. Multiple sets of retrievals have been considered in this study: (1) retrievals available from the Planetary Data System (PDS), (2) retrievals based on variants of the retrieval algorithm used to generate the PDS retrievals, and (3) retrievals produced using the Mars 1-Dimensional Retrieval (M1R) algorithm based on the Optimal Spectral Sampling (OSS) forward model. The retrieved temperature profiles are compared to the MGS Radio Science (RS) temperature profiles. For the samples tested, the M1R temperature profiles can be made to agree within 2 K with the RS temperature profiles, but only after tuning the prior and error statistics. Use of a global prior that does not take into account the seasonal dependence leads to errors of up to 6 K. In polar samples, errors relative to the RS temperature profiles are even larger. In these samples, the PDS temperature profiles also exhibit a poor fit with RS temperatures. This fit is worse than reported in previous studies, indicating that the lack of fit is due to a bias correction to TES radiances implemented after 2004. To explain the differences between the PDS and M1R temperatures, the algorithms are compared directly, with the OSS forward model inserted into the PDS algorithm. Factors such as the filtering parameter, the use of linear versus nonlinear constrained inversion, and the choice of the forward model are found to contribute heavily to the differences in the temperature profiles retrieved in the polar regions, resulting in uncertainties of up to 6 K. Even outside the poles, changes in the a priori statistics result in different profile shapes which all fit the radiances within the specified error. The importance of the a priori statistics prevents reliable global retrievals based on a single a priori and strongly implies that a robust science analysis must instead rely on retrievals employing localized a priori information, for example from an ensemble-based data assimilation system such as the Local Ensemble Transform Kalman Filter (LETKF).
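
    The ensemble-space analysis step of such a transform filter can be sketched for a single local region, loosely following the Hunt et al. (2007) LETKF formulation; the dimensions, operators, and data below are synthetic assumptions, and operational systems add covariance localization over many regions plus inflation.

```python
import numpy as np

rng = np.random.default_rng(3)

k, nobs, nstate = 10, 3, 4
xb = rng.standard_normal((nstate, k))   # background ensemble (columns = members)
H = np.eye(nobs, nstate)                # observe the first three state components
R = 0.5 * np.eye(nobs)                  # observation-error covariance
yo = rng.standard_normal(nobs)          # local observations

xb_mean = xb.mean(axis=1, keepdims=True)
Xb = xb - xb_mean                       # background perturbations
Yb = H @ Xb                             # perturbations mapped to observation space
Rinv = np.linalg.inv(R)

# Analysis covariance and mean-update weights in the k-dimensional ensemble space.
Pa = np.linalg.inv((k - 1) * np.eye(k) + Yb.T @ Rinv @ Yb)
w_mean = Pa @ Yb.T @ Rinv @ (yo - (H @ xb_mean).ravel())

# Symmetric square root of (k-1)*Pa transforms the perturbations.
evals, evecs = np.linalg.eigh((k - 1) * Pa)
W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T

xa = xb_mean + Xb @ (w_mean[:, None] + W)   # analysis ensemble

# The analysis mean fits the observations at least as well as the background.
print(np.linalg.norm(yo - H @ xa.mean(axis=1)),
      np.linalg.norm(yo - (H @ xb_mean).ravel()))
```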

  11. A continuum theory of edge dislocations

    NASA Astrophysics Data System (ADS)

    Berdichevsky, V. L.

    2017-09-01

    Continuum theory of dislocations aims to describe the behavior of large ensembles of dislocations. This task is far from completion and, most likely, does not have a "universal solution" applicable to any dislocation ensemble. In this regard it is important to have guiding lines set by benchmark cases, where the transition from a discrete set of dislocations to a continuum description is made rigorously. Two such cases have been considered recently: equilibrium of dislocation walls and screw dislocations in beams. In this paper one more case is studied, equilibrium of a large set of 2D edge dislocations placed randomly in a 2D bounded region. The major characteristic of interest is the energy of the dislocation ensemble, because it determines the structure of the continuum equations. The homogenized energy functional is obtained for periodic dislocation ensembles with a random content of the periodic cell. Parameters of the periodic structure can change slowly over distances of the order of the size of the periodic cells. The energy functional is obtained by the variational-asymptotic method. Equilibrium positions are local minima of energy. The earlier assertion is confirmed that the energy density of the system is the sum of the elastic energy of averaged elastic strains and the microstructure energy, which is the elastic energy of the neutralized dislocation system, i.e. the dislocation system placed in a constant dislocation density field making the averaged dislocation density zero. The computation of energy is reduced to the solution of a variational cell problem. This problem is solved analytically. The solution is used to investigate the stability of simple dislocation arrays, i.e. arrays with one dislocation in the periodic cell. The relations obtained yield two outcomes. First, there is a state parameter of the system, dislocation polarization; averaged stresses affect only dislocation polarization and cannot change other characteristics of the system. Second, the structure of the dislocation phase space is strikingly simple: it splits into a family of subspaces corresponding to constant values of dislocation polarization. In each equipolarization subspace there are many local minima of energy; for zero external stress the system is stuck in a local minimum of energy; for non-zero, slowly changing external stress, dislocation polarization evolves while the system moves over the local energy minima of the equipolarization subspaces. Such a simple picture of dislocation dynamics is due to the presence of two time scales: slow evolution of dislocation polarization and fast motion of the system over local minima of energy. The existence of two time scales is justified for a neutral system of edge dislocations.

  12. Determination of ensemble-average pairwise root mean-square deviation from experimental B-factors.

    PubMed

    Kuzmanic, Antonija; Zagrovic, Bojan

    2010-03-03

    Root mean-square deviation (RMSD) after roto-translational least-squares fitting is a commonly used measure of global structural similarity of macromolecules. On the other hand, experimental x-ray B-factors are used frequently to study local structural heterogeneity and dynamics in macromolecules by providing direct information about root mean-square fluctuations (RMSF) that can also be calculated from molecular dynamics simulations. We provide a mathematical derivation showing that, given a set of conservative assumptions, the root mean-square ensemble-average of an all-against-all distribution of pairwise RMSD for a single molecular species, ⟨RMSD²⟩^(1/2), is directly related to the average B-factor (⟨B⟩) and ⟨RMSF²⟩^(1/2). We show this relationship and explore its limits of validity on a heterogeneous ensemble of structures taken from molecular dynamics simulations of villin headpiece generated using distributed-computing techniques and the Folding@Home cluster. Our results provide a basis for quantifying global structural diversity of macromolecules in crystals directly from x-ray experiments, and we show this on a large set of structures taken from the Protein Data Bank. In particular, we show that the ensemble-average pairwise backbone RMSD for a microscopic ensemble underlying a typical protein x-ray structure is approximately 1.1 Å, under the assumption that the principal contribution to experimental B-factors is conformational variability. 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
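
    Under the stated assumptions (isotropic, purely conformational B-factors), the conversion can be sketched using the standard crystallographic relation RMSF² = 3B/(8π²) together with ⟨RMSD²⟩ = 2⟨RMSF²⟩ for an uncorrelated ensemble; the B-factor values below are illustrative, not taken from the paper.

```python
import numpy as np

def pairwise_rmsd_from_bfactors(b):
    """<RMSD^2>^(1/2) from B-factors, assuming RMSF^2 = 3*B/(8*pi^2)
    (isotropic B) and <RMSD^2> = 2*<RMSF^2> (uncorrelated ensemble)."""
    rmsf_sq = 3.0 * np.asarray(b) / (8.0 * np.pi ** 2)
    return np.sqrt(2.0 * rmsf_sq.mean())

# Illustrative backbone B-factors (A^2) near a typical protein average.
b_factors = np.array([12.0, 15.0, 18.0, 16.0, 14.0, 20.0])

print(f"{pairwise_rmsd_from_bfactors(b_factors):.2f} A")   # close to ~1.1 A
```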

  13. Determination of Ensemble-Average Pairwise Root Mean-Square Deviation from Experimental B-Factors

    PubMed Central

    Kuzmanic, Antonija; Zagrovic, Bojan

    2010-01-01

    Root mean-square deviation (RMSD) after roto-translational least-squares fitting is a commonly used measure of global structural similarity of macromolecules. On the other hand, experimental x-ray B-factors are used frequently to study local structural heterogeneity and dynamics in macromolecules by providing direct information about root mean-square fluctuations (RMSF) that can also be calculated from molecular dynamics simulations. We provide a mathematical derivation showing that, given a set of conservative assumptions, the root mean-square ensemble-average of an all-against-all distribution of pairwise RMSD for a single molecular species, ⟨RMSD²⟩^(1/2), is directly related to the average B-factor (⟨B⟩) and ⟨RMSF²⟩^(1/2). We show this relationship and explore its limits of validity on a heterogeneous ensemble of structures taken from molecular dynamics simulations of villin headpiece generated using distributed-computing techniques and the Folding@Home cluster. Our results provide a basis for quantifying global structural diversity of macromolecules in crystals directly from x-ray experiments, and we show this on a large set of structures taken from the Protein Data Bank. In particular, we show that the ensemble-average pairwise backbone RMSD for a microscopic ensemble underlying a typical protein x-ray structure is ∼1.1 Å, under the assumption that the principal contribution to experimental B-factors is conformational variability. PMID:20197040

  14. Avalanching strain dynamics during the hydriding phase transformation in individual palladium nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulvestad, A.; Welland, M. J.; Collins, S. S. E.

    2015-12-11

    Phase transitions in reactive environments are crucially important in energy and information storage, catalysis and sensors. Nanostructuring active particles can yield faster charging/discharging kinetics, increased lifespan and record catalytic activities. However, establishing the causal link between structure and function is challenging for nanoparticles, as ensemble measurements convolve intrinsic single-particle properties with sample diversity. Here we study the hydriding phase transformation in individual palladium nanocubes in situ using coherent X-ray diffractive imaging. The phase transformation dynamics, which involve the nucleation and propagation of a hydrogen-rich region, are dependent on absolute time (aging) and involve intermittent dynamics (avalanching). A hydrogen-rich surface layer dominates the crystal strain in the hydrogen-poor phase, while strain inversion occurs at the cube corners in the hydrogen-rich phase. A three-dimensional phase-field model is used to interpret the experimental results. In conclusion, our experimental and theoretical approach provides a general framework for designing and optimizing phase transformations for single nanocrystals in reactive environments.

  15. Avalanching strain dynamics during the hydriding phase transformation in individual palladium nanoparticles

    NASA Astrophysics Data System (ADS)

    Ulvestad, A.; Welland, M. J.; Collins, S. S. E.; Harder, R.; Maxey, E.; Wingert, J.; Singer, A.; Hy, S.; Mulvaney, P.; Zapol, P.; Shpyrko, O. G.

    2015-12-01

    Phase transitions in reactive environments are crucially important in energy and information storage, catalysis and sensors. Nanostructuring active particles can yield faster charging/discharging kinetics, increased lifespan and record catalytic activities. However, establishing the causal link between structure and function is challenging for nanoparticles, as ensemble measurements convolve intrinsic single-particle properties with sample diversity. Here we study the hydriding phase transformation in individual palladium nanocubes in situ using coherent X-ray diffractive imaging. The phase transformation dynamics, which involve the nucleation and propagation of a hydrogen-rich region, are dependent on absolute time (aging) and involve intermittent dynamics (avalanching). A hydrogen-rich surface layer dominates the crystal strain in the hydrogen-poor phase, while strain inversion occurs at the cube corners in the hydrogen-rich phase. A three-dimensional phase-field model is used to interpret the experimental results. Our experimental and theoretical approach provides a general framework for designing and optimizing phase transformations for single nanocrystals in reactive environments.

  16. Avalanching strain dynamics during the hydriding phase transformation in individual palladium nanoparticles

    PubMed Central

    Ulvestad, A.; Welland, M. J.; Collins, S. S. E.; Harder, R.; Maxey, E.; Wingert, J.; Singer, A.; Hy, S.; Mulvaney, P.; Zapol, P.; Shpyrko, O. G.

    2015-01-01

    Phase transitions in reactive environments are crucially important in energy and information storage, catalysis and sensors. Nanostructuring active particles can yield faster charging/discharging kinetics, increased lifespan and record catalytic activities. However, establishing the causal link between structure and function is challenging for nanoparticles, as ensemble measurements convolve intrinsic single-particle properties with sample diversity. Here we study the hydriding phase transformation in individual palladium nanocubes in situ using coherent X-ray diffractive imaging. The phase transformation dynamics, which involve the nucleation and propagation of a hydrogen-rich region, are dependent on absolute time (aging) and involve intermittent dynamics (avalanching). A hydrogen-rich surface layer dominates the crystal strain in the hydrogen-poor phase, while strain inversion occurs at the cube corners in the hydrogen-rich phase. A three-dimensional phase-field model is used to interpret the experimental results. Our experimental and theoretical approach provides a general framework for designing and optimizing phase transformations for single nanocrystals in reactive environments. PMID:26655832

  17. Entanglement entropy in a one-dimensional disordered interacting system: the role of localization.

    PubMed

    Berkovits, Richard

    2012-04-27

The properties of the entanglement entropy (EE) in one-dimensional disordered interacting systems are studied. Anderson localization leaves a clear signature on the average EE, as it saturates once the system size exceeds the localization length. This is verified by numerically calculating the EE for an ensemble of disordered realizations using the density matrix renormalization group method. A heuristic expression describing the dependence of the EE on the localization length, which takes into account finite-size effects, is proposed. This is used to extract the localization length as a function of the interaction strength. The dependence of the localization length on the interaction strength fits nicely with expectations.

  18. Transient Calibration of a Variably-Saturated Groundwater Flow Model By Iterative Ensemble Smoothing: Synthetic Case and Application to the Flow Induced During Shaft Excavation and Operation of the Bure Underground Research Laboratory

    NASA Astrophysics Data System (ADS)

    Lam, D. T.; Kerrou, J.; Benabderrahmane, H.; Perrochet, P.

    2017-12-01

The calibration of groundwater flow models in transient state can be motivated by the expected improved characterization of the aquifer hydraulic properties, especially when supported by a rich transient dataset. In the prospect of setting up a calibration strategy for a variably-saturated transient groundwater flow model of the area around ANDRA's Bure Underground Research Laboratory, we wish to take advantage of the long hydraulic head and flowrate time series collected near and at the access shafts in order to help inform the model hydraulic parameters. A promising inverse approach for such a high-dimensional nonlinear model, whose applicability has been illustrated more extensively in other scientific fields, is an iterative ensemble smoother algorithm initially developed for a reservoir engineering problem. Furthermore, the ensemble-based stochastic framework allows us to address to some extent the uncertainty of the calibration for a subsequent analysis of a flow-process-dependent prediction. By assimilating all available data in a single batch, this method iteratively updates each member of an initial ensemble of stochastic realizations of parameters until an objective function is minimized. However, as is well known for ensemble-based Kalman methods, this correction, computed from approximations of covariance matrices, is most efficient when the ensemble realizations are multi-Gaussian. As shown by the comparison of the updated ensemble mean obtained for our simplified synthetic model of 2D vertical flow by using either multi-Gaussian or multipoint simulations of parameters, the ensemble smoother fails to preserve the initial connectivity of the facies and the bimodal parameter distribution. 
Given the geological structures depicted by the multi-layered geological model built for the real case, our goal is to determine how best to leverage the performance of the ensemble smoother while using an initial ensemble of conditional multi-Gaussian or multipoint simulations that is as conceptually consistent as possible. The performance of the algorithm with additional steps to help mitigate the effects of non-Gaussian patterns, such as Gaussian anamorphosis or resampling of facies from the training image using updated local probability constraints, will be assessed.
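The iterative ensemble smoother family referred to above can be illustrated by the update step of ES-MDA (ensemble smoother with multiple data assimilation, Emerick and Reynolds 2013). This is a hedged sketch with our own function name and a toy linear forward model, not the authors' implementation:

```python
import numpy as np

def esmda_step(M, D, d_obs, Cd, alpha, rng):
    """One ES-MDA iteration: update parameter ensemble M (n_m, Ne) using
    predicted data D (n_d, Ne), observations d_obs (n_d,), observation-error
    covariance Cd, and an inflation coefficient alpha (the alphas over all
    iterations should satisfy sum(1/alpha_i) = 1)."""
    Ne = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    Cmd = dM @ dD.T / (Ne - 1)          # parameter/data cross-covariance
    Cdd = dD @ dD.T / (Ne - 1)          # data auto-covariance
    K = Cmd @ np.linalg.inv(Cdd + alpha * Cd)
    # perturb the observations with inflated noise, one draw per member
    noise = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * Cd, size=Ne).T
    return M + K @ (d_obs[:, None] + noise - D)
```

With a linear forward model the single update already pulls the ensemble mean toward the truth; the multi-Gaussian assumption discussed in the abstract enters through the covariance estimates `Cmd` and `Cdd`.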

  19. LDFT-based watermarking resilient to local desynchronization attacks.

    PubMed

    Tian, Huawei; Zhao, Yao; Ni, Rongrong; Qin, Lunming; Li, Xuelong

    2013-12-01

Designing a watermarking scheme that is robust against desynchronization attacks (DAs) remains a grand challenge. Most image watermarking resynchronization schemes in the literature can survive individual global DAs (e.g., rotation, scaling, translation, and other affine transforms), but few are resilient to challenging cropping and local DAs. The main reason is that the robust features used for watermark synchronization are only globally, not locally, invariant. In this paper, we present a blind image watermarking resynchronization scheme robust to local transform attacks. First, we propose a new feature transform named local daisy feature transform (LDFT), which is both globally and locally invariant. Then, a binary space partitioning (BSP) tree is used to partition the geometrically invariant LDFT space. In the BSP tree, the location of each pixel is fixed under global transforms, local transforms, and cropping. Lastly, the watermark sequence is embedded bit by bit into each leaf node of the BSP tree using the logarithmic quantization index modulation (QIM) embedding method. Simulation results show that the proposed watermarking scheme can survive numerous kinds of distortions, including common image-processing attacks, local and global DAs, and noninvertible cropping.
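The embedding primitive underlying the quantization index modulation step can be sketched as follows. This is the standard plain (non-logarithmic) QIM, shown as a simplified stand-in for the paper's logarithmic variant; the function names are illustrative:

```python
def qim_embed(x, bit, delta=1.0):
    """Embed one bit in scalar x by quantizing it onto a lattice of step
    delta whose offset (0 or delta/2) encodes the bit."""
    offset = bit * delta / 2.0
    return round((x - offset) / delta) * delta + offset

def qim_decode(x, delta=1.0):
    """Recover the bit whose quantizer lattice lies closest to x."""
    return min((abs(x - qim_embed(x, b, delta)), b) for b in (0, 1))[1]
```

Because the two lattices are delta/2 apart, decoding tolerates any perturbation smaller than delta/4, which is what makes the scheme robust to mild distortion of the host values.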

  20. Total probabilities of ensemble runoff forecasts

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2017-04-01

Ensemble forecasting has a long history in meteorological modelling as an indication of forecast uncertainty. However, it is necessary to calibrate and post-process the ensembles, as they often exhibit both bias and dispersion errors. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters varying in space and time while giving a spatially and temporally consistent output. However, their method is computationally too complex for our larger number of stations, which makes it unsuitable for our purpose. Our post-processing method for the ensembles is developed in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu), where we make forecasts for the whole of Europe based on observations from around 700 catchments. As the target is flood forecasting, we are more interested in improving the forecast skill for high flows than in a good prediction of the entire flow regime. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different meteorological forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we now post-process all model outputs to estimate the total probability, the post-processed mean and the uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, but we add a spatial penalty in the calibration process to force a spatial correlation of the parameters. 
The penalty takes distance, stream-connectivity and size of the catchment areas into account. This can in some cases have a slight negative impact on the calibration error, but avoids large differences between parameters of nearby locations, whether stream connected or not. The spatial calibration also makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecasts skills in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.
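The EMOS approach cited above (Gneiting et al., 2005) fits a Gaussian predictive distribution N(a + b*m, c + d*s^2) from the ensemble mean m and variance s^2 by minimum CRPS estimation. A rough sketch with our own function names, omitting the spatial penalty described in the abstract:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) forecast against observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))

def fit_emos(ens_mean, ens_var, obs):
    """Fit the EMOS coefficients (a, b, c, d) of N(a + b*mean, c + d*var)
    by minimizing the mean CRPS over a training set."""
    def loss(p):
        a, b, c, d = p
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))  # keep variance positive
        return crps_normal(a + b * ens_mean, sigma, obs).mean()
    res = minimize(loss, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
    return res.x
```

Fitting on past forecast/observation pairs and applying the coefficients to new ensembles corrects both the bias (via a, b) and the dispersion error (via c, d) mentioned in the abstract.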

  1. Conformational Fluctuations in G-Protein-Coupled Receptors

    NASA Astrophysics Data System (ADS)

    Brown, Michael F.

    2014-03-01

G-protein-coupled receptors (GPCRs) comprise almost 50% of pharmaceutical drug targets, where rhodopsin is an important prototype and occurs naturally in a lipid membrane. Rhodopsin photoactivation entails 11-cis to all-trans isomerization of the retinal cofactor, yielding an equilibrium between inactive Meta-I and active Meta-II states. Two important questions are: (1) Is rhodopsin a simple two-state switch? Or (2) does isomerization of retinal unlock an activated conformational ensemble? For an ensemble-based activation mechanism (EAM), a role for conformational fluctuations is clearly indicated. Solid-state NMR data together with theoretical molecular dynamics (MD) simulations detect increased local mobility of retinal after light activation. Resultant changes in local dynamics of the cofactor initiate large-scale fluctuations of transmembrane helices that expose recognition sites for the signal-transducing G-protein. Time-resolved FTIR studies and electronic spectroscopy further show the conformational ensemble is strongly biased by the membrane lipid composition, as well as pH and osmotic pressure. A new flexible surface model (FSM) describes how the curvature stress field of the membrane governs the energetics of active rhodopsin, due to the spontaneous monolayer curvature of the lipids. Furthermore, influences of osmotic pressure dictate that a large number of bulk water molecules are implicated in rhodopsin activation. Around 60 bulk water molecules activate rhodopsin, far more than the number of structural waters seen in X-ray crystallography, or inferred from studies of bulk hydrostatic pressure. Conformational selection and promoting vibrational motions of rhodopsin lead to activation of the G-protein (transducin). Our biophysical data suggest a paradigm shift in understanding GPCR activation. 
The new view is: dynamics and conformational fluctuations involve an ensemble of substates that activate the cognate G-protein in the amplified visual response.

  2. Transient ensemble dynamics in time-independent galactic potentials

    NASA Astrophysics Data System (ADS)

    Mahon, M. Elaine; Abernathy, Robert A.; Bradley, Brendan O.; Kandrup, Henry E.

    1995-07-01

This paper summarizes a numerical investigation of the short-time, possibly transient, behaviour of ensembles of stochastic orbits evolving in fixed non-integrable potentials, with the aim of deriving insights into the structure and evolution of galaxies. The simulations involved three different two-dimensional potentials, quite different in appearance. However, despite these differences, ensembles in all three potentials exhibit similar behaviour. This suggests that the conclusions inferred from the simulations are robust, relying only on basic topological properties, e.g., the existence of KAM tori and cantori. Generic ensembles of initial conditions, corresponding to stochastic orbits, exhibit a rapid coarse-grained approach towards a near-invariant distribution on a time-scale ≪ t_H, although various perturbations, external and/or internal, can drastically accelerate this process. A principal tool in the analysis is the notion of a local Liapounov exponent, which provides a statistical characterization of the overall instability of stochastic orbits over finite time intervals. In particular, there is a precise sense in which confined stochastic orbits are less unstable, with smaller local Liapounov exponents, than are unconfined stochastic orbits.

  3. Data assimilation experiment of precipitable water vapor observed by a hyper-dense GNSS receiver network using a nested NHM-LETKF system

    NASA Astrophysics Data System (ADS)

    Oigawa, Masanori; Tsuda, Toshitaka; Seko, Hiromu; Shoji, Yoshinori; Realini, Eugenio

    2018-05-01

We studied the assimilation of high-resolution precipitable water vapor (PWV) data derived from a hyper-dense global navigation satellite system network around Uji city, Kyoto, Japan, which had a mean inter-station distance of about 1.7 km. We focused on a heavy rainfall event that occurred on August 13-14, 2012, around Uji city. We employed a local ensemble transform Kalman filter as the data assimilation method. The inhomogeneity of the observed PWV increased on a scale of less than 10 km in advance of the actual rainfall detected by the rain gauge. Zenith wet delay data observed by the Uji network showed that the characteristic length scale of water vapor distribution during the rainfall ranged from 1.9 to 3.5 km. It is suggested that the assimilation of PWV data with high horizontal resolution (a few km) improves the forecast accuracy. We conducted an assimilation experiment with high-resolution PWV data, using both small horizontal localization radii and a conventional horizontal localization radius. We repeated the sensitivity experiment, changing the mean horizontal spacing of the PWV data from 1.7 to 8.0 km. When the horizontal spacing of assimilated PWV data was decreased from 8.0 to 3.5 km, the accuracy of the simulated hourly rainfall amount worsened in the experiment that used the conventional localization radius for the assimilation of PWV. In contrast, the accuracy of hourly rainfall amounts improved when we applied small horizontal localization radii. In the experiment that used the small horizontal localization radii, the accuracy of the hourly rainfall amount was most improved when the horizontal resolution of the assimilated PWV data was 3.5 km. The optimum spatial resolution of PWV data was related to the characteristic length scale of water vapor variability.
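The analysis step of a local ensemble transform Kalman filter, applied independently within each localization region, can be sketched for a single local domain with a linear observation operator. This follows the standard Hunt et al. (2007) formulation, not the authors' NHM-LETKF code, and the function name is ours:

```python
import numpy as np

def letkf_analysis(X, y, H, R, inflation=1.0):
    """One LETKF analysis step for a single local domain.
    X : (n, k) forecast ensemble of n state variables, k members
    y : (p,) local observations
    H : (p, n) linear observation operator
    R : (p, p) observation-error covariance"""
    n, k = X.shape
    xb = X.mean(axis=1)
    Xp = (X - xb[:, None]) * np.sqrt(inflation)    # inflated perturbations
    Yb = H @ X
    yb = Yb.mean(axis=1)
    Yp = Yb - yb[:, None]
    C = Yp.T @ np.linalg.inv(R)                    # (k, p)
    Pa = np.linalg.inv((k - 1) * np.eye(k) + C @ Yp)
    wbar = Pa @ C @ (y - yb)                       # mean-update weights
    # symmetric square root of (k-1) * Pa gives the perturbation weights
    evals, evecs = np.linalg.eigh((k - 1) * Pa)
    Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return xb[:, None] + Xp @ (Wa + wbar[:, None])  # analysis ensemble (n, k)
```

In the full scheme this update runs once per grid point, with the observation selection and error weighting controlled by the horizontal localization radius discussed in the abstract.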

  4. Intercalation pathway in many-particle LiFePO4 electrode revealed by nanoscale state-of-charge mapping.

    PubMed

    Chueh, William C; El Gabaly, Farid; Sugar, Joshua D; Bartelt, Norman C; McDaniel, Anthony H; Fenton, Kyle R; Zavadil, Kevin R; Tyliszczak, Tolek; Lai, Wei; McCarty, Kevin F

    2013-03-13

    The intercalation pathway of lithium iron phosphate (LFP) in the positive electrode of a lithium-ion battery was probed at the ∼40 nm length scale using oxidation-state-sensitive X-ray microscopy. Combined with morphological observations of the same exact locations using transmission electron microscopy, we quantified the local state-of-charge of approximately 450 individual LFP particles over nearly the entire thickness of the porous electrode. With the electrode charged to 50% state-of-charge in 0.5 h, we observed that the overwhelming majority of particles were either almost completely delithiated or lithiated. Specifically, only ∼2% of individual particles were at an intermediate state-of-charge. From this small fraction of particles that were actively undergoing delithiation, we conclude that the time needed to charge a particle is ∼1/50 the time needed to charge the entire particle ensemble. Surprisingly, we observed a very weak correlation between the sequence of delithiation and the particle size, contrary to the common expectation that smaller particles delithiate before larger ones. Our quantitative results unambiguously confirm the mosaic (particle-by-particle) pathway of intercalation and suggest that the rate-limiting process of charging is initiating the phase transformation by, for example, a nucleation-like event. Therefore, strategies for further enhancing the performance of LFP electrodes should not focus on increasing the phase-boundary velocity but on the rate of phase-transformation initiation.

  5. A Framework to Analyze the Performance of Load Balancing Schemes for Ensembles of Stochastic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Tae-Hyuk; Sandu, Adrian; Watson, Layne T.

    2015-08-01

Ensembles of simulations are employed to estimate the statistics of possible future states of a system, and are widely used in important applications such as climate change and biological modeling. Ensembles of runs can naturally be executed in parallel. However, when the CPU times of individual simulations vary considerably, a simple strategy of assigning an equal number of tasks per processor can lead to serious work imbalances and low parallel efficiency. This paper presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms for ensembles of simulations where many tasks are mapped onto each processor, and where the individual compute times vary considerably among tasks. Four load balancing strategies are discussed: most-dividing, all-redistribution, random-polling, and neighbor-redistribution. Simulation results with a stochastic budding yeast cell cycle model are consistent with the theoretical analysis. Significantly, a global decrease in load imbalance can be proved for the local rebalancing algorithms, which matters because of the scalability concerns that affect global rebalancing algorithms. The overall simulation time is reduced by up to 25%, and the total processor idle time by 85%.
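The core imbalance problem above can be illustrated with a toy makespan comparison (our own simplification; the paper's four strategies are more elaborate): static equal-count assignment versus a dynamic central queue that always feeds the least-loaded processor.

```python
import numpy as np

def static_makespan(times, nproc):
    """Assign an equal-sized contiguous block of tasks to each processor
    and return the resulting makespan (the busiest processor's total time)."""
    chunks = np.array_split(np.asarray(times), nproc)
    return max(chunk.sum() for chunk in chunks)

def dynamic_makespan(times, nproc):
    """Central work queue: each task goes to the currently least-loaded
    processor, mimicking dynamic rebalancing."""
    loads = np.zeros(nproc)
    for t in times:
        loads[np.argmin(loads)] += t
    return loads.max()
```

For example, with `times = [5, 1, 1, 1, 1, 1, 1, 1]` and two processors, the static split yields a makespan of 8 while the dynamic queue yields 6, even though both assign the same total work.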

  6. A Thermal Physiological Comparison of Two HazMat Protective Ensembles With and Without Active Convective Cooling

    NASA Technical Reports Server (NTRS)

    Williamson, Rebecca; Carbo, Jorge; Luna, Bernadette; Webbon, Bruce W.

    1998-01-01

    Wearing impermeable garments for hazardous materials clean up can often present a health and safety problem for the wearer. Even short duration clean up activities can produce heat stress injuries in hazardous materials workers. It was hypothesized that an internal cooling system might increase worker productivity and decrease likelihood of heat stress injuries in typical HazMat operations. Two HazMat protective ensembles were compared during treadmill exercise. The different ensembles were created using two different suits: a Trelleborg VPS suit representative of current HazMat suits and a prototype suit developed by NASA engineers. The two life support systems used were a current technology Interspiro Spirolite breathing apparatus and a liquid air breathing system that also provided convective cooling. Twelve local members of a HazMat team served as test subjects. They were fully instrumented to allow a complete physiological comparison of their thermal responses to the different ensembles. Results showed that cooling from the liquid air system significantly decreased thermal stress. The results of the subjective evaluations of new design features in the prototype suit were also highly favorable. Incorporation of these new design features could lead to significant operational advantages in the future.

  7. Ensemble stacking mitigates biases in inference of synaptic connectivity.

    PubMed

    Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N

    2018-01-01

    A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
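The linear-combination ensemble described above can be sketched with least-squares stacking (an illustrative stand-in with our own function name; the paper's weighting procedure may differ): each inference method produces a score per candidate connection, and the stack learns weights that best match ground truth.

```python
import numpy as np

def fit_stack(scores, truth):
    """Least-squares stacking: weight each inference method's score so the
    linear combination (plus intercept) best matches ground-truth
    connectivity labels (0/1). Returns (weights, stacked predictions)."""
    A = np.column_stack([np.ones(len(truth))] + list(scores))
    w, *_ = np.linalg.lstsq(A, truth, rcond=None)
    return w, A @ w
```

Because the stacked model nests each single-method model, its training error can never exceed that of any individual method, which is one way the ensemble mitigates method-specific biases.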

  8. Near-optimal protocols in complex nonequilibrium transformations

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Crooks, Gavin E.; ...

    2016-08-29

The development of sophisticated experimental means to control nanoscale systems has motivated efforts to design driving protocols that minimize the energy dissipated to the environment. Computational models are a crucial tool in this practical challenge. In this paper, we describe a general method for sampling an ensemble of finite-time, nonequilibrium protocols biased toward a low average dissipation. In addition, we show that this scheme can be carried out very efficiently in several limiting cases. As an application, we sample the ensemble of low-dissipation protocols that invert the magnetization of a 2D Ising model and explore how the diversity of the protocols varies in response to constraints on the average dissipation. In this example, we find that there is a large set of protocols with average dissipation close to the optimal value, which we argue is a general phenomenon.

  9. An Ensemble Multilabel Classification for Disease Risk Prediction

    PubMed Central

    Liu, Wei; Zhao, Hongling; Zhang, Chaoyang

    2017-01-01

It is important to identify and prevent disease risk as early as possible through regular physical examinations. We formulate disease risk prediction as a multilabel classification problem. A novel Ensemble Label Power-set Pruned datasets Joint Decomposition (ELPPJD) method is proposed in this work. First, we transform the multilabel classification into a multiclass classification. Then, we propose the pruned datasets and joint decomposition methods to deal with the imbalanced learning problem. Two strategies, size balanced (SB) and label similarity (LS), are designed to decompose the training dataset. In the experiments, the dataset comes from real physical examination records. We contrast the performance of the ELPPJD method under the two decomposition strategies. Moreover, a comparison between ELPPJD and the classic multilabel classification methods RAkEL and HOMER is carried out. The experimental results show that the ELPPJD method with the label similarity strategy has outstanding performance. PMID:29065647
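The first step above, turning a multilabel problem into a multiclass one, is the label powerset transformation: each distinct combination of labels becomes its own class. An illustrative sketch (ELPPJD's pruning and joint decomposition steps are not shown):

```python
def label_powerset(Y):
    """Transform a multilabel indicator matrix (rows of 0/1 flags) into
    multiclass targets by mapping each distinct label combination to a
    single class id. Returns (class ids, combination -> id mapping)."""
    mapping = {}
    classes = []
    for row in Y:
        key = tuple(row)
        if key not in mapping:            # first time this combination appears
            mapping[key] = len(mapping)
        classes.append(mapping[key])
    return classes, mapping
```

The number of resulting classes grows with the number of observed label combinations, which is why the pruning and decomposition steps in the abstract are needed to keep the multiclass problem balanced.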

  10. Fast Computation of Solvation Free Energies with Molecular Density Functional Theory: Thermodynamic-Ensemble Partial Molar Volume Corrections.

    PubMed

    Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel

    2014-06-05

    Molecular density functional theory (MDFT) offers an efficient implicit-solvent method to estimate molecule solvation free-energies, whereas conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free-energies computed for a data set of 500 organic compounds are of similar quality as those obtained from molecular dynamics free-energy perturbation simulations, with a computer cost reduced by 2-3 orders of magnitude. This requires to introduce the proper partial volume correction to transform the results from the grand canonical to the isobaric-isotherm ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to empirical partial molar volume corrections that have been proposed recently.

  11. An Ensemble Deep Convolutional Neural Network Model with Improved D-S Evidence Fusion for Bearing Fault Diagnosis.

    PubMed

    Li, Shaobo; Liu, Guokai; Tang, Xianghong; Lu, Jianguang; Hu, Jianjun

    2017-07-28

Intelligent machine health monitoring and fault diagnosis are becoming increasingly important for modern manufacturing industries. Current fault diagnosis approaches mostly depend on expert-designed features for building prediction models. In this paper, we propose IDSCNN, a novel bearing fault diagnosis algorithm based on ensemble deep convolutional neural networks and an improved Dempster-Shafer theory based evidence fusion. The convolutional neural networks take the root mean square (RMS) maps of the FFT (Fast Fourier Transform) features of the vibration signals from two sensors as inputs. The improved D-S evidence theory is implemented via a distance matrix of the evidences and a modified Gini index. Extensive evaluations of IDSCNN on the Case Western Reserve Dataset showed that our algorithm can achieve better fault diagnosis performance than existing machine learning methods by fusing complementary or conflicting evidences from different models and sensors and adapting to different load conditions.
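The evidence-fusion step rests on Dempster's rule of combination. A minimal sketch with hypothetical fault hypotheses (the paper's distance-matrix and Gini-index modifications are not reproduced):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments,
    given as dicts mapping frozenset hypotheses to mass values."""
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:                       # compatible evidence reinforces
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:                           # incompatible evidence is conflict
                conflict += a * b
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    # renormalize by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

For example, two sensors that each put 0.6/0.5 of their mass on an "inner race" fault and the rest on an undecided {inner, outer} set combine to 0.8 mass on the inner-race fault, illustrating how agreeing evidences reinforce each other.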

  12. An Ensemble Deep Convolutional Neural Network Model with Improved D-S Evidence Fusion for Bearing Fault Diagnosis

    PubMed Central

    Li, Shaobo; Liu, Guokai; Tang, Xianghong; Lu, Jianguang

    2017-01-01

Intelligent machine health monitoring and fault diagnosis are becoming increasingly important for modern manufacturing industries. Current fault diagnosis approaches mostly depend on expert-designed features for building prediction models. In this paper, we propose IDSCNN, a novel bearing fault diagnosis algorithm based on ensemble deep convolutional neural networks and an improved Dempster–Shafer theory based evidence fusion. The convolutional neural networks take the root mean square (RMS) maps of the FFT (Fast Fourier Transform) features of the vibration signals from two sensors as inputs. The improved D-S evidence theory is implemented via a distance matrix of the evidences and a modified Gini index. Extensive evaluations of IDSCNN on the Case Western Reserve Dataset showed that our algorithm can achieve better fault diagnosis performance than existing machine learning methods by fusing complementary or conflicting evidences from different models and sensors and adapting to different load conditions. PMID:28788099

  13. A Local Forecast of Land Surface Wetness Conditions, Drought, and St. Louis Encephalitis Virus Transmission Derived from Seasonal Climate Predictions

    NASA Astrophysics Data System (ADS)

    Shaman, J.; Stieglitz, M.; Zebiak, S.; Cane, M.; Day, J. F.

    2002-12-01

We present an ensemble local hydrologic forecast derived from the seasonal forecasts of the International Research Institute for Climate Prediction (IRI). Three-month seasonal forecasts were used to resample historical meteorological conditions and generate ensemble forcing datasets for a TOPMODEL-based hydrology model. Eleven retrospective forecasts were run at a Florida site and a New York site. Forecast skill was assessed for mean-area modeled water table depth (WTD), i.e., near-surface soil wetness conditions, and compared with WTD simulated with observed data. Hydrology model forecast skill was evident at the Florida site but not at the New York site. At the Florida site, persistence of hydrologic conditions and local skill of the IRI seasonal forecast contributed to the local hydrologic forecast skill. This forecast will permit probabilistic prediction of future hydrologic conditions. At the Florida site, we have also quantified the link between modeled WTD (i.e. drought) and the amplification and transmission of St. Louis Encephalitis virus (SLEV). We derive an empirical relationship between modeled land surface wetness and levels of SLEV transmission associated with human clinical cases. We then combine the seasonal forecasts of local, modeled WTD with this empirical relationship and produce retrospective probabilistic seasonal forecasts of epidemic SLEV transmission in Florida. Epidemic SLEV transmission forecast skill is demonstrated. These findings will permit real-time forecast of drought and resultant SLEV transmission in Florida.

  14. Local collective motion analysis for multi-probe dynamic imaging and microrheology

    NASA Astrophysics Data System (ADS)

    Khan, Manas; Mason, Thomas G.

    2016-08-01

    Dynamical artifacts, such as mechanical drift, advection, and hydrodynamic flow, can adversely affect multi-probe dynamic imaging and passive particle-tracking microrheology experiments. Alternatively, active driving by molecular motors can cause interesting non-Brownian motion of probes in local regions. Existing drift-correction techniques, which require large ensembles of probes or fast temporal sampling, are inadequate for handling complex spatio-temporal drifts and non-Brownian motion of localized domains containing relatively few probes. Here, we report an analytical method based on local collective motion (LCM) analysis of as few as two probes for detecting the presence of non-Brownian motion and for accurately eliminating it to reveal the underlying Brownian motion. By calculating an ensemble-average, time-dependent, LCM mean square displacement (MSD) of two or more localized probes and comparing this MSD to constituent single-probe MSDs, we can identify temporal regimes during which either thermal or athermal motion dominates. Single-probe motion, when referenced relative to the moving frame attached to the multi-probe LCM trajectory, provides a true Brownian MSD after scaling by an appropriate correction factor that depends on the number of probes used in LCM analysis. We show that LCM analysis can be used to correct many different dynamical artifacts, including spatially varying drifts, gradient flows, cell motion, time-dependent drift, and temporally varying oscillatory advection, thereby offering a significant improvement over existing approaches.
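The LCM correction can be sketched for the simplest case of a drift common to all probes: subtract the ensemble-mean trajectory, compute the MSD in the moving frame, and rescale by N/(N-1) to undo the fraction of each probe's own Brownian motion removed along with the mean. This is our toy implementation, assuming identical probes and a perfectly shared drift:

```python
import numpy as np

def lcm_corrected_msd(traj, lag):
    """traj: (N, T, d) probe positions. Subtract the local collective motion
    (the ensemble-mean trajectory), compute the time-averaged MSD at `lag`
    in the moving frame, and rescale by N/(N-1)."""
    N = traj.shape[0]
    rel = traj - traj.mean(axis=0, keepdims=True)   # moving-frame coordinates
    disp = rel[:, lag:, :] - rel[:, :-lag, :]
    return (disp ** 2).sum(axis=-1).mean() * N / (N - 1)
```

With unit-variance 2D random-walk steps the true MSD at lag tau is 2*tau; adding a shared linear drift inflates the raw MSD by (v*tau)^2 but leaves the LCM-corrected estimate unchanged, which is the sense in which the method reveals the underlying Brownian motion.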

  15. The correlation of local deformation and stress-assisted local phase transformations in MMC foams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berek, H., E-mail: harry.berek@ikgb.tu-freiberg.de; Ballaschk, U.; Aneziris, C.G.

    2015-09-15

Cellular structures are of growing interest for industry, and are of particular importance for lightweight applications. In this paper, a special case of metal matrix composite foams (MMCs) is investigated. The investigated foams are composed of austenitic steel exhibiting transformation induced plasticity (TRIP) and magnesia partially stabilized zirconia (Mg-PSZ). Both components exhibit martensitic phase transformation during deformation, thus generating the potential for improved mechanical properties such as strength, ductility, and energy absorption capability. The aim of these investigations was to show that stress-assisted phase transformations within the ceramic reinforcement correspond to strong local deformation, and to determine whether they can trigger martensitic phase transformations in the steel matrix. To this end, in situ interrupted compression experiments were performed in an X-ray computed tomography device (XCT). By using a recently developed registration algorithm, local deformation could be calculated and regions of interest could be defined. Corresponding cross sections were prepared and used to analyze the local phase composition by electron backscatter diffraction (EBSD). The results show a strong correlation between local deformation and phase transformation. - Highlights: • In situ compressive deformation of MMC foams was performed in an XCT. • Local deformation fields and their gradient amplitudes were estimated. • Cross sections were manufactured containing defined regions of interest. • Local EBSD phase analysis was performed. • Local deformation and local phase transformation are correlated.

  16. Ideas for a pattern-oriented approach towards a VERA analysis ensemble

    NASA Astrophysics Data System (ADS)

    Gorgas, T.; Dorninger, M.

    2010-09-01

    For many applications in meteorology, and especially for verification purposes, it is important to have information about the uncertainties of observation and analysis data. A high quality of these "reference data" is an absolute necessity, as the uncertainties are reflected in verification measures. The VERA (Vienna Enhanced Resolution Analysis) scheme includes a sophisticated quality control tool which corrects observational data and provides an estimate of the observation uncertainty; it is crucial for meteorologically and physically reliable analysis fields. VERA is based on a variational principle and does not need any first-guess fields. It is therefore independent of NWP models and can also be used as an unbiased reference for real-time model verification. For downscaling purposes VERA uses a priori knowledge of small-scale physical processes over complex terrain, the so-called "fingerprint technique", which transfers information from data-rich to data-sparse regions. The enhanced joint D-PHASE and COPS data set forms the database for the analysis ensemble study. For the WWRP projects D-PHASE and COPS, a joint activity has been started to collect GTS and non-GTS data from the national and regional meteorological services in Central Europe for 2007. Data from more than 11,000 stations are available for high-resolution analyses. The use of random numbers as perturbations for ensemble experiments is a common approach in meteorology. In most implementations, as in NWP-model ensemble systems, the focus lies on error growth and propagation on spatial and temporal scales. When defining errors in analysis fields, we have to consider that analyses are not time dependent and that no perturbation method aimed at temporal evolution is possible.
Furthermore, the method applied should respect the two major sources of analysis error: observation errors and analysis (interpolation) errors. With the concept of an analysis ensemble we hope to get a more detailed view of both error sources. For the computation of the VERA ensemble members, a sample of Gaussian random perturbations is produced for each station and parameter. The standard deviation of the perturbations is based on the correction proposals of the VERA QC scheme, which provides "natural" limits for the ensemble. In order to put more emphasis on the weather situation, we aim to integrate the main synoptic field structures as weighting factors for the perturbations. Two widely used approaches are employed to define these main field structures: principal component analysis and a 2D discrete wavelet transform. Results of tests concerning the implementation of this pattern-supported analysis ensemble system, and a comparison of the different approaches, are given in the presentation.
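A toy sketch of the perturbation step described above (all numbers, and the use of the leading-EOF magnitude as the weighting factor, are illustrative assumptions rather than the authors' exact implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-station perturbation scales, standing in for the VERA QC correction
# proposals (hypothetical values).
n_stations, n_members = 200, 20
qc_sigma = rng.uniform(0.2, 1.0, n_stations)

# Main synoptic field structure via PCA of synthetic past analysis fields:
# noise plus one coherent large-scale pattern.
fields = rng.normal(size=(100, n_stations)) \
    + np.outer(rng.normal(size=100), np.sin(np.linspace(0, 3, n_stations)))
fields -= fields.mean(axis=0)
_, _, vt = np.linalg.svd(fields, full_matrices=False)
weights = np.abs(vt[0])
weights /= weights.mean()        # leading-EOF magnitude as weighting factor

# Gaussian perturbations scaled by the QC-based limits and the pattern weights.
perturbations = rng.normal(size=(n_members, n_stations)) * qc_sigma * weights
```

A 2D discrete wavelet transform of the analysis field could supply the weights in place of the PCA, mirroring the abstract's second approach.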

  17. A Statistical Multimodel Ensemble Approach to Improving Long-Range Forecasting in Pakistan

    DTIC Science & Technology

    2012-03-01

    Impact of global warming on monsoon variability in Pakistan. J. Anim. Pl. Sci., 21, no. 1, 107–110. Gillies, S., T. Murphree, and D. Meyer, 2012... are generated by multiple regression models that relate globally distributed oceanic and atmospheric predictors to local predictands. The predictands are

  18. Cosmic structure and dynamics of the local Universe

    NASA Astrophysics Data System (ADS)

    Kitaura, Francisco-Shu; Erdoǧdu, Pirin; Nuza, Sebastián E.; Khalatyan, Arman; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan

    2012-11-01

    We present a cosmography analysis of the local Universe based on the recently released Two-Micron All-Sky Redshift Survey catalogue. Our method is based on a Bayesian networks machine learning algorithm (the KIGEN-code) which self-consistently samples the initial density fluctuations compatible with the observed galaxy distribution and a structure formation model given by second-order Lagrangian perturbation theory (2LPT). From the initial conditions we obtain an ensemble of reconstructed density and peculiar velocity fields which characterize the local cosmic structure with high accuracy, unveiling non-linear structures like filaments and voids in detail. Coherent redshift-space distortions are consistently corrected within 2LPT. From the ensemble of cross-correlations between the reconstructions and the galaxy field, and from the variance of the recovered density fields, we find that our method is extremely accurate up to k ~ 1 h Mpc⁻¹ and still yields reliable results down to scales of about 3-4 h⁻¹ Mpc. The motion of the Local Group we obtain within ~80 h⁻¹ Mpc (vLG = 522 ± 86 km s⁻¹, lLG = 291° ± 16°, bLG = 34° ± 8°) is in good agreement with measurements derived from the cosmic microwave background and from direct observations of peculiar motions, and is consistent with the predictions of ΛCDM.

  19. The 3-D unstructured mesh generation using local transformations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    1993-01-01

    The topics are presented in viewgraph form and include the following: 3D combinatorial edge swapping; 3D incremental triangulation via local transformations; a new approach to multigrid for unstructured meshes; surface mesh generation using local transforms; volume triangulations; viscous mesh generation; and future directions.

  20. Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses

    NASA Astrophysics Data System (ADS)

    Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong

    2017-04-01

    Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, etc. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial post-processing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Because only a single regression model needs to be fitted for the whole domain, the SAMOS framework provides a computationally inexpensive method to create operationally calibrated probabilistic forecasts for any arbitrary location, or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high-resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km horizontal and 1 h temporal resolution. The precipitation forecast used in this study is obtained from a limited-area-model ensemble prediction system also operated by ZAMG. This so-called ALADIN-LAEF provides, by applying a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The SAMOS approach statistically combines the in-house developed high-resolution analysis and ensemble prediction system.
The station-based validation of 6 hour precipitation sums shows a mean improvement of more than 40% in CRPS when compared to bilinearly interpolated uncalibrated ensemble forecasts. The validation on randomly selected grid points, representing the true height distribution over Austria, still indicates a mean improvement of 35%. The applied statistical model is currently set up for 6-hourly and daily accumulation periods, but will be extended to a temporal resolution of 1-3 hours within a new probabilistic nowcasting system operated by ZAMG.
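Per grid point, the SAMOS transformation described above reduces to one standardization and one back-transformation; a minimal sketch with synthetic climatology (the regression coefficients a, b are placeholders, not fitted values from the study):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic gridded setup: climatological mean/std per grid point (standing in
# for INCA-derived climatology) and one raw 17-member ensemble forecast.
n_grid, n_members = 500, 17
clim_mean = rng.uniform(0.0, 5.0, n_grid)
clim_std = rng.uniform(0.5, 2.0, n_grid)
forecast = clim_mean + clim_std * rng.normal(size=(n_members, n_grid))

# SAMOS step 1: transform forecasts into standardized anomalies.
anom = (forecast - clim_mean) / clim_std

# A single regression model, valid for the whole domain, is then fitted in
# anomaly space (a, b are hypothetical coefficients here).
a, b = 0.1, 0.9
calibrated_anom = a + b * anom.mean(axis=0)

# Back-transform to physical units at every grid point at once.
calibrated = clim_mean + clim_std * calibrated_anom
```

Because the regression lives in anomaly space, the same fitted coefficients apply at every grid point, which is what makes the method cheap enough to run over the full domain.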

  1. Decoding 3D reach and grasp from hybrid signals in motor and premotor cortices: spikes, multiunit activity, and local field potentials.

    PubMed

    Bansal, Arjun K; Truccolo, Wilson; Vargas-Irwin, Carlos E; Donoghue, John P

    2012-03-01

    Neural activity in motor cortex during reach and grasp movements shows modulations in a broad range of signals from single-neuron spiking activity (SA) to various frequency bands in broadband local field potentials (LFPs). In particular, spatiotemporal patterns in multiband LFPs are thought to reflect dendritic integration of local and interareal synaptic inputs, attentional and preparatory processes, and multiunit activity (MUA) related to movement representation in the local motor area. Nevertheless, the relationship between multiband LFPs and SA, and their relationship to movement parameters and their relative value as brain-computer interface (BCI) control signals, remain poorly understood. Also, although this broad range of signals may provide complementary information channels in primary (MI) and ventral premotor (PMv) areas, areal differences in information have not been systematically examined. Here, for the first time, the amount of information in SA and multiband LFPs was compared for MI and PMv by recording from dual 96-multielectrode arrays while monkeys made naturalistic reach and grasp actions. Information was assessed as decoding accuracy for 3D arm end point and grip aperture kinematics based on SA or LFPs in MI and PMv, or combinations of signal types across areas. In contrast with previous studies with ≤16 simultaneous electrodes, here ensembles of >16 units (on average) carried more information than multiband, multichannel LFPs. Furthermore, reach and grasp information added by various LFP frequency bands was not independent from that in SA ensembles but rather typically less than and primarily contained within the latter. Notably, MI and PMv did not show a particular bias toward reach or grasp for this task or for a broad range of signal types. For BCIs, our results indicate that neuronal ensemble spiking is the preferred signal for decoding, while LFPs and combined signals from PMv and MI can add robustness to BCI control.

  2. Decoding 3D reach and grasp from hybrid signals in motor and premotor cortices: spikes, multiunit activity, and local field potentials

    PubMed Central

    Truccolo, Wilson; Vargas-Irwin, Carlos E.; Donoghue, John P.

    2012-01-01

    Neural activity in motor cortex during reach and grasp movements shows modulations in a broad range of signals from single-neuron spiking activity (SA) to various frequency bands in broadband local field potentials (LFPs). In particular, spatiotemporal patterns in multiband LFPs are thought to reflect dendritic integration of local and interareal synaptic inputs, attentional and preparatory processes, and multiunit activity (MUA) related to movement representation in the local motor area. Nevertheless, the relationship between multiband LFPs and SA, and their relationship to movement parameters and their relative value as brain-computer interface (BCI) control signals, remain poorly understood. Also, although this broad range of signals may provide complementary information channels in primary (MI) and ventral premotor (PMv) areas, areal differences in information have not been systematically examined. Here, for the first time, the amount of information in SA and multiband LFPs was compared for MI and PMv by recording from dual 96-multielectrode arrays while monkeys made naturalistic reach and grasp actions. Information was assessed as decoding accuracy for 3D arm end point and grip aperture kinematics based on SA or LFPs in MI and PMv, or combinations of signal types across areas. In contrast with previous studies with ≤16 simultaneous electrodes, here ensembles of >16 units (on average) carried more information than multiband, multichannel LFPs. Furthermore, reach and grasp information added by various LFP frequency bands was not independent from that in SA ensembles but rather typically less than and primarily contained within the latter. Notably, MI and PMv did not show a particular bias toward reach or grasp for this task or for a broad range of signal types. For BCIs, our results indicate that neuronal ensemble spiking is the preferred signal for decoding, while LFPs and combined signals from PMv and MI can add robustness to BCI control. PMID:22157115

  3. Revealing Risks in Adaptation Planning: expanding Uncertainty Treatment and dealing with Large Projection Ensembles during Planning Scenario development

    NASA Astrophysics Data System (ADS)

    Brekke, L. D.; Clark, M. P.; Gutmann, E. D.; Wood, A.; Mizukami, N.; Mendoza, P. A.; Rasmussen, R.; Ikeda, K.; Pruitt, T.; Arnold, J. R.; Rajagopalan, B.

    2015-12-01

    Adaptation planning assessments often rely on single methods for climate projection downscaling and hydrologic analysis, do not reveal uncertainties from associated method choices, and thus likely produce overly confident decision-support information. Recent work by the authors has highlighted this issue by identifying strengths and weaknesses of widely applied methods for downscaling climate projections and assessing hydrologic impacts. This work has shown that many of the methodological choices made can alter the magnitude, and even the sign of the climate change signal. Such results motivate consideration of both sources of method uncertainty within an impacts assessment. Consequently, the authors have pursued development of improved downscaling techniques spanning a range of method classes (quasi-dynamical and circulation-based statistical methods) and developed approaches to better account for hydrologic analysis uncertainty (multi-model; regional parameter estimation under forcing uncertainty). This presentation summarizes progress in the development of these methods, as well as implications of pursuing these developments. First, having access to these methods creates an opportunity to better reveal impacts uncertainty through multi-method ensembles, expanding on present-practice ensembles which are often based only on emissions scenarios and GCM choices. Second, such expansion of uncertainty treatment combined with an ever-expanding wealth of global climate projection information creates a challenge of how to use such a large ensemble for local adaptation planning. To address this challenge, the authors are evaluating methods for ensemble selection (considering the principles of fidelity, diversity and sensitivity) that is compatible with present-practice approaches for abstracting change scenarios from any "ensemble of opportunity". Early examples from this development will also be presented.

  4. The Schaake shuffle: A method for reconstructing space-time variability in forecasted precipitation and temperature fields

    USGS Publications Warehouse

    Clark, M.R.; Gangopadhyay, S.; Hay, L.; Rajagopalan, B.; Wilby, R.

    2004-01-01

    A number of statistical methods that are used to provide local-scale ensemble forecasts of precipitation and temperature do not contain realistic spatial covariability between neighboring stations or realistic temporal persistence for subsequent forecast lead times. To demonstrate this point, output from a global-scale numerical weather prediction model is used in a stepwise multiple linear regression approach to downscale precipitation and temperature to individual stations located in and around four study basins in the United States. Output from the forecast model is downscaled for lead times up to 14 days. Residuals in the regression equation are modeled stochastically to provide 100 ensemble forecasts. The precipitation and temperature ensembles from this approach have a poor representation of the spatial variability and temporal persistence. The spatial correlations for downscaled output are considerably lower than observed spatial correlations at short forecast lead times (e.g., less than 5 days) when there is high accuracy in the forecasts. At longer forecast lead times, the downscaled spatial correlations are close to zero. Similarly, the observed temporal persistence is only partly present at short forecast lead times. A method is presented for reordering the ensemble output in order to recover the space-time variability in precipitation and temperature fields. In this approach, the ensemble members for a given forecast day are ranked and matched with the rank of precipitation and temperature data from days randomly selected from similar dates in the historical record. The ensembles are then reordered to correspond to the original order of the selection of historical data. Using this approach, the observed intersite correlations, intervariable correlations, and the observed temporal persistence are almost entirely recovered. This reordering methodology also has applications for recovering the space-time variability in modeled streamflow. © 2004 American Meteorological Society.
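The rank-matching reordering described above can be sketched as follows (synthetic data; the member count, station count, and function name are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def schaake_shuffle(ensemble, historical):
    """Reorder each station's ensemble members so the across-member rank
    structure matches the historical fields.
    ensemble, historical: arrays of shape (n_members, n_stations)."""
    shuffled = np.empty_like(ensemble)
    for j in range(ensemble.shape[1]):
        ranks = historical[:, j].argsort().argsort()   # rank of each hist. day
        shuffled[:, j] = np.sort(ensemble[:, j])[ranks]
    return shuffled

# Toy example: 100 members at 2 neighboring stations whose historical values
# are strongly correlated, while the raw downscaled members are independent.
n = 100
hist = rng.multivariate_normal([0, 0], [[1.0, 0.9], [0.9, 1.0]], size=n)
raw = rng.normal(size=(n, 2))

shuffled = schaake_shuffle(raw, hist)
```

The sorted values at each station are untouched, so the marginal ensemble distribution is preserved; only the across-member arrangement changes, which is how the observed intersite correlation is recovered.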

  5. Total probabilities of ensemble runoff forecasts

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2016-04-01

    Ensemble forecasting has long been used in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters that differ in space and time while still giving a spatially and temporally consistent output. However, their method is computationally complex for our large number of stations, and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS, http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows than the forecast skill at lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to find a total probability, the post-processed mean, and the uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, while ensuring that they have some spatial correlation, by adding a spatial penalty in the calibration process.
    This can in some cases have a slight negative impact on the calibration error, but makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecast skill in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.
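As a concrete reference point for the EMOS method cited above (Gneiting et al., 2005), here is a minimal sketch of minimum-CRPS estimation of a Gaussian predictive distribution N(a + b·x̄, c + d·s²) on synthetic data (all numbers illustrative; the spatial penalty of the study is not included):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) forecast against observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))

# Toy training set: biased, underdispersive 10-member ensembles + observations.
n, m = 400, 10
truth = rng.normal(10, 3, n)
ens = truth[:, None] + 1.5 + 0.5 * rng.normal(size=(n, m))  # biased, too narrow
xbar, s2 = ens.mean(axis=1), ens.var(axis=1)

def mean_crps(p):
    a, b, c, d = p
    sigma = np.sqrt(np.maximum(c + d * s2, 1e-6))   # keep variance positive
    return crps_normal(a + b * xbar, sigma, truth).mean()

fit = minimize(mean_crps, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
a, b, c, d = fit.x
```

Fitting a and b removes the ensemble-mean bias, while c and d inflate the predictive spread of the underdispersive ensemble.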

  6. Conformational Ensembles of Calmodulin Revealed by Nonperturbing Site-Specific Vibrational Probe Groups.

    PubMed

    Kelly, Kristen L; Dalton, Shannon R; Wai, Rebecca B; Ramchandani, Kanika; Xu, Rosalind J; Linse, Sara; Londergan, Casey H

    2018-03-22

    Seven native residues on the regulatory protein calmodulin, including three key methionine residues, were replaced (one by one) by the vibrational probe amino acid cyanylated cysteine, which has a unique CN stretching vibration that reports on its local environment. Almost no perturbation was caused by this probe at any of the seven sites, as reported by CD spectra of calcium-bound and apo calmodulin, and by binding thermodynamics, measured by isothermal titration calorimetry, for the formation of a complex between calmodulin and a canonical target peptide from skeletal muscle myosin light chain kinase. The surprising lack of perturbation suggests that this probe group could be applied directly in many protein-protein binding interfaces. The infrared absorption bands for the probe groups reported many dramatic changes in the probes' local environments as CaM went from apo- to calcium-saturated to target-peptide-bound conditions, including large frequency shifts and a variety of line shapes, from narrow (interpreted as a rigid and invariant local environment) to symmetric to broad and asymmetric (likely from multiple coexisting and dynamically exchanging structures). The fast intrinsic time scale of infrared spectroscopy means that the line shapes report directly on site-specific details of calmodulin's variable structural distribution. Though quantitative interpretation of the probe line shapes depends on a direct connection between simulated ensembles and experimental data that does not yet exist, establishing such a connection to data like that reported here would provide a new way to evaluate conformational ensembles from data that directly contains the structural distribution. The calmodulin probe sites developed here will also be useful in evaluating the binding mode of calmodulin with many uncharacterized regulatory targets.

  7. Short-term ensemble radar rainfall forecasts for hydrological applications

    NASA Astrophysics Data System (ADS)

    Codo de Oliveira, M.; Rico-Ramirez, M. A.

    2016-12-01

    Flooding is a very common natural disaster around the world, putting local populations and economies at risk. Forecasting floods several hours ahead and issuing warnings are of major importance to permit a proper response in emergency situations. However, it is important to know the uncertainties related to the rainfall forecasting in order to produce more reliable forecasts. Nowcasting models (short-term rainfall forecasts) are able to produce high spatial and temporal resolution predictions that are useful in hydrological applications. Nonetheless, they are subject to uncertainties, mainly due to the nowcasting model used, errors in radar rainfall estimation, the temporal development of the velocity field, and the fact that precipitation processes such as growth and decay are not taken into account. In this study, an ensemble generation scheme using rain gauge data as a reference to estimate radar errors is used to produce forecasts with up to 3 h lead time. The ensembles try to assess in a realistic way the residual uncertainties that remain even after correction algorithms are applied to the radar data. The ensembles produced are compared with those of a stochastic ensemble generator. Furthermore, the rainfall forecast output was used as input to a hydrodynamic sewer network model and to a hydrological model for catchments of different sizes in northern England. A comparative analysis was carried out to assess how the radar uncertainties propagate into these models. The first named author is grateful to CAPES - Ciencia sem Fronteiras for funding this PhD research.

  8. Emergence of a Stable Cortical Map for Neuroprosthetic Control

    PubMed Central

    Ganguly, Karunesh; Carmena, Jose M.

    2009-01-01

    Cortical control of neuroprosthetic devices is known to require neuronal adaptations. It remains unclear whether a stable cortical representation for prosthetic function can be stored and recalled in a manner that mimics our natural recall of motor skills. Especially in light of the mixed evidence for a stationary neuron-behavior relationship in cortical motor areas, understanding this relationship during long-term neuroprosthetic control can elucidate principles of neural plasticity as well as improve prosthetic function. Here, we paired stable recordings from ensembles of primary motor cortex neurons in macaque monkeys with a constant decoder that transforms neural activity to prosthetic movements. Proficient control was closely linked to the emergence of a surprisingly stable pattern of ensemble activity, indicating that the motor cortex can consolidate a neural representation for prosthetic control in the presence of a constant decoder. The importance of such a cortical map was evident in that small perturbations to either the size of the neural ensemble or to the decoder could reversibly disrupt function. Moreover, once a cortical map became consolidated, a second map could be learned and stored. Thus, long-term use of a neuroprosthetic device is associated with the formation of a cortical map for prosthetic function that is stable across time, readily recalled, resistant to interference, and resembles a putative memory engram. PMID:19621062

  9. Stress fields and energy of disclination-type defects in zones of localized elastic distortions

    NASA Astrophysics Data System (ADS)

    Sukhanov, Ivan I.; Tyumentsev, Alexander N.; Ditenberg, Ivan A.

    2016-11-01

    This paper studies theoretically the elastically deformed state and analyzes deformation mechanisms in nanocrystals in the zones of localized elastic distortions and related disclination-type defects, such as dipoles, quadrupoles and multipoles of partial disclinations. Significant differences in the energies of the quadrupole and multipole configurations in comparison with the nanodipole are revealed. A mechanism of deformation localization in the field of elastic distortions is proposed: a quasi-periodic sequence of formation and relaxation of various disclination ensembles with a periodic change in the energy of the defect.

  10. Encoding of Olfactory Information with Oscillating Neural Assemblies

    NASA Astrophysics Data System (ADS)

    Laurent, Gilles; Davidowitz, Hananel

    1994-09-01

    In the brain, fast oscillations of local field potentials, which are thought to arise from the coherent and rhythmic activity of large numbers of neurons, were observed first in the olfactory system and have since been described in many neocortical areas. The importance of these oscillations in information coding, however, is controversial. Here, local field potential and intracellular recordings were obtained from the antennal lobe and mushroom body of the locust Schistocerca americana. Different odors evoked coherent oscillations in different, but usually overlapping, ensembles of neurons. The phase of firing of individual neurons relative to the population was not dependent on the odor. The components of a coherently oscillating ensemble of neurons changed over the duration of a single exposure to an odor. It is thus proposed that odors are encoded by specific but dynamic assemblies of coherently oscillating neurons. Such distributed and temporal representation of complex sensory signals may facilitate combinatorial coding and associative learning in these, and possibly other, sensory networks.

  11. Direct observation of narrow mid-infrared plasmon linewidths of single metal oxide nanocrystals

    DOE PAGES

    Johns, Robert W.; Bechtel, Hans A.; Runnerstrom, Evan L.; ...

    2016-05-13

    Infrared-responsive doped metal oxide nanocrystals are an emerging class of plasmonic materials whose localized surface plasmon resonances (LSPR) can be resonant with molecular vibrations. This presents a distinctive opportunity to manipulate light-matter interactions to redirect chemical or spectroscopic outcomes through the strong local electric fields they generate. Here we report a technique for measuring single nanocrystal absorption spectra of doped metal oxide nanocrystals, revealing significant spectral inhomogeneity in their mid-infrared LSPRs. Our analysis suggests dopant incorporation is heterogeneous beyond expectation based on a statistical distribution of dopants. The broad ensemble linewidths typically observed in these materials result primarily from sample heterogeneity and not from strong electronic damping associated with lossy plasmonic materials. In fact, single nanocrystal spectra reveal linewidths as narrow as 600 cm⁻¹ in aluminium-doped zinc oxide, a value less than half the ensemble linewidth and markedly less than the homogeneous linewidths of gold nanospheres.

  12. Scale-dependent portfolio effects explain growth inflation and volatility reduction in landscape demography

    PubMed Central

    2017-01-01

    Population demography is central to fundamental ecology and for predicting range shifts, decline of threatened species, and spread of invasive organisms. There is a mismatch between most demographic work, carried out on few populations and at local scales, and the need to predict dynamics at landscape and regional scales. Inspired by concepts from landscape ecology and Markowitz’s portfolio theory, we develop a landscape portfolio platform to quantify and predict the behavior of multiple populations, scaling up the expectation and variance of the dynamics of an ensemble of populations. We illustrate this framework using a 35-y time series on gypsy moth populations. We demonstrate the demography accumulation curve in which the collective growth of the ensemble depends on the number of local populations included, highlighting a minimum but adequate number of populations for both regional-scale persistence and cross-scale inference. The attainable set of landscape portfolios further suggests tools for regional population management for both threatened and invasive species. PMID:29109261
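A minimal illustration of the volatility-reduction (portfolio) effect described above, with synthetic, partially correlated population time series (all parameters hypothetical, not the gypsy moth data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy landscape: 25 local populations over 35 years with log-growth rates
# that share a regional component (off-diagonal covariance) plus local noise.
n_pop, n_years = 25, 35
cov = 0.02 * (0.3 + 0.7 * np.eye(n_pop))     # shared + local variability
log_growth = rng.multivariate_normal(np.full(n_pop, 0.01), cov, size=n_years)

# Volatility reduction: the landscape portfolio (aggregate of populations)
# is less variable than a typical single population.
single_var = log_growth.var(axis=0).mean()   # mean single-population variance
portfolio = log_growth.mean(axis=1)          # landscape-level growth series
portfolio_var = portfolio.var()

# "Demography accumulation curve": aggregate variance falls as more local
# populations are included in the ensemble.
acc = [log_growth[:, :k].mean(axis=1).var() for k in (1, 2, 5, 10, 25)]
```

With correlated populations the aggregate variance plateaus at the shared (regional) component rather than falling to zero, which is why a minimum but adequate number of populations suffices for cross-scale inference.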

  13. Effects of quantum coherence and interference in atoms near nanoparticles

    NASA Astrophysics Data System (ADS)

    Dhayal, Suman; Rostovtsev, Yuri V.

    2016-04-01

    Optical properties of ensembles of realistic quantum emitters coupled to plasmonic systems are studied by using adequate models that can take into account full atomic geometry. In particular, the coherent effects such as forming "dark states," optical pumping, coherent Raman scattering, and the stimulated Raman adiabatic passage (STIRAP) are revisited in the presence of metallic nanoparticles. It is shown that the dark states are still formed but they have more complicated structure, and the optical pumping and the STIRAP cannot be employed in the vicinity of plasmonic nanostructures. Also, there is a huge difference in the behavior of the local atomic polarization and the atomic polarization averaged over an ensemble of atoms homogeneously spread near nanoparticles. The average polarization is strictly related to the polarization induced by the external field, while the local polarization can be very different from the one induced by the external field. This is important for the excitation of single molecules, e.g., different components of scattering from single molecules can be used for their efficient detection.

  14. Experimental investigation on local mechanical response of superelastic NiTi shape memory alloy

    NASA Astrophysics Data System (ADS)

    Xiao, Yao; Zeng, Pan; Lei, Liping

    2016-01-01

    In this paper, primary attention is paid to the local mechanical response of NiTi shape memory alloy (SMA) under uniaxial tension. With the help of in situ digital image correlation, sets of experiments are conducted to measure the local strain field at various thermomechanical conditions. Two types of mechanical responses of NiTi SMA are identified. The residual strain localization phenomena are observed, which can be attributed to the localized phase transformation (PT) and we affirm that most of the irreversibility is accumulated simultaneously during PT. It is found that temperature and PT play important roles in inducing delocalization of the reverse transformation. We conclude that forward transformation has more influence on the transition of mechanical response in NiTi SMA than reverse transformation in terms of the critical transition temperature for inducing delocalized reverse transformation.

  15. A further assessment of vegetation feedback on decadal Sahel rainfall variability

    NASA Astrophysics Data System (ADS)

    Kucharski, Fred; Zeng, Ning; Kalnay, Eugenia

    2013-03-01

    The effect of vegetation feedback on decadal-scale Sahel rainfall variability is analyzed using an ensemble of climate model simulations in which the atmospheric general circulation model ICTPAGCM ("SPEEDY") is coupled to the dynamic vegetation model VEGAS to represent feedbacks from surface albedo change and evapotranspiration, forced externally by observed sea surface temperature (SST) changes. In the control experiment, where the full vegetation feedback is included, the ensemble is consistent with the observed decadal rainfall variability, with a forced component amounting to 60% of the observed variability. In a sensitivity experiment where climatological vegetation cover and albedo are prescribed from the control experiment, the ensemble of simulations is not consistent with the observations because of the strongly reduced amplitude of decadal rainfall variability, and the forced component drops to 35% of the observed variability. The decadal rainfall variability is driven by SST forcing but significantly enhanced by land-surface feedbacks. Both local evaporation and moisture flux convergence changes are important for the total rainfall response. The internal decadal variability across the ensemble members (not SST-forced) is also much stronger in the control experiment than in the one where vegetation cover and albedo are prescribed. It is further shown that this positive vegetation feedback is physically related to the albedo feedback, supporting the Charney hypothesis.

  16. Regional sea level variability in a high-resolution global coupled climate model

    NASA Astrophysics Data System (ADS)

    Palko, D.; Kirtman, B. P.

    2016-12-01

    The prediction of trends at regional scales is essential in order to adapt to and prepare for the effects of climate change. However, GCMs are unable to make reliable predictions at regional scales, and the prediction of local sea level trends is particularly critical. The main goal of this research is to utilize high-resolution (HR) (0.1° resolution in the ocean) coupled model runs of CCSM4 to analyze regional sea surface height (SSH) trends. Unlike typical, lower resolution (1.0°) GCM runs, these HR runs resolve features in the ocean, like the Gulf Stream, which may have a large effect on regional sea level. We characterize the variability of regional SSH along the Atlantic coast of the US using tide gauge observations along with fixed radiative forcing runs of CCSM4 and HR interactive ensemble runs. The interactive ensemble couples an ensemble mean atmosphere with a single ocean realization. This coupling results in a 30% decrease in the strength of the Atlantic meridional overturning circulation; therefore, the HR interactive ensemble is analogous to a HR hosing experiment. By characterizing the variability in these high-resolution GCM runs and observations, we seek to understand what processes influence coastal SSH along the Eastern Coast of the United States and to better predict future sea level rise.

  17. Testing a Coupled Global-limited-area Data Assimilation System using Observations from the 2004 Pacific Typhoon Season

    NASA Astrophysics Data System (ADS)

    Holt, C. R.; Szunyogh, I.; Gyarmati, G.; Hoffman, R. N.; Leidner, M.

    2011-12-01

    Tropical cyclone (TC) track and intensity forecasts have improved in recent years due to increased model resolution, improved data assimilation, and the rapid increase in the number of routinely assimilated observations over oceans. The data assimilation approach that has received the most attention in recent years is Ensemble Kalman Filtering (EnKF). The most attractive feature of the EnKF is that it uses a fully flow-dependent estimate of the error statistics, which can have important benefits for the analysis of rapidly developing TCs. We implement the Local Ensemble Transform Kalman Filter algorithm, a variation of the EnKF, on a reduced-resolution version of the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) model and the NCEP Regional Spectral Model (RSM) to build a coupled global-limited area analysis/forecast system. This is the first time, to our knowledge, that such a system is used for the analysis and forecast of tropical cyclones. We use data from summer 2004 to study eight tropical cyclones in the Northwest Pacific. The benchmark data sets that we use to assess the performance of our system are the NCEP Reanalysis and the NCEP Operational GFS analyses from 2004. These benchmark analyses were both obtained by the Spectral Statistical Interpolation, which was the operational data assimilation system of NCEP in 2004. The GFS Operational analysis assimilated a large number of satellite radiance observations in addition to the observations assimilated in our system. All analyses are verified against the Joint Typhoon Warning Center Best Track data set. The errors are calculated for the position and intensity of the TCs. The global component of the ensemble-based system shows improvement in position analysis over the NCEP Reanalysis, but shows no significant difference from the NCEP operational analysis for most of the storm tracks. The regional component of our system improves position analysis over all the global analyses. The intensity analyses, measured by the minimum sea level pressure, are of similar quality in all of the analyses. Regional deterministic forecasts started from our analyses are generally not significantly different from those started from the GFS operational analysis. On average, the regional experiments performed better for sea level pressure forecasts beyond 48 h, while the global forecasts performed better in predicting the position beyond 48 h.
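
    The Local Ensemble Transform Kalman Filter at the core of this system has a compact closed form. Below is a minimal sketch of one local analysis step in the Hunt et al. (2007) formulation, assuming a linear observation operator; it is illustrative only, not the authors' GFS/RSM implementation.

```python
import numpy as np

def letkf_analysis(Xb, y, H, R):
    """One local LETKF analysis step.
    Xb: (n, k) background ensemble, y: (p,) observations,
    H: (p, n) linear observation operator, R: (p, p) obs-error covariance."""
    n, k = Xb.shape
    xb = Xb.mean(axis=1)
    Xp = Xb - xb[:, None]                  # background perturbations
    Yb = H @ Xb                            # ensemble mapped to observation space
    yb = Yb.mean(axis=1)
    Yp = Yb - yb[:, None]

    Rinv = np.linalg.inv(R)
    Pa = np.linalg.inv((k - 1) * np.eye(k) + Yp.T @ Rinv @ Yp)
    wa = Pa @ Yp.T @ Rinv @ (y - yb)       # weights for the analysis mean
    vals, vecs = np.linalg.eigh((k - 1) * Pa)
    Wa = vecs @ np.diag(np.sqrt(np.maximum(vals, 0.0))) @ vecs.T  # sym. sqrt
    return xb[:, None] + Xp @ (wa[:, None] + Wa)  # analysis ensemble (n, k)
```

    In the full system this update is repeated independently at each grid point using only nearby observations (the "local" in LETKF), which also makes the filter trivially parallel.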

  18. Evaluating the effects of historical land cover change on summertime weather and climate in New Jersey

    NASA Astrophysics Data System (ADS)

    Wichansky, Paul Stuart

    The 19th-century agrarian landscape of New Jersey (NJ) and the surrounding region has been extensively transformed to the present-day land cover by urbanization, reforestation, and localized areas of deforestation. This study used a mesoscale atmospheric numerical model to investigate the sensitivity of the warm season climate of NJ to these land cover changes. Reconstructed 1880s-era and present-day land cover datasets were used as surface boundary conditions for a set of simulations performed with the Regional Atmospheric Modeling System (RAMS). Three-member ensembles with historical and present-day land cover were compared to examine the sensitivity of surface air and dewpoint temperatures, rainfall, the individual components of the surface energy budget, horizontal and vertical winds, and the vertical profiles of temperature and humidity to these land cover changes. Mean temperatures for the present-day landscape were 0.3-0.6°C warmer than for the historical landscape over a considerable portion of NJ and the surrounding region, with daily maximum temperatures at least 1.0°C warmer over some of the highly urbanized locations. Reforested regions in the present-day landscape, however, showed a slight cooling. Surface warming was generally associated with repartitioning of net radiation from latent to sensible heat flux, and conversely for cooling. Reduced evapotranspiration from much of the present-day land surface led to dewpoint temperature decreases of 0.3-0.6°C. While urbanization was accompanied by strong surface albedo decreases and increases in net shortwave radiation, reforestation and potential changes in forest composition have generally increased albedos and also enhanced landscape heterogeneity. The increased deciduousness of forests may have further reduced net downward longwave radiation. 
These land cover changes have modified boundary-layer dynamics by increasing low-level convergence and upper-level divergence in the interior of NJ, especially where sensible heat fluxes have increased for the present-day landscape, hence enhancing uplift in the mid-troposphere. The mesoscale circulations that developed in the present-day ensemble were also more effective at lifting available moisture to higher levels of the boundary layer, lowering dewpoints near the surface but increasing them aloft. Likewise, the sea breeze in coastal areas of NJ in the present-day ensemble had stronger uplift during the afternoon and enhanced moisture transport to higher levels.

  19. Analysis of local bond-orientational order for liquid gallium at ambient pressure: Two types of cluster structures.

    PubMed

    Chen, Lin-Yuan; Tang, Ping-Han; Wu, Ten-Ming

    2016-07-14

    In terms of the local bond-orientational order (LBOO) parameters, a cluster approach to analyzing local structures of simple liquids was developed. In this approach, a cluster is defined as a combination of neighboring seeds having at least nb local-orientational bonds and their nearest neighbors, and a cluster ensemble is a collection of clusters with a specified nb and number of seeds ns. This cluster analysis was applied to investigate the microscopic structures of liquid Ga at ambient pressure (AP). The liquid structures studied were generated through ab initio molecular dynamics simulations. By scrutinizing the static structure factors (SSFs) of cluster ensembles with different combinations of nb and ns, we found that liquid Ga at AP contained two types of cluster structures, one characterized by sixfold orientational symmetry and the other by fourfold orientational symmetry. The SSFs of cluster structures with sixfold orientational symmetry were akin to the SSF of a hard-sphere fluid. By contrast, the SSFs of cluster structures with fourfold orientational symmetry behaved similarly to the anomalous SSF of liquid Ga at AP, which is well known for exhibiting a high-q shoulder. The local structures of a cluster with high LBOO whose SSF displayed a high-q shoulder were found to be more similar to the structure of β-Ga than to those of the other solid phases of Ga. More generally, the cluster structures showing fourfold orientational symmetry tend to resemble β-Ga.
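
    The distinction between fourfold and sixfold orientational symmetry can be illustrated with the 2-D analogue of a bond-orientational order parameter, ψn = |⟨exp(inθ)⟩| over the bond angles around a particle. This is a simplification of the 3-D spherical-harmonic LBOO parameters used in the paper, shown here only to make the symmetry idea concrete.

```python
import numpy as np

def psi_n(angles, n):
    """2-D bond-orientational order parameter |<exp(i*n*theta)>|.
    ~1 means perfect n-fold symmetry of the bonds, ~0 means none."""
    return np.abs(np.mean(np.exp(1j * n * np.asarray(angles))))

hex_bonds = np.deg2rad(np.arange(0, 360, 60))     # perfect hexagonal neighborhood
square_bonds = np.deg2rad(np.arange(0, 360, 90))  # perfect square neighborhood

print(psi_n(hex_bonds, 6))     # ~1: sixfold order
print(psi_n(square_bonds, 4))  # ~1: fourfold order
print(psi_n(hex_bonds, 4))     # ~0: a hexagon has no fourfold symmetry
```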

  20. Uncertainty in Twenty-First-Century CMIP5 Sea Level Projections

    NASA Technical Reports Server (NTRS)

    Little, Christopher M.; Horton, Radley M.; Kopp, Robert E.; Oppenheimer, Michael; Yip, Stan

    2015-01-01

    The representative concentration pathway (RCP) simulations included in phase 5 of the Coupled Model Intercomparison Project (CMIP5) quantify the response of the climate system to different natural and anthropogenic forcing scenarios. These simulations differ because of 1) forcing, 2) the representation of the climate system in atmosphere-ocean general circulation models (AOGCMs), and 3) the presence of unforced (internal) variability. Global and local sea level rise projections derived from these simulations, and the emergence of distinct responses to the four RCPs, depend on the relative magnitude of these sources of uncertainty at different lead times. Here, the uncertainty in CMIP5 projections of sea level is partitioned at global and local scales, using a 164-member ensemble of twenty-first-century simulations. Local projections at New York City (NYSL) are highlighted. The partition between model uncertainty, scenario uncertainty, and internal variability in global mean sea level (GMSL) is qualitatively consistent with that of surface air temperature, with model uncertainty dominant for most of the twenty-first century. Locally, model uncertainty is dominant through 2100, with maxima in the North Atlantic and the Arctic Ocean. The model spread is driven largely by 4 of the 16 AOGCMs in the ensemble; these models exhibit outlying behavior in all RCPs and in both GMSL and NYSL. The magnitude of internal variability varies widely by location and across models, leading to differences of several decades in the local emergence of RCPs. The AOGCM spread, and its sensitivity to model exclusion and/or weighting, has important implications for sea level assessments, especially if a local risk management approach is utilized.
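
    One common way to partition such projection uncertainty (in the spirit of the Hawkins-Sutton decomposition; the paper's exact method may differ) is to split the ensemble variance into scenario, model, and internal components. A sketch on entirely synthetic numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_scenarios, n_runs = 16, 4, 5
# synthetic projection = model offset + scenario signal + internal noise
proj = (rng.normal(0, 0.05, (n_models, 1, 1))                 # model differences
        + np.linspace(0.1, 0.4, n_scenarios)[None, :, None]   # scenario signal
        + rng.normal(0, 0.02, (n_models, n_scenarios, n_runs)))  # internal var.

scenario_unc = proj.mean(axis=(0, 2)).var()       # spread of scenario means
model_unc = proj.mean(axis=2).var(axis=0).mean()  # spread of model means
internal_var = proj.var(axis=2).mean()            # spread across IC runs
```

    The relative sizes of these three terms at each lead time determine when the RCP responses become distinguishable, which is the question the abstract addresses.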

  1. An Internet Protocol-Based Software System for Real-Time, Closed-Loop, Multi-Spacecraft Mission Simulation Applications

    NASA Technical Reports Server (NTRS)

    Burns, Richard D.; Davis, George; Cary, Everett; Higinbotham, John; Hogie, Keith

    2003-01-01

    A mission simulation prototype for Distributed Space Systems has been constructed using existing developmental hardware and software testbeds at NASA's Goddard Space Flight Center. A locally distributed ensemble of testbeds, connected through the local area network, operates in real time and demonstrates the potential to assess the impact of subsystem level modifications on system level performance and, ultimately, on the quality and quantity of the end product science data.

  2. Local Characteristics of the Nocturnal Boundary Layer in Response to External Pressure Forcing

    NASA Astrophysics Data System (ADS)

    van der Linden, Steven; Baas, Peter; van Hooft, Antoon; van Hooijdonk, Ivo; Bosveld, Fred; van de Wiel, Bas

    2017-04-01

    Geostrophic wind speed data, derived from pressure observations, are used in combination with tower measurements to investigate the nocturnal stable boundary layer at Cabauw, The Netherlands. Since the geostrophic wind speed is not directly influenced by local nocturnal stability, it may be regarded as an external forcing parameter of the nocturnal stable boundary layer. This is in contrast to local parameters such as in situ wind speed, the Monin-Obukhov stability parameter (z/L) or the local Richardson number. To characterize the stable boundary layer, ensemble averages of clear-sky nights with similar geostrophic wind speed are formed. In this manner, the mean dynamical behavior of near-surface turbulent characteristics and composite profiles of wind and temperature are systematically investigated. We find that the classification results in a gradual ordering of the diagnosed variables in terms of the geostrophic wind speed. In an ensemble sense, the transition from the weakly stable to the very stable boundary layer is more gradual than expected. Interestingly, for very weak geostrophic winds, turbulent activity is found to be negligibly small while the resulting boundary cooling stays finite. Realistic numerical simulations of those cases should therefore include a solid description of other thermodynamic processes such as soil heat conduction and radiative transfer. This prerequisite poses a challenge for large-eddy simulations of weak-wind nocturnal boundary layers.
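
    The compositing strategy described above — classify nights by an external forcing parameter, then ensemble-average within each class — can be sketched as follows, with synthetic stand-ins for the Cabauw tower data:

```python
import numpy as np

rng = np.random.default_rng(2)
geo_wind = rng.uniform(0, 15, 200)           # m/s, one value per night (synthetic)
# synthetic near-surface turbulent flux, increasing with wind plus scatter
flux = 2.0 * geo_wind + rng.normal(0, 5, 200)

bins = np.arange(0, 18, 3)                   # 3 m/s geostrophic wind classes
which = np.digitize(geo_wind, bins)
composites = [flux[which == i].mean() for i in range(1, len(bins))]
print(composites)                            # ensemble mean per wind class
```

    Because the classifying variable is external, each composite represents the mean boundary-layer response to a given forcing rather than to its own internal state.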

  3. A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, K; Seymour, R; Wang, W

    2009-02-17

    A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on a hybrid implementation combining message passing and critical section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e. petaflops·day of computing) is estimated as NT = 2.14 (e.g. N = 2.14 million atoms for T = 1 microsecond).

  4. Constrained Local UniversE Simulations: a Local Group factory

    NASA Astrophysics Data System (ADS)

    Carlesi, Edoardo; Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I.; Pilipenko, Sergey V.; Knebe, Alexander; Courtois, Hélène; Tully, R. Brent; Steinmetz, Matthias

    2016-05-01

    Near-field cosmology is practised by studying the Local Group (LG) and its neighbourhood. This paper describes a framework for simulating the `near field' on the computer. Assuming the Λ cold dark matter (ΛCDM) model as a prior and applying the Bayesian tools of the Wiener filter and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the ΛCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of haloes must obey specific isolation, mass and separation criteria. At the second level, the orbital angular momentum and energy are constrained, and on the third one the phase of the orbit is constrained. Out of the 300 constrained simulations, 146 LGs obey the first set of criteria, 51 the second and 6 the third. The robustness of our LG `factory' enables the construction of a large ensemble of simulated LGs. Suitable candidates for high-resolution hydrodynamical simulations of the LG can be drawn from this ensemble, which can be used to perform comprehensive studies of the formation of the LG.

  5. GEOS S2S-2_1: GMAO's New High Resolution Seasonal Prediction System

    NASA Technical Reports Server (NTRS)

    Molod, Andrea; Akella, Santha; Andrews, Lauren; Barahona, Donifan; Borovikov, Anna; Chang, Yehui; Cullather, Richard; Hackert, Eric; Kovach, Robin; Koster, Randal

    2017-01-01

    A new version of the modeling and analysis system used to produce sub-seasonal to seasonal forecasts has just been released by the NASA Goddard Global Modeling and Assimilation Office. The new version runs at higher atmospheric resolution (approximately 1/2 degree globally), contains a substantially improved model description of the cryosphere, and includes additional interactive earth system model components (aerosol model). In addition, the ocean data assimilation system has been replaced with a Local Ensemble Transform Kalman Filter. Here we describe the new system, along with the plans for the future system (GEOS S2S-3_0), which will include a higher resolution ocean model and more interactive earth system model components (interactive vegetation, biomass burning from fires). We also present results from a free-running coupled simulation with the new system and from a series of retrospective seasonal forecasts. Results from retrospective forecasts show significant improvements in surface temperatures over much of the northern hemisphere and a much improved prediction of sea ice extent in both hemispheres. The precipitation forecast skill is comparable to that of previous S2S systems, and the only trade-off is an increased double ITCZ, which is expected as we go to higher atmospheric resolution.

  6. GEOS S2S-2_1: The GMAO new high resolution Seasonal Prediction System

    NASA Astrophysics Data System (ADS)

    Molod, A.; Vikhliaev, Y. V.; Hackert, E. C.; Kovach, R. M.; Zhao, B.; Cullather, R. I.; Marshak, J.; Borovikov, A.; Li, Z.; Barahona, D.; Andrews, L. C.; Chang, Y.; Schubert, S. D.; Koster, R. D.; Suarez, M.; Akella, S.

    2017-12-01

    A new version of the modeling and analysis system used to produce subseasonal to seasonal forecasts has just been released by the NASA/Goddard Global Modeling and Assimilation Office. The new version runs at higher atmospheric resolution (approximately 1/2 degree globally), contains a substantially improved model description of the cryosphere, and includes additional interactive earth system model components (aerosol model). In addition, the ocean data assimilation system has been replaced with a Local Ensemble Transform Kalman Filter. Here we describe the new system, along with the plans for the future system (GEOS S2S-3_0), which will include a higher resolution ocean model and more interactive earth system model components (interactive vegetation, biomass burning from fires). We will also present results from a free-running coupled simulation with the new system and results from a series of retrospective seasonal forecasts. Results from retrospective forecasts show significant improvements in surface temperatures over much of the northern hemisphere and a much improved prediction of sea ice extent in both hemispheres. The precipitation forecast skill is comparable to that of previous S2S systems, and the only trade-off is an increased "double ITCZ", which is expected as we go to higher atmospheric resolution.

  7. Shedding light on El Farol

    NASA Astrophysics Data System (ADS)

    Challet, Damien; Marsili, M.; Ottino, Gabriele

    2004-02-01

    We mathematize the El Farol bar problem and transform it into a workable model. We find general conditions on the predictor space under which the convergence of the average attendance to the resource level does not require any intelligence on the side of the agents. Specializing to a particular ensemble of continuous strategies then yields a model similar to the Minority Game. The statistical physics of disordered systems allows us to derive a complete understanding of the complex behavior of this model on the basis of its phase diagram.
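
    A standard Minority Game, the model family this abstract connects to, can be simulated in a few lines. This generic sketch uses illustrative parameters and binary strategies, not the particular continuous-strategy ensemble analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, S, T = 101, 3, 2, 2000             # agents, memory, strategies, rounds
strategies = rng.integers(0, 2, (N, S, 2 ** M))  # action for each history
scores = np.zeros((N, S))
history = 0
attendance = []

for t in range(T):
    best = scores.argmax(axis=1)         # each agent plays its best strategy
    actions = strategies[np.arange(N), best, history]
    a = actions.sum()
    attendance.append(a)
    minority = 1 if a < N / 2 else 0     # the minority side wins
    scores += (strategies[:, :, history] == minority)  # virtual payoffs
    history = ((history << 1) | minority) % (2 ** M)   # update M-bit history

print(np.mean(attendance[-500:]))        # long-run mean, close to N/2
```

    The long-run mean attendance settles near the resource level N/2 without any sophisticated reasoning by the agents, which is the convergence property discussed above.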

  8. The impact of covariance localization on the performance of an ocean EnKF system assimilating glider data in the Ligurian Sea

    NASA Astrophysics Data System (ADS)

    Falchetti, Silvia; Alvarez, Alberto

    2018-04-01

    Data assimilation through an ensemble Kalman filter (EnKF) is not exempt from deficiencies, including the generation of long-range unphysical correlations that degrade its performance. The covariance localization technique has been proposed and used in previous research to mitigate this effect. However, an evaluation of its performance is usually hindered by the sparseness and unsustained collection of independent observations. This article assesses the performance of an ocean prediction system composed of a multivariate EnKF coupled with a regional configuration of the Regional Ocean Model System (ROMS), with a covariance localization solution and data assimilation from an ocean glider that operated over a limited region of the Ligurian Sea. Simultaneously with the operation of the forecast system, a high-quality validation data set was repeatedly collected with a CTD sensor on board the NR/V Alliance, every day during the period from 5 to 20 August 2013 (approximately 4 to 5 times the synoptic time scale of the area). Comparisons between the validation data set and the forecasts provide evidence that the performance of the prediction system with covariance localization is superior to that observed using only EnKF assimilation without localization or using a free-run ensemble. Furthermore, it is shown that covariance localization also increases the robustness of the model to the location of the assimilated data. Our analysis reveals that improvements are detected with regard to not only preventing the occurrence of spurious correlations but also preserving the spatial coherence in the updated covariance matrix. Covariance localization has been shown to be relevant in operational frameworks where short-term forecasts (on the order of days) are required.
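
    Covariance localization itself is a one-line operation: the noisy sample covariance is tapered elementwise (a Schur product) by a distance-dependent correlation function, suppressing exactly the spurious long-range correlations described above. A sketch with a Gaussian taper standing in for the Gaspari-Cohn function commonly used in practice:

```python
import numpy as np

rng = np.random.default_rng(3)
n_grid, n_ens = 50, 10                   # few members -> noisy sample covariances
x = np.linspace(0, 1000, n_grid)         # grid positions in km
ens = rng.multivariate_normal(
    np.zeros(n_grid),
    np.exp(-np.abs(x[:, None] - x[None, :]) / 100),  # true 100 km decorrelation
    size=n_ens).T

P = np.cov(ens)                          # raw sample covariance (noisy)
dist = np.abs(x[:, None] - x[None, :])
taper = np.exp(-(dist / 200) ** 2)       # localization radius ~200 km
P_loc = P * taper                        # Schur (elementwise) product

# far-apart points keep essentially zero covariance after localization
print(abs(P[0, -1]), abs(P_loc[0, -1]))
```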

  9. [The balance of markers of regulation vascular tone and fibrinogen in the prognosis of hemorrhagic transformation and fatal outcome in the acute period of ischemic stroke].

    PubMed

    Liang, O V; Kochetov, A G; Arkhipkin, A A; Novozhenova, Iu V; Shamalov, N A; Ramazanov, G R; Chuĭko, M R; Ogurtsov, P P; Skvortsova, V I

    2012-01-01

    Markers of vascular tone regulation, such as renin, endothelin-1, and C-type natriuretic peptide, are of great value for the prognosis of hemorrhagic transformation and fatal outcome of ischemic stroke. A change in vascular tone in case of hemorrhagic transformation at the affected site precedes activation of the coagulation component of hemostasis as a mechanism preventing blood loss and increasing the fibrinogen level. This work aimed to study the balance of the above markers and fibrinogen in the prognosis of hemorrhagic transformation and fatal outcome in the acute period of ischemic stroke. The study included 62 patients receiving no thrombolytic therapy. It was shown that symptomatic hemorrhagic transformation was associated with elevated renin levels without a marked fall in the level of C-type natriuretic peptide, and asymptomatic hemorrhagic transformation with elevated endothelin-1 levels and a decreased concentration of natriuretic peptide. The fibrinogen level on day 4 of the observation proved to be a reliable predictor of negative prognosis. Asymptomatic hemorrhagic transformation without fatal outcome was associated with systemic and local vasoconstriction and inhibition of local vasodilation. Symptomatic hemorrhagic transformation with fatal outcome was accompanied by dysregulation of vascular tone in the form of activation of systemic and local vasoconstriction, insufficient inhibition of local vasodilation, and a compensatory reaction in the form of activation of hemostatic mechanisms manifested as elevated fibrinogen levels on day 4. Lethal outcome without hemorrhagic transformation was associated with systemic vasoconstriction and activation of local vasodilation and vasoconstriction leading to local "biochemical paralysis" of vascular tone regulation.

  10. Unbiased, scalable sampling of protein loop conformations from probabilistic priors.

    PubMed

    Zhang, Yajia; Hauser, Kris

    2013-01-01

    Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.
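
    The Markov chain Monte Carlo machinery underlying methods like SLIKMC reduces, in its simplest form, to Metropolis-Hastings sampling from a structural prior. The sketch below samples a single torsion-like angle from a synthetic bimodal prior (not a real force field); the loop-closure constraint, which is the part SLIKMC handles specially, is omitted here:

```python
import math
import random

random.seed(0)

def log_prior(phi):
    # synthetic bimodal angular preference (illustrative only)
    return math.cos(phi + 1.0) + 0.5 * math.cos(2.0 * phi)

samples, phi = [], 0.0
for _ in range(20000):
    # symmetric random-walk proposal, wrapped onto the circle [-pi, pi)
    prop = (phi + random.gauss(0.0, 0.5) + math.pi) % (2.0 * math.pi) - math.pi
    # Metropolis acceptance: always accept uphill, sometimes accept downhill
    if math.log(random.random()) < log_prior(prop) - log_prior(phi):
        phi = prop
    samples.append(phi)
```

    The resulting samples concentrate where the prior is high; what SLIKMC adds is a way to do this for whole sub-loops while keeping the chain statistically unbiased under loop-closure constraints.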

  11. Unbiased, scalable sampling of protein loop conformations from probabilistic priors

    PubMed Central

    2013-01-01

    Background Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Results Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Conclusion Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion. PMID:24565175

  12. Intercomparison of model response and internal variability across climate model ensembles

    NASA Astrophysics Data System (ADS)

    Kumar, Devashish; Ganguly, Auroop R.

    2017-10-01

    Characterization of climate uncertainty at regional scales over near-term planning horizons (0-30 years) is crucial for climate adaptation. Climate internal variability (CIV) dominates climate uncertainty over decadal prediction horizons at stakeholders' scales (regional to local). In the literature, CIV has been characterized indirectly using projections of climate change from multi-model ensembles (MME) instead of directly using projections from multiple initial condition ensembles (MICE), primarily because an adequate number of initial condition (IC) runs was not available for any climate model. Nevertheless, the recent availability of a significant number of IC runs from one climate model makes it possible, for the first time, to characterize CIV directly from climate model projections and to perform a sensitivity analysis of the dominance of CIV compared to model response variability (MRV). Here, we measure relative agreement (a dimensionless number with values ranging between 0 and 1, inclusive; a high value indicates less variability and vice versa) among MME and MICE and find that CIV is lower than MRV for all projection time horizons and spatial resolutions for precipitation and temperature. However, CIV exhibits greater dominance over MRV for seasonal and annual mean precipitation at higher latitudes, where signals of climate change are expected to emerge sooner. Furthermore, precipitation exhibits large uncertainties and a rapid decline in relative agreement from global to continental, regional, or local scales for MICE compared to MME. The fractional contribution of uncertainty due to CIV is invariant for precipitation and decreases for temperature as lead time progresses towards the end of the century.
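
    The abstract defines relative agreement only loosely (dimensionless, 0-1, high means less variability), so the following is a purely hypothetical stand-in for illustration, not the paper's metric: normalize ensemble spread so that the tighter ensemble scores closer to 1.

```python
import numpy as np

rng = np.random.default_rng(4)
mice = rng.normal(1.0, 0.3, 40)  # synthetic initial-condition runs (CIV)
mme = rng.normal(1.0, 0.8, 40)   # synthetic multi-model spread (MRV)

norm = max(mice.std(), mme.std())           # normalize by the larger spread
agreement_mice = 1 - mice.std() / norm      # tighter ensemble -> closer to 1
agreement_mme = 1 - mme.std() / norm
print(agreement_mice, agreement_mme)        # MICE agrees more: CIV < MRV here
```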

  13. Ensemble Kalman filter for the reconstruction of the Earth's mantle circulation

    NASA Astrophysics Data System (ADS)

    Bocher, Marie; Fournier, Alexandre; Coltice, Nicolas

    2018-02-01

    Recent advances in mantle convection modeling led to the release of a new generation of convection codes, able to self-consistently generate plate-like tectonics at their surface. These models physically link mantle dynamics to surface tectonics. Combined with plate tectonic reconstructions, they have the potential to produce a new generation of mantle circulation models that use data assimilation methods and where uncertainties in plate tectonic reconstructions are taken into account. We provided a proof of this concept by applying a suboptimal Kalman filter to the reconstruction of mantle circulation (Bocher et al., 2016). Here, we propose to go one step further and apply the ensemble Kalman filter (EnKF) to this problem. The EnKF is a sequential Monte Carlo method particularly well suited to high-dimensional data assimilation problems with nonlinear dynamics. We tested the EnKF using synthetic observations consisting of surface velocity and heat flow measurements on a 2-D spherical annulus model and compared it with the method developed previously. The EnKF performs better on average and is more stable than the former method. Fewer than 300 ensemble members are sufficient to reconstruct an evolution. We use adaptive covariance inflation and localization to correct for sampling errors. We show that the EnKF results are robust over a wide range of covariance localization parameters. The reconstruction comes with an error estimate, which provides valuable information on where the reconstruction can and cannot be trusted.
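A minimal stochastic EnKF analysis step (perturbed observations, with optional Schur-product localization of the gain) can be sketched as follows. The function name, the toy two-variable twin experiment, and the localization interface are illustrative assumptions, not the mantle-circulation implementation described above.

```python
import numpy as np

def enkf_update(X, y, H, R, loc=None, rng=None):
    """Stochastic EnKF analysis step with perturbed observations.

    X   : (n, m) forecast ensemble (n state variables, m members)
    y   : (p,)   observation vector
    H   : (p, n) linear observation operator
    R   : (p, p) observation-error covariance
    loc : optional (n, p) localization taper, Schur-multiplied into the gain
    """
    rng = rng or np.random.default_rng(0)
    n, m = X.shape
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    Pf_Ht = A @ (H @ A).T / (m - 1)              # Pf H^T from the ensemble
    S = H @ Pf_Ht + R                            # innovation covariance
    K = Pf_Ht @ np.linalg.inv(S)                 # Kalman gain
    if loc is not None:
        K = loc * K                              # covariance localization
    # Perturbed observations, one realization per member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return X + K @ (Y - H @ X)

# Tiny twin experiment: observe the first of two state variables.
rng = np.random.default_rng(1)
truth = np.array([1.0, -0.5])
X = rng.normal(0.0, 1.0, (2, 200))               # prior ensemble
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
y = H @ truth
Xa = enkf_update(X, y, H, R, rng=rng)
```

After the update, the observed component of the ensemble collapses toward the observation while the unobserved component is adjusted only through the sampled cross-covariance.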

  14. Ensemble sea ice forecast for predicting compressive situations in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Lehtiranta, Jonni; Lensu, Mikko; Kokkonen, Iiro; Haapala, Jari

    2017-04-01

    Forecasting of sea ice hazards is important for winter shipping in the Baltic Sea. Current numerical models capture the ice thickness distribution and drift well, but compressive situations are often missing from forecast products. Their inclusion is requested by the shipping community, as compression poses a threat to ship operations: compressing ice can stop ships for days and even damage them. However, we have found that compression cannot be predicted well in a deterministic forecast, since it can be a local and rapidly changing phenomenon. It is also very sensitive to small changes in the wind speed and direction, the prevailing ice conditions, and the model parameters. Thus, a probabilistic ensemble simulation is needed to produce a meaningful compression forecast. An ensemble model setup was developed in the SafeWIN project for this purpose. It uses the HELMI multicategory ice model, which was amended to run simulations in parallel. The ensemble was built by perturbing the atmospheric forcing and the physical parameters of the ice pack. The model setup will provide probabilistic forecasts of compression in the Baltic Sea ice. Additionally, the model setup provides insight into the uncertainties related to different model parameters and their impact on the model results. We have completed several hindcast simulations for the Baltic Sea for verification purposes. These results are shown to match compression reports gathered from ships. In addition, an ensemble forecast is in a preoperational testing phase and its first evaluation will be presented in this work.

  15. Understanding uncertainty in precipitation changes in a balanced perturbed-physics ensemble under multiple climate forcings

    NASA Astrophysics Data System (ADS)

    Millar, R.; Ingram, W.; Allen, M. R.; Lowe, J.

    2013-12-01

    Temperature and precipitation patterns are the climate variables with the greatest impacts on both natural and human systems. Due to the small spatial scales and the many interactions involved in the global hydrological cycle, representations of precipitation changes in general circulation models (GCMs) are subject to considerable uncertainty. Quantifying and understanding the causes of uncertainty (and identifying robust features of predictions) in both global and local precipitation change is an essential challenge of climate science. We have used the huge distributed computing capacity of the climateprediction.net citizen science project to examine parametric uncertainty in an ensemble of 20,000 perturbed-physics versions of the HadCM3 general circulation model. The ensemble has been selected to have a control climate in top-of-atmosphere energy balance [Yamazaki et al. 2013, J.G.R.]. We force this ensemble with several idealised climate-forcing scenarios, including carbon dioxide step and transient profiles, solar radiation management geoengineering experiments with stratospheric aerosols, and short-lived climate forcing agents. We will present results from several of these forcing scenarios under GCM parametric uncertainty. We examine the global mean precipitation energy budget to assess whether a simple non-linear global precipitation model [Good et al. 2012, Clim. Dyn.] explains precipitation changes in transient climate projections under GCM parametric uncertainty better than a simple linear tropospheric energy balance model. We will also present work investigating robust conclusions about precipitation changes in a balanced ensemble of idealised solar radiation management scenarios [Kravitz et al. 2011, Atmos. Sci. Let.].

  16. Assimilating every-30-second 100-m-mesh radar observations for convective weather: implications to non-Gaussian PDF

    NASA Astrophysics Data System (ADS)

    Miyoshi, T.; Teramura, T.; Ruiz, J.; Kondo, K.; Lien, G. Y.

    2016-12-01

    Convective weather is known to be highly nonlinear and chaotic, and it is hard to predict the location and timing of convective storms precisely. Our Big Data Assimilation (BDA) effort has been exploring the use of dense and frequent observations to avoid non-Gaussian probability density functions (PDFs) and to apply an ensemble Kalman filter under the Gaussian error assumption. The phased array weather radar (PAWR) can observe a dense three-dimensional volume scan with 100-m range resolution and 100 elevation angles in only 30 seconds. The BDA system assimilates the PAWR reflectivity and Doppler velocity observations every 30 seconds into 100 ensemble members of a storm-scale numerical weather prediction (NWP) model at 100-m grid spacing. The 30-second-update, 100-m-mesh BDA system has been quite successful in multiple case studies of local severe rainfall events. However, with 1000 ensemble members, the reduced-resolution BDA system at 1-km grid spacing showed significant non-Gaussian PDFs with every-30-second updates. With a 10240-member ensemble Kalman filter with a global NWP model at 112-km grid spacing, we found roughly 1000 members sufficient to capture the non-Gaussian error structures. With these in mind, we explore how the density of observations in space and time affects the non-Gaussianity in an ensemble Kalman filter with a simple toy model. In this presentation, we will present the most up-to-date results of the BDA research, as well as the investigation with the toy model on the non-Gaussianity with dense and frequent observations.
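Simple moment-based diagnostics illustrate how the non-Gaussianity of an ensemble can be quantified. The sketch below uses sample skewness and excess kurtosis, which both vanish for a Gaussian; it is a generic stand-in, not the diagnostic used in the BDA studies.

```python
import numpy as np

def nongaussianity(ens):
    """Sample skewness and excess kurtosis of a 1-D ensemble.

    Both moments are zero for a Gaussian distribution, so large values
    flag non-Gaussian error PDFs of the kind discussed above.
    """
    x = np.asarray(ens, dtype=float)
    z = (x - x.mean()) / x.std()     # standardize
    skew = np.mean(z ** 3)
    ex_kurt = np.mean(z ** 4) - 3.0  # excess over Gaussian kurtosis of 3
    return skew, ex_kurt

rng = np.random.default_rng(0)
gauss = rng.normal(size=10000)            # nearly Gaussian ensemble
skewed = rng.exponential(size=10000)      # strongly non-Gaussian ensemble
```

Applied gridpoint-wise to forecast ensembles, such moments show where the Gaussian assumption behind the ensemble Kalman filter breaks down as update intervals lengthen.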

  17. In the eye of the beholder: Inhomogeneous distribution of high-resolution shapes within the random-walk ensemble

    NASA Astrophysics Data System (ADS)

    Müller, Christian L.; Sbalzarini, Ivo F.; van Gunsteren, Wilfred F.; Žagrović, Bojan; Hünenberger, Philippe H.

    2009-06-01

    The concept of high-resolution shapes (also referred to as folds or states, depending on the context) of a polymer chain plays a central role in polymer science, structural biology, bioinformatics, and biopolymer dynamics. However, although the idea of shape is intuitively very useful, there is no unambiguous mathematical definition for this concept. In the present work, the distributions of high-resolution shapes within the ideal random-walk ensembles with N = 3,…,6 beads (or up to N = 10 for some properties) are investigated using a systematic (grid-based) approach based on a simple working definition of shapes relying on the root-mean-square atomic positional deviation as a metric (i.e., to define the distance between pairs of structures) and a single cutoff criterion for the shape assignment. Although the random-walk ensemble appears to represent the paramount of homogeneity and randomness, this analysis reveals that the distribution of shapes within this ensemble, i.e., in the total absence of interatomic interactions characteristic of a specific polymer (beyond the generic connectivity constraint), is significantly inhomogeneous. In particular, a specific (densest) shape occurs with a local probability that is 1.28, 1.79, 2.94, and 10.05 times (N = 3,…,6) higher than the corresponding average over all possible shapes (these results can tentatively be extrapolated to a factor as large as about 10^28 for N = 100). The qualitative results of this analysis lead to a few rather counterintuitive suggestions, namely, that, e.g., (i) a fold classification analysis applied to the random-walk ensemble would lead to the identification of random-walk "folds;" (ii) a clustering analysis applied to the random-walk ensemble would also lead to the identification of random-walk "states" and associated relative free energies; and (iii) a random-walk ensemble of polymer chains could lead to well-defined diffraction patterns in hypothetical fiber or crystal diffraction experiments. 
The inhomogeneous nature of the shape probability distribution identified here for random walks may represent a significant underlying baseline effect in the analysis of real polymer chain ensembles (i.e., in the presence of specific interatomic interactions). As a consequence, a part of what is called a polymer shape may actually reside just "in the eye of the beholder" rather than in the nature of the interactions between the constituting atoms, and the corresponding observation-related bias should be taken into account when drawing conclusions from shape analyses as applied to real structural ensembles.
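The working definition above (an RMSD metric plus a single cutoff) can be illustrated with a greedy, leader-style shape assignment over toy random walks. The paper's systematic grid-based approach is replaced here by this simplified variant; the cutoff value and walk construction are arbitrary choices for illustration.

```python
import numpy as np

def rmsd(P, Q):
    """Minimal RMSD between two conformations after optimal
    superposition (Kabsch algorithm via SVD)."""
    P = P - P.mean(axis=0)                        # center both structures
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    S[-1] *= np.sign(np.linalg.det(V @ Wt))       # avoid improper rotation
    e0 = (P ** 2).sum() + (Q ** 2).sum()
    return np.sqrt(max(e0 - 2.0 * S.sum(), 0.0) / len(P))

def assign_shapes(walks, cutoff):
    """Leader-style shape assignment: each walk joins the first
    representative within `cutoff` RMSD, else founds a new shape."""
    reps, labels = [], []
    for w in walks:
        for i, r in enumerate(reps):
            if rmsd(w, r) < cutoff:
                labels.append(i)
                break
        else:
            reps.append(w)
            labels.append(len(reps) - 1)
    return labels

rng = np.random.default_rng(0)
# 200 ideal random walks with N = 5 beads in 3-D (unit Gaussian steps)
walks = np.cumsum(rng.normal(size=(200, 5, 3)), axis=1)
labels = assign_shapes(walks, cutoff=0.8)
```

Counting how unevenly the walks populate the resulting shape classes gives a toy analogue of the inhomogeneity reported in the record above.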

  18. In the eye of the beholder: Inhomogeneous distribution of high-resolution shapes within the random-walk ensemble.

    PubMed

    Müller, Christian L; Sbalzarini, Ivo F; van Gunsteren, Wilfred F; Zagrović, Bojan; Hünenberger, Philippe H

    2009-06-07

    The concept of high-resolution shapes (also referred to as folds or states, depending on the context) of a polymer chain plays a central role in polymer science, structural biology, bioinformatics, and biopolymer dynamics. However, although the idea of shape is intuitively very useful, there is no unambiguous mathematical definition for this concept. In the present work, the distributions of high-resolution shapes within the ideal random-walk ensembles with N=3,...,6 beads (or up to N=10 for some properties) are investigated using a systematic (grid-based) approach based on a simple working definition of shapes relying on the root-mean-square atomic positional deviation as a metric (i.e., to define the distance between pairs of structures) and a single cutoff criterion for the shape assignment. Although the random-walk ensemble appears to represent the paramount of homogeneity and randomness, this analysis reveals that the distribution of shapes within this ensemble, i.e., in the total absence of interatomic interactions characteristic of a specific polymer (beyond the generic connectivity constraint), is significantly inhomogeneous. In particular, a specific (densest) shape occurs with a local probability that is 1.28, 1.79, 2.94, and 10.05 times (N=3,...,6) higher than the corresponding average over all possible shapes (these results can tentatively be extrapolated to a factor as large as about 10^28 for N=100). 
The qualitative results of this analysis lead to a few rather counterintuitive suggestions, namely, that, e.g., (i) a fold classification analysis applied to the random-walk ensemble would lead to the identification of random-walk "folds;" (ii) a clustering analysis applied to the random-walk ensemble would also lead to the identification of random-walk "states" and associated relative free energies; and (iii) a random-walk ensemble of polymer chains could lead to well-defined diffraction patterns in hypothetical fiber or crystal diffraction experiments. The inhomogeneous nature of the shape probability distribution identified here for random walks may represent a significant underlying baseline effect in the analysis of real polymer chain ensembles (i.e., in the presence of specific interatomic interactions). As a consequence, a part of what is called a polymer shape may actually reside just "in the eye of the beholder" rather than in the nature of the interactions between the constituting atoms, and the corresponding observation-related bias should be taken into account when drawing conclusions from shape analyses as applied to real structural ensembles.

  19. The Role of Ocean and Atmospheric Heat Transport in the Arctic Amplification

    NASA Astrophysics Data System (ADS)

    Vargas Martes, R. M.; Kwon, Y. O.; Furey, H. H.

    2017-12-01

    Observational data and climate model projections have suggested that the Arctic region is warming around twice as fast as the rest of the globe, a phenomenon referred to as Arctic Amplification (AA). While local feedbacks, e.g. the sea ice-albedo feedback, are often suggested by previous studies as the primary driver of AA, the role of meridional heat transport by ocean and atmosphere is less clear. This study uses the Community Earth System Model version 1 Large Ensemble simulation (CESM1-LE) to seek a deeper understanding of the role meridional oceanic and atmospheric heat transports play in AA. The simulation consists of 40 ensemble members with the same physics and external forcing using a single fully coupled climate model. Each ensemble member spans two time periods; the historical period from 1920 to 2005 using the Coupled Model Intercomparison Project Phase 5 (CMIP5) historical forcing and the future period from 2006 to 2100 using the CMIP5 Representative Concentration Pathways 8.5 (RCP8.5) scenario. Each ensemble member is initialized with slightly different air temperatures. As the CESM1-LE uses a single model, unlike the CMIP5 multi-model ensemble, the internal variability and the externally forced components can be separated more clearly. The projections are calculated by comparing the period 2081-2100 with the period 2001-2020. The CESM1-LE projects Arctic warming 2.5-2.8 times faster than the global average, which is within the range of those from the CMIP5 multi-model ensemble. However, the spread of AA from the CESM1-LE, which is attributed to internal variability, is 2-3 times smaller than that of the CMIP5 ensemble, which may also include inter-model differences. The CESM1-LE projects a decrease in the atmospheric heat transport into the Arctic and an increase in the oceanic heat transport. The atmospheric heat transport is further decomposed into moisture transport and dry static energy transport. 
Also, the oceanic heat transport is decomposed into the Pacific and Atlantic contributions.

  20. Refining multi-model projections of temperature extremes by evaluation against land-atmosphere coupling diagnostics

    NASA Astrophysics Data System (ADS)

    Sippel, Sebastian; Zscheischler, Jakob; Mahecha, Miguel D.; Orth, Rene; Reichstein, Markus; Vogel, Martha; Seneviratne, Sonia I.

    2017-05-01

    The Earth's land surface and the atmosphere are strongly interlinked through the exchange of energy and matter. This coupled behaviour causes various land-atmosphere feedbacks, and an insufficient understanding of these feedbacks contributes to uncertain global climate model projections. For example, a crucial role of the land surface in exacerbating summer heat waves in midlatitude regions has been identified empirically for high-impact heat waves, but individual climate models differ widely in their respective representation of land-atmosphere coupling. Here, we compile an ensemble of 54 combinations of observations-based temperature (T) and evapotranspiration (ET) benchmarking datasets and investigate coincidences of T anomalies with ET anomalies as a proxy for land-atmosphere interactions during periods of anomalously warm temperatures. First, we demonstrate that a large fraction of state-of-the-art climate models from the Coupled Model Intercomparison Project (CMIP5) archive produces systematically too frequent coincidences of high T anomalies with negative ET anomalies in midlatitude regions during the warm season and in several tropical regions year-round. These coincidences (high T, low ET) are closely related to the representation of temperature variability and extremes across the multi-model ensemble. Second, we derive a land-coupling constraint based on the spread of the T-ET datasets and consequently retain only a subset of CMIP5 models that produce a land-coupling behaviour that is compatible with these benchmark estimates. The constrained multi-model simulations exhibit more realistic temperature extremes of reduced magnitude in present climate in regions where models show substantial spread in T-ET coupling, i.e. biases in the model ensemble are consistently reduced. Also the multi-model simulations for the coming decades display decreased absolute temperature extremes in the constrained ensemble. 
On the other hand, the differences between projected and present-day climate extremes are affected to a lesser extent by the applied constraint, i.e. projected changes are reduced locally by around 0.5 to 1 °C, but this remains a local effect in regions that are highly sensitive to land-atmosphere coupling. In summary, our approach offers a physically consistent, diagnostic-based avenue to evaluate multi-model ensembles and subsequently reduce model biases in simulated and projected extreme temperatures.

  1. The Cluster Variation Method: A Primer for Neuroscientists.

    PubMed

    Maren, Alianna J

    2016-09-30

    Effective Brain-Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of a non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.

  2. The Cluster Variation Method: A Primer for Neuroscientists

    PubMed Central

    Maren, Alianna J.

    2016-01-01

    Effective Brain–Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of a non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found. PMID:27706022

  3. Equilibration, thermalisation, and the emergence of statistical mechanics in closed quantum systems

    NASA Astrophysics Data System (ADS)

    Gogolin, Christian; Eisert, Jens

    2016-05-01

    We review selected advances in the theoretical understanding of complex quantum many-body systems with regard to emergent notions of quantum statistical mechanics. We cover topics such as equilibration and thermalisation in pure state statistical mechanics, the eigenstate thermalisation hypothesis, the equivalence of ensembles, non-equilibration dynamics following global and local quenches as well as ramps. We also address initial state independence, absence of thermalisation, and many-body localisation. We elucidate the role played by key concepts for these phenomena, such as Lieb-Robinson bounds, entanglement growth, typicality arguments, quantum maximum entropy principles and the generalised Gibbs ensembles, and quantum (non-)integrability. We put emphasis on rigorous approaches and present the most important results in a unified language.

  4. Correlations after quantum quenches in the XXZ spin chain: failure of the generalized Gibbs ensemble.

    PubMed

    Pozsgay, B; Mestyán, M; Werner, M A; Kormos, M; Zaránd, G; Takács, G

    2014-09-12

    We study the nonequilibrium time evolution of the spin-1/2 anisotropic Heisenberg (XXZ) spin chain, with a choice of dimer product and Néel states as initial states. We investigate numerically various short-ranged spin correlators in the long-time limit and find that they deviate significantly from predictions based on the generalized Gibbs ensemble (GGE) hypothesis. By computing the asymptotic spin correlators within the recently proposed quench-action formalism [Phys. Rev. Lett. 110, 257203 (2013)], however, we find excellent agreement with the numerical data. We, therefore, conclude that the GGE cannot give a complete description even of local observables, while the quench-action formalism correctly captures the steady state in this case.

  5. Equilibration, thermalisation, and the emergence of statistical mechanics in closed quantum systems.

    PubMed

    Gogolin, Christian; Eisert, Jens

    2016-05-01

    We review selected advances in the theoretical understanding of complex quantum many-body systems with regard to emergent notions of quantum statistical mechanics. We cover topics such as equilibration and thermalisation in pure state statistical mechanics, the eigenstate thermalisation hypothesis, the equivalence of ensembles, non-equilibration dynamics following global and local quenches as well as ramps. We also address initial state independence, absence of thermalisation, and many-body localisation. We elucidate the role played by key concepts for these phenomena, such as Lieb-Robinson bounds, entanglement growth, typicality arguments, quantum maximum entropy principles and the generalised Gibbs ensembles, and quantum (non-)integrability. We put emphasis on rigorous approaches and present the most important results in a unified language.

  6. Modeling uncertainty and correlation in soil properties using Restricted Pairing and implications for ensemble-based hillslope-scale soil moisture and temperature estimation

    NASA Astrophysics Data System (ADS)

    Flores, A. N.; Entekhabi, D.; Bras, R. L.

    2007-12-01

    Soil hydraulic and thermal properties (SHTPs) affect both the rate of moisture redistribution in the soil column and the volumetric soil water capacity. Adequately constraining these properties through field and lab analysis to parameterize spatially-distributed hydrology models is often prohibitively expensive. Because SHTPs vary significantly at small spatial scales, individual soil samples are only reliably indicative of local conditions, and these properties remain a significant source of uncertainty in soil moisture and temperature estimation. In ensemble-based soil moisture data assimilation, uncertainty in the model-produced prior estimate due to associated uncertainty in SHTPs must be taken into account to avoid under-dispersive ensembles. To treat SHTP uncertainty for purposes of supplying inputs to a distributed watershed model, we use the restricted pairing (RP) algorithm, an extension of Latin Hypercube (LH) sampling. The RP algorithm generates an arbitrary number of SHTP combinations by sampling the appropriate marginal distributions of the individual soil properties using the LH approach, while imposing a target rank correlation among the properties. A previously-published meta-database of 1309 soils representing 12 textural classes is used to fit appropriate marginal distributions to the properties and compute the target rank correlation structure, conditioned on soil texture. Given categorical soil textures, our implementation of the RP algorithm generates an arbitrarily-sized ensemble of realizations of the SHTPs required as input to the TIN-based Realtime Integrated Basin Simulator with vegetation dynamics (tRIBS+VEGGIE) distributed parameter ecohydrology model. Soil moisture ensembles simulated with RP-generated SHTPs exhibit less variance than ensembles simulated with SHTPs generated by a scheme that neglects correlation among properties. 
Neglecting correlation among SHTPs can lead to physically unrealistic combinations of parameters that exhibit implausible hydrologic behavior when input to the tRIBS+VEGGIE model.
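An Iman-Conover-style reordering illustrates how a target rank correlation can be imposed on independently sampled marginals, in the spirit of (but not identical to) the Restricted Pairing algorithm described above; the marginal distributions and target correlation below are hypothetical stand-ins for soil properties.

```python
import numpy as np

def lhs(n, rng):
    """1-D Latin hypercube sample of size n on (0, 1)."""
    return (rng.permutation(n) + rng.random(n)) / n

def rank_correlate(samples, target_corr, rng):
    """Impose a target rank correlation on independently sampled
    marginals by reordering each column (Iman-Conover-style trick).

    samples     : (n, k) independent marginal draws (columns)
    target_corr : (k, k) desired rank-correlation matrix
    Each column keeps exactly its original marginal values.
    """
    n, k = samples.shape
    L = np.linalg.cholesky(target_corr)
    scores = rng.normal(size=(n, k)) @ L.T      # correlated Gaussian scores
    out = np.empty_like(samples)
    for j in range(k):
        ranks = np.argsort(np.argsort(scores[:, j]))   # rank of each score
        out[:, j] = np.sort(samples[:, j])[ranks]      # reorder marginal
    return out

rng = np.random.default_rng(0)
n = 500
# Hypothetical positively correlated properties: an exponential marginal
# drawn via LH-stratified inverse CDF, and a beta marginal.
marg = np.column_stack([-np.log(1.0 - lhs(n, rng)), rng.beta(2, 5, n)])
target = np.array([[1.0, 0.7], [0.7, 1.0]])
paired = rank_correlate(marg, target, rng)
```

Because only the pairing of values changes, the fitted marginals survive exactly while physically implausible property combinations become rare, which is the point of the correlated sampling described in the record above.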

  7. A real-time ocean reanalyses intercomparison project in the context of tropical pacific observing system and ENSO monitoring

    NASA Astrophysics Data System (ADS)

    Xue, Yan; Wen, C.; Kumar, A.; Balmaseda, M.; Fujii, Y.; Alves, O.; Martin, M.; Yang, X.; Vernieres, G.; Desportes, C.; Lee, T.; Ascione, I.; Gudgel, R.; Ishikawa, I.

    2017-12-01

    An ensemble of nine operational ocean reanalyses (ORAs) is now routinely collected, and is used to monitor the consistency across the tropical Pacific temperature analyses in real-time in support of ENSO monitoring, diagnostics, and prediction. The ensemble approach allows a more reliable estimate of the signal as well as an estimation of the noise among analyses. The real-time estimation of the signal-to-noise ratio assists the prediction of ENSO. The ensemble approach also enables us to estimate the impact of the Tropical Pacific Observing System (TPOS) on the estimation of ENSO-related oceanic indicators. The ensemble mean is shown to have a better accuracy than individual ORAs, suggesting the ensemble approach is an effective tool to reduce uncertainties in temperature analysis for ENSO. The ensemble spread, as a measure of uncertainties in ORAs, is shown to be partially linked to the data counts of in situ observations. Despite the constraints by TPOS data, uncertainties in ORAs are still large in the northwestern tropical Pacific, in the SPCZ region, as well as in the central and northeastern tropical Pacific. The uncertainties in total temperature decreased significantly in 2015, approaching their value before the 2012 TAO crisis, owing to the recovery of the TAO/TRITON array. However, the uncertainties in anomalous temperature remained much higher than the pre-2012 value, probably due to uncertainties in the reference climatology. This highlights the importance of the long-term stability of the observing system for anomaly monitoring. The current data assimilation systems tend to constrain the solution very locally near the buoy sites, potentially damaging the larger-scale dynamical consistency. There is therefore an urgent need to improve data assimilation systems so that they can make optimal use of the observational information from TPOS and contribute to improved ENSO prediction.

  8. Modelling climate impact on floods under future emission scenarios using an ensemble of climate model projections

    NASA Astrophysics Data System (ADS)

    Wetterhall, F.; Cloke, H. L.; He, Y.; Freer, J.; Pappenberger, F.

    2012-04-01

    Evidence provided by modelled assessments of climate change impact on flooding is fundamental to water resource and flood risk decision making. Impact models usually rely on climate projections from Global and Regional Climate Models, and there is no doubt that these provide a useful assessment of future climate change. However, cascading ensembles of climate projections into impact models is not straightforward because of problems of coarse resolution in Global and Regional Climate Models (GCM/RCM) and the deficiencies in modelling high-intensity precipitation events. Thus, decisions must be made on how to appropriately pre-process the meteorological variables from GCM/RCMs, such as selection of downscaling methods and application of Model Output Statistics (MOS). In this paper a grand ensemble of projections from several GCM/RCMs is used to drive a hydrological model and analyse the resulting future flood projections for the Upper Severn, UK. The impact and implications of applying MOS techniques to precipitation, as well as hydrological model parameter uncertainty, are taken into account. The resultant grand ensemble of future river discharge projections from the RCM/GCM-hydrological model chain is evaluated against a response surface technique combined with a perturbed-physics experiment that creates a probabilistic ensemble of climate model outputs. The ensemble distribution of results shows that the future risk of flooding in the Upper Severn increases compared to present conditions; however, the study highlights that the uncertainties are large and that strong assumptions were made in using Model Output Statistics to produce the estimates of future discharge. The importance of analysing on a seasonal basis rather than just annually is highlighted. The inability of the RCMs (and GCMs) to produce realistic precipitation patterns, even in present conditions, is a major caveat of local climate impact studies on flooding, and this should be a focus for future development.

  9. Single-Molecule Fluorescence Imaging for Studying Organic, Organometallic, and Inorganic Reaction Mechanisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blum, Suzanne A.

    2016-05-24

    The reactive behavior of individual molecules is seldom observed, because we usually measure the average properties of billions of molecules. What we miss is important: the catalytic activity of less than 1% of the molecules under observation can dominate the outcome of a chemical reaction seen at a macroscopic level. Currently available techniques to examine reaction mechanisms (such as nuclear magnetic resonance spectroscopy and mass spectrometry) study molecules as an averaged ensemble. These ensemble techniques are unable to detect minor components (under ~1%) in mixtures or determine which components in the mixture are responsible for reactivity and catalysis. In the field of mechanistic chemistry, there is a resulting heuristic device that if an intermediate is very reactive in catalysis, it often cannot be observed (termed “Halpern’s Rule”). Ultimately, the development of single-molecule imaging technology could be a powerful tool to observe these “unobservable” intermediates and active catalysts. Single-molecule techniques have already transformed biology and the understanding of biochemical processes. The potential of single-molecule fluorescence microscopy to address diverse chemical questions, such as the chemical reactivity of organometallic or inorganic systems with discrete metal complexes, however, has not yet been realized. In this respect, its application to chemical systems lags significantly behind its application to biophysical systems. This transformative imaging technique has broad, multidisciplinary impact with the potential to change the way the chemistry community studies reaction mechanisms and reactivity distributions, especially in the core area of catalysis.

  10. Solvable Hydrodynamics of Quantum Integrable Systems

    NASA Astrophysics Data System (ADS)

    Bulchandani, Vir B.; Vasseur, Romain; Karrasch, Christoph; Moore, Joel E.

    2017-12-01

    The conventional theory of hydrodynamics describes the evolution in time of chaotic many-particle systems from local to global equilibrium. In a quantum integrable system, local equilibrium is characterized by a local generalized Gibbs ensemble or equivalently a local distribution of pseudomomenta. We study time evolution from local equilibria in such models by solving a certain kinetic equation, the "Bethe-Boltzmann" equation satisfied by the local pseudomomentum density. Explicit comparison with density matrix renormalization group time evolution of a thermal expansion in the XXZ model shows that hydrodynamical predictions from smooth initial conditions can be remarkably accurate, even for small system sizes. Solutions are also obtained in the Lieb-Liniger model for free expansion into vacuum and collisions between clouds of particles, which model experiments on ultracold one-dimensional Bose gases.
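    In the generalized-hydrodynamics literature, the “Bethe-Boltzmann” equation takes the form of a continuity equation for the local density of pseudomomenta. This is a sketch of the commonly quoted form, not a transcription from the paper:

    ```latex
    \partial_t \rho_p(\theta; x, t)
      + \partial_x \left[ v^{\mathrm{eff}}(\theta; x, t)\, \rho_p(\theta; x, t) \right] = 0 ,
    ```

    where $\theta$ labels the pseudomomenta and the effective velocity is the bare group velocity dressed by interactions,

    ```latex
    v^{\mathrm{eff}}(\theta) = v(\theta)
      + \frac{1}{p'(\theta)} \int \mathrm{d}\alpha \,
        \varphi(\theta, \alpha)\, \rho_p(\alpha)
        \left[ v^{\mathrm{eff}}(\alpha) - v^{\mathrm{eff}}(\theta) \right] ,
    ```

    with $\varphi$ the two-body scattering kernel. Solving this self-consistent linear equation for $v^{\mathrm{eff}}$ at each $(x,t)$ is what makes the hydrodynamics “solvable”.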

  11. The interplay between genetic and bioelectrical signaling permits a spatial regionalisation of membrane potentials in model multicellular ensembles

    PubMed Central

    Cervera, Javier; Meseguer, Salvador; Mafe, Salvador

    2016-01-01

    The single cell-centred approach emphasises ion channels as specific proteins that determine individual properties, disregarding their contribution to multicellular outcomes. We simulate the interplay between genetic and bioelectrical signals in non-excitable cells from the local single-cell level to the long range multicellular ensemble. The single-cell genetic regulation is based on mean-field kinetic equations involving the mRNA and protein concentrations. The transcription rate factor is assumed to depend on the absolute value of the cell potential, which is dictated by the voltage-gated cell ion channels and the intercellular gap junctions. The interplay between genetic and electrical signals may allow translating single-cell states into multicellular states which provide spatio-temporal information. The model results have clear implications for biological processes: (i) bioelectric signals can override slightly different genetic pre-patterns; (ii) ensembles of cells initially at the same potential can undergo an electrical regionalisation because of persistent genetic differences between adjacent spatial regions; and (iii) shifts in the normal cell electrical balance could trigger significant changes in the genetic regulation. PMID:27731412
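    The coupled genetic-bioelectrical description can be illustrated with a minimal dynamical sketch: mean-field mRNA/protein kinetics whose transcription rate depends on the absolute cell potential, with potentials coupled diffusively through gap junctions on a ring of cells. All functional forms, rates, and parameter values below are our assumptions for illustration, not the published model of Cervera et al.:

    ```python
    import numpy as np

    # Illustrative sketch only; all rates, units, and functional forms
    # are assumptions, NOT those of the published model.
    def simulate(n_cells=16, steps=20000, dt=1e-3, g_gap=0.5):
        rng = np.random.default_rng(0)
        m = np.zeros(n_cells)                    # mRNA concentration
        p = np.full(n_cells, 0.5)                # ion-channel protein level
        v = rng.uniform(-70.0, -10.0, n_cells)   # membrane potential (mV)
        for _ in range(steps):
            # transcription rate assumed to depend on |V| (Hill-like form)
            r = np.abs(v) / (np.abs(v) + 30.0)
            m += dt * (r - 0.1 * m)              # transcription / mRNA decay
            p += dt * (m - 0.1 * p)              # translation / protein decay
            # channel protein pulls the cell toward a polarized resting level;
            # gap junctions couple neighbours diffusively on a ring
            v_rest = -70.0 * p / (p + 1.0)
            coupling = g_gap * (np.roll(v, 1) + np.roll(v, -1) - 2.0 * v)
            v += dt * (v_rest - v + coupling)
        return m, p, v
    ```

    The feedback loop matches the abstract qualitatively: potential shapes transcription, the resulting channel protein shapes the potential, and the gap-junction term lets neighbouring cells pull each other toward a shared multicellular state.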

  12. The interplay between genetic and bioelectrical signaling permits a spatial regionalisation of membrane potentials in model multicellular ensembles.

    PubMed

    Cervera, Javier; Meseguer, Salvador; Mafe, Salvador

    2016-10-12

    The single cell-centred approach emphasises ion channels as specific proteins that determine individual properties, disregarding their contribution to multicellular outcomes. We simulate the interplay between genetic and bioelectrical signals in non-excitable cells from the local single-cell level to the long range multicellular ensemble. The single-cell genetic regulation is based on mean-field kinetic equations involving the mRNA and protein concentrations. The transcription rate factor is assumed to depend on the absolute value of the cell potential, which is dictated by the voltage-gated cell ion channels and the intercellular gap junctions. The interplay between genetic and electrical signals may allow translating single-cell states into multicellular states which provide spatio-temporal information. The model results have clear implications for biological processes: (i) bioelectric signals can override slightly different genetic pre-patterns; (ii) ensembles of cells initially at the same potential can undergo an electrical regionalisation because of persistent genetic differences between adjacent spatial regions; and (iii) shifts in the normal cell electrical balance could trigger significant changes in the genetic regulation.

  13. MicroRNA Intercellular Transfer and Bioelectrical Regulation of Model Multicellular Ensembles by the Gap Junction Connectivity.

    PubMed

    Cervera, Javier; Meseguer, Salvador; Mafe, Salvador

    2017-08-17

    We have studied theoretically the microRNA (miRNA) intercellular transfer through voltage-gated gap junctions in terms of a biophysically grounded system of coupled differential equations. Instead of modeling a specific system, we use a general approach describing the interplay between the genetic mechanisms and the single-cell electric potentials. The dynamics of the multicellular ensemble are simulated under different conditions including spatially inhomogeneous transcription rates and local intercellular transfer of miRNAs. These processes result in spatiotemporal changes of miRNA, mRNA, and ion channel protein concentrations that eventually modify the bioelectrical states of small multicellular domains because of the ensemble average nature of the electrical potential. The simulations allow a qualitative understanding of the context-dependent nature of the effects observed when specific signaling molecules are transferred through gap junctions. The results suggest that an efficient miRNA intercellular transfer could permit the spatiotemporal control of small cellular domains by the conversion of single-cell genetic and bioelectric states into multicellular states regulated by the gap junction interconnectivity.

  14. Characteristics of ion flow in the quiet state of the inner plasma sheet

    NASA Technical Reports Server (NTRS)

    Angelopoulos, V.; Kennel, C. F.; Coroniti, F. V.; Pellat, R.; Spence, H. E.; Kivelson, M. G.; Walker, R. J.; Baumjohann, W.; Feldman, W. C.; Gosling, J. T.

    1993-01-01

    We use AMPTE/IRM and ISEE 2 data to study the properties of the high beta plasma sheet, the inner plasma sheet (IPS). Bursty bulk flows (BBFs) are excised from the two databases, and the average flow pattern in the non-BBF (quiet) IPS is constructed. At local midnight this ensemble-average flow is predominantly duskward; closer to the flanks it is mostly earthward. The flow pattern agrees qualitatively with calculations based on the Tsyganenko (1987) model (T87), where the earthward flow is due to the ensemble-average cross tail electric field and the duskward flow is the diamagnetic drift due to an inward pressure gradient. The IPS is on the average in pressure equilibrium with the lobes. Because of its large variance the average flow does not represent the instantaneous flow field. Case studies also show that the non-BBF flow is highly irregular and inherently unsteady, a reason why earthward convection can avoid a pressure balance inconsistency with the lobes. The ensemble distribution of velocities is a fundamental observable of the quiet plasma sheet flow field.

  15. Highly photostable NV centre ensembles in CVD diamond produced by using N2O as the doping gas

    NASA Astrophysics Data System (ADS)

    Tallaire, A.; Mayer, L.; Brinza, O.; Pinault-Thaury, M. A.; Debuisschert, T.; Achard, J.

    2017-10-01

    High density Nitrogen-Vacancy (NV) centre ensembles incorporated in plasma assisted chemical vapour deposition (CVD) diamond are crucial to the development of more efficient sensing devices that use the properties of luminescent defects. Achieving high NV doping with N2 as the dopant gas source during diamond growth is, however, plagued by the formation of macroscopic and point defects that quench luminescence. Moreover, such NVs are found to exhibit poor photostability under high laser powers. Although this effect can be harnessed to locally and durably switch off NV luminescence for data storage, it is usually undesirable for most applications. In this work, the use of N2O as an alternative doping source is proposed. Much higher amounts of the doping gas can be added without significantly generating defects, which allows the incorporation of perfectly photostable and higher density NV ensembles. This effect is believed to be related to the lower dissociation energy of the N2O molecule together with the beneficial effect of the presence of a low and controlled amount of oxygen near the growing surface.

  16. Detecting of transient vibration signatures using an improved fast spatial-spectral ensemble kurtosis kurtogram and its applications to mechanical signature analysis of short duration data from rotating machinery

    NASA Astrophysics Data System (ADS)

    Chen, BinQiang; Zhang, ZhouSuo; Zi, YanYang; He, ZhengJia; Sun, Chuang

    2013-10-01

    Detecting transient vibration signatures is of vital importance for vibration-based condition monitoring and fault detection of rotating machinery. However, raw mechanical signals collected by vibration sensors are generally mixtures of the physical vibrations of the multiple mechanical components installed in the examined machinery. Fault-generated incipient vibration signatures masked by interfering content are difficult to identify. The fast kurtogram (FK) is a concise and efficient tool for characterizing these vibration features. The multi-rate filter-bank (MRFB) and the spectral kurtosis (SK) indicator of the FK are less powerful when strong interfering vibration content exists, especially when the FK is applied to vibration signals of short duration. Impulsive interfering content not actually induced by mechanical faults complicates the optimal analysis process and leads to incorrect selection of the optimal analysis subband, so the original FK may miss the essential fault signatures. To enhance the analysis performance of the FK for industrial applications, an improved version of the fast kurtogram, named the "fast spatial-spectral ensemble kurtosis kurtogram", is presented. In the proposed technique, discrete quasi-analytic wavelet tight frame (QAWTF) expansion methods are incorporated as the detection filters. The QAWTF, constructed on the basis of the dual-tree complex wavelet transform, possesses better vibration transient signature extraction ability and enhanced time-frequency localizability compared with conventional wavelet packet transforms (WPTs). Moreover, in the constructed QAWTF, a non-dyadic ensemble wavelet subband generating strategy is put forward to produce extra wavelet subbands that are capable of identifying fault features located in the transition band of the WPT.
    In addition, an enhanced signal impulsiveness indicator, termed "spatial-spectral ensemble kurtosis" (SSEK), is put forward and used as the quantitative measure for selecting optimal analysis parameters. The SSEK indicator is more robust in evaluating the impulsiveness of vibration signals because it better suppresses Gaussian noise, harmonics, and sporadic impulsive shocks. Numerical validations, an experimental test, and two engineering applications were used to verify the effectiveness of the proposed technique. The results demonstrate that the proposed technique detects transient vibration content more robustly than the original FK and the WPT-based FK method, especially when applied to vibration signals of relatively limited duration.
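    Spectral kurtosis, the indicator at the heart of the kurtogram, can be estimated per frequency band from a short-time Fourier transform: it is near zero for stationary Gaussian noise and large in bands carrying repetitive impulsive content. A minimal numpy sketch of the plain SK estimator (not the paper's QAWTF/SSEK implementation):

    ```python
    import numpy as np

    def spectral_kurtosis(x, nperseg=128):
        """Per-bin spectral kurtosis from a windowed STFT.
        SK ~ 0 for stationary Gaussian noise; large in frequency bins
        that carry repetitive impulsive content (the signature the
        kurtogram searches for)."""
        hop = nperseg // 2
        win = np.hanning(nperseg)
        frames = np.array([x[i:i + nperseg] * win
                           for i in range(0, len(x) - nperseg + 1, hop)])
        X = np.fft.rfft(frames, axis=1)
        s2 = np.mean(np.abs(X) ** 2, axis=0)   # 2nd-order spectral moment
        s4 = np.mean(np.abs(X) ** 4, axis=0)   # 4th-order spectral moment
        return s4 / (s2 ** 2 + 1e-30) - 2.0
    ```

    The kurtogram extends this idea by scanning window lengths (i.e. filter-bank bandwidths) and picking the band/bandwidth combination that maximizes the kurtosis.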

  17. Ensemble forecasting for renewable energy applications - status and current challenges for their generation and verification

    NASA Astrophysics Data System (ADS)

    Pinson, Pierre

    2016-04-01

    The operational management of renewable energy generation in power systems and electricity markets requires forecasts in various forms, e.g., deterministic or probabilistic, continuous or categorical, depending upon the decision process at hand. Such forecasts may also be necessary at various spatial and temporal scales, from high temporal resolutions (of the order of minutes) and very localized for an offshore wind farm, to coarser temporal resolutions (hours) covering a whole country for day-ahead power scheduling problems. As of today, weather predictions are a common input to forecasting methodologies for renewable energy generation. Since for most decision processes optimal decisions can only be made by accounting for forecast uncertainties, ensemble predictions and density forecasts are increasingly seen as the product of choice. After discussing some of the basic approaches to obtaining ensemble forecasts of renewable power generation, it will be argued that producing space-time trajectories of renewable power production may or may not necessitate post-processing of ensemble forecasts of the relevant weather variables. Example approaches and test-case applications will be covered, e.g., the Horns Rev offshore wind farm in Denmark and gridded forecasts for the whole of continental Europe. Finally, we will illustrate some of the limitations of current forecast verification frameworks, which make it difficult to fully assess the quality of post-processing approaches for obtaining renewable energy predictions.
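    A standard score for verifying the ensemble and density forecasts discussed here is the continuous ranked probability score (CRPS), which has a simple closed-form sample estimator for a finite ensemble. A sketch (illustrative; the abstract does not prescribe a particular score):

    ```python
    import numpy as np

    def crps_ensemble(members, obs):
        """Sample CRPS for one ensemble forecast and one scalar observation:
        E|X - y| - 0.5 * E|X - X'|; lower is better, and 0 means a perfect,
        fully sharp forecast."""
        x = np.asarray(members, dtype=float)
        term1 = np.mean(np.abs(x - obs))                            # accuracy
        term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))      # spread
        return term1 - term2
    ```

    Because the spread term rewards sharp ensembles while the accuracy term rewards calibrated ones, CRPS generalizes the mean absolute error to probabilistic forecasts and reduces to it for a single-member "ensemble".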

  18. High resolution probabilistic precipitation forecast over Spain combining the statistical downscaling tool PROMETEO and the AEMET short range EPS system (AEMET/SREPS)

    NASA Astrophysics Data System (ADS)

    Cofino, A. S.; Santos, C.; Garcia-Moya, J. A.; Gutierrez, J. M.; Orfila, B.

    2009-04-01

    The Short-Range Ensemble Prediction System (SREPS) is a multi-LAM (UM, HIRLAM, MM5, LM and HRM), multi analysis/boundary-condition (ECMWF, UKMetOffice, DWD and GFS) system run twice a day by AEMET (72 hours lead time) over a European domain, with a total of 5 (LAMs) x 4 (GCMs) = 20 members. One of the main goals of this project is to analyze the impact of models and boundary conditions on short-range, high-resolution forecast precipitation. A previous validation of this method was carried out using a set of climate networks in Spain, France and Germany, interpolating the predictions to the gauge locations (SREPS, 2008). In this work we compare those results with results obtained by using a statistical downscaling method to post-process the global predictions, obtaining an "advanced interpolation" for local precipitation from climate-network precipitation observations. In particular, we apply the PROMETEO downscaling system, based on analogs, and compare the SREPS ensemble of 20 members with the PROMETEO statistical ensemble of 5 (analog ensemble) x 4 (GCMs) = 20 members. Moreover, we also compare the performance of a combined approach that post-processes the SREPS outputs using the PROMETEO system. References: SREPS 2008. 2008 EWGLAM-SRNWP Meeting (http://www.aemet.es/documentos/va/divulgacion/conferencias/prediccion/Ewglam/PRED_CSantos.pdf)
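    The analog method behind PROMETEO can be reduced to a few lines: find the historical days whose large-scale predictor patterns most resemble the day being forecast, and use their observed local precipitation as a statistical ensemble. A generic sketch (function and variable names are ours; the operational system is more elaborate):

    ```python
    import numpy as np

    def analog_ensemble(pred_hist, precip_hist, pred_today, k=5):
        """Analog downscaling: the k historical days whose large-scale
        predictor vectors are closest (Euclidean) to today's supply their
        observed local precipitation as a k-member ensemble.
        pred_hist: (days, n_predictors); precip_hist: (days,)."""
        d = np.linalg.norm(pred_hist - pred_today, axis=1)
        nearest = np.argsort(d)[:k]           # indices of the k best analogs
        return precip_hist[nearest]
    ```

    Running one such analog search per GCM member is what yields the 5 (analogs) x 4 (GCMs) = 20-member statistical ensemble described above.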

  19. Origins of fine aerosol mass in the Baltimore-Washington corridor: implications from observation, factor analysis, and ensemble air parcel back trajectories

    NASA Astrophysics Data System (ADS)

    Antony Chen, L.-W.; Doddridge, Bruce G.; Dickerson, Russell R.; Chow, Judith C.; Henry, Ronald C.

    Chemically speciated fine particulate matter (PM2.5) and trace gases (including NH3, HNO3, CO, SO2, NOy) have been sampled at Fort Meade (FME: 39.10°N, 76.74°W; elevation 46 m MSL), Maryland, since July 1999. FME is suburban, located in the middle of the Baltimore-Washington corridor and generally downwind of the highly industrialized Midwest; PM2.5 at FME is therefore expected to have both local and regional sources. Measurements over a 2-year period include eight seasonally representative months. The PM2.5 shows an annual mean of 13 μg m−3 and consists primarily of sulfate, nitrate, ammonium, and carbonaceous material. Day-to-day and seasonal variations in the PM2.5 chemical composition reflect changes in the contributions of various sources. UNMIX, an innovative receptor model, is used to retrieve potential sources of the PM2.5. A six-factor model, comprising regional sulfate, local sulfate, wood smoke, copper/iron processing industry, mobile, and secondary nitrate, is constructed and compared with reported source emission profiles. The six factors are studied further using an ensemble back-trajectory method to identify possible source locations. Sources of local sulfate, mobile emissions, and secondary nitrate are more localized around the receptor than those of the other factors. Regional sulfate and wood smoke are more regional and associated with westerly and southerly transport, respectively. This study suggests that the local contribution to PM2.5 mass can vary from <30% in summer to >60% in winter.

  20. Molecular dynamics study of dual-phase microstructure of Titanium and Zirconium metals during the quenching process

    NASA Astrophysics Data System (ADS)

    Miyazaki, Narumasa; Sato, Kazunori; Shibutani, Yoji

    Dual-phase (DP) transformation, which produces ferrite- and/or martensite-based multicomponent microstructural phases, is one of the most effective routes to producing functional alloys. To obtain DP structures such as DP steels, thermal processes such as quenching, tempering and annealing are usually applied. As the transformation dynamics of the DP microstructure depend on temperature, annealing time, and quenching rate, the physical properties of materials can be tuned by controlling microstructure type, size, interfaces and so on. In this study, to understand the behaviour of DP transformation and to control material properties by tuning DP microstructures, we analyse the atomistic dynamics of DP transformation during the quenching process, and the details of the DP microstructures, using molecular dynamics simulations. As target metals we focus on the group 4 transition metals Ti and Zr, described by EAM interatomic potentials. For the Ti and Zr models we perform molecular dynamics simulations of a melt-quenching process from 3000 K to 0 K under the isothermal-isobaric ensemble. During the process, for each material, we observe a liquid-to-HCP-like transition around the melting temperature, followed by an HCP-BCC-like transition around the martensitic transformation temperature. Furthermore, we clearly distinguish the DP microstructure in each quenched model.

  1. Characterization of coronary plaque regions in intravascular ultrasound images using a hybrid ensemble classifier.

    PubMed

    Hwang, Yoo Na; Lee, Ju Hwan; Kim, Ga Young; Shin, Eun Seok; Kim, Sung Min

    2018-01-01

    The purpose of this study was to propose a hybrid ensemble classifier to characterize coronary plaque regions in intravascular ultrasound (IVUS) images. Pixels were allocated to one of four tissues (fibrous tissue (FT), fibro-fatty tissue (FFT), necrotic core (NC), and dense calcium (DC)) through processes of border segmentation, feature extraction, feature selection, and classification. Grayscale IVUS images and their corresponding virtual histology images were acquired from 11 patients with known or suspected coronary artery disease using a 20 MHz catheter. A total of 102 hybrid textural features including first order statistics (FOS), gray level co-occurrence matrix (GLCM), extended gray level run-length matrix (GLRLM), Laws, local binary pattern (LBP), intensity, and discrete wavelet features (DWF) were extracted from IVUS images. To select optimal feature sets, a genetic algorithm was implemented. A hybrid ensemble classifier based on histogram and texture information was then used for plaque characterization, with the optimal feature set as its input. After tissue characterization, parameters including sensitivity, specificity, and accuracy were calculated to validate the proposed approach. A ten-fold cross-validation approach was used to determine the statistical significance of the proposed method. Our experimental results showed that the proposed method had reliable performance for tissue characterization in IVUS images. The hybrid ensemble classification method outperformed other existing methods by achieving characterization accuracy of 81% for FFT and 75% for NC. In addition, this study showed that Laws features (SSV and SAV) were key indicators for coronary tissue characterization. The proposed method had high clinical applicability for image-based tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
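    Among the feature families listed, the gray-level co-occurrence matrix (GLCM) is the classic texture descriptor and is simple to sketch: count how often pairs of gray levels co-occur at a fixed pixel offset, then derive scalar features from the normalized table. A minimal numpy version (offset choice and feature set are ours, not the paper's full 102-feature pipeline):

    ```python
    import numpy as np

    def glcm(img, levels=8, dx=1, dy=0):
        """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
        normalized to a joint probability table. img holds integer gray
        levels in [0, levels)."""
        h, w = img.shape
        m = np.zeros((levels, levels))
        for y in range(h - dy):
            for x in range(w - dx):
                m[img[y, x], img[y + dy, x + dx]] += 1.0
        return m / m.sum()

    def glcm_features(m):
        """Three classic Haralick-style texture features."""
        i, j = np.indices(m.shape)
        contrast = np.sum(m * (i - j) ** 2)
        energy = np.sum(m ** 2)                     # angular second moment
        homogeneity = np.sum(m / (1.0 + np.abs(i - j)))
        return contrast, energy, homogeneity
    ```

    A perfectly flat region yields contrast 0 and energy 1, while a fine alternating texture pushes contrast up and homogeneity down — exactly the kind of discrimination the classifier exploits between tissue types.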

  2. Ensemble streamflow assimilation with the National Water Model.

    NASA Astrophysics Data System (ADS)

    Rafieeinasab, A.; McCreight, J. L.; Noh, S.; Seo, D. J.; Gochis, D.

    2017-12-01

    Through case studies of flooding across the US, we compare the performance of the National Water Model (NWM) data assimilation (DA) scheme to that of a newly implemented ensemble Kalman filter approach. The NOAA National Water Model (NWM) is an operational implementation of the community WRF-Hydro modeling system. As of August 2016, the NWM forecasts of distributed hydrologic states and fluxes (including soil moisture, snowpack, ET, and ponded water) over the contiguous United States have been publicly disseminated by the National Centers for Environmental Prediction (NCEP). It also provides streamflow forecasts at more than 2.7 million river reaches up to 30 days in advance. The NWM employs a nudging scheme to assimilate more than 6,000 USGS streamflow observations and provide initial conditions for its forecasts. A problem with nudging is that the forecasts relax quickly back to the model's open-loop bias. This has been partially addressed by an experimental bias correction approach, which was found to suffer from phase errors during flooding events. In this work, we present an ensemble streamflow data assimilation approach combining the new channel-only capabilities of the NWM and HydroDART (a coupling of the offline WRF-Hydro model and NCAR's Data Assimilation Research Testbed, DART). Our approach focuses on the single model state of discharge and incorporates error distributions on channel influxes (overland and groundwater) in the assimilation via an ensemble Kalman filter (EnKF). To avoid filter degeneracy associated with a limited number of ensemble members at large scale, DART's covariance inflation (Anderson, 2009) and localization capabilities are implemented and evaluated. The current NWM data assimilation scheme is compared to preliminary results from the EnKF application for several flooding case studies across the US.
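    The EnKF analysis step with covariance inflation mentioned above can be sketched in a few lines for the simplest case of a directly observed scalar state. This is an illustrative perturbed-observation EnKF, not the HydroDART/DART implementation, and all names and values are ours:

    ```python
    import numpy as np

    def enkf_update(ens, y, obs_var, infl=1.05, rng=None):
        """Perturbed-observation EnKF analysis step for a directly observed
        scalar state, with multiplicative covariance inflation (the guard
        against filter degeneracy in small ensembles)."""
        rng = rng if rng is not None else np.random.default_rng(0)
        mean = ens.mean()
        ens = mean + infl * (ens - mean)        # covariance inflation
        pf = ens.var(ddof=1)                    # forecast error variance
        k = pf / (pf + obs_var)                 # Kalman gain (H = identity)
        y_pert = y + rng.normal(0.0, np.sqrt(obs_var), size=ens.shape)
        return ens + k * (y_pert - ens)         # analysis ensemble
    ```

    Localization (the other DART capability cited) enters in the multivariate case by tapering the sample covariances between distant state/observation pairs before computing the gain.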

  3. Intelligent ensemble T-S fuzzy neural networks with RCDPSO_DM optimization for effective handling of complex clinical pathway variances.

    PubMed

    Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang

    2013-07-01

    Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, there are many drawbacks, such as slow training rate, propensity to become trapped in a local minimum and poor ability to perform a global search. In order to improve overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize antecedent and consequent parameters of constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensemble T-S FNNs with RCDPSO_DM optimization to further improve stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The result demonstrates that intelligent ensemble T-S FNNs based on the RCDPSO_DM achieves superior performances, in terms of stability, efficiency, precision and generalizability, over PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. Therefore, it makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
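    Particle swarm optimization, the base algorithm that RCDPSO_DM extends with cooperative decomposition and a double mutation mechanism, is compact enough to sketch. A plain global-best PSO on a toy objective (inertia and acceleration coefficients are our choices, not the paper's):

    ```python
    import numpy as np

    def pso(f, dim=2, n_particles=20, iters=200, seed=0):
        """Plain global-best PSO minimizing f over an unbounded search space."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5.0, 5.0, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()                                  # personal bests
        pbest_f = np.array([f(p) for p in x])
        g = pbest[np.argmin(pbest_f)].copy()              # global best
        w, c1, c2 = 0.7, 1.5, 1.5                         # inertia, cognitive, social
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            fx = np.array([f(p) for p in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            g = pbest[np.argmin(pbest_f)].copy()
        return g, pbest_f.min()
    ```

    The variants in the abstract address the weaknesses of exactly this loop: decomposition fights the curse of dimensionality when optimizing many T-S FNN parameters, and mutation injects diversity so the swarm does not stall at a local minimum.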

  4. Population interactions between parietal and primary motor cortices during reach

    PubMed Central

    Rao, Naveen G.; Bondy, Adrian; Truccolo, Wilson; Donoghue, John P.

    2014-01-01

    Neural interactions between parietal area 2/5 and primary motor cortex (M1) were examined to determine the timing and behavioral correlates of cortico-cortical interactions. Neural activity in areas 2/5 and M1 was simultaneously recorded with 96-channel microelectrode arrays in three rhesus monkeys performing a center-out reach task. We introduce a new method to reveal parietal-motor interactions at a population level using partial spike-field coherence (PSFC) between ensembles of neurons in one area and a local field potential (LFP) in another. PSFC reflects the extent of phase locking between spike times and LFP, after removing the coherence between LFPs in the two areas. Spectral analysis of M1 LFP revealed three bands: low, medium, and high, differing in power between movement preparation and performance. We focus on PSFC in the 1–10 Hz band, in which coherence was strongest. PSFC was also present in the 10–40 Hz band during movement preparation in many channels but generally nonsignificant in the 60–200 Hz band. Ensemble PSFC revealed stronger interactions than single cell-LFP pairings. PSFC of area 2/5 ensembles with M1 LFP typically rose around movement onset and peaked ∼500 ms afterward. PSFC was typically stronger for subsets of area 2/5 neurons and M1 LFPs with similar directional bias than for those with opposite bias, indicating that area 2/5 contributes movement direction information. Together with linear prediction of M1 LFP by area 2/5 spiking, the ensemble-LFP pairing approach reveals interactions missed by single neuron-LFP pairing, demonstrating that cortico-cortical communication can be more readily observed at the ensemble level. PMID:25210154

  5. Mechanical desorption of a single chain: unusual aspects of phase coexistence at a first-order transition.

    PubMed

    Skvortsov, Alexander M; Klushin, Leonid I; Polotsky, Alexey A; Binder, Kurt

    2012-03-01

    The phase transition occurring when a single polymer chain adsorbed at a planar solid surface is mechanically desorbed is analyzed in two statistical ensembles. In the force ensemble, a constant force applied to the nongrafted end of the chain (which is grafted at its other end) is used as the given external control variable. In the z-ensemble, the displacement z of this nongrafted end from the surface is taken as the externally controlled variable. Basic thermodynamic parameters, such as the adsorption energy, exhibit very different behavior as functions of these control parameters. In the thermodynamic limit of infinite chain length, the desorption transition with the force as the control parameter is clearly discontinuous, while in the z-ensemble continuous variations are found. However, one should not be misled by a too-naive application of the Ehrenfest criterion into classifying the transition as continuous: rather, one traverses a two-phase coexistence region, where part of the chain is still adsorbed while the other part is desorbed and stretched. Similarities with and differences from two-phase coexistence at vapor-liquid transitions are pointed out. The rounding of the singularities due to finite chain length is illustrated by exact calculations for the nonreversal random walk model on the simple cubic lattice. A new concept of local order parameter profiles for the description of the mechanical desorption of adsorbed polymers is suggested. This concept gives evidence both for the existence of two-phase coexistence within single polymer chains at this transition and for the anomalous character of this coexistence. Consequences for the proper interpretation of experiments performed in different ensembles are briefly mentioned.
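    The relation between the two ensembles can be made explicit. Writing $Z(z)$ for the partition function of chains with the nongrafted end held at height $z$, the force-ensemble partition function is its Laplace-type transform (a textbook relation, stated here in our notation rather than the authors'):

    ```latex
    Z(f) \;=\; \int_0^{\infty} \mathrm{d}z \; e^{\beta f z} \, Z(z) ,
    ```

    so that in the thermodynamic limit the free energies $F(z) = -k_{\mathrm B}T \ln Z(z)$ and $G(f) = -k_{\mathrm B}T \ln Z(f)$ are Legendre transforms of each other,

    ```latex
    G(f) \;=\; \min_z \left[ F(z) - f z \right],
      \qquad f = \left.\frac{\partial F}{\partial z}\right|_{z = z^*} .
    ```

    A first-order jump of $z(f)$ in the force ensemble corresponds to a linear (coexistence) segment of $F(z)$ in the z-ensemble, which is why the same transition appears discontinuous in one ensemble and continuous in the other.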

  6. Hybrid vs Adaptive Ensemble Kalman Filtering for Storm Surge Forecasting

    NASA Astrophysics Data System (ADS)

    Altaf, M. U.; Raboudi, N.; Gharamti, M. E.; Dawson, C.; McCabe, M. F.; Hoteit, I.

    2014-12-01

    Recent storm surge events caused by hurricanes in the Gulf of Mexico have motivated efforts to forecast water levels accurately. Toward this goal, a parallel architecture has been implemented based on a high-resolution storm surge model, ADCIRC. However, the accuracy of the model depends notably on the quality and recentness of the input data (mainly winds and bathymetry), the model parameters (e.g. wind and bottom drag coefficients), and the resolution of the model grid. Given all these uncertainties in the system, the challenge is to build an efficient prediction system capable of providing accurate forecasts sufficiently far ahead of time for the authorities to evacuate areas at risk. We have developed an ensemble-based data assimilation system to frequently assimilate available data into the ADCIRC model in order to improve its accuracy. In this contribution we study and analyze the performance of different ensemble Kalman filter methodologies for efficient short-range storm surge forecasting, the aim being to produce the most accurate forecasts at the lowest possible computational cost. Using Hurricane Ike meteorological data to force the ADCIRC model over a domain including the Gulf of Mexico coastline, we implement and compare the forecasts of the standard EnKF, the hybrid EnKF and an adaptive EnKF. The last two schemes have been introduced as efficient tools for enhancing the behavior of the EnKF when implemented with small ensembles, by exploiting information from a static background covariance matrix. Covariance inflation and localization are implemented in all these filters. Our results suggest that both the hybrid and the adaptive approaches provide significantly better forecasts than the standard EnKF, even when implemented with much smaller ensembles.

  7. Quantifying Uncertainty in Flood Inundation Mapping Using Streamflow Ensembles and Multiple Hydraulic Modeling Techniques

    NASA Astrophysics Data System (ADS)

    Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.

    2016-12-01

    The National Water Model (NWM) provides a platform for operationalizing nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision-making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e. extent, depth, and velocity) to spatially quantify and visually assess the uncertainties associated with the predicted flood inundation maps. The setting for this study is a highly urbanized watershed along the Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite of ensembles of future flood inundation predictions. Time-lagged ensembles from the NWM short-range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to the iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between output ensembles for each forecast grid cell provided the uncertainty metrics for predicted flood water inundation extent, depth, and flow velocity. For visualization, a series of flood maps displaying flood extent, water depth, and flow velocity, along with the underlying uncertainty associated with each forecast variable, were produced.
The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify the high flood risk zones.
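
The per-grid-cell agreement computation described above can be sketched as follows. This is a minimal illustration, not the study's code: the binary wet/dry representation, array shapes, and function name are assumptions.

```python
import numpy as np

def inundation_agreement(ensemble):
    """ensemble: (n_members, ny, nx) array of 0/1 wet flags, one map per
    ensemble member. Returns the per-cell fraction of members predicting
    inundation, usable as an uncertainty metric."""
    return np.asarray(ensemble, dtype=float).mean(axis=0)

# Three toy ensemble members on a 2 x 2 forecast grid
members = np.array([
    [[1, 0], [1, 1]],
    [[1, 0], [0, 1]],
    [[1, 1], [1, 1]],
])
prob = inundation_agreement(members)
# Cells with prob near 0 or 1 are low-uncertainty; intermediate values
# flag zones where the ensemble disagrees (high flood-risk uncertainty).
```

The same reduction applied to depth or velocity fields (e.g., an ensemble standard deviation instead of a mean) yields the corresponding uncertainty layers for the map products.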

  8. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum that realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of a protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that the average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on the conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used data from solid-state NMR spectroscopy, an orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989
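
The central idea, a restraint that acts on the ensemble-average coordinates rather than on each copy, can be sketched as follows. This is a hedged illustration under stated assumptions: the harmonic form, force constant, and function name are placeholders, not the authors' implementation.

```python
import numpy as np

def ensemble_restraint_forces(coords, ref, k=100.0):
    """coords: (m, n, 3) positions of m protein copies in the simulated
    crystal block; ref: (n, 3) crystallographic target coordinates.
    Restraint energy E = (k/2) * sum(|<x> - ref|^2), where <x> is the
    average over copies; returns the restraint force -dE/dx for each copy."""
    mean = coords.mean(axis=0)
    # dE/dx_i = (k/m) * (<x> - ref), identical for every copy i
    f = -(k / coords.shape[0]) * (mean - ref)
    return np.broadcast_to(f, coords.shape).copy()
```

Because each copy feels only a 1/m share of the gradient, an individual molecule can fluctuate freely under its force field as long as the ensemble average stays near the crystallographic target, which is the property the abstract emphasizes.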

  9. Application of ensemble back trajectory and factor analysis methods to aerosol data from Fort Meade, MD: Implications for sources

    NASA Astrophysics Data System (ADS)

    Chen, L. A.; Doddridge, B. G.; Dickerson, R. R.

    2001-12-01

    As the primary field experiment of the Maryland Aerosol Research and CHaracterization (MARCH-Atlantic) study, chemically speciated PM2.5 has been sampled at Fort Meade (FME, 39.10° N, 76.74° W) since July 1999. FME is a suburban site located in the middle of the bustling Baltimore-Washington corridor, which is generally downwind of the highly industrialized Midwest. Due to this unique sampling location, the PM2.5 observed at FME is expected to come from both local and regional sources, with relative contributions varying temporally. This variation, believed to be largely controlled by meteorology, influences the day-to-day and seasonal profiles of PM2.5 mass concentration and chemical composition. Air parcel back trajectories, which describe the path of air parcels traveling backward in time from the site (receptor), reflect changes in the synoptic meteorological conditions. In this paper, an ensemble back trajectory method is employed to study the meteorology associated with each high/low PM2.5 episode in different seasons. For every sampling day, the residence time of air parcels within the eastern US at a 1° x 1° x 500 m geographic resolution can be estimated in order to resolve the areas likely dominating the production of various PM2.5 components. Local sources are found to be more dominant in winter than in summer. Factor analysis is based on a mass balance approach and provides useful insights into air pollution data. Here, a newly developed factor analysis model (UNMIX) is used to extract source profiles and contributions from the speciated PM2.5 data. Combining the model results with the ensemble back trajectory method improves the understanding of the source regions and helps partition the contributions from local and more distant areas. http://www.meto.umd.edu/~bruce/MARCH-Atl.html
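
The residence-time gridding step can be sketched as below. The trajectory points and grid edges are synthetic, and the study's additional 500 m altitude binning is omitted for brevity; only the horizontal 1° x 1° accumulation is shown.

```python
import numpy as np

def residence_time_grid(lats, lons, dt_hours=1.0,
                        lat_edges=None, lon_edges=None):
    """Accumulate back-trajectory residence time on a 1-degree grid.
    Each trajectory point contributes dt_hours to the cell it falls in."""
    if lat_edges is None:
        lat_edges = np.arange(24.0, 51.0)    # 1-degree latitude bins, eastern US
    if lon_edges is None:
        lon_edges = np.arange(-96.0, -65.0)  # 1-degree longitude bins
    grid, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
    return grid * dt_hours                   # hours of residence per cell

# Three hourly trajectory points, two of them near the FME receptor
grid = residence_time_grid([39.1, 39.2, 40.5], [-76.7, -76.7, -75.2])
```

Summing such grids over an ensemble of trajectories for each sampling day yields the residence-time maps used to resolve likely source areas.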

  10. Integrated ensemble noise-reconstructed empirical mode decomposition for mechanical fault detection

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Ji, Feng; Gao, Yuan; Zhu, Jun; Wei, Chenjun; Zhou, Yu

    2018-05-01

    A new branch of fault detection utilizes noise, e.g., by enhancing, adding, or estimating it, to improve the signal-to-noise ratio (SNR) and extract fault signatures. Within this branch, ensemble noise-reconstructed empirical mode decomposition (ENEMD) is a novel noise utilization method that ameliorates mode mixing and denoises the intrinsic mode functions (IMFs). Despite its potential for superior performance in detecting weak and multiple faults, the method still suffers from two major problems: a user-defined parameter and weak capability in high-SNR cases. Hence, integrated ensemble noise-reconstructed empirical mode decomposition is proposed to overcome these drawbacks, improved by two noise estimation techniques for different SNRs as well as a noise estimation strategy. Independent of any artificial setup, noise estimation by minimax thresholding is improved for the low-SNR case and shows particularly strong performance for signature enhancement. To approximate weak noise precisely, noise estimation by local reconfiguration using singular value decomposition (SVD) is proposed for the high-SNR case, which is particularly effective at reducing mode mixing. Therein, the sliding window for projecting the phase space is optimally designed by correlation minimization. Meanwhile, the appropriate singular order for the local reconfiguration to estimate the noise is determined by the inflection point of the increment trend of the normalized singular entropy. Furthermore, the noise estimation strategy, i.e., the approach for selecting between the two estimation techniques along with the critical case, is developed and discussed for different SNRs by means of the possible noise-only IMF family. The method is validated by repeatable simulations to demonstrate its overall performance and especially to confirm its capability for noise estimation.
Finally, the method is applied to detect the local wear fault from a dual-axis stabilized platform and the gear crack from an operating electric locomotive to verify its effectiveness and feasibility.

  11. KinImmerse: Macromolecular VR for NMR ensembles

    PubMed Central

    Block, Jeremy N; Zielinski, David J; Chen, Vincent B; Davis, Ian W; Vinson, E Claire; Brady, Rachael; Richardson, Jane S; Richardson, David C

    2009-01-01

    Background In molecular applications, virtual reality (VR) and immersive virtual environments have generally been used and valued for the visual and interactive experience – to enhance intuition and communicate excitement – rather than as part of the actual research process. In contrast, this work develops a software infrastructure for research use and illustrates such use on a specific case. Methods The Syzygy open-source toolkit for VR software was used to write the KinImmerse program, which translates the molecular capabilities of the kinemage graphics format into software for display and manipulation in the DiVE (Duke immersive Virtual Environment) or other VR system. KinImmerse is supported by the flexible display construction and editing features in the KiNG kinemage viewer and it implements new forms of user interaction in the DiVE. Results In addition to molecular visualizations and navigation, KinImmerse provides a set of research tools for manipulation, identification, co-centering of multiple models, free-form 3D annotation, and output of results. The molecular research test case analyzes the local neighborhood around an individual atom within an ensemble of nuclear magnetic resonance (NMR) models, enabling immersive visual comparison of the local conformation with the local NMR experimental data, including target curves for residual dipolar couplings (RDCs). Conclusion The promise of KinImmerse for production-level molecular research in the DiVE is shown by the locally co-centered RDC visualization developed there, which gave new insights now being pursued in wider data analysis. PMID:19222844

  12. Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5

    DOE PAGES

    Qian, Yun; Yan, Huiping; Hou, Zhangshuan; ...

    2015-04-10

    We investigate the sensitivity of precipitation characteristics (mean, extreme, and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that of the 22 parameters perturbed in the cloud ensemble, the six having the greatest influence on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approach or on the concomitant parameters selected. Generally, the generalized linear model (GLM) is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in the mid-latitude continental regions, but is very small in tropical continental regions.
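
Latin hypercube sampling, one of the two parameter-perturbation schemes named above, can be sketched in a few lines. This is a generic stratified sampler under illustrative assumptions (parameter names and bounds are not CAM5's); it guarantees one sample per stratum in each dimension.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=None):
    """Draw n_samples points with exactly one point per stratum in each
    dimension. bounds: list of (low, high) ranges, one per uncertain
    parameter."""
    rng = np.random.default_rng(seed)
    samples = np.empty((n_samples, len(bounds)))
    for j, (lo, hi) in enumerate(bounds):
        strata = rng.permutation(n_samples)            # shuffle stratum order
        u = (strata + rng.random(n_samples)) / n_samples
        samples[:, j] = lo + u * (hi - lo)
    return samples

# e.g., 10 members over two hypothetical parameter ranges
design = latin_hypercube(10, [(0.0, 1.0), (5.0, 15.0)], seed=0)
```

Each row of `design` would define one perturbed-parameter model run; a 1100-member cloud ensemble over 22 parameters corresponds to `latin_hypercube(1100, bounds_22)`.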

  13. Neuronal Ensemble Synchrony during Human Focal Seizures

    PubMed Central

    Ahmed, Omar J.; Harrison, Matthew T.; Eskandar, Emad N.; Cosgrove, G. Rees; Madsen, Joseph R.; Blum, Andrew S.; Potter, N. Stevenson; Hochberg, Leigh R.; Cash, Sydney S.

    2014-01-01

    Seizures are classically characterized as the expression of hypersynchronous neural activity, yet the true degree of synchrony in neuronal spiking (action potentials) during human seizures remains a fundamental question. We quantified the temporal precision of spike synchrony in ensembles of neocortical neurons during seizures in people with pharmacologically intractable epilepsy. Two seizure types were analyzed: those characterized by sustained gamma (∼40–60 Hz) local field potential (LFP) oscillations or by spike-wave complexes (SWCs; ∼3 Hz). Fine (<10 ms) temporal synchrony was rarely present during gamma-band seizures, where neuronal spiking remained highly irregular and asynchronous. In SWC seizures, phase locking of neuronal spiking to the SWC spike phase induced synchrony at a coarse 50–100 ms level. In addition, transient fine synchrony occurred primarily during the initial ∼20 ms period of the SWC spike phase and varied across subjects and seizures. Sporadic coherence events between neuronal population spike counts and LFPs were observed during SWC seizures in high (∼80 Hz) gamma-band and during high-frequency oscillations (∼130 Hz). Maximum entropy models of the joint neuronal spiking probability, constrained only on single neurons' nonstationary coarse spiking rates and local network activation, explained most of the fine synchrony in both seizure types. Our findings indicate that fine neuronal ensemble synchrony occurs mostly during SWC, not gamma-band, seizures, and primarily during the initial phase of SWC spikes. Furthermore, these fine synchrony events result mostly from transient increases in overall neuronal network spiking rates, rather than changes in precise spiking correlations between specific pairs of neurons. PMID:25057195

  14. Characterization and Simulation of Gunfire with Wavelets

    DOE PAGES

    Smallwood, David O.

    1999-01-01

    Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.
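
The simulation recipe, transform each single-round record, fit per-coefficient Gaussian statistics, then invert a random draw, can be sketched as follows. A one-level orthonormal Haar transform stands in for the paper's (unspecified here) wavelet, and the zero-memory nonlinear correction is omitted; record length and ensemble are synthetic.

```python
import numpy as np

def haar(x):
    """One-level orthonormal Haar transform; len(x) must be even."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def ihaar(a, d):
    """Inverse of haar(); perfect reconstruction."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def simulate_round(rounds, seed=None):
    """Simulate one single-round record: draw Gaussian wavelet coefficients
    with the per-coefficient mean/std estimated over the measured records
    (rows of `rounds`), then invert the transform."""
    rng = np.random.default_rng(seed)
    coeffs = np.array([np.concatenate(haar(r)) for r in rounds])
    sim = rng.normal(coeffs.mean(axis=0), coeffs.std(axis=0))
    h = sim.size // 2
    return ihaar(sim[:h], sim[h:])
```

Concatenating many such simulated rounds reproduces the multi-round firing record described in the abstract.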

  15. Generalized eigenstate typicality in translation-invariant quasifree fermionic models

    NASA Astrophysics Data System (ADS)

    Riddell, Jonathon; Müller, Markus P.

    2018-01-01

    We demonstrate a generalized notion of eigenstate thermalization for translation-invariant quasifree fermionic models: the vast majority of eigenstates satisfying a finite number of suitable constraints (e.g., fixed energy and particle number) have the property that their reduced density matrix on small subsystems approximates the corresponding generalized Gibbs ensemble. To this end, we generalize analytic results by H. Lai and K. Yang [Phys. Rev. B 91, 081110(R) (2015), 10.1103/PhysRevB.91.081110] and illustrate the claim numerically by example of the Jordan-Wigner transform of the XX spin chain.

  16. The comparison of automated clustering algorithms for resampling representative conformer ensembles with RMSD matrix.

    PubMed

    Kim, Hyoungrae; Jang, Cheongyun; Yadav, Dharmendra K; Kim, Mi-Hyun

    2017-03-23

    The accuracy of any 3D-QSAR, pharmacophore, or 3D-similarity-based chemometric target-fishing model is highly dependent on a reasonable sample of active conformations. A number of diverse conformational sampling algorithms exist that exhaustively generate enough conformers; however, model-building methods rely on an explicit number of common conformers. In this work, we have attempted to devise clustering algorithms that automatically find a reasonable number of representative conformer ensembles from an asymmetric dissimilarity matrix generated with the OpenEye toolkit. RMSD was the key descriptor (variable): each column of the N × N matrix is treated as N variables describing the relationship (network) between one conformer (in a row) and the other N conformers. This approach was used to evaluate the performance of well-known clustering algorithms, comparing them in terms of generating representative conformer ensembles, and to test them over different matrix transformation functions with respect to stability. In the network, the representative conformer group could be resampled by four kinds of algorithms with implicit parameters. The directed dissimilarity matrix is the only input to the clustering algorithms. The Dunn index, the Davies-Bouldin index, eta-squared values, and omega-squared values were used to evaluate the clustering algorithms with respect to compactness and explanatory power. The evaluation also includes the reduction (abstraction) rate of the data, the correlation between the sizes of the population and the samples, the computational complexity, and the memory usage. Every algorithm could find representative conformers automatically without any user intervention, and they reduced the data to 14-19% of the original values within at most 1.13 s per sample. The clustering methods are simple and practical, as they are fast and do not require any explicit parameters.
RCDTC presented the maximum Dunn and omega-squared values among the four algorithms, in addition to a consistent reduction rate between the population size and the sample size. The performance of the clustering algorithms was consistent over different transformation functions. Moreover, the clustering method can also be applied to molecular dynamics sampling simulation results.
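
One of the cluster-validity measures named above, the Dunn index, can be computed from a dissimilarity matrix as sketched below. Symmetrizing the directed matrix is an illustrative choice for this sketch, not necessarily the paper's treatment.

```python
import numpy as np

def dunn_index(D, labels):
    """Dunn index: min inter-cluster distance / max intra-cluster diameter
    (higher is better). D: (N, N) dissimilarity matrix; labels: cluster id
    per item."""
    D = np.asarray(D, float)
    D = (D + D.T) / 2                       # symmetrize a directed matrix
    labels = np.asarray(labels)
    ids = np.unique(labels)
    intra = max(D[np.ix_(labels == c, labels == c)].max() for c in ids)
    inter = min(D[np.ix_(labels == a, labels == b)].min()
                for a in ids for b in ids if a != b)
    return inter / intra

# Two tight clusters far apart -> large Dunn index
D = np.array([[0., 1., 10., 10.],
              [1., 0., 10., 10.],
              [10., 10., 0., 1.],
              [10., 10., 1., 0.]])
score = dunn_index(D, [0, 0, 1, 1])
```

Applied to a conformer RMSD matrix, a clustering that maximizes this score yields compact, well-separated representative ensembles.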

  17. Lessons from the Desert: Integrating Managerial Expertise and Learning for Organizational Transformation

    ERIC Educational Resources Information Center

    Roth, George

    2004-01-01

    Reflection upon a field study of a corporate transformation provides insights into the application and integration of organizational learning theory and frameworks with local, corporate knowledge. In the corporate transformation studied this local knowledge came from consumer psychology, marketing campaigns and the use of media. When these ideas…

  18. Data Assimilation using Artificial Neural Networks for the global FSU atmospheric model

    NASA Astrophysics Data System (ADS)

    Cintra, Rosangela; Cocke, Steven; Campos Velho, Haroldo

    2015-04-01

    Data assimilation is the process by which measurements and model predictions are combined to obtain an accurate representation of the state of the modeled system. Uncertainty is characteristic of the atmosphere; coupled with inevitable inadequacies in observations and computer models, it increases errors in weather forecasts. Data assimilation is a technique for generating an initial condition for a weather or climate forecast. This paper shows the results of a data assimilation technique using artificial neural networks (ANN) to obtain the initial condition for the atmospheric general circulation model (AGCM) of Florida State University in the USA. The Local Ensemble Transform Kalman Filter (LETKF) is implemented with the Florida State University Global Spectral Model (FSUGSM). The ANN data assimilation is designed to emulate the initial condition from the LETKF to run the FSUGSM. LETKF is a version of the Kalman filter that uses Monte Carlo ensembles of short-term forecasts to solve the data assimilation problem. The FSUGSM is a multilevel (27 vertical levels) spectral primitive-equation model with a vertical sigma coordinate. All variables are expanded horizontally in a truncated series of spherical harmonic functions (at resolution T63), and a transform technique is applied to calculate the physical processes in real space. The LETKF data assimilation experiments are based on synthetic observation data (surface pressure, absolute temperature, zonal wind component, meridional wind component, and humidity). For the ANN data assimilation scheme, we use a Multilayer Perceptron (MLP-DA) with a supervised training algorithm, where the ANN receives input vectors with their corresponding response or target output from the LETKF scheme. An automatic tool that finds the optimal representation for these ANNs configures the MLP-DA in this experiment.
After the training process, the MLP-DA scheme acts as a data assimilation function whose inputs are the observations and a short-range forecast at each model grid point. The ANNs were trained with data from each month of 2001, 2002, 2003, and 2004. A hind-casting experiment for the data assimilation cycle using MLP-DA was performed with synthetic observations for January 2005. The numerical results demonstrate the effectiveness of the ANN technique for atmospheric data assimilation, since the analyses (initial conditions) have quality similar to the LETKF analyses. The major advantage of using MLP-DA is its computational performance, which is faster than LETKF. The reduced computational cost allows the inclusion of a greater number of observations and new data sources, and the use of higher-resolution models, which ensures the accuracy of the analysis and of the resulting weather prediction.
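
A single LETKF analysis step, the operation the MLP is trained to emulate, can be sketched in the ensemble-weight space as follows. This is a hedged, global-domain illustration of the standard formulation (after Hunt et al.); the "local" part of the algorithm repeats this per grid point using only nearby observations, and all array names and sizes here are illustrative.

```python
import numpy as np

def letkf_analysis(Xb, y, H, R):
    """One (global) LETKF analysis step.
    Xb: (n, k) background ensemble of k states; y: (p,) observations;
    H: (p, n) linear observation operator; R: (p, p) obs-error covariance.
    Returns the (n, k) analysis ensemble."""
    n, k = Xb.shape
    xb = Xb.mean(axis=1, keepdims=True)
    Xp = Xb - xb                                  # background perturbations
    Yb = H @ Xb
    Yp = Yb - Yb.mean(axis=1, keepdims=True)      # obs-space perturbations
    C = Yp.T @ np.linalg.inv(R)
    Pa = np.linalg.inv((k - 1) * np.eye(k) + C @ Yp)
    wbar = Pa @ (C @ (y - (H @ xb).ravel()))      # mean weight update
    evals, evecs = np.linalg.eigh((k - 1) * Pa)   # symmetric square root
    Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return xb + Xp @ (wbar[:, None] + Wa)
```

For a scalar toy case, `letkf_analysis(np.array([[1., 2., 3.]]), np.array([4.]), np.array([[1.]]), np.array([[1.]]))` moves the background mean from 2 to 3, i.e. halfway to the observation, since the background and observation variances are both 1.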

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Logan C.; Ciesielski, Peter N.; Jarvis, Mark W.

    Here, biomass particles can experience variable thermal conditions during fast pyrolysis due to differences in their size and morphology, and from local temperature variations within a reactor. These differences lead to increased heterogeneity of the chemical products obtained in the pyrolysis vapors and bio-oil. Here we present a simple, high-throughput method to investigate the thermal history experienced by large ensembles of particles during fast pyrolysis by imaging and quantitative image analysis. We present a correlation between the surface luminance (darkness) of the biochar particle and the highest temperature that it experienced during pyrolysis. Next, we apply this correlation to large, heterogeneous ensembles of char particles produced in a laminar entrained flow reactor (LEFR). The results are used to interpret the actual temperature distributions delivered by the reactor over a range of operating conditions.
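
The image-analysis step, mean particle luminance after masking the background, followed by a luminance-to-temperature mapping, can be sketched as below. The background threshold and the linear calibration coefficients are hypothetical placeholders; the study fits its own correlation from controlled pyrolysis experiments.

```python
import numpy as np

def particle_luminance(gray, background=0.9):
    """gray: 2-D array of luminance values in [0, 1]. Pixels brighter than
    `background` are treated as background and excluded; the rest are
    averaged as the particle's surface luminance."""
    gray = np.asarray(gray, float)
    return gray[gray < background].mean()

def temperature_from_luminance(lum, a=-900.0, b=950.0):
    """Hypothetical linear calibration T = a*lum + b (deg C): darker char
    corresponds to a higher peak temperature."""
    return a * lum + b

# Toy 2 x 2 image: two char pixels (0.2, 0.4) and two background pixels
gray = np.array([[0.2, 0.95], [0.4, 1.0]])
lum = particle_luminance(gray)
```

Applying this per segmented particle over thousands of images yields the temperature distributions the abstract describes.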

  20. Graph-Theoretic Statistical Methods for Detecting and Localizing Distributional Change in Multivariate Data

    DTIC Science & Technology

    2015-06-01


  1. 9 Tips for Affordable Student Trips

    ERIC Educational Resources Information Center

    Adams, Jonathan

    2013-01-01

    The trick to having a successful and affordable trip is planning ahead and planning thoroughly. Keep the spirits high and the costs low by following a well-traveled ensemble director's suggestions as presented in this article. These tips include finding local attractions that are unique to the city that the group will be visiting, looking at…

  2. Precise and fast spatial-frequency analysis using the iterative local Fourier transform.

    PubMed

    Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook

    2016-09-19

    The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
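
The zoom-in idea, evaluate the DFT on an ever finer local frequency grid around the peak, can be sketched as follows. This is a simplified illustration, not the published ilFT: the grid size, shrink factor, and iteration count are assumptions, and convergence is only guaranteed for a well-isolated peak.

```python
import numpy as np

def local_dft(x, freqs):
    """Evaluate the DTFT of x at arbitrary normalized frequencies
    (cycles/sample), i.e., a DFT on a caller-chosen local grid."""
    n = np.arange(x.size)
    return np.exp(-2j * np.pi * np.outer(freqs, n)) @ x

def refine_peak(x, iters=8, m=32):
    """Iteratively zoom a local frequency grid onto the spectral peak."""
    X = np.fft.fft(x)
    f = int(np.argmax(np.abs(X))) / x.size   # coarse FFT estimate
    span = 1.0 / x.size                      # one FFT bin to either side
    for _ in range(iters):
        grid = np.linspace(f - span, f + span, m)
        f = grid[int(np.argmax(np.abs(local_dft(x, grid))))]
        span /= 8                            # shrink the window each pass
    return f
```

Each pass multiplies the frequency resolution by the zoom factor, which is how resolution far beyond a single FFT's bin spacing is reached at modest cost.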

  3. Simulating large-scale crop yield by using perturbed-parameter ensemble method

    NASA Astrophysics Data System (ADS)

    Iizumi, T.; Yokozawa, M.; Sakurai, G.; Nishimori, M.

    2010-12-01

    One of the concerns of food security under a changing climate is predicting the inter-annual variation of crop production induced by climate extremes and a modulated climate. To secure the food supply for a growing world population, a methodology that can accurately predict crop yield on a large scale is needed. However, when developing a process-based large-scale crop model at the scale of general circulation models (GCMs), 100 km in latitude and longitude, researchers encounter difficulties from the spatial heterogeneity of available information on crop production, such as cultivated cultivars and management. This study proposed an ensemble-based simulation method that uses a process-based crop model and a systematic parameter perturbation procedure, taking maize in the U.S., China, and Brazil as examples. The crop model was developed by modifying the fundamental structure of the Soil and Water Assessment Tool (SWAT) to incorporate the effect of heat stress on yield. We called the new model PRYSBI: the Process-based Regional-scale Yield Simulator with Bayesian Inference. The posterior probability density function (PDF) of 17 parameters, which represents the crop- and grid-specific features of the crop and its uncertainty given the data, was estimated by Bayesian inversion analysis. We then took 1500 ensemble members of simulated yield values, based on parameter sets sampled from the posterior PDF, to describe yearly changes of the yield, i.e. the perturbed-parameter ensemble method. The ensemble median for 27 years (1980-2006) was compared with data aggregated from county yields. On a country scale, the ensemble median of the simulated yield showed good correspondence with the reported yield: the Pearson correlation coefficient is over 0.6 for all countries.
On a grid scale, the correspondence remains high in most grids regardless of the country. However, the model showed comparatively low reproducibility in sloped areas, such as around the Rocky Mountains in South Dakota, around the Great Xing'anling Mountains in Heilongjiang, and around the Brazilian Plateau. As complex terrain such as mountain slopes hosts wide-ranging local climate conditions, the GCM grid-scale weather inputs are likely one of the major sources of error. The results of this study highlight the benefits of the perturbed-parameter ensemble method in simulating crop yield on a GCM grid scale: (1) the posterior PDF of the parameters quantifies the uncertainty of the crop model's parameter values associated with local crop production aspects; (2) the method can explicitly account for the uncertainty of parameter values in the crop model simulations; (3) the method achieves a Monte Carlo approximation of the probability of sub-grid-scale yield, accounting for the nonlinear response of crop yield to weather and management; (4) the method is therefore appropriate for aggregating the simulated sub-grid-scale yields to a grid-scale yield, which may be one reason for the model's high performance in capturing inter-annual variation of yield.

  4. Single particle characterization using a light scattering module coupled to a time-of-flight aerosol mass spectrometer

    NASA Astrophysics Data System (ADS)

    Cross, E. S.; Onasch, T. B.; Canagaratna, M.; Jayne, J. T.; Kimmel, J.; Yu, X.-Y.; Alexander, M. L.; Worsnop, D. R.; Davidovits, P.

    2008-12-01

    We present the first single particle results obtained using an Aerodyne time-of-flight aerosol mass spectrometer coupled with a light scattering module (LS-ToF-AMS). The instrument was deployed at the T1 ground site approximately 40 km northeast of the Mexico City Metropolitan Area (MCMA) as part of the MILAGRO field study in March of 2006. The instrument was operated as a standard AMS from 12-30 March, acquiring average chemical composition and size distributions for the ambient aerosol, and in single particle mode from 27-30 March. Over a 75-h sampling period, 12 853 single particle mass spectra were optically triggered, saved, and analyzed. The correlated optical and chemical detection allowed detailed examination of single particle collection and quantification within the LS-ToF-AMS. The single particle data enabled the mixing states of the ambient aerosol to be characterized within the context of the size-resolved ensemble chemical information. The particulate mixing states were examined as a function of sampling time and most of the particles were found to be internal mixtures containing many of the organic and inorganic species identified in the ensemble analysis. The single particle mass spectra were deconvolved, using techniques developed for ensemble AMS data analysis, into HOA, OOA, NH4NO3, (NH4)2SO4, and NH4Cl fractions. Average single particle mass and chemistry measurements are shown to be in agreement with ensemble MS and PTOF measurements. While a significant fraction of ambient particles were internal mixtures of varying degrees, single particle measurements of chemical composition allowed the identification of time periods during which the ambient ensemble was externally mixed. In some cases the chemical composition of the particles suggested a likely source. Throughout the full sampling period, the ambient ensemble was an external mixture of combustion-generated HOA particles from local sources (e.g. 
traffic), with number concentrations peaking during morning rush hour (04:00-08:00 LT) each day, and more processed particles of mixed composition from nonspecific sources. From 09:00-12:00 LT all particles within the ambient ensemble, including the locally produced HOA particles, became coated with NH4NO3 due to photochemical production of HNO3. The number concentration of externally mixed HOA particles remained low during daylight hours. Throughout the afternoon the OOA component dominated the organic fraction of the single particles, likely due to secondary organic aerosol formation and condensation. Single particle mass fractions of (NH4)2SO4 were lowest during the day and highest during the night. In one instance, gas-to-particle condensation of (NH4)2SO4 was observed on all measured particles within a strong SO2 plume arriving at T1 from the northwest. Particles with high NH4Cl mass fractions were identified during early morning periods. A limited number of particles (~5% of the total number) with mass spectral features characteristic of biomass burning were also identified.

  5. Prediction of dosage-based parameters from the puff dispersion of airborne materials in urban environments using the CFD-RANS methodology

    NASA Astrophysics Data System (ADS)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.

    2018-02-01

    One of the key issues of recent research on dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters related to puff dispersion. RANS simulations with the ADREA-HF code were, therefore, performed, where a single puff was released in each case. The present method is validated against the data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released, from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model presented a better performance for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of the validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the code ADREA-HF is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. The CFD-RANS methodology as implemented in the code ADREA-HF is also able to predict the ensemble-average dosage, but the dosage results should be treated with some caution, as in one case the observed ensemble-average dosage was underestimated slightly beyond the acceptance criteria.
Ensemble-average peak concentration was systematically underpredicted by the model, to a degree beyond that allowed by the acceptance criteria, in one of the two wind-tunnel experiments. The model performance depended on the positions of the examined sensors relative to the emission source and the building configuration. The work presented in this paper was carried out (partly) within the scope of COST Action ES1006 "Evaluation, improvement, and guidance for the use of local-scale emergency prediction and response tools for airborne hazards in built environments".

  6. Localization of soft modes at the depinning transition

    NASA Astrophysics Data System (ADS)

    Cao, Xiangyu; Bouzat, Sebastian; Kolton, Alejandro B.; Rosso, Alberto

    2018-02-01

    We characterize the soft modes of the dynamical matrix at the depinning transition, and compare the matrix with the properties of the Anderson model (and long-range generalizations). The density of states at the edge of the spectrum displays a universal linear tail, different from the Lifshitz tails. The eigenvectors are instead very similar in the two matrix ensembles. We focus on the ground state (soft mode), which represents the epicenter of avalanche instabilities. We expect it to be localized in all finite dimensions, and make a clear connection between its localization length and the Larkin length of the depinning model. In the fully connected model, we show that the weak-strong pinning transition coincides with a peculiar localization transition of the ground state.

  7. Financial instability from local market measures

    NASA Astrophysics Data System (ADS)

    Bardoscia, Marco; Livan, Giacomo; Marsili, Matteo

    2012-08-01

    We study the emergence of instabilities in a stylized model of a financial market, when different market actors calculate prices according to different (local) market measures. We derive typical properties for ensembles of large random markets using techniques borrowed from statistical mechanics of disordered systems. We show that, depending on the number of financial instruments available and on the heterogeneity of local measures, the market moves from an arbitrage-free phase to an unstable one, where the complexity of the market—as measured by the diversity of financial instruments—increases, and arbitrage opportunities arise. A sharp transition separates the two phases. Focusing on two different classes of local measures inspired by real market strategies, we are able to analytically compute the critical lines, corroborating our findings with numerical simulations.

  8. Symmetry structure in discrete models of biochemical systems: natural subsystems and the weak control hierarchy in a new model of computation driven by interactions.

    PubMed

    Nehaniv, Chrystopher L; Rhodes, John; Egri-Nagy, Attila; Dini, Paolo; Morris, Eric Rothstein; Horváth, Gábor; Karimi, Fariba; Schreckling, Daniel; Schilstra, Maria J

    2015-07-28

    Interaction computing is inspired by the observation that cell metabolic/regulatory systems construct order dynamically, through constrained interactions between their components and based on a wide range of possible inputs and environmental conditions. The goals of this work are to (i) identify and understand mathematically the natural subsystems and hierarchical relations in natural systems enabling this and (ii) use the resulting insights to define a new model of computation based on interactions that is useful for both biology and computation. The dynamical characteristics of the cellular pathways studied in systems biology relate, mathematically, to the computational characteristics of automata derived from them, and their internal symmetry structures to computational power. Finite discrete automata models of biological systems such as the lac operon, the Krebs cycle and p53-mdm2 genetic regulation constructed from systems biology models have canonically associated algebraic structures (their transformation semigroups). These contain permutation groups (local substructures exhibiting symmetry) that correspond to 'pools of reversibility'. These natural subsystems are related to one another in a hierarchical manner by the notion of 'weak control'. We present natural subsystems arising from several biological examples and their weak control hierarchies in detail. Finite simple non-Abelian groups are found in biological examples and can be harnessed to realize finitary universal computation. This allows ensembles of cells to achieve any desired finitary computational transformation, depending on external inputs, via suitably constrained interactions. Based on this, interaction machines that grow and change their structure recursively are introduced and applied, providing a natural model of computation driven by interactions.

  9. Finite-temperature stress calculations in atomic models using moments of position.

    PubMed

    Parthasarathy, Ranganathan; Misra, Anil; Ouyang, Lizhi

    2018-07-04

    Continuum modeling of finite temperature mechanical behavior of atomic systems requires refined description of atomic motions. In this paper, we identify additional kinematical quantities that are relevant for a more accurate continuum description as the system is subjected to step-wise loading. The presented formalism avoids the necessity for atomic trajectory mapping with deformation, provides the definitions of the kinematic variables and their conjugates in real space, and simplifies local work conjugacy. The total work done on an atom under deformation is decomposed into the work corresponding to changing its equilibrium position and work corresponding to changing its second moment about equilibrium position. Correspondingly, we define two kinematic variables: a deformation gradient tensor and a vibration tensor, and derive their stress conjugates, termed here as static and vibration stresses, respectively. The proposed approach is validated using MD simulation in NVT ensembles for fcc aluminum subjected to uniaxial extension. The observed evolution of second moments in the MD simulation with macroscopic deformation is not directly related to the transformation of atomic trajectories through the deformation gradient using generator functions. However, it is noteworthy that deformation leads to a change in the second moment of the trajectories. Correspondingly, the vibration part of the Piola stress becomes particularly significant at high temperature and high tensile strain as the crystal approaches the softening limit. In contrast to the eigenvectors of the deformation gradient, the eigenvectors of the vibration tensor show strong spatial heterogeneity in the vicinity of softening. More importantly, the elliptic distribution of local atomic density transitions to a dumbbell shape, before significant non-affinity in equilibrium positions has occurred.
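
The central kinematic quantity above, the second moment of atomic position about equilibrium, is easy to illustrate. Below is a minimal numpy sketch; the trajectory is a hypothetical stand-in for NVT molecular-dynamics output, not the authors' simulation, and the anisotropy is chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps = 20000

# Hypothetical thermal trajectory of one atom: anisotropic vibration about
# a fixed equilibrium position (stand-in for NVT molecular-dynamics output).
equilibrium = np.array([1.0, 2.0, 3.0])
amplitudes = np.array([0.05, 0.05, 0.12])     # larger vibration along z
traj = equilibrium + amplitudes * rng.standard_normal((n_steps, 3))

x_bar = traj.mean(axis=0)                     # equilibrium (mean) position
dev = traj - x_bar
second_moment = dev.T @ dev / n_steps         # 3x3 second moment about equilibrium

evals = np.linalg.eigvalsh(second_moment)     # squared principal vibration amplitudes
print(np.allclose(x_bar, equilibrium, atol=0.01), evals[-1] > 2 * evals[0])
```

The eigenvectors of this tensor are the quantities whose spatial heterogeneity the abstract reports near the softening limit.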

  10. Finite-temperature stress calculations in atomic models using moments of position

    NASA Astrophysics Data System (ADS)

    Parthasarathy, Ranganathan; Misra, Anil; Ouyang, Lizhi

    2018-07-01

    Continuum modeling of finite temperature mechanical behavior of atomic systems requires refined description of atomic motions. In this paper, we identify additional kinematical quantities that are relevant for a more accurate continuum description as the system is subjected to step-wise loading. The presented formalism avoids the necessity for atomic trajectory mapping with deformation, provides the definitions of the kinematic variables and their conjugates in real space, and simplifies local work conjugacy. The total work done on an atom under deformation is decomposed into the work corresponding to changing its equilibrium position and work corresponding to changing its second moment about equilibrium position. Correspondingly, we define two kinematic variables: a deformation gradient tensor and a vibration tensor, and derive their stress conjugates, termed here as static and vibration stresses, respectively. The proposed approach is validated using MD simulation in NVT ensembles for fcc aluminum subjected to uniaxial extension. The observed evolution of second moments in the MD simulation with macroscopic deformation is not directly related to the transformation of atomic trajectories through the deformation gradient using generator functions. However, it is noteworthy that deformation leads to a change in the second moment of the trajectories. Correspondingly, the vibration part of the Piola stress becomes particularly significant at high temperature and high tensile strain as the crystal approaches the softening limit. In contrast to the eigenvectors of the deformation gradient, the eigenvectors of the vibration tensor show strong spatial heterogeneity in the vicinity of softening. More importantly, the elliptic distribution of local atomic density transitions to a dumbbell shape, before significant non-affinity in equilibrium positions has occurred.

  11. [Multi-channel in vivo recording techniques: analysis of phase coupling between spikes and rhythmic oscillations of local field potentials].

    PubMed

    Wang, Ce-Qun; Chen, Qiang; Zhang, Lu; Xu, Jia-Min; Lin, Long-Nian

    2014-12-25

    The purpose of this article is to introduce measurements of phase coupling between spikes and rhythmic oscillations of local field potentials (LFPs). Multi-channel in vivo recording techniques allow us to record ensemble neuronal activity and LFPs simultaneously from the same sites in the brain. Neuronal activity is generally characterized by temporal spike sequences, while LFPs contain oscillatory rhythms in different frequency ranges. Phase coupling analysis can reveal the temporal relationships between neuronal firing and LFP rhythms. As the first step, the instantaneous phase of an LFP rhythm is calculated using the Hilbert transform; then, for each time-stamped spike that occurred during an oscillatory epoch, the instantaneous phase of the LFP at that time stamp is marked. Finally, the phase relationship between neuronal firing and the LFP rhythm is determined by examining the distribution of firing phases: phase-locked spikes are revealed by a non-random distribution of spike phases. Theta phase precession is a unique phase relationship between neuronal firing and LFPs and one of the basic features of hippocampal place cells. Place cells show rhythmic burst firing following the theta oscillation within a place field, and phase precession refers to the systematic shift of this burst firing during traversal of the field, moving progressively forward on each theta cycle. This relation between phase and position can be described by a linear model, and phase precession is commonly quantified with a circular-linear coefficient. Phase coupling analysis helps us to better understand temporal information coding between neuronal firing and LFPs.
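
The first steps of this analysis (band-pass filtering, Hilbert-transform phase, spike-phase extraction, and a phase-locking statistic) can be sketched in a few lines. This is a minimal illustration on synthetic data, not the authors' pipeline; the filter order, band, and sampling rate are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def spike_phases(lfp, spike_times, fs, band=(6.0, 10.0)):
    """Instantaneous band phase of the LFP (radians) at each spike time."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    theta = filtfilt(b, a, lfp)               # zero-phase band-pass to the rhythm
    phase = np.angle(hilbert(theta))          # Hilbert analytic-signal phase
    idx = np.round(np.asarray(spike_times) * fs).astype(int)
    return phase[idx]

def resultant_length(phases):
    """Mean resultant length R in [0, 1]; R near 1 indicates phase locking."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

# Synthetic demo: spikes emitted once per cycle of a noisy 8 Hz rhythm
rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
lfp = np.sin(2 * np.pi * 8 * t) + 0.2 * rng.standard_normal(t.size)
spike_times = np.arange(0.5, 9.5, 1 / 8)      # one spike per theta cycle
R = resultant_length(spike_phases(lfp, spike_times, fs))
print(R > 0.8)
```

A non-random phase distribution (large R) is what the abstract calls phase locking; circular statistics such as the Rayleigh test can then assess its significance.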

  12. A variational ensemble scheme for noisy image data assimilation

    NASA Astrophysics Data System (ADS)

    Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne

    2014-05-01

    Data assimilation techniques aim at recovering the trajectory of a system's state variables, denoted X, along time from partially observed noisy measurements of the system, denoted Y. These procedures, which couple the dynamics and noisy measurements of the system, fulfill a twofold objective: on one hand, they provide a denoising - or reconstruction - procedure for the data through a given model framework, and on the other hand, they provide estimation procedures for unknown parameters of the dynamics. A standard variational data assimilation problem can be formulated as the minimization of the following objective function with respect to the initial discrepancy, η, from the background initial guess: J(η(x)) = (1/2)∥X_b(x) − X(t_0, x)∥²_B + (1/2)∫_{t_0}^{t_f} ∥H(X(t, x)) − Y(t, x)∥²_R dt, (1) where the observation operator H links the state variable and the measurements. The cost function can be interpreted as the log-likelihood function associated with the a posteriori distribution of the state given the past history of measurements and the background. In this work, we aim at studying ensemble-based optimal control strategies for data assimilation. Such a formulation nicely combines the ingredients of ensemble Kalman filters and variational data assimilation (4DVar). It is also formulated as the minimization of the objective function (1), but, similarly to ensemble filters, it introduces in its objective function an empirical ensemble-based background-error covariance defined as: B ≡ ⟨(X_b − ⟨X_b⟩)(X_b − ⟨X_b⟩)^T⟩. (2) Thus, it works in an off-line smoothing mode rather than on the fly like sequential filters. The resulting ensemble variational data assimilation technique corresponds to a relatively new family of methods [1,2,3].
It presents two main advantages: first, it no longer requires constructing the adjoint of the dynamics' tangent linear operator, which is a considerable advantage with respect to the method's implementation; and second, it enables the handling of a flow-dependent background-error covariance matrix that can be consistently adjusted to the background error. These advantages come, however, at the cost of a reduced-rank modeling of the solution space. The B matrix is at most of rank N − 1 (where N is the size of the ensemble), which is considerably lower than the dimension of the state space. This rank deficiency may introduce spurious correlation errors, which particularly impact the quality of results on a high-resolution computing grid. The common strategy to suppress these distant correlations in ensemble Kalman techniques is localization procedures. In this paper we present key theoretical properties associated with different choices of methods involved in this setup, and we experimentally compare the performances of several variations of an ensemble technique of interest with an incremental 4DVar method. The comparisons have been carried out on the basis of a shallow-water model, with both synthetic data and real observations. We particularly address the potential pitfalls and advantages of the different methods. The results indicate an advantage in favor of the ensemble technique, both in quality and in computational cost, when dealing with incomplete observations. We highlight, as a premise of using ensemble variational assimilation, that the initial perturbation used to build the initial ensemble has to fit the physics of the observed phenomenon. We also apply the method to a stochastic shallow-water model which incorporates an uncertainty expression for the subgrid stress tensor related to the ensemble spread. References: [1] A. C. Lorenc, The potential of the ensemble Kalman filter for NWP - a comparison with 4D-Var, Quart. J. Roy. Meteor. Soc., Vol. 129, pp. 3183-3203, 2003. [2] C. Liu, Q. Xiao, and B. Wang, An Ensemble-Based Four-Dimensional Variational Data Assimilation Scheme. Part I: Technical Formulation and Preliminary Test, Mon. Wea. Rev., Vol. 136(9), pp. 3363-3373, 2008. [3] M. Buehner, Ensemble-derived stationary and flow-dependent background-error covariances: Evaluation in a quasi-operational NWP setting, Quart. J. Roy. Meteor. Soc., Vol. 131(607), pp. 1013-1043, April 2005.
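
The rank deficiency of the empirical background covariance in Eq. (2) can be verified directly. A minimal numpy sketch with hypothetical dimensions (state dimension 100, ensemble size N = 10; not the authors' shallow-water setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_ens = 100, 10                     # state dimension >> ensemble size N
Xb = rng.standard_normal((n_state, n_ens))   # background ensemble members as columns

anom = Xb - Xb.mean(axis=1, keepdims=True)   # ensemble anomalies X_b - <X_b>
B = anom @ anom.T / (n_ens - 1)              # empirical covariance, as in Eq. (2)

rank = np.linalg.matrix_rank(B)              # at most N - 1: centering removes one dof
print(rank)
```

Because B has rank at most N − 1, long-range sample correlations are spurious, which is why localization is needed on high-resolution grids.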

  13. Real time detection of farm-level swine mycobacteriosis outbreak using time series modeling of the number of condemned intestines in abattoirs.

    PubMed

    Adachi, Yasumoto; Makita, Kohei

    2015-09-01

    Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer for corrective actions when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis therefore would be a useful threshold to detect an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at 2 abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent 2 years with the actual data for 2 years between 2011 and 2012. For the modeling, at first, periodicities were checked using Fast Fourier Transformation, and the ensemble average profiles for weekly periodicities were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of minimum Akaike's information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of whole or partial condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from three producers with the highest rate of condemnation due to mycobacteriosis.
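
The modeling pipeline described above (Fourier check for periodicity, weekly ensemble-average profile, time-series model on the residual) can be sketched on hypothetical daily counts. For simplicity, a least-squares AR(1) fit stands in for the full ARIMA selection by AIC, and the weekly profile is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
weeks, period = 104, 7                        # two years of daily counts, weekly cycle
n = weeks * period
weekly = np.tile([5, 9, 8, 7, 6, 4, 2], weeks)   # hypothetical weekly condemnation profile
counts = weekly + rng.poisson(1.0, n)            # daily condemned-carcass counts

# 1) Check periodicity with the FFT: the dominant peak sits at frequency 1/7 per day.
spec = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(n, d=1.0)
peak = freqs[np.argmax(spec)]

# 2) Ensemble average over weekly positions, then model the residual.
profile = counts.reshape(weeks, period).mean(axis=0)   # weekly ensemble average
residual = counts - np.tile(profile, weeks)

# AR(1) coefficient by least squares (stand-in for ARIMA order selection by AIC).
phi = np.dot(residual[1:], residual[:-1]) / np.dot(residual[:-1], residual[:-1])

# Time-dependent expected value = weekly ensemble average + AR correction.
expected = np.tile(profile, weeks)
expected[1:] += phi * residual[:-1]
print(abs(peak - 1 / 7) < 1e-9, abs(phi) < 1.0)
```

An outbreak flag would then compare each day's observed count against a confidence band around `expected`, as the abstract does with a 95% interval.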

  14. Copper nanoparticle ensembles for selective electroreduction of CO2 to C2-C3 products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dohyung; Kley, Christopher S.; Li, Yifan

    Direct conversion of carbon dioxide to multicarbon products remains a grand challenge in electrochemical CO2 reduction. Various forms of oxidized copper have been demonstrated as electrocatalysts, but these still require large overpotentials. Here we show that an ensemble of Cu nanoparticles (NPs) enables selective formation of C2-C3 products at low overpotentials. Densely packed Cu NP ensembles underwent structural transformation during electrolysis into electrocatalytically active cube-like particles intermixed with smaller nanoparticles. Ethylene, ethanol, and n-propanol are the major C2-C3 products, with onset potential at -0.53 V (vs. the reversible hydrogen electrode, RHE) and C2-C3 faradaic efficiency (FE) reaching 50% at only -0.75 V. Thus, the catalyst exhibits selective generation of C2-C3 hydrocarbons and oxygenates at considerably lowered overpotentials in neutral-pH aqueous media. In addition, this approach suggests new opportunities for realizing multicarbon product formation from CO2, where the majority of efforts have been to use oxidized copper-based materials. Robust catalytic performance is demonstrated by 10 h of stable operation with a C2-C3 current density of 10 mA/cm2 (at -0.75 V), rendering the catalyst attractive for solar-to-fuel applications. Lastly, Tafel analysis suggests reductive CO coupling as a rate-determining step for C2 products, while n-propanol (C3) production seems to follow a discrete pathway.

  15. Copper nanoparticle ensembles for selective electroreduction of CO2 to C2-C3 products

    DOE PAGES

    Kim, Dohyung; Kley, Christopher S.; Li, Yifan; ...

    2017-09-18

    Direct conversion of carbon dioxide to multicarbon products remains a grand challenge in electrochemical CO2 reduction. Various forms of oxidized copper have been demonstrated as electrocatalysts, but these still require large overpotentials. Here we show that an ensemble of Cu nanoparticles (NPs) enables selective formation of C2-C3 products at low overpotentials. Densely packed Cu NP ensembles underwent structural transformation during electrolysis into electrocatalytically active cube-like particles intermixed with smaller nanoparticles. Ethylene, ethanol, and n-propanol are the major C2-C3 products, with onset potential at -0.53 V (vs. the reversible hydrogen electrode, RHE) and C2-C3 faradaic efficiency (FE) reaching 50% at only -0.75 V. Thus, the catalyst exhibits selective generation of C2-C3 hydrocarbons and oxygenates at considerably lowered overpotentials in neutral-pH aqueous media. In addition, this approach suggests new opportunities for realizing multicarbon product formation from CO2, where the majority of efforts have been to use oxidized copper-based materials. Robust catalytic performance is demonstrated by 10 h of stable operation with a C2-C3 current density of 10 mA/cm2 (at -0.75 V), rendering the catalyst attractive for solar-to-fuel applications. Lastly, Tafel analysis suggests reductive CO coupling as a rate-determining step for C2 products, while n-propanol (C3) production seems to follow a discrete pathway.

  16. Clustering algorithms for identifying core atom sets and for assessing the precision of protein structure ensembles.

    PubMed

    Snyder, David A; Montelione, Gaetano T

    2005-06-01

    An important open question in the field of NMR-based biomolecular structure determination is how best to characterize the precision of the resulting ensemble of structures. Typically, the RMSD, as minimized in superimposing the ensemble of structures, is the preferred measure of precision. However, the presence of poorly determined atomic coordinates and multiple "RMSD-stable domains"--locally well-defined regions that are not aligned in global superimpositions--complicate RMSD calculations. In this paper, we present a method, based on a novel, structurally defined order parameter, for identifying a set of core atoms to use in determining superimpositions for RMSD calculations. In addition we present a method for deciding whether to partition that core atom set into "RMSD-stable domains" and, if so, how to determine partitioning of the core atom set. We demonstrate our algorithm and its application in calculating statistically sound RMSD values by applying it to a set of NMR-derived structural ensembles, superimposing each RMSD-stable domain (or the entire core atom set, where appropriate) found in each protein structure under consideration. A parameter calculated by our algorithm using a novel, kurtosis-based criterion, the epsilon-value, is a measure of precision of the superimposition that complements the RMSD. In addition, we compare our algorithm with previously described algorithms for determining core atom sets. The methods presented in this paper for biomolecular structure superimposition are quite general, and have application in many areas of structural bioinformatics and structural biology.
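
Superimposing an ensemble over a chosen core atom set reduces to the classical Kabsch rotation. The following is a minimal numpy sketch on hypothetical coordinates, illustrating only the superposition/RMSD step, not the authors' order-parameter or domain-partitioning method:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (n_atoms x 3) coordinate sets after optimal superposition."""
    P = P - P.mean(axis=0)                    # center both coordinate sets
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)         # Kabsch: SVD of the 3x3 covariance
    d = np.sign(np.linalg.det(U @ Vt))        # guard against improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt       # optimal proper rotation, P @ R ~ Q
    return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))

# Hypothetical 50-atom "core atom set", plus a rotated and translated copy.
rng = np.random.default_rng(0)
ref = rng.standard_normal((50, 3))
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
moved = ref @ rot.T + np.array([1.0, -2.0, 0.5])
rmsd = kabsch_rmsd(ref, moved)
print(rmsd < 1e-8)
```

Restricting `P` and `Q` to a well-chosen core atom set (or one RMSD-stable domain) is what makes the resulting RMSD a statistically sound precision measure.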

  17. Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Gingrich, Mark

    Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.

  18. An inverse method to estimate emission rates based on nonlinear least-squares-based ensemble four-dimensional variational data assimilation with local air concentration measurements.

    PubMed

    Geng, Xiaobing; Xie, Zhenghui; Zhang, Lijun; Xu, Mei; Jia, Binghao

    2018-03-01

    An inverse source estimation method is proposed to reconstruct emission rates using local air concentration sampling data. It involves the nonlinear least-squares-based ensemble four-dimensional variational data assimilation (NLS-4DVar) algorithm and a transfer coefficient matrix (TCM) created using FLEXPART, a Lagrangian atmospheric dispersion model. The method was tested by twin experiments and by experiments with actual Cs-137 concentrations measured around the Fukushima Daiichi Nuclear Power Plant (FDNPP). Emission rates can be reconstructed sequentially as a nuclear accident progresses, which is important in the response to a nuclear emergency. With pseudo-observations generated continuously, most of the emission rates were estimated accurately, except under conditions when the wind blew off land toward the sea and at extremely slow wind speeds near the FDNPP. Because of the long duration of accidents and the variability of meteorological fields, monitoring networks composed of land stations in only a local area are unable to provide enough information to support an emergency response. The errors in the estimation against the real observations from the FDNPP nuclear accident stemmed from a shortage of observations, a lack of data control, and an inadequate atmospheric dispersion model without improvement and appropriate meteorological data. The proposed method should be developed further to meet the requirements of a nuclear emergency response.
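
The role of the transfer coefficient matrix can be illustrated with a toy linear inversion. Here a Tikhonov-regularized least squares stands in for the full NLS-4DVar algorithm, and the TCM entries are random placeholders rather than FLEXPART output:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_times = 40, 10                       # concentration samples, emission intervals

# Hypothetical TCM: M[i, j] = concentration contributed to sample i per unit
# emission rate in interval j (from a dispersion model in practice).
M = rng.random((n_obs, n_times))
q_true = np.array([0., 0., 5., 8., 3., 1., 0., 0., 2., 0.])  # emission rates
y = M @ q_true + 0.01 * rng.standard_normal(n_obs)           # noisy measurements

# Tikhonov-regularized least squares: q = argmin ||M q - y||^2 + lam ||q||^2
lam = 1e-3
q_est = np.linalg.solve(M.T @ M + lam * np.eye(n_times), M.T @ y)
err = float(np.max(np.abs(q_est - q_true)))
print(err < 0.5)
```

When observations are sparse or the wind carries the plume away from all sensors (columns of M near zero), the corresponding emission intervals become unidentifiable, which mirrors the failure conditions reported above.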

  19. Influences of historical and projected changes in climate and land management practices on nutrient fluxes in the Mississippi River Basin, 1948-2100

    NASA Astrophysics Data System (ADS)

    Spak, S.; Ward, A. S.; Li, Y.; Dalrymple, K. E.

    2016-12-01

    Nitrogen fertilization is central to contemporary row crop production in the U.S., but the resultant nitrate transport leads to eutrophication, hypoxia, and algal blooms throughout the Mississippi River Basin and in coastal waters of the Gulf of Mexico. Effective basin-scale nutrient management requires a comprehensive understanding of the dynamics of nitrate transport in this large river catchment and of the roles of individual management practices, which must then be operationalized to optimize management both for local geophysical and agricultural conditions and in response to decadal and inter-annual variations in local and regional climate. Here, we apply ensemble simulations with Agro-IBIS and THMB using spatially and temporally specific land cover, soil, agricultural, topographic, and climate data to simulate the individual and combined effects of land management and climate on historical (1948-2007) nitrate concentrations and transport in the Mississippi River Basin. We further identify sensitivities of in-stream nitrate dynamics to local and regional applications of Best Management Practices. The ensemble resolves the effects of techniques recommended in the Iowa Nutrient Reduction Strategy, including crop rotations, fertilizer management, tillage and residue management, and cover crops. Analysis of the nitrate transport response surfaces identifies non-linear effects of combined nutrient management tactics and quantifies the stationarity of the relative and absolute influences of land management and climate during the 60-year study period.

  20. Studying plastic shear localization in aluminum alloys under dynamic loading

    NASA Astrophysics Data System (ADS)

    Bilalov, D. A.; Sokovikov, M. A.; Chudinov, V. V.; Oborin, V. A.; Bayandin, Yu. V.; Terekhina, A. I.; Naimark, O. B.

    2016-12-01

    An experimental and theoretical study of plastic shear localization mechanisms observed under dynamic deformation using the shear-compression scheme on a Hopkinson-Kolsky bar has been carried out using specimens of AMg6 alloy. The mechanisms of plastic shear instability are associated with collective effects in the microshear ensemble in spatially localized areas. The lateral surface of the specimens was photographed in real time using a CEDIP Silver 450M high-speed infrared camera. The temperature distribution obtained at different times allowed us to trace the evolution of the localization of the plastic strain. Based on the equations that describe the effect of nonequilibrium transitions on the mechanisms of structural relaxation and plastic flow, numerical simulation of plastic shear localization has been performed. A numerical experiment relevant to the specimen-loading scheme was carried out using a system of constitutive equations that reflect the part of the structural relaxation mechanisms caused by the collective behavior of microshears with the autowave modes of the evolution of the localized plastic flow. Upon completion of the experiment, the specimens were subjected to microstructure analysis using a New View-5010 optical microscope-interferometer. After the dynamic deformation, the constancy of the Hurst exponent, which reflects the relationship between the behavior of defects and the roughness they induce on the specimen surfaces, is observed in a wider range of spatial scales. These investigations revealed distinctive features of strain localization followed by failure in the adiabatic-shear scenario. These features may be caused by the collective multiscale behavior of defects, which leads to a sharp decrease in the stress-relaxation time and, consequently, to a localized plastic flow and the generation of fracture nuclei in the form of adiabatic shear.
Infrared scanning of the localization zone of the plastic strain in situ and the subsequent study of the defect structure corroborated the hypothesis about the decisive role of non-equilibrium transitions in defect ensembles during the evolution of a localized plastic flow.

  1. QMEANclust: estimation of protein model quality by combining a composite scoring function with structural density information.

    PubMed

    Benkert, Pascal; Schwede, Torsten; Tosatto, Silvio Ce

    2009-05-20

    The selection of the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction, both in template-based and ab initio approaches. Scoring functions have been developed which can either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence. Local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so-called consensus methods have been shown to perform considerably better in selecting good candidate models, but they tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved if both approaches are combined by pre-filtering the models used during the calculation of the structural consensus. Our recently published QMEAN composite scoring function has been improved by including an all-atom interaction potential term. The preliminary model ranking based on the new QMEAN score is used to select a subset of reliable models against which the structural consensus score is calculated. This scoring function, called QMEANclust, achieves a correlation coefficient between predicted quality score and GDT_TS of 0.9 averaged over the 98 CASP7 targets and performs significantly better in selecting good models from the ensemble of server models than any other group participating in the quality estimation category of CASP7. Both scoring functions are also benchmarked on the MOULDER test set, consisting of 20 target proteins, each with 300 alternative models generated by MODELLER. QMEAN outperforms all other tested scoring functions operating on individual models, while the consensus method QMEANclust only works properly on decoy sets containing a certain fraction of near-native conformations.
We also present a local version of QMEAN for the per-residue estimation of model quality (QMEANlocal) and compare it to a new local consensus-based approach. Improved model selection is obtained by using a composite scoring function operating on single models in order to enrich higher quality models which are subsequently used to calculate the structural consensus. The performance of consensus-based methods such as QMEANclust highly depends on the composition and quality of the model ensemble to be analysed. Therefore, performance estimates for consensus methods based on large meta-datasets (e.g. CASP) might overrate their applicability in more realistic modelling situations with smaller sets of models based on individual methods.
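    The pre-filtering idea above can be sketched in a few lines: rank models by a single-model score, keep the top fraction, and score every model by its mean structural similarity to that reference subset. The function and its inputs (a generic pairwise similarity matrix standing in for GDT_TS-style similarities, and generic single-model scores standing in for QMEAN) are illustrative, not the published implementation.

```python
import numpy as np

def consensus_scores(sim, single_scores, keep_frac=0.2):
    """Hypothetical sketch of a QMEANclust-style filtered consensus.

    sim           -- (n, n) symmetric pairwise structural-similarity matrix
    single_scores -- (n,) per-model quality scores (higher is better)
    keep_frac     -- fraction of top single-score models used as the
                     reference subset for the consensus calculation
    """
    n = len(single_scores)
    k = max(1, int(round(keep_frac * n)))
    # Pre-filter: indices of the k best models by the composite score.
    ref = np.argsort(single_scores)[::-1][:k]
    # Consensus score: mean similarity of every model to the reference set.
    return sim[:, ref].mean(axis=1)
```

    A model structurally close to the reliable subset then ranks high even if its own single-model score is mediocre, which is the combination effect described above.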

  2. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. 
In viscous flows our high-resolution information depends on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted
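    The core idea of the Goal Error Ensemble method, translating a diagnostic's error into a weighted sum of stochastic local model errors, can be illustrated with a toy Monte Carlo (not the ICON implementation; all numbers and names are invented). Given adjoint-style sensitivity weights and a stochastic model for the local errors, the diagnostic's error distribution follows directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: 100 grid cells, adjoint-style sensitivity weights, and
# local discretization errors modeled as i.i.d. Gaussian noise whose scale
# would, in the real method, be tuned from high-resolution near-initial runs.
n_cells, n_samples = 100, 5000
weights = rng.normal(size=n_cells)          # dual-weighted sensitivities (assumed given)
sigma_local = 1e-3                          # assumed local-error scale
local_errors = rng.normal(0.0, sigma_local, size=(n_samples, n_cells))

# Each sample of the diagnostic error is a weighted sum of local errors;
# the sample spread is the uncertainty estimate for the diagnostic.
diag_error_samples = local_errors @ weights
print(diag_error_samples.std())
```

    The spread of `diag_error_samples` plays the role of the discretization-error bar on the diagnostic.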

  3. Forecasting European Wildfires Today and in the Future

    NASA Astrophysics Data System (ADS)

    Navarro Abellan, Maria; Porras Alegre, Ignasi; María Sole, Josep; Gálvez, Pedro; Bielski, Conrad; Nurmi, Pertti

    2017-04-01

Society as a whole is increasingly exposed and vulnerable to natural disasters due to extreme weather events exacerbated by climate change. The increased frequency of wildfires is not only a result of a changing climate, but wildfires themselves also produce a significant amount of greenhouse gases that, in turn, further contribute to global warming. I-REACT (Improving Resilience to Emergencies through Advanced Cyber Technologies) is an innovation project funded by the European Commission, which aims to use social media, smartphones and wearables to improve natural disaster management by integrating existing services, both local and European, into a platform that supports the entire emergency management cycle. In order to assess the impact of climate change on wildfire hazards, METEOSIM designed two different System Processes (SP) that will be integrated into the I-REACT service and can provide information on a variety of time scales. SP1 - Climate Change Impact The climate change impact on climate variables related to fires is calculated by building an ensemble based on the Coupled Model Intercomparison Project Phase 5 (CMIP5) and CORDEX data. A validation and an Empirical-Statistical Downscaling (ESD) calibration are performed to assess past changes in the climatic variables related to wildfires (temperature, precipitation, wind, relative humidity and Fire Weather Index). Trends and the frequency of extreme events of those variables are calculated for three time horizons: near-term (2011-2040), mid-term (2041-2070) and long-term (2071-2100).
SP2 - Operational daily forecast of the Canadian Forest Fire Weather Index (FWI) Using ensemble data from the ECMWF and from the GLAMEPS (multi-model ensemble) models, both supplied by the Finnish Meteorological Institute (FMI), the Fire Weather Index (FWI) and its index components are produced for each ensemble member within a wide forecast time range, from a few hours up to 10 days, resulting in a probabilistic output of the FWI for different regions in Europe. This work will improve the information currently available to various wildfire information users such as fire departments, civil protection, local authorities, etc., for whom accurate and reliable information in extreme weather situations is vital for improving planning and risk management.
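    The probabilistic FWI product described above reduces to an exceedance fraction across ensemble members; a minimal sketch with invented numbers (the threshold and member values are illustrative, not an operational danger class):

```python
import numpy as np

# Invented data: each row is one ensemble member's FWI forecast at 3 locations.
fwi = np.array([[12.0, 35.0, 60.0],
                [ 8.0, 42.0, 55.0],
                [15.0, 28.0, 70.0],
                [10.0, 38.0, 48.0]])   # 4 members x 3 locations

threshold = 30.0                        # hypothetical "high danger" level
# Probabilistic output: fraction of members exceeding the threshold per location.
prob_high = (fwi > threshold).mean(axis=0)
print(prob_high)
```

    The same exceedance computation applies per grid point and per lead time in an operational setting.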

  4. Multi-location gram-positive and gram-negative bacterial protein subcellular localization using gene ontology and multi-label classifier ensemble.

    PubMed

    Wang, Xiao; Zhang, Jun; Li, Guo-Zheng

    2015-01-01

Predicting bacterial protein subcellular locations with computational methods has become an important and challenging task. Although many prediction methods exist for bacterial proteins, the majority can only deal with single-location proteins; unfortunately, many multi-location proteins reside in bacterial cells. Moreover, multi-location proteins have special biological functions that can aid the development of new drugs. It is therefore necessary to develop new computational methods for accurately predicting the subcellular locations of multi-location bacterial proteins. In this article, two efficient multi-label predictors, Gpos-ECC-mPLoc and Gneg-ECC-mPLoc, are developed to predict the subcellular locations of multi-label gram-positive and gram-negative bacterial proteins, respectively. The two multi-label predictors construct GO vectors using the GO terms of homologous proteins of query proteins and then adopt a powerful multi-label ensemble classifier to make the final multi-label prediction. The two multi-label predictors have the following advantages: (1) they improve the prediction performance for multi-label proteins by taking the correlations among different labels into account; (2) they ensemble multiple CC classifiers and generate better prediction results through ensemble learning; and (3) they construct the GO vectors using the frequency of occurrences of GO terms in the typical homologous set instead of 0/1 values. Experimental results show that Gpos-ECC-mPLoc and Gneg-ECC-mPLoc efficiently improve the prediction accuracy of subcellular localization for multi-location gram-positive and gram-negative bacterial proteins, respectively.
The online web servers for Gpos-ECC-mPLoc and Gneg-ECC-mPLoc predictors are freely accessible at http://biomed.zzuli.edu.cn/bioinfo/gpos-ecc-mploc/ and http://biomed.zzuli.edu.cn/bioinfo/gneg-ecc-mploc/ respectively.
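    Advantage (3) above, GO vectors built from term frequencies rather than 0/1 presence flags, can be sketched as follows (function name, vocabulary, and data are illustrative, not the servers' implementation):

```python
from collections import Counter

def go_frequency_vector(homolog_go_terms, vocabulary):
    """Hypothetical sketch: build a GO vector from the frequency of each GO
    term among a query protein's homologs, instead of 0/1 presence flags.

    homolog_go_terms -- list of GO-term lists, one per homologous protein
    vocabulary       -- ordered list of GO terms defining the dimensions
    """
    counts = Counter(t for terms in homolog_go_terms for t in terms)
    n = len(homolog_go_terms)
    # Each component is the fraction of homologs annotated with that term.
    return [counts[t] / n for t in vocabulary]

vec = go_frequency_vector(
    [["GO:0005737", "GO:0016020"], ["GO:0005737"], ["GO:0005737", "GO:0005576"]],
    ["GO:0005737", "GO:0016020", "GO:0005576"])
print(vec)  # -> [1.0, 0.3333333333333333, 0.3333333333333333]
```

    A term carried by all homologs contributes 1.0 while a term carried by one of three contributes 1/3, so the vector encodes annotation confidence rather than mere presence.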

  5. Sea Ice in the NCEP Seasonal Forecast System

    NASA Astrophysics Data System (ADS)

    Wu, X.; Saha, S.; Grumbine, R. W.; Bailey, D. A.; Carton, J.; Penny, S. G.

    2017-12-01

Sea ice is known to play a significant role in the global climate system. For a weather or climate forecast system (CFS), it is important that a realistic distribution of sea ice is represented. Sea ice prediction is challenging: sea ice can form or melt, it can move with wind and/or ocean currents, and it interacts with both the air above and the ocean underneath, so it is influenced by, and has an impact on, air and ocean conditions. NCEP has developed a coupled CFS (version 2, CFSv2) and also carried out a CFS reanalysis (CFSR), which includes a coupled model comprising the NCEP global forecast system, a land model, an ocean model (GFDL MOM4), and a sea ice model. In this work, we present the NCEP coupled model; the CFSv2 sea ice component, which includes a dynamic-thermodynamic sea ice model and a simple "assimilation" scheme; how sea ice has been assimilated in CFSR; the characteristics of the sea ice from CFSR and CFSv2; and the sea ice improvements needed for a future seasonal prediction system, part of the Unified Global Coupled System (UGCS), which is being developed and tested, including sea ice data assimilation with the Local Ensemble Transform Kalman Filter (LETKF). Preliminary results from the UGCS testing will also be presented.
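    The analysis step at the heart of the LETKF mentioned above can be sketched in NumPy. This is a generic ensemble transform Kalman filter update in ensemble-weight space, in the symmetric-square-root form commonly given in the literature, not NCEP's implementation; in an LETKF this update is applied independently in each local region using only nearby observations.

```python
import numpy as np

def etkf_analysis(Xb, yo, H, R):
    """Minimal ensemble transform Kalman filter (ETKF) analysis step.

    Xb -- (n, m) background ensemble (n state variables, m members)
    yo -- (p,)   observation vector
    H  -- (p, n) linear observation operator
    R  -- (p, p) observation-error covariance
    Returns the (n, m) analysis ensemble.
    """
    n, m = Xb.shape
    xb = Xb.mean(axis=1)
    Xp = Xb - xb[:, None]                  # background perturbations
    Yp = H @ Xp                            # perturbations mapped to obs space
    d = yo - H @ xb                        # innovation (obs-minus-background)
    Rinv = np.linalg.inv(R)
    # Analysis error covariance in the m-dimensional weight space:
    Pa = np.linalg.inv((m - 1) * np.eye(m) + Yp.T @ Rinv @ Yp)
    wa = Pa @ Yp.T @ Rinv @ d              # weights for the mean update
    # Symmetric square root transform for the analysis perturbations:
    evals, evecs = np.linalg.eigh((m - 1) * Pa)
    Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    xa = xb + Xp @ wa                      # analysis mean
    return xa[:, None] + Xp @ Wa           # analysis ensemble
```

    The symmetric square root keeps the perturbations zero-mean, so the analysis ensemble mean equals the updated mean and the spread contracts in the observed directions.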

  6. Fault Diagnosis of Demountable Disk-Drum Aero-Engine Rotor Using Customized Multiwavelet Method.

    PubMed

    Chen, Jinglong; Wang, Yu; He, Zhengjia; Wang, Xiaodong

    2015-10-23

The demountable disk-drum aero-engine rotor is an important piece of equipment that greatly impacts the safe operation of aircraft. However, assembly looseness or crack faults have led to several unscheduled breakdowns and serious accidents. Thus, condition monitoring and fault diagnosis techniques are required for identifying abnormal conditions. A customized ensemble multiwavelet method for aero-engine rotor condition identification, using measured vibration data, is developed in this paper. First, a customized multiwavelet basis function with strong adaptivity is constructed via a symmetric multiwavelet lifting scheme. Then the vibration signal is processed by the customized ensemble multiwavelet transform. Next, the normalized information entropy of the multiwavelet decomposition coefficients is computed to directly reflect and evaluate the condition. The proposed approach is first applied to fault detection of an experimental aero-engine rotor. Finally, the approach is used in an engineering application, where it successfully identified the crack fault of a demountable disk-drum aero-engine rotor. The results show that the proposed method possesses excellent performance in fault detection of aero-engine rotors. Moreover, the robustness of the multiwavelet method against noise is also tested and verified by simulation and field experiments.
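    The condition indicator used above, normalized information entropy of decomposition coefficients, is straightforward to sketch. The multiwavelet transform itself is not reproduced here; generic coefficient arrays stand in for the subbands, and this particular normalization is an assumption:

```python
import numpy as np

def normalized_entropy(coeffs):
    """Normalized Shannon entropy of the energy distribution across subbands.

    coeffs -- list of coefficient arrays (one per decomposition subband).
    Returns a value in [0, 1]: near 1 when energy is spread evenly across
    subbands (noise-like signal), near 0 when energy is concentrated in few
    subbands (e.g. a strong periodic or fault-related component).
    """
    energies = np.array([np.sum(np.asarray(c, dtype=float) ** 2) for c in coeffs])
    p = energies / energies.sum()          # energy distribution
    p = p[p > 0]                           # drop empty subbands (0 log 0 := 0)
    return float(-np.sum(p * np.log(p)) / np.log(len(coeffs)))
```

    Tracking this scalar over time gives a single number whose drift can flag an abnormal rotor condition.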

  7. Bandgap Inhomogeneity of a PbSe Quantum Dot Ensemble from Two-Dimensional Spectroscopy and Comparison to Size Inhomogeneity from Electron Microscopy

    DOE PAGES

    Park, Samuel D.; Baranov, Dmitry; Ryu, Jisu; ...

    2017-01-03

Femtosecond two-dimensional Fourier transform spectroscopy is used to determine the static bandgap inhomogeneity of a colloidal quantum dot ensemble. The excited states of quantum dots absorb light, so their absorptive two-dimensional (2D) spectra will typically have positive and negative peaks. We show that the absorption bandgap inhomogeneity is robustly determined by the slope of the nodal line separating positive and negative peaks in the 2D spectrum around the bandgap transition; this nodal line slope is independent of excited state parameters not known from the absorption and emission spectra. The absorption bandgap inhomogeneity is compared to a size and shape distribution determined by electron microscopy. The electron microscopy images are analyzed using new 2D histograms that correlate major and minor image projections to reveal elongated nanocrystals, a conclusion supported by grazing incidence small-angle X-ray scattering and high-resolution transmission electron microscopy. Lastly, the absorption bandgap inhomogeneity quantitatively agrees with the bandgap variations calculated from the size and shape distribution, placing upper bounds on any surface contributions.

  8. Use of NARCCAP results for extremes: British Columbia case studies

    NASA Astrophysics Data System (ADS)

    Murdock, T. Q.; Eckstrand, H.; Buerger, G.; Hiebert, J.

    2011-12-01

    Demand for projections of extremes has arisen out of local infrastructure vulnerability assessments and adaptation planning. Four preliminary analyses of extremes have been undertaken in British Columbia in the past two years in collaboration with users: BC Ministry of Transportation and Infrastructure, Engineers Canada, City of Castelgar, and Columbia Basin Trust. Projects have included analysis of extremes for stormwater management, highways, and community adaptation in different areas of the province. This need for projections of extremes has been met using an ensemble of Regional Climate Model (RCM) results from NARCCAP, in some cases supplemented by and compared to statistical downscaling. Before assessing indices of extremes, each RCM simulation in the NARCCAP ensemble driven by reanalysis (NCEP) was compared to historical observations to assess RCM skill. Next, the anomalies according to each RCM future projection were compared to those of their driving GCM to determine the "value added" by the RCMs. Selected results will be shown for several indices of extremes, including the Climdex set of indices that has been widely used elsewhere (e.g., Stardex) and specific parameters of interest defined by users. Finally, the need for threshold scaling of some indices and use of as large an ensemble as possible will be illustrated.
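    A Climdex-style index computation reduces to thresholding daily values against a baseline-period percentile; a minimal, illustrative sketch (similar in spirit to a warm-days index, but not tied to any specific Climdex definition; all data are synthetic):

```python
import numpy as np

def warm_days_index(tmax, base):
    """Count days whose daily maximum temperature exceeds the 90th
    percentile of a baseline period (names and setup are illustrative)."""
    threshold = np.percentile(base, 90)
    return int(np.sum(tmax > threshold))

rng = np.random.default_rng(0)
base = rng.normal(20.0, 5.0, size=3650)      # 10-year baseline of daily Tmax (deg C)
future = rng.normal(22.0, 5.0, size=365)     # synthetic warmer scenario year
print(warm_days_index(future, base))         # more than the baseline's ~10% of days
```

    For RCM evaluation, the same index would be computed from both the reanalysis-driven simulation and station observations, and the counts compared.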

  9. Uncertainty of global summer precipitation in the CMIP5 models: a comparison between high-resolution and low-resolution models

    NASA Astrophysics Data System (ADS)

    Huang, Danqing; Yan, Peiwen; Zhu, Jian; Zhang, Yaocun; Kuang, Xueyuan; Cheng, Jing

    2018-04-01

The uncertainty of global summer precipitation simulated by 23 CMIP5 CGCMs and the possible impacts of model resolution are investigated in this study. Large uncertainties exist over tropical and subtropical regions, which can be mainly attributed to the simulation of convective precipitation. High-resolution models (HRMs) and low-resolution models (LRMs) are further investigated to demonstrate their different contributions to the uncertainties of the ensemble mean. The high-resolution model ensemble mean (HMME) and low-resolution model ensemble mean (LMME) mitigate the biases between the MME and observations over most continents and oceans, respectively. The HMME simulates more precipitation than the LMME over most oceans, but less precipitation over some continents. The dominant precipitation category in the HRMs (LRMs) is heavy precipitation (moderate precipitation) over the tropical regions. The combinations of convective and stratiform precipitation are also quite different: the HMME has a much higher ratio of stratiform precipitation while the LMME has more convective precipitation. Finally, differences in precipitation between the HMME and LMME can be traced to their differences in SST simulations via local and remote air-sea interactions.

  10. Factors Influencing the Sahelian Paradox at the Local Watershed Scale: Causal Inference Insights

    NASA Astrophysics Data System (ADS)

    Van Gordon, M.; Groenke, A.; Larsen, L.

    2017-12-01

While the existence of paradoxical rainfall-runoff and rainfall-groundwater correlations is well established in the West African Sahel, the hydrologic mechanisms involved are poorly understood. In pursuit of mechanistic explanations, we perform a causal inference analysis on hydrologic variables in three watersheds in Benin and Niger. Using an ensemble of techniques, we compute the strength of relationships between observational soil moisture, runoff, precipitation, and temperature data at seasonal and event timescales. Performing the analysis over a range of time lags allows dominant time scales to emerge from the relationships between variables. By determining the time scales of hydrologic connectivity over vertical and lateral space, we show differences in the importance of overland and subsurface flow over the course of the rainy season and between watersheds. While previous work on the paradoxical hydrologic behavior in the Sahel focuses on surface processes and infiltration, our results point toward the importance of subsurface flow to rainfall-runoff relationships in these watersheds. The hypotheses generated from our ensemble approach suggest that subsequent explorations of mechanistic hydrologic processes in the region should include subsurface flow. Further, this work highlights how an ensemble approach to causal analysis can reveal nuanced relationships between variables even in poorly understood hydrologic systems.
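    The lag-scan idea, computing relationship strength over a range of time lags so that a dominant time scale can emerge, can be illustrated with plain lagged correlation. This is only the simplest member of the family of causal-inference techniques the study ensembles, and the data below are synthetic:

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Correlate driver x[t] against response y[t + k] for k = 0..max_lag
    and return (best_lag, correlation_at_best_lag)."""
    corrs = [np.corrcoef(x[: len(x) - k], y[k:])[0, 1] for k in range(max_lag + 1)]
    best = int(np.argmax(np.abs(corrs)))
    return best, corrs[best]

rng = np.random.default_rng(2)
rain = rng.random(500)
runoff = np.roll(rain, 3) + 0.1 * rng.random(500)  # responds 3 steps after rain
print(lagged_correlation(rain, runoff, 10))        # strongest relationship at lag 3
```

    In the study's setting, the lag at which the relationship peaks is what distinguishes fast overland flow from slower subsurface pathways.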

  11. An algorithm for the Italian atomic time scale

    NASA Technical Reports Server (NTRS)

    Cordara, F.; Vizio, G.; Tavella, P.; Pettiti, V.

    1994-01-01

During the past twenty years, the time scale at the IEN has been realized by a commercial cesium clock, selected from an ensemble of five, whose rate has been continuously steered towards UTC to maintain a long-term agreement within 3 x 10(exp -13). A time scale algorithm, suitable for a small clock ensemble and capable of improving the medium- and long-term stability of the IEN time scale, has recently been designed to reduce the effects of seasonal variations and sudden frequency anomalies of the individual cesium clocks. The new time scale, TA(IEN), is obtained as a weighted average of the clock ensemble, computed once a day from the time comparisons between the local reference UTC(IEN) and the individual clocks. Ten cesium clocks maintained in other Italian laboratories will also be included in the computation to further improve its reliability and long-term stability. To implement this algorithm, a personal computer program in Quick Basic has been prepared and tested at the IEN time and frequency laboratory. Results obtained by applying this algorithm to real clock data covering a period of about two years are presented.
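    The daily weighted-average computation can be sketched as follows. The readings and the inverse-variance weighting are invented for illustration and are not the actual TA(IEN) weighting scheme:

```python
# Sketch of a weighted-average time scale for one day: the ensemble time is a
# weighted mean of the measured differences between the local reference
# UTC(IEN) and each cesium clock, with weights that down-weight less stable
# clocks (here, assumed inverse-variance weights).
readings = [12.4e-9, 11.8e-9, 12.9e-9, 12.1e-9, 11.6e-9]  # UTC(IEN) - clock_i, seconds
sigmas = [2.0e-9, 1.0e-9, 3.0e-9, 1.5e-9, 1.2e-9]         # assumed clock instabilities
weights = [1.0 / s ** 2 for s in sigmas]
ta = sum(w * r for w, r in zip(weights, readings)) / sum(weights)
print(ta)  # the day's ensemble time offset
```

    Repeating this each day, with weights updated from observed clock behavior, yields a free-running ensemble time scale more stable than any single member.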

  12. Energy propagation by transverse waves in multiple flux tube systems using filling factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Doorsselaere, T.; Gijsen, S. E.; Andries, J.

    2014-11-01

In the last few years, it has been found that transverse waves are present at all times in coronal loops or spicules. Their energy has been estimated with an expression derived for bulk Alfvén waves in homogeneous media, with correspondingly uniform wave energy density and flux. The kink mode, however, is localized in space, with the energy density and flux dependent on the position in the cross-sectional plane. The more relevant quantities for the kink mode are the integrals of the energy density and flux over the cross-sectional plane. The present paper provides an approximation to the energy propagated by kink modes in an ensemble of flux tubes by means of combining the analysis of single flux tube kink oscillations with a filling factor for the tube cross-sectional area. This finally allows one to compare the expressions for energy flux of Alfvén waves with an ensemble of kink waves. We find that the correction factor for the energy in kink waves, compared to the bulk Alfvén waves, is between f and 2f, where f is the density filling factor of the ensemble of flux tubes.

  13. Bayesian network ensemble as a multivariate strategy to predict radiation pneumonitis risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkyu, E-mail: sangkyu.lee@mail.mcgill.ca; Ybarra, Norma; Jeyaseelan, Krishinima

    2015-05-15

Purpose: Prediction of radiation pneumonitis (RP) has been shown to be challenging due to the involvement of a variety of factors including dose–volume metrics and radiosensitivity biomarkers. Some of these factors are highly correlated and might affect prediction results when combined. A Bayesian network (BN) provides a probabilistic framework to represent variable dependencies in a directed acyclic graph. The aim of this study is to integrate the BN framework and a systems-biology approach to detect possible interactions among RP risk factors and exploit these relationships to enhance both the understanding and prediction of RP. Methods: The authors studied 54 non-small-cell lung cancer patients who received curative 3D-conformal radiotherapy. Nineteen RP events were observed (common toxicity criteria for adverse events grade 2 or higher). Serum concentrations of the following four candidate biomarkers were measured at baseline and midtreatment: alpha-2-macroglobulin, angiotensin converting enzyme (ACE), transforming growth factor, and interleukin-6. Dose-volumetric and clinical parameters were also included as covariates. Feature selection was performed using a Markov blanket approach based on the Koller–Sahami filter. The Markov chain Monte Carlo technique estimated the posterior distribution of BN graphs built from the observed data of the selected variables and causality constraints. RP probability was estimated using a limited number of high posterior graphs (ensemble) and was averaged for the final RP estimate using Bayes' rule. A resampling method based on bootstrapping was applied to model training and validation in order to control under- and overfit pitfalls. Results: RP prediction power of the BN ensemble approach reached its optimum at a size of 200.
The optimized performance of the BN model recorded an area under the receiver operating characteristic curve (AUC) of 0.83, which was significantly higher than multivariate logistic regression (0.77), mean heart dose (0.69), and a pre-to-midtreatment change in ACE (0.66). When RP prediction was made only with pretreatment information, the AUC ranged from 0.76 to 0.81 depending on the ensemble size. Bootstrap validation of graph features in the ensemble quantified confidence of association between variables in the graphs, where ten interactions were statistically significant. Conclusions: The presented BN methodology provides the flexibility to model hierarchical interactions between RP covariates, which is applied to probabilistic inference on RP. The authors' preliminary results demonstrate that such a framework combined with an ensemble method can possibly improve prediction of RP under real-life clinical circumstances such as missing data or treatment plan adaptation.
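    The final ensemble-averaging step, combining per-graph RP probabilities with graph posteriors via Bayes' rule, can be sketched as follows (all numbers invented):

```python
import numpy as np

def ensemble_predict(graph_probs, graph_posteriors):
    """Bayesian model averaging over a graph ensemble:
        P(RP | data) = sum_g P(RP | data, g) * P(g | data),
    where graph_probs[g] is the RP probability under graph g and
    graph_posteriors[g] its (unnormalized) posterior weight."""
    w = np.asarray(graph_posteriors, dtype=float)
    w = w / w.sum()                       # normalize posterior weights
    return float(np.dot(graph_probs, w))

# Three hypothetical high-posterior graphs for one patient:
print(ensemble_predict([0.8, 0.6, 0.3], [0.5, 0.3, 0.2]))  # weighted average, 0.64
```

    With 200 graphs the computation is identical, just with longer weight and probability vectors.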

  14. WE-E-BRE-05: Ensemble of Graphical Models for Predicting Radiation Pneumontis Risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S; Ybarra, N; Jeyaseelan, K

Purpose: We propose a prior knowledge-based approach to construct an interaction graph of biological and dosimetric radiation pneumonitis (RP) covariates for the purpose of developing an RP risk classifier. Methods: We recruited 59 NSCLC patients who received curative radiotherapy with a minimum 6 month follow-up. 16 RP events were observed (CTCAE grade ≥2). Blood serum was collected from every patient before (pre-RT) and during RT (mid-RT). From each sample the concentrations of the following five candidate biomarkers were taken as covariates: alpha-2-macroglobulin (α2M), angiotensin converting enzyme (ACE), transforming growth factor β (TGF-β), interleukin-6 (IL-6), and osteopontin (OPN). Dose-volumetric parameters were also included as covariates. The number of biological and dosimetric covariates was reduced by a variable selection scheme implemented by L1-regularized logistic regression (LASSO). The posterior probability distribution of interaction graphs between the selected variables was estimated from the data under literature-based prior knowledge to weight more heavily the graphs that contain the expected associations. A graph ensemble was formed by averaging the most probable graphs weighted by their posterior, creating a Bayesian network (BN)-based RP risk classifier. Results: The LASSO selected the following 7 RP covariates: (1) pre-RT concentration level of α2M, (2) α2M level mid-RT/pre-RT, (3) pre-RT IL6 level, (4) IL6 level mid-RT/pre-RT, (5) ACE mid-RT/pre-RT, (6) PTV volume, and (7) mean lung dose (MLD). The ensemble BN model achieved a maximum sensitivity/specificity of 81%/84% and outperformed univariate dosimetric predictors as shown by larger AUC values (0.78∼0.81) compared with MLD (0.61), V20 (0.65) and V30 (0.70). The ensembles obtained by incorporating the prior knowledge improved classification performance for ensemble sizes 5∼50.
Conclusion: We demonstrated a probabilistic ensemble method to detect robust associations between RP covariates and its potential to improve RP prediction accuracy. Our Bayesian approach to incorporating prior knowledge can enhance efficiency in searching for such associations from data. The authors acknowledge partial support by: 1) the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290) and 2) The Terry Fox Foundation Strategic Training Initiative for Excellence in Radiation Research for the 21st Century (EIRR21)

  15. Local Subspace Classifier with Transform-Invariance for Image Classification

    NASA Astrophysics Data System (ADS)

    Hotta, Seiji

A family of linear subspace classifiers called the local subspace classifier (LSC) outperforms the k-nearest neighbor rule (kNN) and conventional subspace classifiers in handwritten digit classification. However, LSC suffers from very high sensitivity to image transformations because it uses projection and Euclidean distances for classification. In this paper, I present a combination of a local subspace classifier (LSC) and a tangent distance (TD) for improving the accuracy of handwritten digit recognition. In this classification rule, transform-invariance can be handled easily because tangent vectors can be used to approximate transformations. However, tangent vectors cannot be used for other types of images, such as color images. Hence, a kernel LSC (KLSC) is proposed for incorporating transform-invariance into LSC via kernel mapping. The performance of the proposed methods is verified through experiments on handwritten digit and color image classification.
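    A plain-Euclidean LSC, without the tangent-distance or kernel extensions the paper proposes, can be sketched as: for each class, project the query onto the affine span of its k nearest prototypes and predict the class with the smallest projection residual. Details such as the choice of affine origin are illustrative assumptions:

```python
import numpy as np

def lsc_classify(x, train_X, train_y, k=3):
    """Local subspace classifier sketch: per class, fit the affine span of
    the k training samples nearest to x, measure the residual of projecting
    x onto that span, and return the class with the smallest residual."""
    best_cls, best_res = None, np.inf
    for cls in np.unique(train_y):
        Xc = train_X[train_y == cls]
        idx = np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]
        P = Xc[idx]                          # k local prototypes of this class
        origin = P[0]
        B = (P[1:] - origin).T               # basis spanning the affine subspace
        if B.size:
            coef, *_ = np.linalg.lstsq(B, x - origin, rcond=None)
            res = np.linalg.norm(x - origin - B @ coef)
        else:
            res = np.linalg.norm(x - origin) # k = 1 degenerates to nearest neighbor
        if res < best_res:
            best_cls, best_res = cls, res
    return best_cls
```

    The transformation sensitivity discussed above enters through the Euclidean residual; the paper's TD/KLSC variants replace that metric with a transform-tolerant one.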

  16. Correlated multielectron dynamics in mid-infrared laser pulse interactions with neon atoms.

    PubMed

    Tang, Qingbin; Huang, Cheng; Zhou, Yueming; Lu, Peixiang

    2013-09-09

The multielectron dynamics in nonsequential triple ionization (NSTI) of neon atoms driven by mid-infrared (MIR) laser pulses is investigated with the three-dimensional classical ensemble model. Consistent with the experimental result, our numerical result shows that in the MIR regime, the triply charged ion longitudinal momentum spectrum exhibits a pronounced double-hump structure at low laser intensity. Back analysis reveals that as the intensity increases, the responsible triple ionization channels transform from the direct (e, 3e) channel to various mixed channels. This transformation of the NSTI channels causes the ion momentum spectra to become narrower and the distinct maxima to shift towards lower momenta as the laser intensity increases. By tracing the triply ionized trajectories, the various ionization channels at different laser intensities are clearly identified, and these results provide insight into the complex dynamics of the three correlated electrons in NSTI.

  17. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices with low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
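    The low-rank representation question can be made concrete with a synthetic example: build the error covariance as a linear transformation of a forcing covariance, P = A F A^T, with an ill-conditioned A, and check how well a truncated eigendecomposition captures P. All matrices below are invented stand-ins for the paper's operators:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 20
# Ill-conditioned linear map: column scales decay geometrically, so the
# response covariance has a rapidly decaying eigenvalue spectrum.
A = rng.normal(size=(n, n)) @ np.diag(0.5 ** np.arange(n))
F = np.eye(n)                                # white forcing-error covariance
P = A @ F @ A.T                              # full error covariance

# Best rank-r approximation from the leading eigenpairs:
evals, evecs = np.linalg.eigh(P)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]
r = 5
P_r = evecs[:, :r] @ np.diag(evals[:r]) @ evecs[:, :r].T

rel_err = np.linalg.norm(P - P_r) / np.linalg.norm(P)
print(rel_err)  # small, because the spectrum decays rapidly
```

    With a well-conditioned A (column scales near 1), the same rank-5 truncation would discard a large fraction of the variance, which is the conditioning dependence the abstract describes.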

  18. The incorrect usage of singular spectral analysis and discrete wavelet transform in hybrid models to predict hydrological time series

    NASA Astrophysics Data System (ADS)

    Du, Kongchang; Zhao, Ying; Lei, Jiaqiang

    2017-09-01

In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then obtain a set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction we found that this usage of SSA and DWT in building hybrid models is incorrect. Since SSA and DWT use 'future' values to perform their calculations, the series generated by SSA reconstruction or DWT decomposition contain information from 'future' values. These hybrid models therefore report misleadingly 'high' prediction performance and may cause large errors in practice.
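    The leakage described above is easy to demonstrate with a minimal rank-1 SSA (embedding, SVD, anti-diagonal averaging): the reconstructed value at a fixed time changes once later samples are appended, showing that whole-series preprocessing injects future information into each "input" value.

```python
import numpy as np

def ssa_rank1(x, L):
    """Minimal SSA: embed series x into an L-lagged trajectory matrix, keep
    the leading SVD component, and reconstruct by anti-diagonal (Hankel)
    averaging. Illustrative, not a full grouping-based SSA."""
    N = len(x)
    K = N - L + 1
    T = np.column_stack([x[i:i + L] for i in range(K)])   # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    T1 = s[0] * np.outer(U[:, 0], Vt[0])                  # leading rank-1 component
    rec, cnt = np.zeros(N), np.zeros(N)
    for j in range(K):                                    # average anti-diagonals
        rec[j:j + L] += T1[:, j]
        cnt[j:j + L] += 1
    return rec / cnt

rng = np.random.default_rng(4)
x = np.sin(np.arange(120) / 5.0) + 0.1 * rng.normal(size=120)
r_all = ssa_rank1(x, L=30)            # SSA applied to the whole series
r_past = ssa_rank1(x[:100], L=30)     # SSA applied only to data up to t = 99
# The reconstruction at t = 99 differs once later samples are included:
print(abs(r_all[99] - r_past[99]))    # nonzero difference
```

    A correct hybrid model must therefore recompute the decomposition causally at every forecast origin, using only data available at that time.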

  19. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating a pure ECG signal from noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated by an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results show that the proposed method performs better in denoising and QRS detection compared with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  20. The Experimental Regional Ensemble Forecast System (ExREF): Its Use in NWS Forecast Operations and Preliminary Verification

    NASA Technical Reports Server (NTRS)

    Reynolds, David; Rasch, William; Kozlowski, Daniel; Burks, Jason; Zavodsky, Bradley; Bernardet, Ligia; Jankov, Isidora; Albers, Steve

    2014-01-01

    The Experimental Regional Ensemble Forecast (ExREF) system is a tool for the development and testing of new Numerical Weather Prediction (NWP) methodologies. ExREF is run in near-realtime by the Global Systems Division (GSD) of the NOAA Earth System Research Laboratory (ESRL) and its products are made available through a website, an ftp site, and via the Unidata Local Data Manager (LDM). The ExREF domain covers most of North America and has 9-km horizontal grid spacing. The ensemble has eight members, all employing WRF-ARW. The ensemble uses a variety of initial conditions from LAPS and the Global Forecasting System (GFS) and multiple boundary conditions from the GFS ensemble. Additionally, a diversity of physical parameterizations is used to increase ensemble spread and to account for the uncertainty in forecasting extreme precipitation events. ExREF has been a component of the Hydrometeorology Testbed (HMT) NWP suite in the 2012-2013 and 2013-2014 winters. A smaller domain covering just the West Coast was created to minimize bandwidth consumption for the NWS. This smaller domain has been and is being distributed to the National Weather Service (NWS) Weather Forecast Office and the California Nevada River Forecast Center in Sacramento, California, where it is ingested into the Advanced Weather Interactive Processing System (AWIPS I and II) to provide guidance on the forecasting of extreme precipitation events. This paper reviews the cooperative effort by NOAA ESRL, NASA SPoRT (Short-term Prediction Research and Transition Center), and the NWS to facilitate the ingest and display of ExREF data using the AWIPS I and II D2D and GFE (Graphical Forecast Editor) software. Within GFE is a verification package called BoiVer that allows the NWS to compare the River Forecast Center's 4-km gridded QPE with the 6-hr QPF from all operational NWP models and with the ExREF mean 6-hr QPF, so that forecasters can build confidence in using ExREF when preparing their rainfall forecasts. Preliminary results will be presented.

  1. Ensemble-sensitivity Analysis Based Observation Targeting for Mesoscale Convection Forecasts and Factors Influencing Observation-Impact Prediction

    NASA Astrophysics Data System (ADS)

    Hill, A.; Weiss, C.; Ancell, B. C.

    2017-12-01

    The basic premise of observation targeting is that additional observations, when gathered and assimilated with a numerical weather prediction (NWP) model, will produce a more accurate forecast related to a specific phenomenon. Ensemble-sensitivity analysis (ESA; Ancell and Hakim 2007; Torn and Hakim 2008) is a tool capable of accurately estimating the proper location of targeted observations in areas of initial model uncertainty and large error growth, as well as predicting the reduction of forecast variance due to the assimilated observation. ESA relates an ensemble of NWP model forecasts, specifically an ensemble of scalar forecast metrics, linearly to earlier model states. A thorough investigation is presented to determine how different factors of the forecast process affect our ability to successfully target new observations for mesoscale convection forecasts. Our primary goals for this work are to determine: (1) whether targeted observations have more positive impact than non-targeted (i.e., randomly chosen) observations; (2) whether there are lead-time constraints on targeting for convection; (3) how inflation, localization, and the assimilation filter influence impact prediction and realized results; (4) whether there are differences between targeted observations at the surface versus aloft; and (5) how physics errors and nonlinearity may augment observation impacts. Ten cases of dryline-initiated convection between 2011 and 2013 are simulated within a simplified OSSE framework and presented here. Ensemble simulations are produced from a cycling system that utilizes the Weather Research and Forecasting (WRF) model v3.8.1 within the Data Assimilation Research Testbed (DART). A "truth" (nature) simulation is produced by supplying a 3-km WRF run with GFS analyses and integrating the model forward 90 hours, from the beginning of ensemble initialization through the end of the forecast. Target locations for surface and radiosonde observations are computed 6, 12, and 18 hours into the forecast based on a chosen scalar forecast response metric (e.g., maximum reflectivity at convection initiation). A variety of experiments designed to achieve the aforementioned goals will be presented, along with their results, detailing the feasibility of targeting for mesoscale convection forecasts.
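The core ESA computation is a linear regression of the scalar forecast metric J onto each earlier-state variable: sensitivity_i = cov(J, x_i)/var(x_i). A toy sketch with made-up dimensions and a known true driver (grid point 10):

```python
import numpy as np

rng = np.random.default_rng(1)
n_members, n_grid = 50, 100                 # illustrative sizes
X = rng.normal(size=(n_members, n_grid))    # earlier model states, one row per member
J = 2.0 * X[:, 10]                          # forecast metric driven by grid point 10

# Ensemble sensitivity: regression slope of J on each state variable
Xa = X - X.mean(axis=0)
Ja = J - J.mean()
sens = (Xa * Ja[:, None]).sum(axis=0) / (Xa ** 2).sum(axis=0)

print(int(np.argmax(np.abs(sens))))  # → 10: targeting points at the true driver
```

In practice J would be something like maximum reflectivity at convection initiation, and the sensitivity field would be combined with analysis-error variance to predict the impact of a new observation.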

  2. School Music Advocates Go Straight to Video: Online Services like SchoolTube Offer Far-Reaching Possibilities

    ERIC Educational Resources Information Center

    Block, Debbie Galante

    2009-01-01

    A few years ago, Bill Pendziwiatr of Crestwood School District in Pennsylvania helped create a video documenting six local music programs, including snippets of rehearsals and performances by choirs, traditional bands, jazz and rock ensembles, orchestras, even a clapping class. His goal was to distribute the video all over the state so that…

  3. Evaluating the climate effects of reforestation in New England using a weather research and forecasting (WRF) model multiphysics ensemble

    Treesearch

    E.A. Burakowski; S.V. Ollinger; G.B. Bonan; C.P. Wake; J.E. Dibb; D.Y. Hollinger

    2016-01-01

    The New England region of the northeastern United States has a land use history characterized by forest clearing for agriculture and other uses during European colonization and subsequent reforestation following widespread farm abandonment. Despite these broad changes, the potential influence on local and regional climate has received relatively little attention. This...

  4. Localization in a quantum spin Hall system.

    PubMed

    Onoda, Masaru; Avishai, Yshai; Nagaosa, Naoto

    2007-02-16

    The localization problem of electronic states in a two-dimensional quantum spin Hall system (that is, a symplectic ensemble with a topological term) is studied by the transfer matrix method. The phase diagram in the plane of energy and disorder strength is exposed, and demonstrates "levitation" and "pair annihilation" of the domains of extended states analogous to those of the integer quantum Hall system. The critical exponent ν for the divergence of the localization length is estimated as ν ≅ 1.6, which is distinct from the exponents of both the conventional symplectic and the unitary quantum Hall systems. Our analysis strongly suggests a different universality class related to the topology of the pertinent system.
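The transfer-matrix method itself is standard. A minimal one-dimensional Anderson-model version (a stand-in, not the 2D symplectic system studied here) shows how the inverse localization length is extracted as the Lyapunov exponent of a product of random transfer matrices; energy, disorder strength and chain length below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
E, W, N = 0.0, 2.0, 20000   # energy, disorder width, chain length (illustrative)

# 1D Anderson tight-binding chain: (E - eps_n) psi_n = psi_{n+1} + psi_{n-1},
# rewritten as a 2x2 transfer matrix acting on (psi_{n+1}, psi_n).
v = np.array([1.0, 0.0])
gamma = 0.0
for _ in range(N):
    eps = rng.uniform(-W / 2, W / 2)        # random on-site energy
    T = np.array([[E - eps, -1.0], [1.0, 0.0]])
    v = T @ v
    norm = np.linalg.norm(v)
    gamma += np.log(norm)                   # accumulate log-growth
    v /= norm                               # renormalize to avoid overflow
gamma /= N

print(gamma > 0)  # True: positive Lyapunov exponent, localization length 1/gamma
```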

  5. Ensemble Manifold Rank Preserving for Acceleration-Based Human Activity Recognition.

    PubMed

    Tao, Dapeng; Jin, Lianwen; Yuan, Yuan; Xue, Yang

    2016-06-01

    With the rapid development of mobile devices and pervasive computing technologies, acceleration-based human activity recognition, a difficult yet essential problem in mobile apps, has received intensive attention recently. Different acceleration signals representing different activities, or even the same activity, have different attributes, which causes trouble in normalizing the signals. We thus cannot directly compare these signals with each other, because they are embedded in a nonmetric space. Therefore, we present a nonmetric scheme that retains discriminative and robust frequency-domain information by developing a novel ensemble manifold rank preserving (EMRP) algorithm. EMRP simultaneously considers three aspects: 1) it encodes the local geometry using the ranking-order information of intraclass samples distributed on local patches; 2) it keeps the discriminative information by maximizing the margin between samples of different classes; and 3) it finds the optimal linear combination of the alignment matrices to approximate the intrinsic manifold underlying the data. Experiments are conducted on the South China University of Technology naturalistic 3-D acceleration-based activity dataset and the naturalistic mobile-device-based human activity dataset to demonstrate the robustness and effectiveness of the new nonmetric scheme for acceleration-based human activity recognition.

  6. Random SU(2) invariant tensors

    NASA Astrophysics Data System (ADS)

    Li, Youning; Han, Muxin; Ruan, Dong; Zeng, Bei

    2018-04-01

    SU(2) invariant tensors are states in the (local) SU(2) tensor product representation that are invariant under the global group action. They are of importance in the study of loop quantum gravity. A random tensor is an ensemble of tensor states, and an average over the ensemble is carried out when computing any physical quantity. The random tensor exhibits a phenomenon known as 'concentration of measure', which states that for any bipartition the average entanglement entropy of its reduced density matrix is asymptotically the maximal possible as the local dimensions go to infinity. We show that this phenomenon also holds when the average is over the SU(2) invariant subspace instead of the entire space for rank-n tensors in general. It is shown in our earlier work Li et al (2017 New J. Phys. 19 063029) that the subleading correction of the entanglement entropy has a mild logarithmic divergence when n = 4. In this paper, we show that for n > 4 the subleading correction is not divergent but a finite number. In some special situations, the number can even be smaller than 1/2, which is the subleading correction for a random state over the entire Hilbert space of tensors.
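The 'concentration of measure' baseline for the entire Hilbert space can be checked numerically: for Haar-random pure states the mean entanglement entropy approaches the maximum ln d_A minus a small correction (Page's value, ln d_A − d_A/(2 d_B)). A sketch with illustrative dimensions, not the SU(2)-invariant subspace treated in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
dA, dB, samples = 4, 64, 100   # illustrative local dimensions

ents = []
for _ in range(samples):
    # Haar-random pure state on the full dA*dB Hilbert space,
    # sampled as a normalized complex Gaussian matrix
    psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
    psi /= np.linalg.norm(psi)
    lam = np.linalg.svd(psi, compute_uv=False) ** 2   # Schmidt spectrum
    ents.append(-(lam * np.log(lam)).sum())

page = np.log(dA) - dA / (2 * dB)   # Page's asymptotic average entropy
print(abs(np.mean(ents) - page) < 0.05)  # True: mean entropy is near maximal
```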

  7. Historical and future land use effects on N2O and NO emissions using an ensemble modeling approach: Costa Rica's Caribbean lowlands as an example

    USGS Publications Warehouse

    Reiners, William A.; Liu, S.; Gerow, K.G.; Keller, M.; Schimel, D.S.

    2002-01-01

    The humid tropical zone is a major source area for N2O and NO emissions to the atmosphere. Local emission rates vary widely with local conditions, particularly land use practices which swiftly change with expanding settlement and changing market conditions. The combination of wide variation in emission rates and rapidly changing land use make regional estimation and future prediction of biogenic trace gas emission particularly difficult. This study estimates contemporary, historical, and future N2O and NO emissions from 0.5 million ha of northeastern Costa Rica, a well-documented region in the wet tropics undergoing rapid agricultural development. Estimates were derived by linking spatially distributed environmental data with an ecosystem simulation model in an ensemble estimation approach that incorporates the variance and covariance of spatially distributed driving variables. Results include measures of variance for regional emissions. The formation and aging of pastures from forest provided most of the past temporal change in N2O and NO flux in this region; future changes will be controlled by the degree of nitrogen fertilizer application and extent of intensively managed croplands.

  9. 3D spine reconstruction of postoperative patients from multi-level manifold ensembles.

    PubMed

    Kadoury, Samuel; Labelle, Hubert; Parent, Stefan

    2014-01-01

    The quantitative assessment of surgical outcomes using personalized anatomical models is an essential task for the treatment of spinal deformities such as adolescent idiopathic scoliosis. However, an accurate 3D reconstruction of the spine from postoperative X-ray images remains challenging due to the presence of instrumentation (metallic rods and screws) occluding vertebrae on the spine. In this paper, we formulate the reconstruction problem as an optimization over a manifold of articulated spine shapes learned from pathological training data. The manifold itself is represented using a novel data structure, a multi-level manifold ensemble, which contains links between nodes in a single hierarchical structure, as well as links between different hierarchies, representing overlapping partitions. We show that this data structure allows both efficient localization and navigation on the manifold, for on-the-fly building of local nonlinear models (manifold charting). Our reconstruction framework was tested on pre- and postoperative X-ray datasets from patients who underwent spinal surgery. Compared to manual ground truth, our method achieves a 3D reconstruction accuracy of 2.37 ± 0.85 mm for postoperative spine models and can deal with severe cases of scoliosis.

  10. The melting point of lithium: an orbital-free first-principles molecular dynamics study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Mohan; Hung, Linda; Huang, Chen

    2013-08-25

    The melting point of liquid lithium near zero pressure is studied with large-scale orbital-free first-principles molecular dynamics (OF-FPMD) in the isobaric-isothermal ensemble. Here, we adopt the Wang-Govind-Carter (WGC) functional as our kinetic energy density functional (KEDF) and construct a bulk-derived local pseudopotential (BLPS) for Li. Our simulations employ both the 'heat-until-melts' method and the coexistence method. We predict 465 K as an upper bound of the melting point of Li from the 'heat-until-melts' method, while we predict 434 K as the melting point of Li from the coexistence method. These values compare well with the experimental melting point of 453 K at zero pressure. Furthermore, we calculate several important properties of liquid Li, including diffusion coefficients, pair distribution functions, static structure factors, and compressibilities at 470 K and 725 K in the canonical ensemble. These theoretical results show good agreement with known experimental results, suggesting that OF-FPMD using a non-local KEDF and a BLPS is capable of accurately describing liquid metals.

  11. Advances in snow cover distributed modelling via ensemble simulations and assimilation of satellite data

    NASA Astrophysics Data System (ADS)

    Revuelto, J.; Dumont, M.; Tuzet, F.; Vionnet, V.; Lafaysse, M.; Lecourt, G.; Vernay, M.; Morin, S.; Cosme, E.; Six, D.; Rabatel, A.

    2017-12-01

    Nowadays snowpack models show a good capability in simulating the evolution of snow in mountain areas. However, deviations in the meteorological forcing and shortcomings in the modelling of snow physical processes, when accumulated over a snow season, can produce large departures from the real snowpack state. These deviations are usually evaluated with on-site observations from automatic weather stations. Nevertheless, the location of these stations can strongly influence the results of such evaluations, since local topography may have a marked influence on snowpack evolution. Although evaluations of snowpack models against automatic weather stations usually yield good results, large-scale evaluations of simulation results over heterogeneous alpine terrain subject to local topographic effects are lacking. This work first presents a complete evaluation of the detailed snowpack model Crocus over an extended mountain area, the upper Arve catchment (western European Alps). This catchment has a wide elevation range, with a large area above 2000 m a.s.l. and/or glaciated. The evaluation compares results obtained with distributed and semi-distributed simulations (the latter currently used in operational forecasting). Daily observations of the snow-covered area from the MODIS satellite sensor, seasonal glacier surface mass balance evolution measured at more than 65 locations, and the glaciers' annual equilibrium-line altitude from Landsat/SPOT/ASTER satellites have been used for model evaluation. Additionally, the latest advances in producing ensemble snowpack simulations for assimilating satellite reflectance data over extended areas will be presented. These advances comprise the generation of an ensemble of downscaled high-resolution meteorological forcings from meso-scale meteorological models and the application of a particle filter scheme for assimilating satellite observations. Although the results are preliminary, they show good potential for improving snowpack forecasting capabilities.

  12. Investigating local and long-range neuronal network dynamics by simultaneous optogenetics, reverse microdialysis and silicon probe recordings in vivo

    PubMed Central

    Taylor, Hannah; Schmiedt, Joscha T.; Çarçak, Nihan; Onat, Filiz; Di Giovanni, Giuseppe; Lambert, Régis; Leresche, Nathalie; Crunelli, Vincenzo; David, Francois

    2014-01-01

    Background: The advent of optogenetics has given neuroscientists the opportunity to excite or inhibit neuronal population activity with high temporal resolution and cellular selectivity. Thus, when combined with recordings of neuronal ensemble activity in freely moving animals, optogenetics can provide an unprecedented snapshot of the contribution of neuronal assemblies to (patho)physiological conditions in vivo. Still, the combination of optogenetic and silicon probe (or tetrode) recordings does not allow investigation of the role played by voltage- and transmitter-gated channels of the opsin-transfected neurons and/or other adjacent neurons in controlling neuronal activity. New method and results: We demonstrate that optogenetics and silicon probe recordings can be combined with intracerebral reverse microdialysis for the long-term delivery of neuroactive drugs around the optic fiber and silicon probe. In particular, we show the effect of antagonists of T-type Ca2+ channels, hyperpolarization-activated cyclic nucleotide-gated channels and metabotropic glutamate receptors on silicon probe-recorded activity of the local opsin-transfected neurons in the ventrobasal thalamus, and demonstrate the changes that the block of these thalamic channels/receptors brings about in the network dynamics of distant somatotopic cortical neuronal ensembles. Comparison with existing methods: This is the first demonstration of successfully combining optogenetics and neuronal ensemble recordings with reverse microdialysis. This combination of techniques overcomes some of the disadvantages associated with intracerebral injection of a drug-containing solution at the site of laser activation. Conclusions: The combination of reverse microdialysis, silicon probe recordings and optogenetics can unravel the short- and long-term effects of specific transmitter- and voltage-gated channels on laser-modulated firing at the site of optogenetic stimulation and the actions that these manipulations exert on distant neuronal populations. PMID:25004203

  13. Investigating local and long-range neuronal network dynamics by simultaneous optogenetics, reverse microdialysis and silicon probe recordings in vivo.

    PubMed

    Taylor, Hannah; Schmiedt, Joscha T; Carçak, Nihan; Onat, Filiz; Di Giovanni, Giuseppe; Lambert, Régis; Leresche, Nathalie; Crunelli, Vincenzo; David, Francois

    2014-09-30

    The advent of optogenetics has given neuroscientists the opportunity to excite or inhibit neuronal population activity with high temporal resolution and cellular selectivity. Thus, when combined with recordings of neuronal ensemble activity in freely moving animals, optogenetics can provide an unprecedented snapshot of the contribution of neuronal assemblies to (patho)physiological conditions in vivo. Still, the combination of optogenetic and silicon probe (or tetrode) recordings does not allow investigation of the role played by voltage- and transmitter-gated channels of the opsin-transfected neurons and/or other adjacent neurons in controlling neuronal activity. We demonstrate that optogenetics and silicon probe recordings can be combined with intracerebral reverse microdialysis for the long-term delivery of neuroactive drugs around the optic fiber and silicon probe. In particular, we show the effect of antagonists of T-type Ca2+ channels, hyperpolarization-activated cyclic nucleotide-gated channels and metabotropic glutamate receptors on silicon probe-recorded activity of the local opsin-transfected neurons in the ventrobasal thalamus, and demonstrate the changes that the block of these thalamic channels/receptors brings about in the network dynamics of distant somatotopic cortical neuronal ensembles. This is the first demonstration of successfully combining optogenetics and neuronal ensemble recordings with reverse microdialysis. This combination of techniques overcomes some of the disadvantages associated with intracerebral injection of a drug-containing solution at the site of laser activation. The combination of reverse microdialysis, silicon probe recordings and optogenetics can unravel the short- and long-term effects of specific transmitter- and voltage-gated channels on laser-modulated firing at the site of optogenetic stimulation and the actions that these manipulations exert on distant neuronal populations. Copyright © 2014. Published by Elsevier B.V.

  14. Errors and uncertainties in regional climate simulations of rainfall variability over Tunisia: a multi-model and multi-member approach

    NASA Astrophysics Data System (ADS)

    Fathalli, Bilel; Pohl, Benjamin; Castel, Thierry; Safi, Mohamed Jomâa

    2018-02-01

    Temporal and spatial variability of rainfall over Tunisia (at 12 km spatial resolution) is analyzed in a multi-year (1992-2011) ten-member ensemble simulation performed using the WRF model, and in a sample of regional climate hindcast simulations from Euro-CORDEX. RCM errors and skills are evaluated against a dense network of local rain gauges. Uncertainties arising, on the one hand, from the different model configurations and, on the other hand, from internal variability are furthermore quantified and ranked at different timescales using simple spread metrics. Overall, the WRF simulation shows good skill in simulating the spatial patterns of rainfall amounts over Tunisia, marked by strong altitudinal and latitudinal gradients, as well as the rainfall interannual variability, in spite of systematic errors. Mean rainfall biases are wet in both DJF and JJA seasons for the WRF ensemble, while they are dry in winter and wet in summer for most of the Euro-CORDEX models used. The sign of mean annual rainfall biases over Tunisia can also change from one member of the WRF ensemble to another. Skill in regionalizing precipitation over Tunisia is season dependent, with better correlations and weaker biases in winter. Larger inter-member spreads are observed in summer, likely because of (1) an attenuated large-scale control on Mediterranean and Tunisian climate, and (2) a larger contribution of local convective rainfall to the seasonal amounts. Inter-model uncertainties are generally stronger than those attributed to the model's internal variability. However, inter-member spreads can be of the same magnitude in summer, emphasizing the important stochastic nature of summertime rainfall variability over Tunisia.

  15. Experiences in multiyear combined state-parameter estimation with an ecosystem model of the North Atlantic and Arctic Oceans using the Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Simon, Ehouarn; Samuelsen, Annette; Bertino, Laurent; Mouysset, Sandrine

    2015-12-01

    A sequence of one-year combined state-parameter estimation experiments has been conducted in a North Atlantic and Arctic Ocean configuration of the coupled physical-biogeochemical model HYCOM-NORWECOM over the period 2007-2010. The aim is to evaluate the ability of an ensemble-based data assimilation method to calibrate ecosystem model parameters in a pre-operational setting, namely the production of the MyOcean pilot reanalysis of Arctic biology. For that purpose, four biological parameters (two phyto- and two zooplankton mortality rates) are estimated by assimilating weekly data such as satellite-derived sea surface temperature, along-track sea level anomalies, ice concentrations and chlorophyll-a concentrations with an Ensemble Kalman Filter. The set of optimized parameters locally exhibits seasonal variations, suggesting that time-dependent parameters should be used in ocean ecosystem models. A clustering analysis of the optimized parameters is performed in order to identify consistent ecosystem regions. In the northern part of the domain, where the ecosystem model is most reliable, most clusters can be associated with Longhurst provinces, and new provinces emerge in the Arctic Ocean. However, the clusters no longer coincide with the Longhurst provinces in the Tropics, due to large model errors. Regarding the ecosystem state variables, the assimilation of satellite-derived chlorophyll concentration leads to a significant reduction of the RMS errors in the observed variables during the first year, i.e. 2008, compared to a free-run simulation. However, local filter divergences of the parameter component occur in 2009 and result in an increase in the RMS error at the time of the spring bloom.
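Combined state-parameter estimation with an EnKF treats parameters as extra components of an augmented state vector, updated through their sampled covariance with the observed quantities. A one-observation toy sketch (perturbed-observations form; the parameter, its prior, and the forward model are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens, a_true = 40, 0.5

# Prior parameter ensemble (e.g. a mortality rate; values are invented):
# a biased first guess with some spread
a = rng.normal(0.8, 0.2, n_ens)

# Hypothetical forward model: the observed state after one step from
# x0 = 1 is x = a * x0, so the parameter can be corrected through its
# sampled covariance with the observed quantity
x = a * 1.0
obs, obs_err = a_true, 0.05

gain = np.cov(a, x)[0, 1] / (x.var(ddof=1) + obs_err ** 2)  # Kalman gain (scalar obs)
a_post = a + gain * (obs + rng.normal(0, obs_err, n_ens) - x)

print(abs(a_post.mean() - a_true) < abs(a.mean() - a_true))  # True: analysis improves
```

Repeating such updates every assimilation cycle is what allows seasonally varying parameter estimates, but it is also where filter divergence of the parameter component can occur if the parameter ensemble spread collapses.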

  16. Towards a true protein movie: a perspective on the potential impact of the ensemble-based structure determination using exact NOEs.

    PubMed

    Vögeli, Beat; Orts, Julien; Strotz, Dean; Chi, Celestine; Minges, Martina; Wälti, Marielle Aulikki; Güntert, Peter; Riek, Roland

    2014-04-01

    Confined by the Boltzmann distribution of the energies of the states, a multitude of structural states are inherent to biomolecules. For a detailed understanding of a protein's function, its entire structural landscape at atomic resolution and insight into the interconversion between all the structural states (i.e. dynamics) are required. Whereas dedicated trickery with NMR relaxation provides aspects of local dynamics, and 3D structure determination by NMR is well established, only recently have several attempts been made to formulate a more comprehensive description of the dynamics and the structural landscape of a protein. Here, a perspective is given on the use of exact NOEs (eNOEs) for the elucidation of structural ensembles of a protein describing the covered conformational space. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Anti-correlated cortical networks arise from spontaneous neuronal dynamics at slow timescales.

    PubMed

    Kodama, Nathan X; Feng, Tianyi; Ullett, James J; Chiel, Hillel J; Sivakumar, Siddharth S; Galán, Roberto F

    2018-01-12

    In the highly interconnected architectures of the cerebral cortex, recurrent intracortical loops disproportionately outnumber thalamo-cortical inputs. These networks are also capable of generating neuronal activity without feedforward sensory drive. It is unknown, however, what spatiotemporal patterns may be solely attributed to intrinsic connections of the local cortical network. Using high-density microelectrode arrays, here we show that in the isolated, primary somatosensory cortex of mice, neuronal firing fluctuates on timescales from milliseconds to tens of seconds. Slower firing fluctuations reveal two spatially distinct neuronal ensembles, which correspond to superficial and deeper layers. These ensembles are anti-correlated: when one fires more, the other fires less and vice versa. This interplay is clearest at timescales of several seconds and is therefore consistent with shifts between active sensing and anticipatory behavioral states in mice.

  18. Study of alloy disorder in quantum dots through multi-million atom simulations

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Oyafuso, Fabiano; Boykin, T. B.; Bowen, R. C.; von Allmen, Paul A.

    2003-01-01

    A tight-binding model that includes s, p, d, and s* orbitals is used to examine the electronic structure of an ensemble of dome-shaped In0.6Ga0.4As quantum dots. Given ensembles of identically sized quantum dots, variations in composition and configuration yield a linewidth broadening of less than 0.35 meV, much smaller than the total broadening determined from photoluminescence experiments. It is also found that the computed disorder-induced broadening is very sensitive to the applied boundary conditions, so care must be taken to ensure proper convergence of the numerical results. Examination of local eigenenergies as functions of position shows similar convergence problems and indicates that inaccurate resolution of the equilibrium atomic positions due to truncation of the simulation domain may be the source of the slow ground-state convergence.

  19. Fixed points, stable manifolds, weather regimes, and their predictability.

    PubMed

    Deremble, Bruno; D'Andrea, Fabio; Ghil, Michael

    2009-12-01

    In a simple, one-layer atmospheric model, we study the links between low-frequency variability and the model's fixed points in phase space. The model dynamics is characterized by the coexistence of multiple "weather regimes." To investigate the transitions from one regime to another, we focus on the identification of stable manifolds associated with fixed points. We show that these manifolds act as separatrices between regimes. We track each manifold by making use of two local predictability measures arising from the meteorological applications of nonlinear dynamics, namely, "bred vectors" and singular vectors. These results are then verified in the framework of ensemble forecasts issued from "clouds" (ensembles) of initial states. The divergence of the trajectories allows us to establish the connections between zones of low predictability, the geometry of the stable manifolds, and transitions between regimes.
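Bred vectors, one of the two local predictability measures used in this record, are produced by repeatedly integrating a control run and a perturbed run, rescaling their difference back to the initial amplitude each cycle. A toy sketch using a logistic map as a stand-in for the atmospheric model (all constants are illustrative):

```python
def step(x, r=3.9):
    # Toy chaotic dynamics (logistic map) standing in for the model;
    # breeding only needs a forward integrator.
    return r * x * (1 - x)

def breed(x0, delta0=1e-3, cycles=50):
    x, xp = x0, x0 + delta0          # control and perturbed runs
    for _ in range(cycles):
        x, xp = step(x), step(xp)
        bv = xp - x                   # grown perturbation
        xp = x + delta0 * bv / abs(bv)  # rescale to the initial amplitude
    return bv / abs(bv)               # bred-vector direction (±1 in 1D)

print(abs(breed(0.2)))  # → 1.0 (unit-norm direction)
```

In the paper's setting the bred vectors are fields rather than scalars, and their geometry is compared with the stable manifolds separating weather regimes.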

  20. Arc expression identifies the lateral amygdala fear memory trace

    PubMed Central

    Gouty-Colomer, L A; Hosseini, B; Marcelo, I M; Schreiber, J; Slump, D E; Yamaguchi, S; Houweling, A R; Jaarsma, D; Elgersma, Y; Kushner, S A

    2016-01-01

    Memories are encoded within sparsely distributed neuronal ensembles. However, the defining cellular properties of neurons within a memory trace remain incompletely understood. Using a fluorescence-based Arc reporter, we were able to visually identify the distinct subset of lateral amygdala (LA) neurons activated during auditory fear conditioning. We found that Arc-expressing neurons have enhanced intrinsic excitability and are preferentially recruited into newly encoded memory traces. Furthermore, synaptic potentiation of thalamic inputs to the LA during fear conditioning is learning-specific, postsynaptically mediated and highly localized to Arc-expressing neurons. Taken together, our findings validate the immediate-early gene Arc as a molecular marker for the LA neuronal ensemble recruited during fear learning. Moreover, these results establish a model of fear memory formation in which intrinsic excitability determines neuronal selection, whereas learning-related encoding is governed by synaptic plasticity. PMID:25802982

  1. Educating Students, Transforming Communities: Tribal Colleges Bridge Gap from Poverty to Prosperity

    ERIC Educational Resources Information Center

    Benton, Sherrole

    2012-01-01

    Tribal colleges are often performing little miracles in their communities. Most tribal colleges operate without benefit of local and state taxes. Yet, they bring in new money from other sources that stimulate the local economy. Students gain knowledge and skills that can transform their communities and local economies. Tribal colleges not only…

  2. Local Mechanical Response of Superelastic NiTi Shape-Memory Alloy Under Uniaxial Loading

    NASA Astrophysics Data System (ADS)

    Xiao, Yao; Zeng, Pan; Lei, Liping; Du, Hongfei

    2015-11-01

    In this paper, we focus on the local mechanical response of superelastic NiTi SMA at different temperatures under uniaxial loading. In situ digital image correlation (DIC) is applied to measure the local strain of the specimen. Based on the experimental results, two types of mechanical response are identified, characterized by localized and homogeneous phase transformation, respectively. Motivated by the residual strain accumulation observed in the superelastic mechanical response, we conduct controlled experiments and infer that, for a given material point, all (or most) of the irreversibility is accumulated while the transformation front is traversing that point. A robust constitutive model is established to explain the experimental phenomena, and the simulated evolution of local strain agrees closely with the experimental results.

  3. Directionality fields generated by a local Hilbert transform

    NASA Astrophysics Data System (ADS)

    Ahmed, W. W.; Herrero, R.; Botey, M.; Hayran, Z.; Kurt, H.; Staliunas, K.

    2018-03-01

    We propose an approach based on a local Hilbert transform to design non-Hermitian potentials generating arbitrary vector fields of directionality, p ⃗(r ⃗) , with desired shapes and topologies. We derive a local Hilbert transform to systematically build such potentials by modifying background potentials (being either regular or random, extended or localized). We explore particular directionality fields, for instance in the form of a focus to create sinks for probe fields (which could help to increase absorption at the sink), or to generate vortices in the probe fields. Physically, the proposed directionality fields provide a flexible mechanism for dynamical shaping and precise control over probe fields leading to novel effects in wave dynamics.
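
    The standard (global) Hilbert transform that this work localizes acts in the frequency domain as multiplication by -i·sgn(ω). A minimal discrete sketch via the FFT (this is the conventional transform, not the paper's local variant):

```python
import numpy as np

def hilbert_transform(u):
    """Discrete (global) Hilbert transform via the FFT: multiply the
    spectrum by -1j*sign(frequency); the DC and Nyquist bins are zeroed."""
    n = len(u)
    sgn = np.zeros(n)
    sgn[1:(n + 1) // 2] = 1.0       # positive-frequency bins
    sgn[n // 2 + 1:] = -1.0         # negative-frequency bins
    return np.real(np.fft.ifft(-1j * sgn * np.fft.fft(u)))

# On a periodic grid, H maps cos(3t) to sin(3t) for a pure harmonic
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
out = hilbert_transform(np.cos(3 * t))
```

    For a pure harmonic the identity H[cos(3t)] = sin(3t) is reproduced to numerical precision, which makes a convenient sanity check before building spatially varying (local) generalizations.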

  4. Advanced Beamforming Concepts: Source Localization Using the Bispectrum, Gabor Transform, Wigner-Ville Distribution, and Nonstationary Signal Representation

    DTIC Science & Technology

    1991-12-01

    Allen, J. C.

    The bispectrum yields a bispectral direction finder, and estimates of time-frequency distributions produce Wigner-Ville and Gabor direction finders for source localization with nonstationary signal representations.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deffner, Sebastian; Zurek, Wojciech H.

    Envariance—entanglement assisted invariance—is a recently discovered symmetry of composite quantum systems. Here, we show that thermodynamic equilibrium states are fully characterized by their envariance. In particular, the microcanonical equilibrium of a system ${\mathcal S}$ with Hamiltonian $H_{\mathcal S}$ is a fully energetically degenerate quantum state envariant under every unitary transformation. A representation of the canonical equilibrium then follows from simply counting degenerate energy states. Finally, our conceptually novel approach is free of mathematically ambiguous notions such as ensemble, randomness, etc., and, while it does not even rely on probability, it helps to understand its role in the quantum world.

  6. "Chemical transformers" from nanoparticle ensembles operated with logic.

    PubMed

    Motornov, Mikhail; Zhou, Jian; Pita, Marcos; Gopishetty, Venkateshwarlu; Tokarev, Ihor; Katz, Evgeny; Minko, Sergiy

    2008-09-01

    The pH-responsive nanoparticles were coupled with information-processing enzyme-based systems to yield "smart" signal-responsive hybrid systems with built-in Boolean logic. The enzyme systems performed AND/OR logic operations, transducing biochemical input signals into reversible structural changes (signal-directed self-assembly) of the nanoparticle assemblies, thus resulting in the processing and amplification of the biochemical signals. The hybrid system mimics biological systems in effective processing of complex biochemical information, resulting in reversible changes of the self-assembled structures of the nanoparticles. The bioinspired approach to the nanostructured morphing materials could be used in future self-assembled molecular robotic systems.

  7. Graph transformation method for calculating waiting times in Markov chains.

    PubMed

    Trygubenko, Semen A; Wales, David J

    2006-06-21

    We describe an exact approach for calculating transition probabilities and waiting times in finite-state discrete-time Markov processes. All the states and the rules for transitions between them must be known in advance. We can then calculate averages over a given ensemble of paths for both additive and multiplicative properties in a nonstochastic and noniterative fashion. In particular, we can calculate the mean first-passage time between arbitrary groups of stationary points for discrete path sampling databases, and hence extract phenomenological rate constants. We present a number of examples to demonstrate the efficiency and robustness of this approach.
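
    The mean first-passage times mentioned above can also be obtained by a direct linear solve, with which the paper's graph-transformation method agrees exactly: with Q the transition matrix restricted to the non-target states, the vector t of expected hitting times satisfies (I − Q)t = 1. A sketch with a made-up 4-state chain:

```python
import numpy as np

# Transition matrix of a 4-state discrete-time Markov chain (rows sum to 1);
# state 3 is the target, made absorbing for the hitting-time calculation.
P = np.array([
    [0.5, 0.3, 0.2, 0.0],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.2, 0.5, 0.2],
    [0.0, 0.0, 0.0, 1.0],
])

def mean_first_passage_times(P, target):
    """Expected number of steps to first reach `target` from every other
    state: solve (I - Q) t = 1, where Q is P restricted to non-target states."""
    keep = [i for i in range(P.shape[0]) if i != target]
    Q = P[np.ix_(keep, keep)]
    t = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    return dict(zip(keep, t))

mfpt = mean_first_passage_times(P, target=3)
```

    The graph-transformation approach of the paper removes states one at a time instead of solving this system, which is numerically more robust for the metastable networks encountered in discrete path sampling.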

  8. Sound Source Localization Using Non-Conformal Surface Sound Field Transformation Based on Spherical Harmonic Wave Decomposition

    PubMed Central

    Zhang, Lanyue; Ding, Dandan; Yang, Desen; Wang, Jia; Shi, Jie

    2017-01-01

    Spherical microphone arrays have received increasing attention for their ability to locate a sound source at an arbitrary incident angle in three-dimensional space. Low-frequency sound sources are usually located using spherical near-field acoustic holography, in which the reconstruction surface and holography surface are conformal in the conventional sound field transformation based on the generalized Fourier transform. When the sound source lies on a cylindrical surface, it is difficult to locate using the spherical conformal transform. This paper proposes a non-conformal sound field transformation that constructs a transfer matrix based on spherical harmonic wave decomposition, which can transform a spherical surface into a cylindrical surface using spherical array data. The theoretical expressions of the proposed method are derived, and its performance is simulated. Moreover, a sound source localization experiment using a spherical array with randomly and uniformly distributed elements is carried out. Results show that the proposed method realizes the non-conformal surface sound field transformation from a spherical surface to a cylindrical surface. The localization deviation is around 0.01 m, and the resolution is around 0.3 m. The application of the spherical array is thus extended, and its localization ability is improved. PMID:28489065

  9. Working Towards New Transformative Geoscience Analytics Enabled by Petascale Computing

    NASA Astrophysics Data System (ADS)

    Woodcock, R.; Wyborn, L.

    2012-04-01

    Currently the top 10 supercomputers in the world are petascale, and exascale computers are already being planned. Cloud computing facilities are becoming mainstream, either as private or commercial investments. These computational developments will provide abundant opportunities for the earth science community to tackle the data deluge that has resulted from new instrumentation gathering data at greater rates and higher resolution. Combined, the new computational environments should enable the earth sciences to be transformed. However, experience in Australia and elsewhere has shown that it is not easy to scale existing earth science methods, software and analytics to take advantage of the increased computational capacity now available. It is not simply a matter of 'transferring' current work practices to the new facilities: they have to be extensively 'transformed'. In particular, new geoscientific methods will need to be developed using advanced data mining, assimilation, machine learning and integration algorithms. Software will have to be capable of operating in highly parallelised environments and able to scale as the compute systems grow. Data access will have to improve: the earth science community needs to move from the discover-display-download file paradigm to self-describing data cubes and data arrays available as online resources, either from major data repositories or in the cloud. In the transformed world, rather than analysing satellite data scene by scene, sensor-agnostic data cubes of calibrated earth observation data will enable researchers to move across data from multiple sensors at varying spatial resolutions.
In using geophysics to characterise basement and cover, rather than analysing individual gridded airborne geophysical data sets and then combining the results, petascale computing will enable analysis of multiple data types, collected at varying resolutions, with integration and validation across data type boundaries. Increased storage and compute capacity will mean that the uncertainty and reliability of individual observations can consistently be taken into account and propagated through the processing chain. If the data access difficulties can be overcome, the increased compute capacity will also mean that larger, more complex models can be run at higher resolution, and that instead of single-pass modelling runs, ensembles of models can be run to test multiple hypotheses simultaneously. Petascale computing and high-performance data offer more than "bigger, faster": they are an opportunity for a transformative change in the way geoscience research is routinely conducted.

  10. The Ensembl REST API: Ensembl Data for Any Language.

    PubMed

    Yates, Andrew; Beal, Kathryn; Keenan, Stephen; McLaren, William; Pignatelli, Miguel; Ritchie, Graham R S; Ruffier, Magali; Taylor, Kieron; Vullo, Alessandro; Flicek, Paul

    2015-01-01

    We present a Web service to access Ensembl data using Representational State Transfer (REST). The Ensembl REST server enables the easy retrieval of a wide range of Ensembl data by most programming languages, using standard formats such as JSON and FASTA while minimizing client work. We also introduce bindings to the popular Ensembl Variant Effect Predictor tool permitting large-scale programmatic variant analysis independent of any specific programming language. The Ensembl REST API can be accessed at http://rest.ensembl.org and source code is freely available under an Apache 2.0 license from http://github.com/Ensembl/ensembl-rest. © The Author 2014. Published by Oxford University Press.
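
    As a concrete example, the lookup-by-symbol endpoint documented at rest.ensembl.org can be queried from Python's standard library alone; the sketch below only constructs the request (actually sending it requires network access):

```python
# Build a GET request against the public Ensembl REST service.
# The /lookup/symbol endpoint and JSON content-type parameter follow
# the Ensembl REST documentation.
SERVER = "https://rest.ensembl.org"

def lookup_symbol_url(species, symbol):
    """URL for the /lookup/symbol endpoint, requesting JSON output."""
    return f"{SERVER}/lookup/symbol/{species}/{symbol}?content-type=application/json"

url = lookup_symbol_url("homo_sapiens", "BRCA2")

# To actually fetch (requires network):
# import json, urllib.request
# info = json.load(urllib.request.urlopen(url))
# info["id"]  # the Ensembl stable gene ID
```

    Because the service is plain HTTP with standard formats, the same call works unchanged from any language with an HTTP client, which is the point of the REST design.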

  11. Multiresolution image registration in digital x-ray angiography with intensity variation modeling.

    PubMed

    Nejati, Mansour; Pourghassem, Hossein

    2014-02-01

    Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motion, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for digital X-ray angiography images, particularly of the coronary arteries, is proposed. The algorithm uses a multiresolution search strategy in which a global transformation is calculated iteratively from local searches in coarse and fine sub-image blocks. The local searches are carried out in a differential multiscale framework that captures both large- and small-scale transformations. The local registration transformation also explicitly accounts for local variations in image intensity, which are incorporated into the model as changes in local contrast and brightness. These local transformations are then smoothly interpolated with a thin-plate spline function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of the algorithm in motion artifact reduction.

  12. Ensembl BioMarts: a hub for data retrieval across taxonomic space.

    PubMed

    Kinsella, Rhoda J; Kähäri, Andreas; Haider, Syed; Zamora, Jorge; Proctor, Glenn; Spudich, Giulietta; Almeida-King, Jeff; Staines, Daniel; Derwent, Paul; Kerhornou, Arnaud; Kersey, Paul; Flicek, Paul

    2011-01-01

    For a number of years the BioMart data warehousing system has proven to be a valuable resource for scientists seeking a fast and versatile means of accessing the growing volume of genomic data provided by the Ensembl project. The launch of the Ensembl Genomes project in 2009 complemented the Ensembl project by utilizing the same visualization, interactive and programming tools to provide users with a means for accessing genome data from a further five domains: protists, bacteria, metazoa, plants and fungi. The Ensembl and Ensembl Genomes BioMarts provide a point of access to the high-quality gene annotation, variation data, functional and regulatory annotation and evolutionary relationships from genomes spanning the taxonomic space. This article aims to give a comprehensive overview of the Ensembl and Ensembl Genomes BioMarts as well as some useful examples and a description of current data content and future objectives. Database URLs: http://www.ensembl.org/biomart/martview/; http://metazoa.ensembl.org/biomart/martview/; http://plants.ensembl.org/biomart/martview/; http://protists.ensembl.org/biomart/martview/; http://fungi.ensembl.org/biomart/martview/; http://bacteria.ensembl.org/biomart/martview/.

  13. DNA transformation via local heat shock

    NASA Astrophysics Data System (ADS)

    Li, Sha; Meadow Anderson, L.; Yang, Jui-Ming; Lin, Liwei; Yang, Haw

    2007-07-01

    This work describes transformation of foreign DNA into bacterial host cells by local heat shock using a microfluidic system with on-chip, built-in platinum heaters. Plasmid DNA encoding ampicillin resistance and a fluorescent protein can be effectively transformed into chemically competent DH5α E. coli using this device. Results further demonstrate that only one-thousandth of the conventional reaction volume is required to obtain transformation efficiencies as good as or better than conventional practice. As such, this work complements other lab-on-a-chip technologies for potential gene cloning/therapy and protein expression applications.

  14. Lessons Learned from Assimilating Altimeter Data into a Coupled General Circulation Model with the GMAO Augmented Ensemble Kalman Filter

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian; Vernieres, Guillaume; Rienecker, Michele; Jacob, Jossy; Kovach, Robin

    2011-01-01

    Satellite altimetry measurements have provided global, evenly distributed observations of the ocean surface since 1993. However, the difficulties introduced by the presence of model biases and the requirement that data assimilation systems extrapolate the sea surface height (SSH) information to the subsurface in order to estimate the temperature, salinity and currents make it difficult to optimally exploit these measurements. This talk investigates the potential of the altimetry data assimilation once the biases are accounted for with an ad hoc bias estimation scheme. Either steady-state or state-dependent multivariate background-error covariances from an ensemble of model integrations are used to address the problem of extrapolating the information to the sub-surface. The GMAO ocean data assimilation system applied to an ensemble of coupled model instances using the GEOS-5 AGCM coupled to MOM4 is used in the investigation. To model the background error covariances, the system relies on a hybrid ensemble approach in which a small number of dynamically evolved model trajectories is augmented on the one hand with past instances of the state vector along each trajectory and, on the other, with a steady state ensemble of error estimates from a time series of short-term model forecasts. A state-dependent adaptive error-covariance localization and inflation algorithm controls how the SSH information is extrapolated to the sub-surface. A two-step predictor corrector approach is used to assimilate future information. Independent (not-assimilated) temperature and salinity observations from Argo floats are used to validate the assimilation. A two-step projection method in which the system first calculates a SSH increment and then projects this increment vertically onto the temperature, salt and current fields is found to be most effective in reconstructing the sub-surface information. 
The performance of the system in reconstructing the sub-surface fields is particularly impressive for temperature, but less satisfactory for salinity.
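
    The covariance localization step described above is commonly implemented as a Schur (elementwise) product between the ensemble sample covariance and a distance-based taper, which suppresses the spurious long-range correlations a small ensemble produces. A minimal synthetic sketch (a Gaussian taper stands in for the Gaspari-Cohn function usually used; none of the GMAO system's specifics are reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 40, 25                              # state size, ensemble size
ens = rng.standard_normal((n, m))          # ensemble of state vectors (columns)

# Sample covariance from ensemble anomalies
anom = ens - ens.mean(axis=1, keepdims=True)
B = anom @ anom.T / (m - 1)

# Distance-based taper (Gaussian here; Gaspari-Cohn is the usual choice)
L = 5.0
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
taper = np.exp(-0.5 * (dist / L) ** 2)
B_loc = taper * B                          # Schur-product localization

# Kalman gain for a single observation of state component j
j, r = 20, 0.5                             # obs location, obs error variance
H = np.zeros((1, n)); H[0, j] = 1.0
K = B_loc @ H.T @ np.linalg.inv(H @ B_loc @ H.T + r)
```

    The gain column K decays with distance from the observed component, which is exactly the effect localization is meant to enforce when the raw sample covariance is rank-deficient.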

  15. Testing a multi-malaria-model ensemble against 30 years of data in the Kenyan highlands

    PubMed Central

    2014-01-01

    Background Multi-model ensembles could overcome challenges resulting from uncertainties in models’ initial conditions, parameterization and structural imperfections. They could also quantify in a probabilistic way uncertainties in future climatic conditions and their impacts. Methods A four-malaria-model ensemble was implemented to assess the impact of long-term changes in climatic conditions on Plasmodium falciparum malaria morbidity observed in Kericho, in the highlands of Western Kenya, over the period 1979–2009. Input data included quality controlled temperature and rainfall records gathered at a nearby weather station over the historical periods 1979–2009 and 1980–2009, respectively. Simulations included models’ sensitivities to changes in sets of parameters and analysis of non-linear changes in the mean duration of host’s infectivity to vectors due to increased resistance to anti-malarial drugs. Results The ensemble explained from 32 to 38% of the variance of the observed P. falciparum malaria incidence. Obtained R2-values were above the results achieved with individual model simulation outputs. Up to 18.6% of the variance of malaria incidence could be attributed to the +0.19 to +0.25°C per decade significant long-term linear trend in near-surface air temperatures. On top of this 18.6%, at least 6% of the variance of malaria incidence could be related to the increased resistance to anti-malarial drugs. Ensemble simulations also suggest that climatic conditions have likely been less favourable to malaria transmission in Kericho in recent years. Conclusions Long-term changes in climatic conditions and non-linear changes in the mean duration of host’s infectivity are synergistically driving the increasing incidence of P. falciparum malaria in the Kenyan highlands. User-friendly, online-downloadable, open source mathematical tools, such as the one presented here, could improve decision-making processes of local and regional health authorities. PMID:24885824

  16. Multi-model Ensemble of Ocean Data Assimilation Products in The Northwestern Pacific and Their Quality Assessment

    NASA Astrophysics Data System (ADS)

    Isoguchi, O.; Matsui, K.; Kamachi, M.; Usui, N.; Miyazawa, Y.; Ishikawa, Y.; Hirose, N.

    2017-12-01

    Several operational ocean assimilation models are currently available for the Northwestern Pacific and its surrounding marginal seas. One of their main targets is predicting the Kuroshio/Kuroshio Extension, which affects not only social activities, such as fishery and ship routing, but also local weather. There is a demand to assess their quality comprehensively and make the best use of the available products. In the present study, several ocean data assimilation products and their multi-model ensemble were assessed by comparison with satellite-derived sea surface temperature (SST), sea surface height (SSH), and in-situ hydrographic sections. The Kuroshio axes were also computed from the surface currents of these products and compared with the Kuroshio Axis data produced from satellite SST, SSH, and in-situ observations by the Marine Information Research Center (MIRC). The multi-model ensemble generally showed the best accuracy against the satellite-derived SST and SSH, but not against the hydrographic sections. This suggests that the multi-model ensemble works efficiently for horizontally 2D parameters, for which each assimilation product tends to have random errors, but not for vertical 2D sections, for which the products tend to have bias errors with respect to in-situ data. In the assessment against the Kuroshio Axis data, some products showed more energetic behavior than the reference, resulting in large path errors, defined as the ratio between the area enclosed by the reference and model-derived axes and the path length. It remains undetermined which behavior is real, however, because in-situ observations are still too sparse to resolve energetic Kuroshio behavior, even though the Kuroshio is one of the strongest currents.

  17. Incorporating abundance information and guiding variable selection for climate-based ensemble forecasting of species' distributional shifts.

    PubMed

    Tanner, Evan P; Papeş, Monica; Elmore, R Dwayne; Fuhlendorf, Samuel D; Davis, Craig A

    2017-01-01

    Ecological niche models (ENMs) have increasingly been used to estimate the potential effects of climate change on species' distributions worldwide. Recently, predictions of species abundance have also been obtained with such models, though knowledge about the climatic variables affecting species abundance is often lacking. To address this, we used a well-studied guild (temperate North American quail) and the Maxent modeling algorithm to compare model performance of three variable selection approaches: correlation/variable contribution (CVC), biological (i.e., variables known to affect species abundance), and random. We then applied the best approach to forecast potential distributions, under future climatic conditions, and analyze future potential distributions in light of available abundance data and presence-only occurrence data. To estimate species' distributional shifts we generated ensemble forecasts using four global circulation models, four representative concentration pathways, and two time periods (2050 and 2070). Furthermore, we present distributional shifts where 75%, 90%, and 100% of our ensemble models agreed. The CVC variable selection approach outperformed our biological approach for four of the six species. Model projections indicated species-specific effects of climate change on future distributions of temperate North American quail. The Gambel's quail (Callipepla gambelii) was the only species predicted to gain area in climatic suitability across all three scenarios of ensemble model agreement. Conversely, the scaled quail (Callipepla squamata) was the only species predicted to lose area in climatic suitability across all three scenarios of ensemble model agreement. Our models projected future loss of areas for the northern bobwhite (Colinus virginianus) and scaled quail in portions of their distributions which are currently areas of high abundance. 
Climatic variables that influence local abundance may not always scale up to influence species' distributions. Special attention should be given to selecting variables for ENMs, and tests of model performance should be used to validate the choice of variables.
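
    The 75%/90%/100% ensemble-agreement maps described above reduce to thresholding a per-cell agreement fraction over a stack of binary projection maps. A sketch with synthetic data (the actual Maxent ensemble outputs are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(7)
# Stack of binary "suitability gained" maps: (n_members, rows, cols).
# Here each member is synthetic; in practice each comes from one
# GCM x RCP x time-period Maxent projection.
members = rng.random((32, 10, 10)) < 0.8

agreement = members.mean(axis=0)           # fraction of members agreeing per cell

# Cells where at least 75%, 90%, or 100% of the ensemble agrees
masks = {pct: agreement >= pct / 100.0 for pct in (75, 90, 100)}
```

    By construction the masks are nested: every cell of 100% agreement also appears in the 90% and 75% maps, which is why the three scenarios can be presented as progressively more conservative versions of the same forecast.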

  18. Characterizing sources of uncertainty from global climate models and downscaling techniques

    USGS Publications Warehouse

    Wootten, Adrienne; Terando, Adam; Reich, Brian J.; Boyles, Ryan; Semazzi, Fred

    2017-01-01

    In recent years climate model experiments have been increasingly oriented towards providing information that can support local and regional adaptation to the expected impacts of anthropogenic climate change. This shift has magnified the importance of downscaling as a means to translate coarse-scale global climate model (GCM) output to a finer scale that more closely matches the scale of interest. Applying this technique, however, introduces a new source of uncertainty into any resulting climate model ensemble. Here we present a method, based on a previously established variance decomposition method, to partition and quantify the uncertainty in climate model ensembles that is attributable to downscaling. We apply the method to the Southeast U.S. using five downscaled datasets that represent both statistical and dynamical downscaling techniques. The combined ensemble is highly fragmented, in that only a small portion of the complete set of downscaled GCMs and emission scenarios are typically available. The results indicate that the uncertainty attributable to downscaling approaches ~20% for large areas of the Southeast U.S. for precipitation and ~30% for extreme heat days (> 35°C) in the Appalachian Mountains. However, attributable quantities are significantly lower for time periods when the full ensemble is considered but only a sub-sample of all models are available, suggesting that overconfidence could be a serious problem in studies that employ a single set of downscaled GCMs. We conclude with recommendations to advance the design of climate model experiments so that the uncertainty that accrues when downscaling is employed is more fully and systematically considered.
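
    The downscaling-attributable share of ensemble uncertainty can be illustrated with a classic balanced two-factor decomposition: for a GCM × downscaling-method matrix of projected changes, the total variance splits exactly into the variance of the row (GCM) means, the variance of the column (downscaling) means, and an interaction residual. A sketch with made-up numbers (this is an ANOVA-style illustration, not the paper's full variance decomposition method):

```python
import numpy as np

# Synthetic projected changes: rows = GCMs, columns = downscaling methods
proj = np.array([
    [1.8, 2.1, 1.6],
    [2.4, 2.9, 2.2],
    [1.2, 1.5, 1.0],
    [2.0, 2.3, 1.9],
])

var_total = proj.var()                     # total variance over all cells
var_gcm = proj.mean(axis=1).var()          # variance of the GCM means
var_down = proj.mean(axis=0).var()         # variance of the downscaling means
# For a balanced design these two terms plus the interaction residual
# sum exactly to var_total, so each ratio is an attributable fraction.
frac_down = var_down / var_total
```

    Fragmented ensembles break the balanced-design assumption, which is why the paper's results differ between the full ensemble and the sub-sampled cases.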

  19. On extending Kohn-Sham density functionals to systems with fractional number of electrons.

    PubMed

    Li, Chen; Lu, Jianfeng; Yang, Weitao

    2017-06-07

    We analyze four ways of formulating the Kohn-Sham (KS) density functionals with a fractional number of electrons, through extending the constrained search space from the Kohn-Sham and the generalized Kohn-Sham (GKS) non-interacting v-representable density domain for integer systems to four different sets of densities for fractional systems. In particular, these density sets are (I) ensemble interacting N-representable densities, (II) ensemble non-interacting N-representable densities, (III) non-interacting densities by the Janak construction, and (IV) non-interacting densities whose composing orbitals satisfy the Aufbau occupation principle. By proving the equivalence of the underlying first order reduced density matrices associated with these densities, we show that sets (I), (II), and (III) are equivalent, and all reduce to the Janak construction. Moreover, for functionals with the ensemble v-representable assumption at the minimizer, (III) reduces to (IV) and thus justifies the previous use of the Aufbau protocol within the (G)KS framework in the study of the ground state of fractional electron systems, as defined in the grand canonical ensemble at zero temperature. By further analyzing the Aufbau solution for different density functional approximations (DFAs) in the (G)KS scheme, we rigorously prove that there can be one and only one fractional occupation for the Hartree Fock functional, while there can be multiple fractional occupations for general DFAs in the presence of degeneracy. This has been confirmed by numerical calculations using the local density approximation as a representative of general DFAs. This work thus clarifies important issues on density functional theory calculations for fractional electron systems.
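
    The Janak/ensemble construction that sets (I)-(III) reduce to can be stated compactly; for a fractional electron number N + ω with 0 ≤ ω ≤ 1, the ground-state ensemble density and exact energy are (standard results, restated here for orientation):

```latex
n_{N+\omega}(\mathbf{r}) = (1-\omega)\, n_N(\mathbf{r}) + \omega\, n_{N+1}(\mathbf{r}),
\qquad
E(N+\omega) = (1-\omega)\, E(N) + \omega\, E(N+1).
```

    The piecewise linearity of E between adjacent integers is the benchmark against which the fractional-occupation behavior of approximate functionals is judged.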

  20. Performance of multi-physics ensembles in convective precipitation events over northeastern Spain

    NASA Astrophysics Data System (ADS)

    García-Ortega, E.; Lorenzana, J.; Merino, A.; Fernández-González, S.; López, L.; Sánchez, J. L.

    2017-07-01

    Convective precipitation with hail greatly affects southwestern Europe, causing major economic losses. The local character of this meteorological phenomenon is a serious obstacle to forecasting, so the development of reliable short-term forecasts constitutes an essential challenge for minimizing and managing risks. However, deterministic outcomes are affected by various sources of uncertainty, such as physics parameterizations. This study examines the performance of different combinations of physics schemes of the Weather Research and Forecasting model in describing the spatial distribution of precipitation in convective environments with hail falls. Two 30-member multi-physics ensembles, with two and three domains at maximum resolutions of 9 and 3 km, respectively, were designed using various combinations of cumulus, microphysics and radiation schemes. The experiment was evaluated on 10 convective precipitation days with hail over 2005-2010 in northeastern Spain. Different indexes were used to evaluate the ability of each ensemble member to capture the precipitation patterns, which were compared with observations from a rain-gauge network. A standardized metric was constructed to identify optimal performers. Results show interesting differences between the two ensembles. In the two-domain simulations, the choice of cumulus parameterization was crucial, with the Betts-Miller-Janjic scheme the best. In contrast, the Kain-Fritsch cumulus scheme gave the poorest results, suggesting that it should not be used in the study area. In the three-domain simulations, however, the cumulus schemes used in the coarser domains were not critical, and the best results depended mainly on the microphysics scheme, with the Morrison, New Thompson and Goddard microphysics performing best.
