Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case
NASA Astrophysics Data System (ADS)
Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann
2017-04-01
Short-term ocean analyses for Sea Surface Temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function (EOF) filtering techniques is capable of preventing overfitting problems, although the best performance is achieved when correlation is added to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST. The lowest RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the highest-quality ensemble members), and the least-squares algorithm filtered a posteriori.
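A minimal sketch may help make the MMSE idea above concrete: a multi-linear regression of observed SST on ensemble members over a short training window, with an EOF (PCA) truncation acting as the overfitting filter. The array shapes, variable names and synthetic data below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: MMSE-style multi-linear regression with EOF filtering.
import numpy as np

def mmse_analysis(members, obs, n_eof=3):
    """members: (n_train, n_members) forecasts at one grid point over the
    training period; obs: (n_train,) observed SST. Returns regression
    weights and bias, with the regression done in a truncated EOF space."""
    mem_mean = members.mean(axis=0)
    anom = members - mem_mean                      # member anomalies
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    pcs = u[:, :n_eof] * s[:n_eof]                 # leading principal components
    coef, *_ = np.linalg.lstsq(pcs, obs - obs.mean(), rcond=None)
    weights = vt[:n_eof].T @ coef                  # back to member space
    bias = obs.mean() - mem_mean @ weights
    return weights, bias

# usage with synthetic data: a 15-day training period and 9 ensemble members
rng = np.random.default_rng(0)
truth = 20 + rng.standard_normal(15)                       # "observed" SST
members = truth[:, None] + 0.5 * rng.standard_normal((15, 9)) + rng.normal(0, 1, 9)
w, b = mmse_analysis(members, truth)
print("MMSE estimate for the last training day:", members[-1] @ w + b)
```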
Regional Climate Models Downscaling in the Alpine Area with Multimodel SuperEnsemble
NASA Astrophysics Data System (ADS)
Cane, D.; Barbarino, S.; Renier, L.; Ronchi, C.
2012-04-01
Climatic scenarios show a strong warming signal in the Alpine area already by the mid-21st century. The climate simulations, however, even when obtained with Regional Climate Models (RCMs), are affected by strong errors when compared with observations in the control period, due to their difficulties in representing the complex orography of the Alps and limitations in their physical parametrizations. In this work we use a selection of RCM runs from the ENSEMBLES project, carefully chosen in order to maximise the variety of leading Global Climate Models and of the RCMs themselves, calculated under the SRES A1B scenario. The reference observations for the Greater Alpine Area are extracted from the European dataset E-OBS produced by the ENSEMBLES project, available at a resolution of 25 km. For the study area of Piemonte, daily temperature and precipitation observations (1957-present) were carefully gridded on a 14-km grid over the Piemonte Region with an Optimal Interpolation technique. We applied the Multimodel SuperEnsemble technique to temperature fields, reducing the high biases of the RCM temperature fields compared to observations in the control period. We also propose the first application to RCMs of a new probabilistic Multimodel SuperEnsemble Dressing technique for estimating precipitation fields, already applied successfully to weather forecast models, with a careful description of precipitation Probability Density Functions conditioned on the model outputs. This technique reduces the strong precipitation overestimation by RCMs over the Alpine chain and reproduces the monthly behaviour of observed precipitation in the control period far better than the direct model outputs.
Regional climate models downscaling in the Alpine area with Multimodel SuperEnsemble
NASA Astrophysics Data System (ADS)
Cane, D.; Barbarino, S.; Renier, L. A.; Ronchi, C.
2012-08-01
Climatic scenarios show a strong warming signal in the Alpine area already by the mid-21st century. The climate simulations, however, even when obtained with Regional Climate Models (RCMs), are affected by strong errors when compared with observations, due to their difficulties in representing the complex orography of the Alps and limitations in their physical parametrizations. The aim of this work is therefore to reduce these model biases using a specific statistical post-processing technique, in order to obtain more reliable projections of climate change scenarios in the Alpine area. For our purposes we use a selection of RCM runs from the ENSEMBLES project, carefully chosen in order to maximise the variety of leading Global Climate Models and of the RCMs themselves, calculated under the SRES A1B scenario. The reference observations for the Greater Alpine Area are extracted from the European dataset E-OBS produced by the ENSEMBLES project, available at a resolution of 25 km. For the study area of Piedmont, daily temperature and precipitation observations (1957-present) were carefully gridded on a 14-km grid over the Piedmont Region with an Optimal Interpolation technique. We then applied the Multimodel SuperEnsemble technique to temperature fields, reducing the high biases of the RCM temperature fields compared to observations in the control period. We also propose the first application to RCMs of a new probabilistic Multimodel SuperEnsemble Dressing technique for estimating precipitation fields, already applied successfully to weather forecast models, with a careful description of precipitation Probability Density Functions conditioned on the model outputs. This technique reduces the strong precipitation overestimation by RCMs over the Alpine chain and reproduces well the monthly behaviour of precipitation in the control period.
New technique for ensemble dressing combining Multimodel SuperEnsemble and precipitation PDF
NASA Astrophysics Data System (ADS)
Cane, D.; Milelli, M.
2009-09-01
The Multimodel SuperEnsemble technique (Krishnamurti et al., Science 285, 1548-1550, 1999) is a post-processing method for the estimation of weather forecast parameters that reduces direct model output errors. It differs from other ensemble analysis techniques in its use of an adequate weighting of the input forecast models to obtain a combined estimate of meteorological parameters. Weights are calculated by least-squares minimization of the difference between the model and the observed field during a so-called training period. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed and mean sea level pressure (Cane and Milelli, Meteorologische Zeitschrift, 15, 2, 2006), the Multimodel SuperEnsemble also gives good results when applied to precipitation, a parameter that is quite difficult to handle with standard post-processing methods. Here we present our methodology for Multimodel precipitation forecasts, applied to a wide spectrum of results over the very dense non-GTS weather station network of Piemonte. We focus in particular on an accurate statistical method for bias correction and on ensemble dressing in agreement with the observed forecast-conditioned precipitation PDF. Acknowledgement: this work is supported by the Italian Civil Defence Department.
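The weighting step described above can be illustrated with a short, hedged sketch: model anomalies during the training period are regressed onto observed anomalies by least squares, and the resulting weights are applied to a new forecast. The synthetic data, the 60-day training length and the variable names are assumptions made here for illustration only.

```python
# Sketch of a Krishnamurti-style Multimodel SuperEnsemble weighting step.
import numpy as np

def train_superensemble(forecasts, obs):
    """forecasts: (n_train, n_models); obs: (n_train,). Weights from least
    squares on anomalies during the training period."""
    f_mean = forecasts.mean(axis=0)
    o_mean = obs.mean()
    a, *_ = np.linalg.lstsq(forecasts - f_mean, obs - o_mean, rcond=None)
    return a, f_mean, o_mean

def apply_superensemble(new_forecasts, a, f_mean, o_mean):
    # combined estimate = observed mean + weighted sum of model anomalies
    return o_mean + (new_forecasts - f_mean) @ a

rng = np.random.default_rng(1)
obs = 10 + 2 * rng.standard_normal(60)                        # training observations
models = obs[:, None] + rng.normal([1.0, -0.5, 2.0], [1, 2, 1], (60, 3))
a, f_mean, o_mean = train_superensemble(models, obs)
print("superensemble estimate:", apply_superensemble(models[-1], a, f_mean, o_mean))
```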
Interfacing broadband photonic qubits to on-chip cavity-protected rare-earth ensembles
Zhong, Tian; Kindem, Jonathan M.; Rochman, Jake; Faraon, Andrei
2017-01-01
Ensembles of solid-state optical emitters enable broadband quantum storage and transduction of photonic qubits, with applications in high-rate quantum networks for secure communications and interconnecting future quantum computers. To transfer quantum states using ensembles, rephasing techniques are used to mitigate fast decoherence resulting from inhomogeneous broadening, but these techniques generally limit the bandwidth, efficiency and active times of the quantum interface. Here, we use a dense ensemble of neodymium rare-earth ions strongly coupled to a nanophotonic resonator to demonstrate a significant cavity protection effect at the single-photon level—a technique to suppress ensemble decoherence due to inhomogeneous broadening. The protected Rabi oscillations between the cavity field and the atomic super-radiant state enable ultra-fast transfer of photonic frequency qubits to the ions (∼50 GHz bandwidth) followed by retrieval with 98.7% fidelity. With the prospect of coupling to other long-lived rare-earth spin states, this technique opens the possibilities for broadband, always-ready quantum memories and fast optical-to-microwave transducers. PMID:28090078
Interfacing broadband photonic qubits to on-chip cavity-protected rare-earth ensembles
NASA Astrophysics Data System (ADS)
Zhong, Tian; Kindem, Jonathan M.; Rochman, Jake; Faraon, Andrei
2017-01-01
Ensembles of solid-state optical emitters enable broadband quantum storage and transduction of photonic qubits, with applications in high-rate quantum networks for secure communications and interconnecting future quantum computers. To transfer quantum states using ensembles, rephasing techniques are used to mitigate fast decoherence resulting from inhomogeneous broadening, but these techniques generally limit the bandwidth, efficiency and active times of the quantum interface. Here, we use a dense ensemble of neodymium rare-earth ions strongly coupled to a nanophotonic resonator to demonstrate a significant cavity protection effect at the single-photon level--a technique to suppress ensemble decoherence due to inhomogeneous broadening. The protected Rabi oscillations between the cavity field and the atomic super-radiant state enable ultra-fast transfer of photonic frequency qubits to the ions (~50 GHz bandwidth) followed by retrieval with 98.7% fidelity. With the prospect of coupling to other long-lived rare-earth spin states, this technique opens the possibilities for broadband, always-ready quantum memories and fast optical-to-microwave transducers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
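As a rough illustration of two of the combination schemes named above, the sketch below contrasts a Simple Multi-model Average with a bias-corrected, skill-weighted average in the spirit of WAM/M3SE; the inverse-RMSE weighting and the synthetic streamflow data are assumptions made here, not the DMIP formulation.

```python
# Sketch: simple average versus a bias-corrected, skill-weighted combination.
import numpy as np

def simple_average(forecasts):
    return forecasts.mean(axis=1)

def bias_corrected_weighted_average(forecasts, obs_train, forecasts_train):
    bias = forecasts_train.mean(axis=0) - obs_train.mean()    # per-model bias
    rmse = np.sqrt(((forecasts_train - obs_train[:, None]) ** 2).mean(axis=0))
    w = (1.0 / rmse) / (1.0 / rmse).sum()                      # inverse-RMSE weights
    return (forecasts - bias) @ w

rng = np.random.default_rng(2)
obs_train = rng.gamma(2.0, 5.0, 100)                           # streamflow-like data
models_train = obs_train[:, None] + rng.normal([3, -2, 0], [2, 4, 6], (100, 3))
models_new = obs_train[-5:, None] + rng.normal([3, -2, 0], [2, 4, 6], (5, 3))
print("SMA:", simple_average(models_new))
print("weighted, bias-corrected:",
      bias_corrected_weighted_average(models_new, obs_train, models_train))
```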
NASA Astrophysics Data System (ADS)
Watanabe, S.; Kim, H.; Utsumi, N.
2017-12-01
This study aims to develop a new approach that projects hydrology under climate change using super ensemble experiments. The use of multiple ensembles is essential for the estimation of extremes, which is a major issue in the impact assessment of climate change; hence, super ensemble experiments have recently been conducted by several research programs. While multiple ensembles are necessary, running a hydrological simulation for each output of the ensemble simulations entails considerable computational cost. To use the super ensemble experiments effectively, we adopt a strategy of using the runoff projected by climate models directly. The general approach to hydrological projection is to conduct hydrological model simulations, including land-surface and river routing processes, using atmospheric boundary conditions projected by climate models as inputs. This study, on the other hand, runs only a river routing model, using runoff projected by climate models. In general, climate model output is systematically biased, so a preprocessing step that corrects this bias is necessary for impact assessments. Various bias correction methods have been proposed but, to the best of our knowledge, none for variables other than surface meteorology. Here, we propose a new method for utilizing the projected future runoff directly. The developed method estimates and corrects the bias based on a pseudo-observation, which is the result of a retrospective offline simulation. We show an application of this approach to the super ensemble experiments conducted under the program Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI). More than 400 ensemble experiments from multiple climate models are available. The results of the validation using historical simulations by HAPPI indicate that the output of this approach can effectively reproduce retrospective runoff variability. Likewise, the bias of runoff from super ensemble climate projections is corrected, and the impact of climate change on hydrologic extremes is assessed in a cost-efficient way.
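A hedged sketch of the bias-correction idea: climate-model runoff is corrected against a "pseudo-observation" from a retrospective offline simulation. Empirical quantile mapping is used here purely for illustration; the abstract does not specify this exact estimator, and all data below are synthetic.

```python
# Sketch: runoff bias correction against a pseudo-observation via quantile mapping.
import numpy as np

def build_quantile_map(model_hist, pseudo_obs, n_q=99):
    q = np.linspace(1, 99, n_q)
    return np.percentile(model_hist, q), np.percentile(pseudo_obs, q)

def correct(runoff, model_q, obs_q):
    # map each model runoff value to the pseudo-observed value of equal quantile
    return np.interp(runoff, model_q, obs_q)

rng = np.random.default_rng(3)
pseudo_obs = rng.gamma(2.0, 3.0, 5000)        # offline-simulation runoff
model_hist = 1.4 * rng.gamma(2.0, 3.0, 5000)  # biased climate-model runoff, historical
model_future = 1.4 * rng.gamma(2.2, 3.2, 5000)
mq, oq = build_quantile_map(model_hist, pseudo_obs)
print("corrected future mean runoff:", correct(model_future, mq, oq).mean())
```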
AUC-Maximizing Ensembles through Metalearning.
LeDell, Erin; van der Laan, Mark J; Petersen, Maya
2016-05-01
Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
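A minimal sketch of an AUC-maximizing metalearner of the kind discussed: cross-validated base-learner predictions are combined with simplex weights chosen by a general-purpose nonlinear optimizer so as to maximize AUC rather than minimize squared error. The Nelder-Mead/softmax parameterization and the synthetic rare-outcome data are illustrative assumptions.

```python
# Sketch: combine base-learner predictions with weights that maximize AUC.
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

def fit_auc_metalearner(cv_preds, y):
    """cv_preds: (n_samples, n_learners) cross-validated predictions."""
    def neg_auc(z):
        w = np.exp(z) / np.exp(z).sum()           # softmax -> weights on the simplex
        return -roc_auc_score(y, cv_preds @ w)
    res = minimize(neg_auc, np.zeros(cv_preds.shape[1]), method="Nelder-Mead")
    return np.exp(res.x) / np.exp(res.x).sum()

rng = np.random.default_rng(4)
y = rng.binomial(1, 0.1, 2000)                     # rare outcome
signal = y + 0.8 * rng.standard_normal(2000)
cv_preds = np.column_stack([signal + rng.standard_normal(2000) * s
                            for s in (0.5, 1.0, 2.0)])
w = fit_auc_metalearner(cv_preds, y)
print("learner weights:", w, "ensemble AUC:", roc_auc_score(y, cv_preds @ w))
```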
AUC-Maximizing Ensembles through Metalearning
LeDell, Erin; van der Laan, Mark J.; Petersen, Maya
2016-01-01
Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree. PMID:27227721
NASA Astrophysics Data System (ADS)
Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.
2017-12-01
Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere, with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at sub-grid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using the 25-km resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in interactions. Regions of parameter space that did not satisfy observational constraints were eliminated, and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth-system processes was selected. This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.
Progressive freezing of interacting spins in isolated finite magnetic ensembles
NASA Astrophysics Data System (ADS)
Bhattacharya, Kakoli; Dupuis, Veronique; Le-Roy, Damien; Deb, Pritam
2017-02-01
Self-organization of magnetic nanoparticles into secondary nanostructures provides an innovative way for designing functional nanomaterials with novel properties, different from the constituent primary nanoparticles as well as their bulk counterparts. The collective magnetic properties of such complex close packing of magnetic nanoparticles make them more appealing than individual magnetic nanoparticles in many technological applications. This work reports the collective magnetic behaviour of magnetic ensembles comprising single-domain Fe3O4 nanoparticles. The present work reveals that the ensemble formation is based on the re-orientation and attachment of the nanoparticles in an iso-oriented fashion at the mesoscale regime. Comprehensive dc magnetic measurements show the prevalence of strong interparticle interactions in the ensembles. Due to the close-range organization of primary Fe3O4 nanoparticles in the ensemble, the spins of the individual nanoparticles interact through dipolar interactions, as realized from remnant magnetization measurements. A signature of super-spin-glass-like behaviour in the ensembles is observed in memory studies carried out under field-cooled conditions. Progressive freezing of spins in the ensembles is corroborated by the Vogel-Fulcher fit of the susceptibility data. Dynamic scaling of the relaxation reasserts slow spin dynamics, substantiating cluster-spin-glass-like behaviour in the ensembles.
NASA Astrophysics Data System (ADS)
Taniguchi, Kenji
2018-04-01
To investigate future variations in high-impact weather events, numerous samples are required. For detailed assessment in a specific region, a high spatial resolution is also required. A simple ensemble simulation technique is proposed in this paper. In the proposed technique, new ensemble members are generated from one basic state vector and two perturbation vectors, which were obtained by lagged average forecasting simulations. Sensitivity experiments with different numbers of ensemble members, different simulation lengths, and different perturbation magnitudes were performed. An experimental application to a global warming study was also implemented for a typhoon event. Ensemble-mean results and ensemble spreads of total precipitation and atmospheric conditions showed similar characteristics across the sensitivity experiments. The frequencies of the maximum total and hourly precipitation also showed similar distributions. These results indicate the robustness of the proposed technique. On the other hand, considerable ensemble spread was found in each ensemble experiment. In addition, the results of the application to a global warming study showed possible variations in the future. These results indicate that the proposed technique is useful for investigating various meteorological phenomena and the impacts of global warming. The results of the ensemble simulations also enable the stochastic evaluation of differences in high-impact weather events. In addition, the impacts of a spectral nudging technique were examined. The tracks of a typhoon were quite different between cases with and without spectral nudging; however, the ranges of the tracks among ensemble members were comparable. This indicates that spectral nudging does not necessarily suppress ensemble spread.
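The member-generation step described above can be sketched as follows: new initial states are formed from one basic state vector plus scaled combinations of two perturbation vectors obtained from lagged runs. The scaling grid and state dimension below are illustrative assumptions, not the paper's configuration.

```python
# Sketch: build ensemble members from a basic state and two lagged perturbations.
import numpy as np

def generate_members(basic_state, p1, p2, scales=(-1.0, -0.5, 0.5, 1.0)):
    members = []
    for a in scales:
        for b in scales:
            members.append(basic_state + a * p1 + b * p2)
    return np.array(members)

rng = np.random.default_rng(5)
basic_state = rng.standard_normal(1000)             # model state vector
lagged_run_1 = basic_state + 0.1 * rng.standard_normal(1000)
lagged_run_2 = basic_state + 0.1 * rng.standard_normal(1000)
p1, p2 = lagged_run_1 - basic_state, lagged_run_2 - basic_state
ens = generate_members(basic_state, p1, p2)
print(ens.shape[0], "members; mean spread:", ens.std(axis=0).mean())
```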
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Delle Monache, L.; Alessandrini, S.
2016-12-01
Accuracy of weather forecasts in the Northeast U.S. has become very important in recent years, given the serious and devastating effects of extreme weather events. Despite the use of evolved forecasting tools and techniques strengthened by increased super-computing resources, weather forecasting systems still have their limitations in predicting extreme events. In this study, we examine the combination of analog ensemble and Bayesian regression techniques to improve the prediction of storms that have impacted the NE U.S., mostly defined by the occurrence of high wind speeds (i.e. blizzards, winter storms, hurricanes and thunderstorms). The wind speed, wind direction and temperature predicted by two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) are combined using the mentioned techniques, exploring various ways that those variables influence the minimization of the prediction error (systematic and random). This study is focused on retrospective simulations of 146 storms that affected the NE U.S. in the period 2005-2016. In order to evaluate the techniques, a leave-one-out cross-validation procedure was implemented, with the remaining 145 storms serving as the training dataset. The analog ensemble method selects a set of past observations that correspond to the best analogs of the numerical weather prediction and uses them as a set of ensemble members, which can then be applied in a deterministic or probabilistic way. In the Bayesian regression framework, optimal variances are estimated for the training partition by minimizing the root mean square error and are applied to the out-of-sample storm. The preliminary results indicate a significant improvement in the statistical metrics of 10-m wind speed for the 146 storms using both techniques (20-30% bias and error reduction in all observation-model pairs). In this presentation, we discuss the various combinations of atmospheric predictors and techniques and illustrate how the long record of predicted storms is valuable in the improvement of wind speed prediction.
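A hedged sketch of the analog ensemble step: for a new forecast, the closest past forecasts in a normalized predictor space (wind speed, direction, temperature) are found, and their verifying observations become the ensemble members. The distance metric, the choice of k = 20 analogs and the synthetic data are assumptions for illustration only.

```python
# Sketch: analog ensemble from past forecast-observation pairs.
import numpy as np

def analog_ensemble(new_pred, past_preds, past_obs, k=20):
    scale = past_preds.std(axis=0)                 # normalize each predictor
    d = np.sqrt((((past_preds - new_pred) / scale) ** 2).sum(axis=1))
    idx = np.argsort(d)[:k]                        # k best analogs
    return past_obs[idx]                           # their observations = members

rng = np.random.default_rng(6)
past_preds = rng.normal([10, 180, 275], [5, 90, 8], (5000, 3))   # speed, dir, temp
past_obs = past_preds[:, 0] + rng.standard_normal(5000)          # observed wind speed
new_pred = np.array([22.0, 200.0, 270.0])                        # high-wind case
ens = analog_ensemble(new_pred, past_preds, past_obs)
print("deterministic (mean):", ens.mean(), " 90th percentile:", np.percentile(ens, 90))
```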
Lessons from Climate Modeling on the Design and Use of Ensembles for Crop Modeling
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Mearns, Linda O.; Ruane, Alexander C.; Roetter, Reimund P.; Asseng, Senthold
2016-01-01
Working with ensembles of crop models is a recent but important development in crop modeling which promises to lead to better uncertainty estimates for model projections and predictions, better predictions using the ensemble mean or median, and closer collaboration within the modeling community. There are numerous open questions about the best way to create and analyze such ensembles. Much can be learned from the field of climate modeling, given its much longer experience with ensembles. We draw on that experience to identify questions and make propositions that should help make ensemble modeling with crop models more rigorous and informative. The propositions include: defining criteria for acceptance of models in a crop multi-model ensemble (MME); exploring criteria for evaluating the degree of relatedness of models in an MME; studying the effect of the number of models in the ensemble; developing a statistical model of model sampling; creating a repository for MME results; studying possible differential weighting of models in an ensemble; creating single-model ensembles, based on sampling from the uncertainty distribution of parameter values or inputs, specifically oriented toward uncertainty estimation; creating super ensembles that sample more than one source of uncertainty; analysing super ensemble results to obtain information on total uncertainty and the separate contributions of different sources of uncertainty; and, finally, further investigating the use of the multi-model mean or median as a predictor.
NASA Astrophysics Data System (ADS)
Brochero, Darwin; Hajji, Islem; Pina, Jasson; Plana, Queralt; Sylvain, Jean-Daniel; Vergeynst, Jenna; Anctil, Francois
2015-04-01
Theories about generalization error with ensembles are mainly based on the diversity concept, which promotes resorting to many members of different properties to support mutually agreeable decisions. Kuncheva (2004) proposed the Multi Level Diversity Model (MLDM) to promote diversity in model ensembles, combining different data subsets, input subsets, models, parameters, and including a combiner level in order to optimize the final ensemble. This work tests the hypothesis about the minimisation of the generalization error with ensembles of Neural Network (NN) structures. We used the MLDM to evaluate two different scenarios: (i) ensembles from a same NN architecture, and (ii) a super-ensemble built by a combination of sub-ensembles of many NN architectures. The time series used correspond to the 12 basins of the MOdel Parameter Estimation eXperiment (MOPEX) project that were used by Duan et al. (2006) and Vos (2013) as benchmark. Six architectures are evaluated: FeedForward NN (FFNN) trained with the Levenberg Marquardt algorithm (Hagan et al., 1996), FFNN trained with SCE (Duan et al., 1993), Recurrent NN trained with a complex method (Weins et al., 2008), Dynamic NARX NN (Leontaritis and Billings, 1985), Echo State Network (ESN), and leak integrator neuron (L-ESN) (Lukosevicius and Jaeger, 2009). Each architecture performs separately an Input Variable Selection (IVS) according to a forward stepwise selection (Anctil et al., 2009) using mean square error as objective function. Post-processing by Predictor Stepwise Selection (PSS) of the super-ensemble has been done following the method proposed by Brochero et al. (2011). IVS results showed that the lagged stream flow, lagged precipitation, and Standardized Precipitation Index (SPI) (McKee et al., 1993) were the most relevant variables. They were respectively selected as one of the firsts three selected variables in 66, 45, and 28 of the 72 scenarios. A relationship between aridity index (Arora, 2002) and NN performance showed that wet basins are more easily modelled than dry basins. Nash-Sutcliffe (NS) Efficiency criterion was used to evaluate the performance of the models. Test results showed that in 9 of the 12 basins, the mean sub-ensembles performance was better than the one presented by Vos (2013). Furthermore, in 55 of 72 cases (6 NN structures x 12 basins) the mean sub-ensemble performance was better than the best individual performance, and in 10 basins the performance of the mean super-ensemble was better than the best individual super-ensemble member. As well, it was identified that members of ESN and L-ESN sub-ensembles have very similar and good performance values. Regarding the mean super-ensemble performance, we obtained an average gain in performance of 17%, and found that PSS preserves sub-ensemble members from different NN structures, indicating the pertinence of diversity in the super-ensemble. Moreover, it was demonstrated that around 100 predictors from the different structures are enough to optimize the super-ensemble. Although sub-ensembles of FFNN-SCE showed unstable performances, FFNN-SCE members were picked-up several times in the final predictor selection. References Anctil, F., M. Filion, and J. Tournebize (2009). "A neural network experiment on the simulation of daily nitrate-nitrogen and suspended sediment fluxes from a small agricultural catchment". In: Ecol. Model. 220.6, pp. 879-887. Arora, V. K. (2002). "The use of the aridity index to assess climate change effect on annual runoff". In: J. Hydrol. 
265.1-4, pp. 164-177. Brochero, D., F. Anctil, and C. Gagné (2011). "Simplifying a hydrological ensemble prediction system with a backward greedy selection of members Part 1: Optimization criteria". In: Hydrol. Earth Syst. Sci. 15.11, pp. 3307-3325. Duan, Q., J. Schaake, V. Andréassian, S. Franks, G. Goteti, H. Gupta, Y. Gusev, F. Habets, A. Hall, L. Hay, T. Hogue, M. Huang, G. Leavesley, X. Liang, O. Nasonova, J. Noilhan, L. Oudin, S. Sorooshian, T. Wagener, and E. Wood (2006). "Model Parameter Estimation Experiment (MOPEX): An overview of science strategy and major results from the second and third workshops". In: J. Hydrol. 320.12, pp. 3-17. Duan, Q., V. Gupta, and S. Sorooshian (1993). "Shuffled complex evolution approach for effective and efficient global minimization". In: J. Optimiz. Theory App. 76.3, pp. 501-521. Hagan, M. T., H. B. Demuth, and M. Beale (1996). Neural network design. 1st ed. PWS Publishing Co., p. 730. Kuncheva, L. I. (2004). Combining Pattern Classifiers: Methods and Algorithms. Wiley-Interscience, p. 350. Leontaritis, I. and S. Billings (1985). "Input-output parametric models for non-linear systems Part I: deterministic non-linear systems". In: International Journal of Control 41.2, pp. 303-328. Lukosevicius, M. and H. Jaeger (2009). "Reservoir computing approaches to recurrent neural network training". In: Computer Science Review 3.3, pp. 127-149. McKee, T., N. Doesken, and J. Kleist (1993). The Relationship of Drought Frequency and Duration to Time Scales. In: Eighth Conference on Applied Climatology. Vos, N. J. de (2013). "Echo state networks as an alternative to traditional artificial neural networks in rainfall-runoff modelling". In: Hydrol. Earth Syst. Sci. 17.1, pp. 253-267. Weins, T., R. Burton, G. Schoenau, and D. Bitner (2008). Recursive Generalized Neural Networks (RGNN) for the Modeling of a Load Sensing Pump. In: ASME Joint Conference on Fluid Power, Transmission and Control.
NASA Astrophysics Data System (ADS)
Won, Rachel
2018-05-01
In the quest for nanoscopy with super-resolution, consensus from the imaging community is that super-resolution is not always needed and that scientists should choose an imaging technique based on their specific application.
Synchronization Experiments With A Global Coupled Model of Intermediate Complexity
NASA Astrophysics Data System (ADS)
Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin
2013-04-01
In the super modeling approach, an ensemble of imperfect models is connected through nudging terms that nudge the solution of each model toward the solutions of all other models in the ensemble. The goal is to obtain, through a proper choice of connection strengths, a synchronized state that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water and momentum; the ocean is integrated using weighted-average surface fluxes. In particular we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble-mean solution. The results are interpreted using a low-order atmosphere-ocean toy model.
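A toy sketch of the super modeling idea, on a system far simpler than the coupled model used in the study: two imperfect Lorenz-63 models with oppositely biased parameters are nudged toward each other, and the connected ensemble-mean solution is compared with the "true" model. The connection strength, parameter perturbations and Euler integration are illustrative choices, not the study's configuration.

```python
# Toy super model: two imperfect Lorenz-63 models nudged toward each other.
import numpy as np

def lorenz(state, sigma, rho, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run(K=0.0, dt=0.01, n=5000):
    truth = np.array([1.0, 1.0, 20.0])
    m1, m2 = truth.copy(), truth.copy()
    err = 0.0
    for _ in range(n):
        truth = truth + dt * lorenz(truth, 10.0, 28.0)        # "true" system
        # imperfect models, each nudged toward the other (the connection terms)
        m1 = m1 + dt * (lorenz(m1, 13.0, 28.0) + K * (m2 - m1))
        m2 = m2 + dt * (lorenz(m2, 7.0, 28.0) + K * (m1 - m2))
        err += np.linalg.norm(0.5 * (m1 + m2) - truth)        # ensemble-mean error
    return err / n

print("mean error, unconnected:", run(K=0.0), " connected:", run(K=5.0))
```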
Mazurowski, Maciej A; Zurada, Jacek M; Tourassi, Georgia D
2009-07-01
Ensemble classifiers have been shown to be efficient in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling of the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC = 0.905 +/- 0.024) in performance as compared to the original IT-CAD system (AUC = 0.865 +/- 0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters.
Introduction to the virtual special issue on super-resolution imaging techniques
NASA Astrophysics Data System (ADS)
Cao, Liangcai; Liu, Zhengjun
2017-12-01
Until quite recently, the resolution of optical imaging instruments, including telescopes, cameras and microscopes, was considered to be limited by the diffraction of light and by image sensors. In the past few years, many exciting super-resolution approaches have emerged that demonstrate intriguing ways to bypass the classical limit in optics and detectors. More and more research groups are engaged in the study of advanced super-resolution schemes, devices, algorithms, systems, and applications [1-6]. Super-resolution techniques involve new methods in science and engineering of optics [7,8], measurements [9,10], chemistry [11,12] and information [13,14]. Promising applications, particularly in biomedical research and semiconductor industry, have been successfully demonstrated.
Alves, Pedro; Liu, Shuang; Wang, Daifeng; Gerstein, Mark
2018-01-01
Machine learning is an integral part of computational biology, and has already shown its use in various applications, such as prognostic tests. In the last few years in the non-biological machine learning community, ensembling techniques have shown their power in data mining competitions such as the Netflix challenge; however, such methods have not found wide use in computational biology. In this work, we endeavor to show how ensembling techniques can be applied to practical problems, including problems in the field of bioinformatics, and how they often outperform other machine learning techniques in both predictive power and robustness. Furthermore, we develop a methodology of ensembling, Multi-Swarm Ensemble (MSWE) by using multiple particle swarm optimizations and demonstrate its ability to further enhance the performance of ensembles.
Mazurowski, Maciej A.; Zurada, Jacek M.; Tourassi, Georgia D.
2009-01-01
Ensemble classifiers have been shown to be efficient in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling of the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC=0.905±0.024) in performance as compared to the original IT-CAD system (AUC=0.865±0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters. PMID:19673196
Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A
2015-01-15
Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.
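A hedged sketch of the single-split ensemble learner (EL) variant described above, applied to propensity scores for stabilized weights: candidate algorithms are fit on a training split, combined with non-negative weights chosen on a validation split, and the blended probabilities feed the stabilized inverse probability weights. The candidate library, the non-negative least squares combiner and the synthetic data are illustrative assumptions, not the study's exact specification.

```python
# Sketch: single-split ensemble learner for stabilized inverse probability weights.
import numpy as np
from scipy.optimize import nnls
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
X = rng.standard_normal((3000, 5))                        # measured covariates
p_true = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] * X[:, 2])))
treat = rng.binomial(1, p_true)                           # treatment indicator

train, valid = slice(0, 2000), slice(2000, 3000)          # single partition
learners = [LogisticRegression(max_iter=1000), GradientBoostingClassifier()]
preds_valid = np.column_stack(
    [m.fit(X[train], treat[train]).predict_proba(X[valid])[:, 1] for m in learners])
w, _ = nnls(preds_valid, treat[valid].astype(float))      # combination weights
w = w / w.sum()

p_hat = np.column_stack([m.predict_proba(X)[:, 1] for m in learners]) @ w
stabilized = np.where(treat == 1, treat.mean() / p_hat,
                      (1 - treat.mean()) / (1 - p_hat))
print("combination weights:", w, " mean stabilized weight:", stabilized.mean())
```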
Gruber, Susan; Logan, Roger W.; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A.
2014-01-01
Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. PMID:25316152
SRRF: Universal live-cell super-resolution microscopy.
Culley, Siân; Tosheva, Kalina L; Matos Pereira, Pedro; Henriques, Ricardo
2018-08-01
Super-resolution microscopy techniques break the diffraction limit of conventional optical microscopy to achieve resolutions approaching tens of nanometres. The major advantage of such techniques is that they provide resolutions close to those obtainable with electron microscopy while maintaining the benefits of light microscopy such as a wide palette of high specificity molecular labels, straightforward sample preparation and live-cell compatibility. Despite this, the application of super-resolution microscopy to dynamic, living samples has thus far been limited and often requires specialised, complex hardware. Here we demonstrate how a novel analytical approach, Super-Resolution Radial Fluctuations (SRRF), is able to make live-cell super-resolution microscopy accessible to a wider range of researchers. We show its applicability to live samples expressing GFP using commercial confocal as well as laser- and LED-based widefield microscopes, with the latter achieving long-term timelapse imaging with minimal photobleaching. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Context dependent anti-aliasing image reconstruction
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.; Hunt, A.; Arlia, N.
1989-01-01
Image reconstruction has been mostly confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolation/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble average statistics using high-resolution training imagery from which the lower-resolution image array data is obtained (simulation). A quadratic least squares (LS) context-free image quality model is described from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori special character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected from the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made for using a quality model which is now finding application in data compression.
Super Ensemble-based Aviation Turbulence Guidance (SEATG) for Air Traffic Management (ATM)
NASA Astrophysics Data System (ADS)
Kim, Jung-Hoon; Chan, William; Sridhar, Banavar; Sharman, Robert
2014-05-01
Super Ensemble-based Aviation Turbulence Guidance (SEATG), an ensemble of ten turbulence metrics from time-lagged ensemble members of weather forecast data, is developed using the Weather Research and Forecasting (WRF) model and in-situ eddy dissipation rate (EDR) observations from equipped commercial aircraft over the contiguous United States. SEATG is a sequence of five procedures: weather modeling, calculating turbulence metrics, mapping to the EDR scale, evaluating metrics, and producing the final SEATG forecast. It uses a methodology similar to the operational Graphic Turbulence Guidance (GTG) with three major improvements. First, SEATG uses a higher-resolution (3-km) WRF model to capture cloud-resolving-scale phenomena. Second, SEATG computes turbulence metrics for multiple forecasts that are combined at the same valid time, resulting in a time-lagged ensemble of multiple turbulence metrics. Third, SEATG provides both deterministic and probabilistic turbulence forecasts to take into account weather uncertainties and user demands. It is found that the SEATG forecasts match well with observed radar reflectivity along a surface front as well as with convectively induced turbulence outside the clouds on 7-8 Sep 2012. Overall, the performance skill of deterministic SEATG against the observed EDR data during this period is superior to that of any single turbulence metric. Finally, probabilistic SEATG is used as an example application of turbulence forecasting for air-traffic management. In this study, a simple Wind-Optimal Route (WOR) passing through the areas of potential turbulence identified by probabilistic SEATG and a Lateral Turbulence Avoidance Route (LTAR) taking the SEATG into account are calculated at z = 35000 ft (z = 12 km) from Los Angeles to John F. Kennedy international airports. As a result, WOR takes a total of 239 minutes, with 16 minutes spent in SEATG areas of 40% moderate-turbulence potential, while LTAR takes a total travel time of 252 minutes, meaning roughly 5% additional fuel would be consumed to entirely avoid the moderate SEATG regions.
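One plausible reading of the SEATG combination step, sketched under stated assumptions: each turbulence metric is remapped onto the observed EDR scale by quantile matching, the remapped members are averaged for a deterministic product, and the fraction exceeding a moderate-turbulence threshold gives a probabilistic product. The metrics, climatologies and the 0.22 EDR threshold below are synthetic stand-ins, not the operational values.

```python
# Sketch: map turbulence metrics to the EDR scale and combine them.
import numpy as np

def map_to_edr(metric, metric_clim, edr_clim):
    q = np.linspace(1, 99, 99)
    return np.interp(metric, np.percentile(metric_clim, q),
                     np.percentile(edr_clim, q))

rng = np.random.default_rng(8)
edr_clim = rng.lognormal(-3.0, 0.7, 20000)                   # observed EDR climatology
metrics_clim = [rng.gamma(2.0, s, 20000) for s in (1.0, 3.0, 0.5)]
metrics_now = [clim[:500] * 1.5 for clim in metrics_clim]    # current values on a grid

edr_members = np.array([map_to_edr(m, c, edr_clim)
                        for m, c in zip(metrics_now, metrics_clim)])
deterministic = edr_members.mean(axis=0)                     # ensemble-mean EDR
prob_moderate = (edr_members > 0.22).mean(axis=0)            # exceedance probability
print("max deterministic EDR:", deterministic.max(),
      " max probability of moderate turbulence:", prob_moderate.max())
```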
Applications of Machine Learning to Downscaling and Verification
NASA Astrophysics Data System (ADS)
Prudden, R.
2017-12-01
Downscaling, sometimes known as super-resolution, means converting model data into a more detailed local forecast. It is a problem which could be highly amenable to machine learning approaches, provided that sufficient historical forecast data and observations are available. It is also closely linked to the subject of verification, since improving a forecast requires a way to measure that improvement. This talk will describe some early work towards downscaling Met Office ensemble forecasts, and discuss how the output may be usefully evaluated.
Mei, Longcan; Zhou, Yanping; Zhu, Lizhe; Liu, Changlin; Wu, Zhuo; Wang, Fangkui; Hao, Gefei; Yu, Di; Yuan, Hong; Cui, Yanfang
2018-03-20
A superkine variant of interleukin-2 with six site mutations away from the binding interface, developed with the yeast display technique, has previously been characterized as undergoing a distal structural alteration that is responsible for its super-potency; it provides an elegant case study for gaining insight into how allosteric effects can be utilized to achieve desirable protein functions. By examining the dynamic network and the allosteric pathways related to those mutated residues using various computational approaches, we found that nanosecond-timescale all-atom molecular dynamics simulations can identify the dynamic network as efficiently as an ensemble algorithm. The differentiated pathways for the six core residues form a dynamic network that outlines the area of structural alteration. The results show the potential of using affordable computing power to predict the allosteric structure of mutants in knowledge-based mutagenesis.
Automating the expert consensus paradigm for robust lung tissue classification
NASA Astrophysics Data System (ADS)
Rajagopalan, Srinivasan; Karwoski, Ronald A.; Raghunath, Sushravya; Bartholmai, Brian J.; Robb, Richard A.
2012-03-01
Clinicians confirm the efficacy of dynamic multidisciplinary interactions in diagnosing lung disease/wellness from CT scans. However, routine clinical practice cannot readily accommodate such interactions. Current schemes for automating lung tissue classification are based on a single, elusive disease-differentiating metric; this undermines their reliability in routine diagnosis. We propose a computational workflow that uses a collection (#: 15) of probability density function (pdf)-based similarity metrics to automatically cluster pattern-specific (#patterns: 5) volumes of interest (#VOI: 976) extracted from the lung CT scans of 14 patients. The resultant clusters are refined for intra-partition compactness and subsequently aggregated into a super cluster using a cluster ensemble technique. The super clusters were validated against the consensus agreement of four clinical experts. The aggregations correlated strongly with expert consensus. By effectively mimicking the expertise of physicians, the proposed workflow could make automation of lung tissue classification a clinical reality.
Improving medium-range ensemble streamflow forecasts through statistical post-processing
NASA Astrophysics Data System (ADS)
Mendoza, Pablo; Wood, Andy; Clark, Elizabeth; Nijssen, Bart; Clark, Martyn; Ramos, Maria-Helena; Nowak, Kenneth; Arnold, Jeffrey
2017-04-01
Probabilistic hydrologic forecasts are a powerful source of information for decision-making in water resources operations. A common approach is the hydrologic model-based generation of streamflow forecast ensembles, which can be implemented to account for different sources of uncertainties - e.g., from initial hydrologic conditions (IHCs), weather forecasts, and hydrologic model structure and parameters. In practice, hydrologic ensemble forecasts typically have biases and spread errors stemming from errors in the aforementioned elements, resulting in a degradation of probabilistic properties. In this work, we compare several statistical post-processing techniques applied to medium-range ensemble streamflow forecasts obtained with the System for Hydromet Applications, Research and Prediction (SHARP). SHARP is a fully automated prediction system for the assessment and demonstration of short-term to seasonal streamflow forecasting applications, developed by the National Center for Atmospheric Research, University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. The suite of post-processing techniques includes linear blending, quantile mapping, extended logistic regression, quantile regression, ensemble analogs, and the generalized linear model post-processor (GLMPP). We assess and compare these techniques using multi-year hindcasts in several river basins in the western US. This presentation discusses preliminary findings about the effectiveness of the techniques for improving probabilistic skill, reliability, discrimination, sharpness and resolution.
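As an example of one technique in the suite listed above, the sketch below applies quantile mapping to an ensemble streamflow forecast: each member is remapped using the hindcast forecast climatology versus the observed climatology for that lead time. The single lead time, member count and synthetic hindcast data are assumptions for illustration, not the SHARP configuration.

```python
# Sketch: quantile mapping of ensemble streamflow forecast members.
import numpy as np

def quantile_map_members(members, fcst_clim, obs_clim):
    q = np.linspace(1, 99, 99)
    return np.interp(members, np.percentile(fcst_clim, q),
                     np.percentile(obs_clim, q))

rng = np.random.default_rng(9)
obs_clim = rng.gamma(2.0, 50.0, 3000)                    # observed flows, hindcast period
fcst_clim = 0.7 * obs_clim + rng.normal(30, 20, 3000)    # biased, under-dispersive forecasts
raw_members = 0.7 * 180.0 + rng.normal(30, 20, 30)       # today's 30-member forecast
corrected = quantile_map_members(raw_members, fcst_clim, obs_clim)
print("raw mean:", raw_members.mean(), " corrected mean:", corrected.mean())
```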
NASA Astrophysics Data System (ADS)
Zhong, Xianyun; Hou, Xi; Yang, Jinshan
2016-09-01
Nickel is a unique material in X-ray telescopes. It has typical soft-material characteristics: low hardness, susceptibility to surface damage and poor thermal stability. Traditional fabrication techniques suffer from many problems, including severe surface scratches, high sub-surface damage and poor surface roughness. The current fabrication technology for nickel aspherics mainly adopts single point diamond turning (SPDT), which has advantages such as high efficiency, ultra-precision surface figure and low sub-surface damage. However, the residual surface texture left by SPDT causes large scattering losses and falls far short of the requirements of X-ray applications. This paper mainly investigates magnetorheological finishing (MRF) techniques for super-smooth processing of nickel optics. Through the study of MRF polishing techniques, we obtained an ideal super-smooth polishing process based on the self-controlled MRF fluid NS-1, and achieved a high-precision surface figure better than RMS λ/80 (λ=632.8 nm), with super-smooth roughness better than Ra 0.3 nm on a plane reflector and better than Ra 0.4 nm on a convex cone. The study of these MRF techniques significantly advances the state of the art in nickel processing for X-ray optical system applications.
An efficient ensemble learning method for gene microarray classification.
Osareh, Alireza; Shadgar, Bita
2013-01-01
Gene microarray analysis and classification have demonstrated an effective way for the diagnosis of diseases and cancers. However, it has also been revealed that basic classification techniques have intrinsic drawbacks in achieving accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques, which in turn preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, 5 different feature selection algorithms are considered. To assess the efficiency of RotBoost, other non-ensemble/ensemble techniques including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging are also deployed. Experimental results have revealed that the combination of the fast correlation-based feature selection method with an ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also the classifiers generated by two widely used conventional ensemble learning methods, that is, Bagging and AdaBoost.
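A simplified, hedged illustration of the RotBoost idea (Rotation Forest combined with AdaBoost): features are split into random groups, each group is rotated by a PCA fitted on a bootstrap sample, an AdaBoost classifier is trained in the rotated space, and several such rotations are averaged. This is a sketch on synthetic data, not the exact published algorithm or the gene-expression pipeline.

```python
# Sketch: a RotBoost-style ensemble (PCA rotations + AdaBoost), simplified.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

def rotation_matrix(X, n_groups, rng):
    n_feat = X.shape[1]
    perm = rng.permutation(n_feat)
    R = np.zeros((n_feat, n_feat))
    for g in np.array_split(perm, n_groups):
        boot = rng.choice(X.shape[0], X.shape[0], replace=True)   # bootstrap rows
        pca = PCA().fit(X[np.ix_(boot, g)])
        R[np.ix_(g, g)] = pca.components_.T                       # per-group rotation
    return R

X, y = make_classification(n_samples=400, n_features=40, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(12)

probas = []
for _ in range(5):                                # five rotations in the ensemble
    R = rotation_matrix(Xtr, n_groups=4, rng=rng)
    clf = AdaBoostClassifier(n_estimators=50).fit(Xtr @ R, ytr)
    probas.append(clf.predict_proba(Xte @ R)[:, 1])
pred = (np.mean(probas, axis=0) > 0.5).astype(int)
print("test accuracy:", (pred == yte).mean())
```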
Subach, Fedor V; Patterson, George H; Renz, Malte; Lippincott-Schwartz, Jennifer; Verkhusha, Vladislav V
2010-05-12
Rapidly emerging techniques of super-resolution single-molecule microscopy of living cells rely on the continued development of genetically encoded photoactivatable fluorescent proteins. On the basis of monomeric TagRFP, we have developed a photoactivatable TagRFP protein that is initially dark but becomes red fluorescent after violet light irradiation. Compared to other monomeric dark-to-red photoactivatable proteins including PAmCherry, PATagRFP has substantially higher molecular brightness, better pH stability, substantially less sensitivity to blue light, and better photostability in both ensemble and single-molecule modes. Spectroscopic analysis suggests that PATagRFP photoactivation is a two-step photochemical process involving sequential one-photon absorbance by two distinct chromophore forms. True monomeric behavior, absence of green fluorescence, and single-molecule performance in live cells make PATagRFP an excellent protein tag for two-color imaging techniques, including conventional diffraction-limited photoactivation microscopy, super-resolution photoactivated localization microscopy (PALM), and single particle tracking PALM (sptPALM) of living cells. Two-color sptPALM imaging was demonstrated using several PATagRFP tagged transmembrane proteins together with PAGFP-tagged clathrin light chain. Analysis of the resulting sptPALM images revealed that single-molecule transmembrane proteins, which are internalized into a cell via endocytosis, colocalize in space and time with plasma membrane domains enriched in clathrin light-chain molecules.
Super-Resolution Enhancement From Multiple Overlapping Images: A Fractional Area Technique
NASA Astrophysics Data System (ADS)
Michaels, Joshua A.
With the availability of large quantities of relatively low-resolution data from several decades of spaceborne imaging, methods of creating an accurate, higher-resolution image from multiple lower-resolution images (i.e. super-resolution) have been developed almost since such imagery has been around. The fractional-area super-resolution technique developed in this thesis has never before been documented. Satellite orbits, like Landsat's, have a quantifiable variation, which means each image is not centered on the exact same spot more than once, and the overlapping information from these multiple images may be used for super-resolution enhancement. By splitting a single initial pixel into many smaller, desired pixels, a relationship can be created between them using the ratio of the area within the initial pixel. The ideal goal for this technique is to obtain smaller pixels with exact values and no error, yielding a better potential result than those methods that yield interpolated pixel values with consequential loss of spatial resolution. A Fortran 95 program was developed to perform all calculations associated with the fractional-area super-resolution technique. The fractional areas are calculated using traditional trigonometry and coordinate geometry, and the Linear Algebra Package (LAPACK; Anderson et al., 1999) is used to solve for the higher-resolution pixel values. In order to demonstrate proof-of-concept, a synthetic dataset was created using the intrinsic Fortran random number generator and Adobe Illustrator CS4 (for geometry). To test the real-life application, digital pictures from a Sony DSC-S600 digital point-and-shoot camera on a tripod were taken of a large US geological map under fluorescent lighting. While the fractional-area super-resolution technique works in perfect synthetic conditions, it did not successfully produce a reasonable or consistent solution in the digital photograph enhancement test. The prohibitive amount of processing time (up to 60 days for a relatively small enhancement area) severely limits the practical usefulness of fractional-area super-resolution. Fractional-area super-resolution is very sensitive to relative input image co-registration, which must be accurate to a sub-pixel degree. However, if input conditions permit, this technique could be applied as a "pinpoint" super-resolution method to very small areas with very good input image co-registration.
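A one-dimensional sketch of the fractional-area idea: each coarse pixel is an area-weighted average of the fine pixels it covers, several sub-pixel-shifted low-resolution images are stacked into a linear system, and the fine pixel values are recovered by least squares (numpy's lstsq wraps LAPACK, the solver named above). The shifts, image sizes and blocky synthetic "truth" are illustrative assumptions, not the thesis implementation.

```python
# 1-D sketch of fractional-area super-resolution from shifted coarse images.
import numpy as np

def coarse_operator(n_fine, factor, shift):
    """Rows map fine pixels to one shifted coarse image via fractional overlap areas."""
    n_coarse = (n_fine - int(np.ceil(shift))) // factor
    A = np.zeros((n_coarse, n_fine))
    for i in range(n_coarse):
        left, right = shift + i * factor, shift + (i + 1) * factor
        for j in range(int(np.floor(left)), int(np.ceil(right))):
            A[i, j] = (min(right, j + 1) - max(left, j)) / factor  # fractional area
    return A

rng = np.random.default_rng(10)
fine_truth = np.repeat(rng.uniform(0, 255, 16), 4) + rng.normal(0, 2, 64)
shifts = [0.0, 0.2, 0.4, 0.6, 0.8]                  # sub-pixel image offsets
A = np.vstack([coarse_operator(64, 4, s) for s in shifts])
b = np.concatenate([coarse_operator(64, 4, s) @ fine_truth for s in shifts])
recovered, *_ = np.linalg.lstsq(A, b, rcond=None)   # solve for fine pixels
print("RMS reconstruction error:",
      np.sqrt(((recovered - fine_truth) ** 2).mean()))
```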
Analytical Applications of Monte Carlo Techniques.
ERIC Educational Resources Information Center
Guell, Oscar A.; Holcombe, James A.
1990-01-01
Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
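As a concrete illustration of the kind of Monte Carlo technique the abstract surveys, the following short sketch estimates a definite integral by random sampling and reports the associated statistical error; the integrand is an arbitrary example, not one from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo estimate of the integral of exp(-x^2) over [0, 1]:
# sample x uniformly and average the integrand values.
n = 100_000
x = rng.uniform(0.0, 1.0, n)
values = np.exp(-x**2)
estimate = values.mean()
std_error = values.std(ddof=1) / np.sqrt(n)
print(f"integral ~ {estimate:.4f} +/- {std_error:.4f}")
```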
Application of Ensemble Detection and Analysis to Modeling Uncertainty in Non Stationary Process
NASA Technical Reports Server (NTRS)
Racette, Paul
2010-01-01
Characterization of non-stationary and nonlinear processes is a challenge in many engineering and scientific disciplines. Climate change modeling and projection, retrieving information from Doppler measurements of hydrometeors, and modeling calibration architectures and algorithms in microwave radiometers are example applications that can benefit from improvements in the modeling and analysis of non-stationary processes. Analyses of measured signals have traditionally been limited to a single measurement series. Ensemble Detection is a technique whereby mixing calibrated noise produces an ensemble measurement set. The collection of ensemble data sets enables new methods for analyzing random signals and offers powerful new approaches to studying and analyzing non-stationary processes. Derived information contained in the dynamic stochastic moments of a process will enable many novel applications.
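A minimal sketch of the ensemble-analysis idea, assuming one has an ensemble of measurement series of the same process: the first two stochastic moments are estimated across the ensemble at each instant rather than by time-averaging a single record. The synthetic data and parameters below are illustrative only.

```python
import numpy as np

# Hypothetical ensemble of measurement series: rows are ensemble members,
# columns are time samples of a non-stationary process (drifting mean and
# growing variance in this toy example).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 500)
ensemble = 2.0 * t + rng.normal(0.0, 0.5 + 0.5 * t, size=(200, t.size))

# Dynamic (time-resolved) stochastic moments, estimated across the ensemble
# at each instant instead of averaging over time for a single series.
mean_t = ensemble.mean(axis=0)
var_t = ensemble.var(axis=0, ddof=1)
```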
Molecular dynamics simulations: advances and applications
Hospital, Adam; Goñi, Josep Ramon; Orozco, Modesto; Gelpí, Josep L
2015-01-01
Molecular dynamics simulations have evolved into a mature technique that can be used effectively to understand macromolecular structure-to-function relationships. Present simulation times are close to biologically relevant ones. The information gathered about the dynamic properties of macromolecules is rich enough to shift the usual paradigm of structural bioinformatics from studying single structures to analyzing conformational ensembles. Here, we describe the foundations of molecular dynamics and the improvements made toward obtaining such ensembles. Specific application of the technique to three main issues (allosteric regulation, docking, and structure refinement) is discussed. PMID:26604800
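For readers unfamiliar with how such simulations are propagated in time, the sketch below shows the velocity Verlet integrator that underlies most molecular dynamics codes, applied to a toy one-dimensional harmonic potential; it is a pedagogical illustration, not code from the review.

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Integrate Newton's equations with the velocity Verlet scheme,
    the workhorse integrator of most molecular dynamics codes."""
    f = force(x)
    traj = [x.copy()]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / mass) * dt**2   # position update
        f_new = force(x)                            # forces at new positions
        v = v + 0.5 * (f + f_new) / mass * dt       # velocity update
        f = f_new
        traj.append(x.copy())
    return np.array(traj)

# Toy example: a harmonic oscillator standing in for a bonded interaction.
k = 1.0
traj = velocity_verlet(np.array([1.0]), np.array([0.0]),
                       lambda x: -k * x, mass=1.0, dt=0.01, n_steps=1000)
```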
Microsphere-aided optical microscopy and its applications for super-resolution imaging
NASA Astrophysics Data System (ADS)
Upputuri, Paul Kumar; Pramanik, Manojit
2017-12-01
The spatial resolution of a standard optical microscope (SOM) is limited by diffraction. In the visible spectrum, an SOM can provide ∼200 nm resolution. To break the diffraction limit, several approaches have been developed, including scanning near-field microscopy, metamaterial super-lenses, nanoscale solid immersion lenses, super-oscillatory lenses, confocal fluorescence microscopy, and techniques that exploit the non-linear response of fluorophores, such as stimulated emission depletion microscopy and stochastic optical reconstruction microscopy. Recently, the photonic nanojet generated by a dielectric microsphere was used to break the diffraction limit. The microsphere approach is simple, cost-effective, and can be implemented under a standard microscope; hence, it has gained enormous attention for super-resolution imaging. In this article, we briefly review the microsphere approach and its applications for super-resolution imaging in various optical imaging modalities.
Adaptive correction of ensemble forecasts
NASA Astrophysics Data System (ADS)
Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane
2017-04-01
Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equation are retrieved by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
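To make the idea of sequential (adaptive) post-processing concrete, the sketch below applies a toy Kalman-filter-like running bias estimate to every ensemble member as observations arrive; it folds the ensemble variance into the innovation noise but is only an illustration, not EMOS, MBM, or the authors' new member-coherent scheme.

```python
import numpy as np

def sequential_bias_correction(ens_forecasts, observations, q=0.05, r=1.0):
    """Toy sequential (Kalman-filter-like) bias estimate, updated as each new
    observation arrives and applied to every ensemble member.
    ens_forecasts: array (n_times, n_members); observations: array (n_times,).
    An illustration only, not the post-processing schemes of the abstract."""
    bias, p = 0.0, 1.0                   # bias estimate and its variance
    corrected = np.empty_like(ens_forecasts)
    for t, (ens, obs) in enumerate(zip(ens_forecasts, observations)):
        corrected[t] = ens - bias        # same correction applied to all members
        p += q                           # forecast step: bias drifts slowly
        innov = ens.mean() - obs - bias  # innovation from the ensemble mean
        k = p / (p + r + ens.var(ddof=1))  # ensemble spread inflates the noise
        bias += k * innov                # analysis step
        p *= (1.0 - k)
    return corrected
```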
Control of Goos-Hänchen shift via input probe field intensity
NASA Astrophysics Data System (ADS)
Ziauddin; Lee, Ray-Kuang; Qamar, Sajid
2016-11-01
We suggest a scheme to control the Goos-Hänchen (GH) shift in an ensemble of strongly interacting Rydberg atoms, which act as super-atoms due to the dipole blockade mechanism. The ensemble of three-level cold Rydberg-dressed (87Rb) atoms follows a cascade configuration in which two fields, i.e., a strong control field and a weak probe field, are employed [D. Petrosyan, J. Otterbach, and M. Fleischhauer, Phys. Rev. Lett. 107, 213601 (2011)]. The propagation of the probe field is influenced by two-photon correlations within the blockade distance, which are damped due to the saturation of the super-atoms. The amplitude of the GH shift in the reflected light depends on the intensity of the probe field. We observe a large negative GH shift in the reflected light for small values of the probe field intensity.
Techniques for super-resolution microscopy using NV-diamond
NASA Astrophysics Data System (ADS)
Trifonov, Alexei; Glenn, David; Bar-Gill, Nir; Le Sage, David; Walsworth, Ronald
2011-05-01
We discuss the development and application of techniques for super-resolution microscopy using NV centers in diamond: stimulated emission depletion (STED), metastable ground state depletion (GSD), and stochastic optical reconstruction microscopy (STORM). NV centers do not bleach under optical excitation, are not biotoxic, and have long-lived electronic spin coherence and spin-state-dependent fluorescence. Thus NV-diamond has great potential as a fluorescent biomarker and as a magnetic biosensor.
Murray, Matthew J; Ogden, Hannah M; Mullin, Amy S
2017-10-21
An optical centrifuge is used to generate an ensemble of CO2 super rotors with oriented angular momentum. The collision dynamics and energy transfer behavior of the super rotor molecules are investigated using high-resolution transient IR absorption spectroscopy. New multipass IR detection provides improved sensitivity to perform polarization-dependent transient studies for rotational states with 76 ≤ J ≤ 100. Polarization-dependent measurements show that the collision-induced kinetic energy release is spatially anisotropic and results from both near-resonant energy transfer between super rotor molecules and non-resonant energy transfer between super rotors and thermal molecules. J-dependent studies show that the extent and duration of the orientational anisotropy increase with rotational angular momentum. The super rotors exhibit behavior akin to molecular gyroscopes, wherein molecules with larger amounts of angular momentum are less likely to change their angular momentum orientation through collisions.
NASA Astrophysics Data System (ADS)
Murray, Matthew J.; Ogden, Hannah M.; Mullin, Amy S.
2017-10-01
An optical centrifuge is used to generate an ensemble of CO2 super rotors with oriented angular momentum. The collision dynamics and energy transfer behavior of the super rotor molecules are investigated using high-resolution transient IR absorption spectroscopy. New multipass IR detection provides improved sensitivity to perform polarization-dependent transient studies for rotational states with 76 ≤ J ≤ 100. Polarization-dependent measurements show that the collision-induced kinetic energy release is spatially anisotropic and results from both near-resonant energy transfer between super rotor molecules and non-resonant energy transfer between super rotors and thermal molecules. J-dependent studies show that the extent and duration of the orientational anisotropy increase with rotational angular momentum. The super rotors exhibit behavior akin to molecular gyroscopes, wherein molecules with larger amounts of angular momentum are less likely to change their angular momentum orientation through collisions.
Patch-Based Super-Resolution of MR Spectroscopic Images: Application to Multiple Sclerosis
Jain, Saurabh; Sima, Diana M.; Sanaei Nezhad, Faezeh; Hangel, Gilbert; Bogner, Wolfgang; Williams, Stephen; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk
2017-01-01
Purpose: Magnetic resonance spectroscopic imaging (MRSI) provides complementary information to conventional magnetic resonance imaging. Acquiring high resolution MRSI is time consuming and requires complex reconstruction techniques. Methods: In this paper, a patch-based super-resolution method is presented to increase the spatial resolution of metabolite maps computed from MRSI. The proposed method uses high resolution anatomical MR images (T1-weighted and Fluid-attenuated inversion recovery) to regularize the super-resolution process. The accuracy of the method is validated against conventional interpolation techniques using a phantom, as well as simulated and in vivo acquired human brain images of multiple sclerosis subjects. Results: The method preserves tissue contrast and structural information, and matches well with the trend of acquired high resolution MRSI. Conclusions: These results suggest that the method has potential for clinically relevant neuroimaging applications. PMID:28197066
NASA Astrophysics Data System (ADS)
Logofătu, Petre C.; Damian, Victor
2018-05-01
A super-resolution terahertz imaging technique based on subpixel estimation was applied to hyperspectral beam profiling. The topic of hyperspectral beam profiling was chosen because the beam profile and its dependence on wavelength are not well known and are important for imaging applications. Super-resolution is required here to avoid diffraction effects and to provide a stronger signal. Super-resolution usually adds supplementary information to the measurement, but in this case, it is a prerequisite for it. We report that the beam profile is almost Gaussian for many frequencies; the waist of the Gaussian profile increases with frequency while the center wobbles slightly. Knowledge of the beam profile may subsequently be used as reference for imaging.
Depletion-based techniques for super-resolution imaging of NV-diamond
NASA Astrophysics Data System (ADS)
Jaskula, Jean-Christophe; Trifonov, Alexei; Glenn, David; Walsworth, Ronald
2012-06-01
We discuss the development and application of depletion-based techniques for super-resolution imaging of NV centers in diamond: stimulated emission depletion (STED), metastable ground state depletion (GSD), and dark state depletion (DSD). NV centers in diamond do not bleach under optical excitation, are not biotoxic, and have long-lived electronic spin coherence and spin-state-dependent fluorescence. Thus NV-diamond has great potential as a fluorescent biomarker and as a magnetic biosensor.
Bayesian Hierarchical Models to Augment the Mediterranean Forecast System
2010-09-30
In part 2 (Bonazzi et al., 2010), the impact of the ensemble forecast methodology based on MFS-Wind-BHM perturbations is documented. Forecast...absence of dt data stage inputs, the forecast impact of MFS-Error-BHM is neutral. Experiments are underway now to introduce dt back into the MFS-Error...BHM and quantify forecast impacts at MFS. MFS-SuperEnsemble-BHM: We have assembled all needed datasets and completed algorithmic development.
NASA Astrophysics Data System (ADS)
Lu, Chieh Han; Chen, Peilin; Chen, Bi-Chang
2017-02-01
Optical imaging techniques provide important information for understanding the life sciences, especially cellular structure and morphology, because "seeing is believing". However, the resolution of optical imaging is bounded by the diffraction limit discovered by Ernst Abbe, i.e., λ/(2NA), where NA is the numerical aperture of the objective lens. Fluorescence super-resolution microscopy techniques such as stimulated emission depletion microscopy (STED), photoactivated localization microscopy (PALM), and stochastic optical reconstruction microscopy (STORM) were invented to resolve biological entities down to the molecular level, smaller than the diffraction limit (around 200 nm in lateral resolution). These techniques do not physically violate the Abbe limit of resolution but exploit the photoluminescence properties and labelling specificity of fluorescent molecules to achieve super-resolution imaging. However, these super-resolution techniques are mostly limited to 2D imaging of fixed or dead samples because of the high laser power required or the slow speed of the localization process. Beyond 2D imaging, light-sheet microscopy has proven to have many applications in 3D imaging at much better spatiotemporal resolution owing to its intrinsic optical sectioning and high imaging speed. Herein, we combine the advantages of localization microscopy and light-sheet microscopy to achieve super-resolved cellular imaging in 3D across a large field of view. With high-density labeling by a spontaneously blinking fluorophore and the wide-field detection of light-sheet microscopy, we can construct 3D super-resolution multi-cellular images at high speed (minutes) by light-sheet single-molecule localization microscopy.
Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu
2018-09-01
The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same structure are ensembled for the final output via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for human visual perception. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.
Neutral Kaon Mixing from Lattice QCD
NASA Astrophysics Data System (ADS)
Bai, Ziyuan
In this work, we report the lattice calculation of two important quantities which emerge from second-order K0-K̄0 mixing: ΔMK and εK. The RBC-UKQCD collaboration has performed the first calculation of ΔMK with unphysical kinematics [1]. We now extend this calculation to near-physical and physical ensembles. In these physical or near-physical calculations, the two-pion energies are below the kaon threshold, and we have to examine the contribution of the two-pion intermediate states to ΔMK, as well as the enhanced finite-volume corrections arising from these two-pion intermediate states. We also report the first lattice calculation of the long-distance contribution to the indirect CP-violation parameter εK. This calculation involves the treatment of a short-distance, ultraviolet divergence that is absent in the calculation of ΔMK, and we report our techniques for correcting this divergence on the lattice. In this calculation, we used unphysical quark masses on the same ensemble that we used in [1]. Therefore, rather than providing a physical result, this calculation demonstrates the technique for calculating εK and provides an approximate understanding of the size of the long-distance contributions. Various new techniques are employed in this work, such as the use of all-mode-averaging (AMA), all-to-all (A2A) propagators, and the super-jackknife method in analyzing the data.
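The super-jackknife mentioned at the end generalizes the ordinary leave-one-out jackknife to data spread over several independent ensembles. A minimal sketch of the ordinary jackknife, on which it builds, is given below; the estimator and the sample data are placeholders, not lattice observables from this work.

```python
import numpy as np

def jackknife(samples, estimator=np.mean):
    """Leave-one-out jackknife estimate and error for a set of measurements
    (e.g., per-configuration observables). The super-jackknife of the abstract
    extends this resampling to combine several independent ensembles."""
    samples = np.asarray(samples)
    n = samples.size
    loo = np.array([estimator(np.delete(samples, i)) for i in range(n)])
    center = estimator(samples)
    err = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    return center, err

# Placeholder data standing in for per-configuration measurements.
value, error = jackknife(np.random.default_rng(2).normal(1.0, 0.1, 50))
```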
Peer-Teaching in the Secondary Music Ensemble
ERIC Educational Resources Information Center
Johnson, Erik
2015-01-01
Peer-teaching is an instructional technique that has been used by teachers world-wide to successfully engage, exercise and deepen student learning. Yet, in some instances, teachers find the application of peer-teaching in large music ensembles at the secondary level to be daunting. This article is meant to be a practical resource for secondary…
Bashir, Saba; Qamar, Usman; Khan, Farhan Hassan
2016-02-01
Accuracy plays a vital role in the medical field, as it concerns the life of an individual. Extensive research has been conducted on disease classification and prediction using machine learning techniques. However, there is no agreement on which classifier produces the best results: a specific classifier may be better than others for one dataset, while another classifier could perform better on some other dataset. Ensembles of classifiers have proved to be an effective way to improve classification accuracy. In this research we present an ensemble framework with multi-layer classification using enhanced bagging and optimized weighting. The proposed model, called "HM-BagMoov", overcomes the limitations of conventional performance bottlenecks by utilizing an ensemble of seven heterogeneous classifiers. The framework is evaluated on five heart disease datasets, four breast cancer datasets, two diabetes datasets, two liver disease datasets and one hepatitis dataset obtained from public repositories. The analysis of the results shows that the ensemble framework achieved the highest accuracy, sensitivity and F-measure when compared with individual classifiers for all the diseases. In addition, the ensemble framework also achieved the highest accuracy when compared with state-of-the-art techniques. An application named "IntelliHealth" has also been developed based on the proposed model that may be used by hospitals/doctors for diagnostic advice. Copyright © 2015 Elsevier Inc. All rights reserved.
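A minimal sketch of a heterogeneous, weighted soft-voting ensemble in the same spirit, built with scikit-learn; the choice of four member classifiers, the weights, and the breast-cancer demo dataset are illustrative assumptions, not the seven-classifier HM-BagMoov model itself.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Heterogeneous members: bagged trees, regularized logistic regression,
# naive Bayes, and an SVM (probability=True enables soft voting).
members = [
    ("bagged_tree", BaggingClassifier(DecisionTreeClassifier(), n_estimators=25)),
    ("log_reg", make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))),
    ("naive_bayes", GaussianNB()),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
]
ensemble = VotingClassifier(members, voting="soft", weights=[2, 2, 1, 2])
print(cross_val_score(ensemble, X, y, cv=5).mean())
```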
Ensemble analyses improve signatures of tumour hypoxia and reveal inter-platform differences
2014-01-01
Background: The reproducibility of transcriptomic biomarkers across datasets remains poor, limiting clinical application. We and others have suggested that this is in part caused by differential error structure between datasets and its incomplete removal by pre-processing algorithms. Methods: To test this hypothesis, we systematically assessed the effects of pre-processing on biomarker classification using 24 different pre-processing methods and 15 distinct signatures of tumour hypoxia in 10 datasets (2,143 patients). Results: We confirm strong pre-processing effects for all datasets and signatures, and find that these differ between microarray versions. Importantly, exploiting different pre-processing techniques in an ensemble improved classification for a majority of signatures. Conclusions: Assessing biomarkers using an ensemble of pre-processing techniques shows clear value across multiple diseases, datasets and biomarkers. Importantly, ensemble classification improves biomarkers with initially good results but does not result in spuriously improved performance for poor biomarkers. While further research is required, this approach has the potential to become a standard for transcriptomic biomarkers. PMID:24902696
NASA Astrophysics Data System (ADS)
Rowley, C. D.; Hogan, P. J.; Martin, P.; Thoppil, P.; Wei, M.
2017-12-01
An extended-range ensemble forecast system is being developed in the US Navy Earth System Prediction Capability (ESPC), and a global ocean ensemble generation capability to represent uncertainty in the ocean initial conditions has been developed. At extended forecast times, uncertainty due to model error overtakes initial-condition uncertainty as the primary source of forecast uncertainty. Recently, stochastic parameterization or stochastic forcing techniques have been applied to represent model error in research and operational atmospheric, ocean, and coupled ensemble forecasts. A simple stochastic forcing technique has been developed for application to US Navy high-resolution regional and global ocean models, for use in ocean-only and coupled atmosphere-ocean-ice-wave ensemble forecast systems. Perturbation forcing is added to the tendency equations for state variables, with the forcing defined by random 3- or 4-dimensional fields with horizontal, vertical, and temporal correlations specified to characterize different possible kinds of error. Here, we demonstrate the stochastic forcing in regional and global ensemble forecasts with varying perturbation amplitudes and length and time scales, and assess the change in ensemble skill measured by a range of deterministic and probabilistic metrics.
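A generic way to build such stochastic forcing is to draw random fields with prescribed horizontal and temporal correlations and add them to the model tendencies. The sketch below generates an AR(1)-in-time, Gaussian-smoothed-in-space perturbation sequence; the parameters and scales are placeholders, and this is not the Navy ESPC implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stochastic_forcing(shape, n_steps, amp, length_scale, time_scale, seed=0):
    """Generate a sequence of 2-D perturbation fields with prescribed
    horizontal correlation (Gaussian smoothing, sigma = length_scale grid
    points) and temporal correlation (AR(1), decorrelation time = time_scale
    steps). A generic sketch of the idea, not an operational scheme."""
    rng = np.random.default_rng(seed)
    alpha = np.exp(-1.0 / time_scale)
    fields = np.empty((n_steps,) + shape)

    def red_noise():
        w = gaussian_filter(rng.standard_normal(shape), sigma=length_scale)
        return amp * w / w.std()          # rescale to the requested amplitude

    fields[0] = red_noise()
    for t in range(1, n_steps):
        fields[t] = alpha * fields[t - 1] + np.sqrt(1 - alpha**2) * red_noise()
    return fields

# e.g., add fields[t] to the temperature tendency of each ensemble member
forcing = stochastic_forcing((64, 64), n_steps=48, amp=0.1,
                             length_scale=4.0, time_scale=12.0)
```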
Micelle-templated composite quantum dots for super-resolution imaging.
Xu, Jianquan; Fan, Qirui; Mahajan, Kalpesh D; Ruan, Gang; Herrington, Andrew; Tehrani, Kayvan F; Kner, Peter; Winter, Jessica O
2014-05-16
Quantum dots (QDs) have tremendous potential for biomedical imaging, including super-resolution techniques that permit imaging below the diffraction limit. However, most QDs are produced via organic methods, and hence require surface treatment to render them water-soluble for biological applications. Previously, we reported a micelle-templating method that yields nanocomposites containing multiple core/shell ZnS-CdSe QDs within the same nanocarrier, increasing overall particle brightness and virtually eliminating QD blinking. Here, this technique is extended to the encapsulation of Mn-doped ZnSe QDs (Mn-ZnSe QDs), which have potential applications in super-resolution imaging as a result of the introduction of Mn(2+) dopant energy levels. The size, shape and fluorescence characteristics of these doped QD-micelles were compared to those of micelles created using core/shell ZnS-CdSe QDs (ZnS-CdSe QD-micelles). Additionally, the stability of both types of particles to photo-oxidation was investigated. Compared to commercial QDs, micelle-templated QDs demonstrated superior fluorescence intensity, higher signal-to-noise ratios, and greater stability against photo-oxidation, while reducing blinking. Additionally, the fluorescence of doped QD-micelles could be modulated from a bright 'on' state to a dark 'off' state, with a modulation depth of up to 76%, suggesting the potential of doped QD-micelles for applications in super-resolution imaging.
Creation of the BMA ensemble for SST using a parallel processing technique
NASA Astrophysics Data System (ADS)
Kim, Kwangjin; Lee, Yang Won
2013-10-01
Although they serve the same purpose, each satellite product has a different value because of its inescapable uncertainty. Satellite products have also been accumulated over a long period, and the kinds of products are various and voluminous, so efforts to reduce the uncertainty and to handle such large data volumes are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) using MODIS Aqua, MODIS Terra and COMS (Communication, Ocean and Meteorological Satellite). We used Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density function (PDF) using posterior probabilities as weights; the posterior probabilities are estimated using the EM algorithm, and the BMA PDF is obtained as the weighted average. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA for satellite data ensembles. As future work, parallel processing techniques using the Hadoop framework will be adopted for more efficient computation of very big satellite data.
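A textbook sketch of the BMA step described here: the member products define a Gaussian mixture whose weights and common spread are fitted by EM against matchup observations, after which the ensemble SST is the weighted average of the products. The variable names and the single-spread assumption are illustrative; this is not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def bma_em(forecasts, obs, n_iter=200):
    """Estimate BMA weights and a common Gaussian spread by EM.
    forecasts: (n_samples, n_models) product values at matchup points;
    obs: (n_samples,) reference observations. A textbook sketch only."""
    n, k = forecasts.shape
    w = np.full(k, 1.0 / k)
    sigma = np.std(obs[:, None] - forecasts)
    for _ in range(n_iter):
        # E-step: responsibility of each product for each matchup point
        dens = w * norm.pdf(obs[:, None], loc=forecasts, scale=sigma)
        z = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights and the common spread
        w = z.mean(axis=0)
        sigma = np.sqrt(np.sum(z * (obs[:, None] - forecasts) ** 2) / n)
    return w, sigma

# Usage sketch (hypothetical arrays): weights, _ = bma_em(product_matrix, buoy_sst)
# ensemble_sst = product_fields @ weights
```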
From single-molecule spectroscopy to super-resolution imaging of the neuron: a review
Laine, Romain F; Kaminski Schierle, Gabriele S; van de Linde, Sebastian; Kaminski, Clemens F
2016-01-01
For more than 20 years, single-molecule spectroscopy has been providing invaluable insights into nature at the molecular level. The field has received a powerful boost with the development of the technique into super-resolution imaging methods, ca. 10 years ago, which overcome the limitations imposed by optical diffraction. Today, single molecule super-resolution imaging is routinely used in the study of macromolecular function and structure in the cell. Concomitantly, computational methods have been developed that provide information on numbers and positions of molecules at the nanometer-scale. In this overview, we outline the technical developments that have led to the emergence of localization microscopy techniques from single-molecule spectroscopy. We then provide a comprehensive review on the application of the technique in the field of neuroscience research. PMID:28809165
Super-Resolution Microscopy Unveils Dynamic Heterogeneities in Nanoparticle Protein Corona.
Feiner-Gracia, Natalia; Beck, Michaela; Pujals, Sílvia; Tosi, Sébastien; Mandal, Tamoghna; Buske, Christian; Linden, Mika; Albertazzi, Lorenzo
2017-11-01
The adsorption of serum proteins, leading to the formation of a biomolecular corona, is a key determinant of the biological identity of nanoparticles in vivo. Therefore, gaining knowledge on the formation, composition, and temporal evolution of the corona is of utmost importance for the development of nanoparticle-based therapies. Here, it is shown that the use of super-resolution optical microscopy enables the imaging of the protein corona on mesoporous silica nanoparticles with single protein sensitivity. Particle-by-particle quantification reveals a significant heterogeneity in protein adsorption under native conditions. Moreover, the diversity of the corona evolves over time depending on the surface chemistry and degradability of the particles. This paper investigates the consequences of protein adsorption for specific cell targeting by antibody-functionalized nanoparticles, providing a detailed understanding of corona-activity relations. The methodology is widely applicable to a variety of nanostructures and complements the existing ensemble approaches for protein corona study. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Single-molecule techniques in biophysics: a review of the progress in methods and applications.
Miller, Helen; Zhou, Zhaokun; Shepherd, Jack; Wollman, Adam J M; Leake, Mark C
2018-02-01
Single-molecule biophysics has transformed our understanding of biology, but also of the physics of life. More exotic than simple soft matter, biomatter lives far from thermal equilibrium, covering multiple lengths from the nanoscale of single molecules to up to several orders of magnitude higher in cells, tissues and organisms. Biomolecules are often characterized by underlying instability: multiple metastable free energy states exist, separated by levels of just a few multiples of the thermal energy scale kBT, where kB is the Boltzmann constant and T absolute temperature, implying complex inter-conversion kinetics in the relatively hot, wet environment of active biological matter. A key benefit of single-molecule biophysics techniques is their ability to probe heterogeneity of free energy states across a molecular population, too challenging in general for conventional ensemble average approaches. Parallel developments in experimental and computational techniques have catalysed the birth of multiplexed, correlative techniques to tackle previously intractable biological questions. Experimentally, progress has been driven by improvements in sensitivity and speed of detectors, and the stability and efficiency of light sources, probes and microfluidics. We discuss the motivation and requirements for these recent experiments, including the underpinning mathematics. These methods are broadly divided into tools which detect molecules and those which manipulate them. For the former we discuss the progress of super-resolution microscopy, transformative for addressing many longstanding questions in the life sciences, and for the latter we include progress in 'force spectroscopy' techniques that mechanically perturb molecules. We also consider in silico progress of single-molecule computational physics, and how simulation and experimentation may be drawn together to give a more complete understanding. Increasingly, combinatorial techniques are now used, including correlative atomic force microscopy and fluorescence imaging, to probe questions closer to native physiological behaviour. We identify the trade-offs, limitations and applications of these techniques, and discuss exciting new directions.
Single-molecule techniques in biophysics: a review of the progress in methods and applications
NASA Astrophysics Data System (ADS)
Miller, Helen; Zhou, Zhaokun; Shepherd, Jack; Wollman, Adam J. M.; Leake, Mark C.
2018-02-01
Single-molecule biophysics has transformed our understanding of biology, but also of the physics of life. More exotic than simple soft matter, biomatter lives far from thermal equilibrium, covering multiple lengths from the nanoscale of single molecules to up to several orders of magnitude higher in cells, tissues and organisms. Biomolecules are often characterized by underlying instability: multiple metastable free energy states exist, separated by levels of just a few multiples of the thermal energy scale kBT, where kB is the Boltzmann constant and T absolute temperature, implying complex inter-conversion kinetics in the relatively hot, wet environment of active biological matter. A key benefit of single-molecule biophysics techniques is their ability to probe heterogeneity of free energy states across a molecular population, too challenging in general for conventional ensemble average approaches. Parallel developments in experimental and computational techniques have catalysed the birth of multiplexed, correlative techniques to tackle previously intractable biological questions. Experimentally, progress has been driven by improvements in sensitivity and speed of detectors, and the stability and efficiency of light sources, probes and microfluidics. We discuss the motivation and requirements for these recent experiments, including the underpinning mathematics. These methods are broadly divided into tools which detect molecules and those which manipulate them. For the former we discuss the progress of super-resolution microscopy, transformative for addressing many longstanding questions in the life sciences, and for the latter we include progress in ‘force spectroscopy’ techniques that mechanically perturb molecules. We also consider in silico progress of single-molecule computational physics, and how simulation and experimentation may be drawn together to give a more complete understanding. Increasingly, combinatorial techniques are now used, including correlative atomic force microscopy and fluorescence imaging, to probe questions closer to native physiological behaviour. We identify the trade-offs, limitations and applications of these techniques, and discuss exciting new directions.
Frank, Joachim; Gonzalez, Ruben L.
2015-01-01
At equilibrium, thermodynamic and kinetic information can be extracted from biomolecular energy landscapes by many techniques. However, while static, ensemble techniques yield thermodynamic data, often only dynamic, single-molecule techniques can yield the kinetic data that describes transition-state energy barriers. Here we present a generalized framework based upon dwell-time distributions that can be used to connect such static, ensemble techniques with dynamic, single-molecule techniques, and thus characterize energy landscapes to greater resolutions. We demonstrate the utility of this framework by applying it to cryogenic electron microscopy and single-molecule fluorescence resonance energy transfer studies of the bacterial ribosomal pretranslocation complex. Among other benefits, application of this framework to these data explains why two transient, intermediate conformations of the pretranslocation complex, which are observed in a cryogenic electron microscopy study, may not be observed in several single-molecule fluorescence resonance energy transfer studies. PMID:25785884
Thompson, Colin D Kinz; Sharma, Ajeet K; Frank, Joachim; Gonzalez, Ruben L; Chowdhury, Debashish
2015-08-27
At equilibrium, thermodynamic and kinetic information can be extracted from biomolecular energy landscapes by many techniques. However, while static, ensemble techniques yield thermodynamic data, often only dynamic, single-molecule techniques can yield the kinetic data that describe transition-state energy barriers. Here we present a generalized framework based upon dwell-time distributions that can be used to connect such static, ensemble techniques with dynamic, single-molecule techniques, and thus characterize energy landscapes to greater resolutions. We demonstrate the utility of this framework by applying it to cryogenic electron microscopy (cryo-EM) and single-molecule fluorescence resonance energy transfer (smFRET) studies of the bacterial ribosomal pre-translocation complex. Among other benefits, application of this framework to these data explains why two transient, intermediate conformations of the pre-translocation complex, which are observed in a cryo-EM study, may not be observed in several smFRET studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlueter, R.D.; Halbach, K.
1991-12-04
This memo presents the formulation of an expression for eddy currents induced in a thin-walled conductor by a time-dependent electromagnetic field excitation. Then follows an analytical development for predicting vacuum-chamber eddy-current-induced field harmonics in iron-core electromagnets. A passive technique for harmonics suppression is presented, with specific application to the design of the Superconducting Super Collider (SSC) Low Energy Booster (LEB) magnets.
Super-Resolution Microscopy: Shedding Light on the Cellular Plasma Membrane.
Stone, Matthew B; Shelby, Sarah A; Veatch, Sarah L
2017-06-14
Lipids and the membranes they form are fundamental building blocks of cellular life, and their geometry and chemical properties distinguish membranes from other cellular environments. Collective processes occurring within membranes strongly impact cellular behavior and biochemistry, and understanding these processes presents unique challenges due to the often complex and myriad interactions between membrane components. Super-resolution microscopy offers a significant gain in resolution over traditional optical microscopy, enabling the localization of individual molecules even in densely labeled samples and in cellular and tissue environments. These microscopy techniques have been used to examine the organization and dynamics of plasma membrane components, providing insight into the fundamental interactions that determine membrane functions. Here, we broadly introduce the structure and organization of the mammalian plasma membrane and review recent applications of super-resolution microscopy to the study of membranes. We then highlight some inherent challenges faced when using super-resolution microscopy to study membranes, and we discuss recent technical advancements that promise further improvements to super-resolution microscopy and its application to the plasma membrane.
Super-stable Poissonian structures
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2012-10-01
In this paper we characterize classes of Poisson processes whose statistical structures are super-stable. We consider a flow generated by a one-dimensional ordinary differential equation, and an ensemble of particles ‘surfing’ the flow. The particles start from random initial positions, and are propagated along the flow by stochastic ‘wave processes’ with general statistics and general cross correlations. Setting the initial positions to be Poisson processes, we characterize the classes of Poisson processes that render the particles’ positions—at all times, and invariantly with respect to the wave processes—statistically identical to their initial positions. These Poisson processes are termed ‘super-stable’ and facilitate the generalization of the notion of stationary distributions far beyond the realm of Markov dynamics.
NASA Astrophysics Data System (ADS)
Fox, Neil I.; Micheas, Athanasios C.; Peng, Yuqiang
2016-07-01
This paper introduces the use of Bayesian full Procrustes shape analysis in object-oriented meteorological applications. In particular, the Procrustes methodology is used to generate mean forecast precipitation fields from a set of ensemble forecasts. This approach has advantages over other ensemble averaging techniques in that it can produce a forecast that retains the morphological features of the precipitation structures and present the range of forecast outcomes represented by the ensemble. The production of the ensemble mean avoids the problems of smoothing that result from simple pixel or cell averaging, while producing credible sets that retain information on ensemble spread. Also in this paper, the full Bayesian Procrustes scheme is used as an object verification tool for precipitation forecasts. This is an extension of a previously presented Procrustes shape analysis based verification approach into a full Bayesian format designed to handle the verification of precipitation forecasts that match objects from an ensemble of forecast fields to a single truth image. The methodology is tested on radar reflectivity nowcasts produced in the Warning Decision Support System - Integrated Information (WDSS-II) by varying parameters in the K-means cluster tracking scheme.
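The (non-Bayesian) core of Procrustes averaging can be sketched in a few lines: each member's precipitation-object outline is aligned to a running mean shape by translation, rotation, and scaling, and the aligned outlines are re-averaged. The sketch below uses scipy.spatial.procrustes and assumes all outlines have been resampled to the same number of boundary points; it is a simplified stand-in for the full Bayesian scheme of the paper, not its implementation.

```python
import numpy as np
from scipy.spatial import procrustes

def procrustes_mean_shape(member_outlines, n_iter=10):
    """Iteratively align each member's object outline (an (n_points, 2) array
    of boundary coordinates) to the current mean shape and re-average.
    Note: scipy's procrustes() standardizes both shapes (centered, unit norm),
    so the resulting mean lives in normalized shape coordinates."""
    mean = member_outlines[0]
    for _ in range(n_iter):
        aligned = []
        for outline in member_outlines:
            _, m2, _ = procrustes(mean, outline)   # rotate/scale/translate member
            aligned.append(m2)
        mean = np.mean(aligned, axis=0)            # updated mean shape
    return mean
```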
NASA Astrophysics Data System (ADS)
Hut, Rolf; Amisigo, Barnabas A.; Steele-Dunne, Susan; van de Giesen, Nick
2015-12-01
Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF) is introduced as a variant of the Ensemble Kalman Filter (EnKF). RumEnKF differs from the EnKF in that it does not store the entire ensemble, but rather only saves the first two moments of the ensemble distribution. In this way, the number of ensemble members that can be calculated is less dependent on available memory and mainly on available computing power (CPU). RumEnKF is developed to make optimal use of current-generation supercomputer architecture, where the number of available floating-point operations (flops) increases more rapidly than the available memory and where inter-node communication can quickly become a bottleneck. RumEnKF reduces memory use compared to the EnKF when the number of ensemble members is greater than half the number of state variables. In this paper, three simple models are used (auto-regressive, low-dimensional Lorenz and high-dimensional Lorenz) to show that RumEnKF performs similarly to the EnKF. Furthermore, it is also shown that increasing the ensemble size has a similar impact on the estimation error from the three algorithms.
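The memory-saving idea, keeping only the first two moments instead of the full ensemble, can be illustrated with a Welford-style running update in which members are generated, folded into the mean and covariance, and then discarded. The class below is a sketch of that bookkeeping only, not the RumEnKF analysis step itself.

```python
import numpy as np

class RunningMoments:
    """Accumulate the ensemble mean and covariance one member at a time
    (Welford-style update), so the full ensemble is never held in memory."""
    def __init__(self, n_state):
        self.n = 0
        self.mean = np.zeros(n_state)
        self.m2 = np.zeros((n_state, n_state))  # sum of outer products of deviations

    def update(self, member):
        self.n += 1
        delta = member - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, member - self.mean)

    @property
    def cov(self):
        return self.m2 / (self.n - 1)

# Members are generated, folded in, and immediately discarded.
moments = RunningMoments(n_state=3)
rng = np.random.default_rng(3)
for _ in range(500):
    moments.update(rng.multivariate_normal([0.0, 1.0, 2.0], np.eye(3)))
```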
Super-resolution optical imaging and magnetometry using NV centers in diamond
NASA Astrophysics Data System (ADS)
Jaskula, Jean-Christophe; Trifonov, Alexei; Glenn, David; Bar-Gill, Nir; Walsworth, Ronald
2013-05-01
We report progress on the development and application of depletion-based techniques for super-resolution (nanoscale) optical imaging and magnetometry using NV centers in diamond. In particular, we are integrating stimulated emission depletion (STED) and ground state depletion (GSD) imaging techniques with advanced pulsed sequences for AC magnetometry. NV centers in diamond do not bleach under optical excitation, have long-lived electronic spin coherence and spin-state-dependent fluorescence, and are not biotoxic. Thus NV-diamond has great potential in quantum science and as a nanoscale magnetic biosensor.
Image Change Detection via Ensemble Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Benjamin W; Vatsavai, Raju
2013-01-01
The concept of geographic change detection is relevant in many areas. Changes in geography can reveal much information about a particular location. For example, analysis of geographic change can identify regions of population growth, change in land use, and potential environmental disturbance. A common way to perform change detection is to use a simple method such as differencing to detect regions of change. Though such techniques are simple, their applicability is often limited. Recently, the use of machine learning methods such as neural networks for change detection has been explored with great success. In this work, we explore the use of ensemble learning methodologies for detecting changes in bitemporal synthetic aperture radar (SAR) images. Ensemble learning uses a collection of weak machine learning classifiers to create a stronger classifier with higher accuracy than the individual classifiers in the ensemble. The strength of the ensemble lies in the fact that the individual classifiers form a mixture of experts, in which the final classification made by the ensemble classifier is calculated from the outputs of the individual classifiers. Our methodology leverages this aspect of ensemble learning by training collections of weak decision-tree-based classifiers to identify regions of change in SAR images collected over a region in the Staten Island, New York area during Hurricane Sandy. Preliminary studies show that the ensemble method has approximately 11.5% higher change detection accuracy than an individual classifier.
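A compact sketch of the ensemble idea for bitemporal change detection, using bagged decision trees from scikit-learn on simple per-pixel features; the feature set, variable names, and workflow are illustrative assumptions, not the classifiers or data described in the abstract.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def pixel_features(img_t0, img_t1):
    """Stack per-pixel features: both intensities, their difference,
    and their log-ratio (common for SAR image pairs)."""
    eps = 1e-6
    return np.stack(
        [img_t0, img_t1, img_t1 - img_t0,
         np.log((img_t1 + eps) / (img_t0 + eps))],
        axis=-1,
    ).reshape(-1, 4)

# Usage sketch with hypothetical, co-registered SAR intensity images and a
# training mask of labeled change/no-change pixels:
# X = pixel_features(img_t0, img_t1)
# clf = BaggingClassifier(DecisionTreeClassifier(max_depth=8), n_estimators=50)
# clf.fit(X[train_idx], labels[train_idx])
# change_map = clf.predict(X).reshape(img_t0.shape)
```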
Wavelength scanning achieves pixel super-resolution in holographic on-chip microscopy
NASA Astrophysics Data System (ADS)
Luo, Wei; Göröcs, Zoltan; Zhang, Yibo; Feizi, Alborz; Greenbaum, Alon; Ozcan, Aydogan
2016-03-01
Lensfree holographic on-chip imaging is a potent solution for high-resolution and field-portable bright-field imaging over a wide field-of-view. Previous lensfree imaging approaches utilize a pixel super-resolution technique, which relies on sub-pixel lateral displacements between the lensfree diffraction patterns and the image sensor's pixel-array, to achieve sub-micron resolution under unit magnification using state-of-the-art CMOS imager chips, commonly used in, e.g., mobile phones. Here we report, for the first time, a wavelength scanning based pixel super-resolution technique in lensfree holographic imaging. We developed an iterative super-resolution algorithm, which generates high-resolution reconstructions of the specimen from low-resolution (i.e., under-sampled) diffraction patterns recorded at multiple wavelengths within a narrow spectral range (e.g., 10-30 nm). Compared with lateral shift-based pixel super-resolution, this wavelength scanning approach does not require any physical shifts in the imaging setup, and the resolution improvement is uniform in all directions across the sensor-array. Our wavelength scanning super-resolution approach can also be integrated with multi-height and/or multi-angle on-chip imaging techniques to obtain even higher resolution reconstructions. For example, using wavelength scanning together with multi-angle illumination, we achieved a half-pitch resolution of 250 nm, corresponding to a numerical aperture of 1. In addition to pixel super-resolution, the small scanning steps in wavelength also enable us to robustly unwrap phase, revealing the specimen's optical path length in our reconstructed images. We believe that this new wavelength scanning based pixel super-resolution approach can provide competitive microscopy solutions for high-resolution and field-portable imaging needs, potentially impacting tele-pathology applications in resource-limited settings.
Super earth interiors and validity of Birch's Law for ultra-high pressure metals and ionic solids
NASA Astrophysics Data System (ADS)
Ware, Lucas Andrew
2015-01-01
Super Earths, recently detected by the Kepler Mission, expand the ensemble of known terrestrial planets beyond our Solar System's limited group. Birch's Law and velocity-density systematics have been crucial in constraining our knowledge of the composition of Earth's mantle and core. Recently published static diamond anvil cell experimental measurements of sound velocities in iron, a key deep element in most super Earth models, are inconsistent with each other with regard to the validity of Birch's Law. We examine the range of validity of Birch's Law for several metallic elements, including iron, and ionic solids shocked with a two-stage light gas gun into the ultra-high pressure, temperature fluid state and make comparisons to the recent static data.
Evaluating Alignment of Shapes by Ensemble Visualization
Raj, Mukund; Mirzargar, Mahsa; Preston, J. Samuel; Kirby, Robert M.; Whitaker, Ross T.
2016-01-01
The visualization of variability in surfaces embedded in 3D, which is a type of ensemble uncertainty visualization, provides a means of understanding the underlying distribution of a collection or ensemble of surfaces. Although ensemble visualization for isosurfaces has been described in the literature, we conduct an expert-based evaluation of various ensemble visualization techniques in a particular medical imaging application: the construction of atlases or templates from a population of images. In this work, we extend contour boxplot to 3D, allowing us to evaluate it against an enumeration-style visualization of the ensemble members and other conventional visualizations used by atlas builders, namely examining the atlas image and the corresponding images/data provided as part of the construction process. We present feedback from domain experts on the efficacy of contour boxplot compared to other modalities when used as part of the atlas construction and analysis stages of their work. PMID:26186768
Examining Chaotic Convection with Super-Parameterization Ensembles
NASA Astrophysics Data System (ADS)
Jones, Todd R.
This study investigates a variety of features present in a new configuration of the Community Atmosphere Model (CAM) variant SP-CAM 2.0. The new configuration (multiple-parameterization CAM, MP-CAM) changes the manner in which the super-parameterization (SP) concept represents physical tendency feedbacks to the large scale by using the mean of 10 independent two-dimensional cloud-permitting model (CPM) curtains in each global model column instead of the conventional single CPM curtain. The climates of the SP and MP configurations are examined to investigate any significant differences caused by the application of convective physical tendencies that are more deterministic in nature, paying particular attention to extreme precipitation events and large-scale weather systems, such as the Madden-Julian Oscillation (MJO). A number of small but significant changes in the mean-state climate are uncovered, and it is found that the new formulation degrades MJO performance. Despite these deficiencies, the ensemble of possible realizations of convective states in the MP configuration allows for analysis of uncertainty in the small-scale solution, lending itself to examination of the weather regimes and physical mechanisms associated with strong, chaotic convection. Methods of quantifying precipitation predictability are explored, and use of the most reliable of these leads to the conclusion that poor precipitation predictability is most directly related to the proximity of the global climate model column state to atmospheric critical points. Secondarily, the predictability is tied to the availability of potential convective energy, the presence of mesoscale convective organization on the CPM grid, and the directive power of the large scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Hongbo; Qiao, Zemin; Liu, Xiao
Highlights: • A sol–gel route is combined with polymerization without using a modifier. • Control of supercritical drying is the key to obtaining super-hydrophobic surfaces. • The whole fabrication is technologically controllable and low cost. • The production rate is higher than 90%. • The method provides a cost-effective way for industrial applications. - Abstract: We successfully synthesized a type of cheap super-hydrophobic hybrid porous material in a sol–gel process. In this route, hydrophilic polymers and a TEOS-based sol are used as precursors, and ultraviolet-initiated polymerization and supercritical fluid drying techniques are combined to fulfill this task. All fabricated samples exhibit lotus-leaf-like surface structures with super-hydrophobicity. The underlying mechanisms are carefully investigated using field-emission scanning electron microscopy (FESEM) and X-ray photoelectron spectroscopy (XPS). We found that a well-controlled drying process is crucial to the formation of such super-hydrophobic surfaces. A production rate as high as 90% is obtained in our route; thus, it might provide a cost-effective way to produce super-hydrophobic hybrid materials for industrial applications.
Dances with Membranes: Breakthroughs from Super-resolution Imaging
Curthoys, Nikki M.; Parent, Matthew; Mlodzianoski, Michael; Nelson, Andrew J.; Lilieholm, Jennifer; Butler, Michael B.; Valles, Matthew; Hess, Samuel T.
2017-01-01
Biological membrane organization mediates numerous cellular functions and has also been connected with an immense number of human diseases. However, until recently, experimental methodologies have been unable to directly visualize the nanoscale details of biological membranes, particularly in intact living cells. Numerous models explaining membrane organization have been proposed, but testing those models has required indirect methods; the desire to directly image proteins and lipids in living cell membranes is a strong motivation for the advancement of technology. The development of super-resolution microscopy has provided powerful tools for quantification of membrane organization at the level of individual proteins and lipids, and many of these tools are compatible with living cells. Previously inaccessible questions are now being addressed, and the field of membrane biology is developing rapidly. This chapter discusses how the development of super-resolution microscopy has led to fundamental advances in the field of biological membrane organization. We summarize the history and some models explaining how proteins are organized in cell membranes, and give an overview of various super-resolution techniques and methods of quantifying super-resolution data. We discuss the application of super-resolution techniques to membrane biology in general, and also with specific reference to the fields of actin and actin-binding proteins, virus infection, mitochondria, immune cell biology, and phosphoinositide signaling. Finally, we present our hopes and expectations for the future of super-resolution microscopy in the field of membrane biology. PMID:26015281
NASA Astrophysics Data System (ADS)
Lee, S.; Petrykin, V.; Molodyk, A.; Samoilenkov, S.; Kaul, A.; Vavilov, A.; Vysotsky, V.; Fetisov, S.
2014-04-01
The SuperOx and SuperOx Japan LLC companies were founded with the goal of developing a cost-effective technology for second generation HTS (2G HTS) tapes by utilizing a combination of the most advanced chemical and physical deposition techniques, together with implementing original tape architectures. In this paper we present a brief overview of our production and experimental facilities and recent results of 2G HTS tape fabrication, and describe the first tests of the tapes in model cables for AC and DC power application.
Ensemble transcript interaction networks: a case study on Alzheimer's disease.
Armañanzas, Rubén; Larrañaga, Pedro; Bielza, Concha
2012-10-01
Systems biology techniques are a topic of recent interest within the neurological field. Computational intelligence (CI) addresses this holistic perspective by means of consensus or ensemble techniques ultimately capable of uncovering new and relevant findings. In this paper, we propose the application of a CI approach based on ensemble Bayesian network classifiers and multivariate feature subset selection to induce probabilistic dependences that could match or unveil biological relationships. The research focuses on the analysis of high-throughput Alzheimer's disease (AD) transcript profiling. The analysis is conducted from two perspectives. First, we compare the expression profiles of hippocampus subregion entorhinal cortex (EC) samples of AD patients and controls. Second, we use the ensemble approach to study four types of samples: EC and dentate gyrus (DG) samples from both patients and controls. Results disclose transcript interaction networks with remarkable structures and genes not directly related to AD by previous studies. The ensemble is able to identify a variety of transcripts that play key roles in other neurological pathologies. Classical statistical assessment by means of non-parametric tests confirms the relevance of the majority of the transcripts. The ensemble approach pinpoints key metabolic mechanisms that could lead to new findings in the pathogenesis and development of AD. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Costa, A.; Folch, A.; Macedonio, G.; Giaccio, B.; Isaia, R.; Smith, V. C.
2012-04-01
Distal and ultra-distal volcanic ash dispersal during a super-eruption was reconstructed for the first time, providing insights into eruption dynamics and the impact of these gigantic events. A novel computational methodology was applied to the ash fallout of the Campanian Ignimbrite (CI), the most powerful volcanic eruption in Europe in the last 200 kyrs. The method uses a 3D time-dependent computational ash dispersion model, an ensemble of wind fields, and hundreds of thickness observations of the CI tephra deposit. Results reveal that 250-300 km3 of fallout material was produced during the eruption, blanketing a region of ~3.7 million km2 with more than 5 mm of fine ash. The model also indicates that the column height was ~37-40 km, and the eruption lasted 2-4 days. The eruption would have caused a volcanic winter within the coldest and driest Heinrich event. Fluorine-bearing leachate from the volcanic ash and acid rain would have further affected food sources and severely impacted Late Middle Paleolithic groups in Southern and Eastern Europe.
Super-hydrophobic, highly adhesive, polydimethylsiloxane (PDMS) surfaces.
Stanton, Morgan M; Ducker, Robert E; MacDonald, John C; Lambert, Christopher R; McGimpsey, W Grant
2012-02-01
Super-hydrophobic surfaces have been fabricated by casting polydimethylsiloxane (PDMS) on a textured substrate of known surface topography, and were characterized using contact angle, atomic force microscopy, surface free energy calculations, and adhesion measurements. The resulting PDMS has a micro-textured surface with a static contact angle of 153.5° and a hysteresis of 27° when using de-ionized water. Unlike many super-hydrophobic materials, the textured PDMS is highly adhesive, allowing water drops as large as 25.0 μL to be inverted. This high adhesion, super-hydrophobic behavior is an illustration of the "petal effect". This rapid, reproducible technique has promising applications in transport and analysis of microvolume samples. Copyright © 2011 Elsevier Inc. All rights reserved.
Verifying and Postprocessing the Ensemble Spread-Error Relationship
NASA Astrophysics Data System (ADS)
Hopson, Tom; Knievel, Jason; Liu, Yubao; Roux, Gregory; Wu, Wanli
2013-04-01
With the increased utilization of ensemble forecasts in weather and hydrologic applications, there is a need to verify their benefit over less expensive deterministic forecasts. One such potential benefit of ensemble systems is their capacity to forecast their own forecast error through the ensemble spread-error relationship. The paper begins by revisiting the limitations of the Pearson correlation alone in assessing this relationship. Next, we introduce two new metrics to consider in assessing the utility of an ensemble's varying dispersion. We argue there are two aspects of an ensemble's dispersion that should be assessed. First, and perhaps more fundamentally: is there enough variability in the ensemble's dispersion to justify the maintenance of an expensive ensemble prediction system (EPS), irrespective of whether the EPS is well-calibrated or not? To diagnose this, the factor that controls the theoretical upper limit of the spread-error correlation can be useful. Second, does the variable dispersion of an ensemble relate to variable expectation of forecast error? Representing the spread-error correlation in relation to its theoretical limit can provide a simple diagnostic of this attribute. A context for these concepts is provided by assessing two operational ensembles: 30-member Western US temperature forecasts for the U.S. Army Test and Evaluation Command and 51-member Brahmaputra River flow forecasts of the Climate Forecast and Applications Project for Bangladesh. Both of these systems utilize a postprocessing technique based on quantile regression (QR) under a step-wise forward selection framework, leading to ensemble forecasts with both good reliability and sharpness. In addition, the methodology utilizes the ensemble's ability to self-diagnose forecast instability to produce calibrated forecasts with informative skill-spread relationships. We will describe both ensemble systems briefly, review the steps used to calibrate the ensemble forecast, and present verification statistics using error-spread metrics, along with figures from operational ensemble forecasts before and after calibration.
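The spread-error diagnostic described above can be illustrated in a few lines. The sketch below (not the paper's verification code; the synthetic ensemble and the Monte Carlo upper-bound estimate are illustrative assumptions) correlates ensemble spread with ensemble-mean error and compares the result against the correlation attainable by a perfectly calibrated ensemble with the same spread climatology.

```python
# Minimal sketch of the spread-error diagnostic: correlate ensemble spread
# with ensemble-mean absolute error, and estimate the theoretical upper bound
# from a synthetic, statistically consistent ensemble sharing the same spreads.
import numpy as np

rng = np.random.default_rng(0)

def spread_error_correlation(ens, obs):
    """ens: (n_times, n_members) forecasts; obs: (n_times,) observations."""
    spread = ens.std(axis=1, ddof=1)
    error = np.abs(ens.mean(axis=1) - obs)
    return np.corrcoef(spread, error)[0, 1], spread

def calibrated_upper_bound(spread, n_rep=200):
    """Monte Carlo estimate of the spread-error correlation attainable by a
    perfectly calibrated ensemble with this spread climatology."""
    cors = []
    for _ in range(n_rep):
        err = np.abs(rng.normal(0.0, spread))   # errors drawn from the spread
        cors.append(np.corrcoef(spread, err)[0, 1])
    return float(np.mean(cors))

# toy example: 500 forecast cases, 30-member ensemble with varying dispersion
true_spread = rng.uniform(0.5, 3.0, size=500)
obs = rng.normal(0.0, true_spread)
ens = rng.normal(0.0, true_spread[:, None], size=(500, 30))

r, spread = spread_error_correlation(ens, obs)
print(f"spread-error correlation: {r:.2f}, "
      f"approx. upper bound: {calibrated_upper_bound(spread):.2f}")
```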
NASA Astrophysics Data System (ADS)
Jünger, Felix; Olshausen, Philipp V.; Rohrbach, Alexander
2016-07-01
Living cells are highly dynamic systems with cellular structures often being below the optical resolution limit. Super-resolution microscopes, usually based on fluorescence cell labelling, are typically too slow to resolve small, dynamic structures. We present a label-free microscopy technique, which can generate thousands of super-resolved, high contrast images at a frame rate of 100 Hertz and without any post-processing. The technique is based on oblique sample illumination with coherent light, an approach believed not to be applicable in life sciences because of too many interference artefacts. However, by circulating an incident laser beam by 360° during one image acquisition, relevant image information is amplified. By combining total internal reflection illumination with dark-field detection, structures as small as 150 nm become separable through local destructive interferences. The technique images local changes in refractive index through scattered laser light and is applied to living mouse macrophages and helical bacteria revealing unexpected dynamic processes.
Jünger, Felix; Olshausen, Philipp v.; Rohrbach, Alexander
2016-01-01
Living cells are highly dynamic systems with cellular structures often being below the optical resolution limit. Super-resolution microscopes, usually based on fluorescence cell labelling, are typically too slow to resolve small, dynamic structures. We present a label-free microscopy technique, which can generate thousands of super-resolved, high contrast images at a frame rate of 100 Hertz and without any post-processing. The technique is based on oblique sample illumination with coherent light, an approach believed not to be applicable in life sciences because of too many interference artefacts. However, by circulating an incident laser beam by 360° during one image acquisition, relevant image information is amplified. By combining total internal reflection illumination with dark-field detection, structures as small as 150 nm become separable through local destructive interferences. The technique images local changes in refractive index through scattered laser light and is applied to living mouse macrophages and helical bacteria revealing unexpected dynamic processes. PMID:27465033
NASA Astrophysics Data System (ADS)
Dixon, Kenneth
A lightning data assimilation technique is developed for use with observations from the World Wide Lightning Location Network (WWLLN). The technique nudges the water vapor mixing ratio toward saturation within 10 km of a lightning observation. This technique is applied to deterministic forecasts of convective events on 29 June 2012, 17 November 2013, and 19 April 2011 as well as an ensemble forecast of the 29 June 2012 event using the Weather Research and Forecasting (WRF) model. Lightning data are assimilated over the first 3 hours of the forecasts, and the subsequent impact on forecast quality is evaluated. The nudged deterministic simulations for all events produce composite reflectivity fields that are closer to observations. For the ensemble forecasts of the 29 June 2012 event, the improvement in forecast quality from lightning assimilation is more subtle than for the deterministic forecasts, suggesting that the lightning assimilation may improve ensemble convective forecasts where conventional observations (e.g., aircraft, surface, radiosonde, satellite) are less dense or unavailable.
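The nudging step described above can be sketched compactly. The code below is a hedged illustration, not the WRF implementation: the column geometry, the Tetens saturation formula and the relaxation weight are illustrative choices standing in for the model's own moisture adjustment.

```python
# Sketch of lightning-based moisture nudging: columns within 10 km of a
# lightning stroke have their water vapor mixing ratio relaxed toward
# saturation (moistening only).
import numpy as np

def saturation_mixing_ratio(T, p):
    """Approximate qv_sat (kg/kg) from temperature (K) and pressure (Pa)
    using the Tetens formula for saturation vapor pressure."""
    es = 610.94 * np.exp(17.625 * (T - 273.15) / (T - 30.11))
    return 0.622 * es / (p - es)

def nudge_qv(qv, T, p, grid_xy, strokes_xy, radius_km=10.0, weight=0.8):
    """Nudge qv toward saturation in columns within radius_km of any stroke."""
    d = np.linalg.norm(grid_xy[:, None, :] - strokes_xy[None, :, :], axis=-1)
    near = (d.min(axis=1) <= radius_km)             # columns near lightning
    qv_sat = saturation_mixing_ratio(T, p)
    qv_new = qv.copy()
    qv_new[near] += weight * (qv_sat[near] - qv[near])
    return np.where(qv_new > qv, qv_new, qv)        # only moisten, never dry

# toy columns: positions in km, plus T, p and qv per column
grid_xy = np.array([[0.0, 0.0], [5.0, 0.0], [50.0, 0.0]])
strokes_xy = np.array([[2.0, 1.0]])
T = np.array([285.0, 284.0, 283.0])
p = np.array([85000.0, 85000.0, 85000.0])
qv = np.array([0.006, 0.007, 0.005])
print(nudge_qv(qv, T, p, grid_xy, strokes_xy))
```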
A Machine Learning Framework for Plan Payment Risk Adjustment.
Rose, Sherri
2016-12-01
To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimation of risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R². Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
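The kind of cross-validated comparison described above can be sketched with standard tooling. The example below uses synthetic data and illustrative candidate learners (not the Truven MarketScan analysis): several regression algorithms plus a stacked ensemble are each scored by cross-validated R².

```python
# Sketch of a cross-validated algorithm comparison within a machine learning
# framework, with a stacked ensemble built from the individual candidates.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=2000, n_features=40, noise=10.0, random_state=0)

candidates = {
    "ols": LinearRegression(),
    "lasso": LassoCV(cv=5),
    "tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
candidates["stacked"] = StackingRegressor(
    estimators=[(k, v) for k, v in candidates.items()],
    final_estimator=LinearRegression(), cv=5)

for name, model in candidates.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:8s} cross-validated R^2 = {r2.mean():.3f}")
```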
Spectroscopic Studies of Double Beta Decays and MOON
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ejiri, H.; Nuclear Science, Czech Technical University, Brehova, Prague, Czech Republic, National Institute of Radiological Sciences, Chiba, 263-8555
2007-10-12
This is a brief review of future spectroscopic experiments of neutrino-less double beta decays (0νββ) and the MOON (Mo Observatory Of Neutrinos) project. Spectroscopic 0νββ experiments of MOON, SuperNEMO and DCBA are planned to study Majorana masses in the quasi-degenerate (QD) and inverted mass hierarchy (IH) regions. MOON aims at 0νββ studies with ν-mass sensitivities of 100-30 meV by means of a super ensemble of multi-layer modules, each consisting of a scintillator plate, two tracking detector planes and a thin ββ source film.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenyon, Scott J.; Bromley, Benjamin C., E-mail: skenyon@cfa.harvard.edu, E-mail: bromley@physics.utah.edu
2014-01-01
We investigate formation mechanisms for icy super-Earth-mass planets orbiting at 2-20 AU around 0.1-0.5 M☉ stars. A large ensemble of coagulation calculations demonstrates a new formation channel: disks composed of large planetesimals with radii of 30-300 km form super-Earths on timescales of ∼1 Gyr. In other gas-poor disks, a collisional cascade grinds planetesimals to dust before the largest planets reach super-Earth masses. Once icy Earth-mass planets form, they migrate through the leftover swarm of planetesimals at rates of 0.01-1 AU Myr⁻¹. On timescales of 10 Myr to 1 Gyr, many of these planets migrate through the disk of leftover planetesimals from semimajor axes of 5-10 AU to 1-2 AU. A few percent of super-Earths might migrate to semimajor axes of 0.1-0.2 AU. When the disk has an initial mass comparable with the minimum-mass solar nebula, scaled to the mass of the central star, the predicted frequency of super-Earths matches the observed frequency.
NASA Astrophysics Data System (ADS)
Henley, E. M.; Pope, E. C. D.
2017-12-01
This commentary concerns recent work on solar wind forecasting by Owens and Riley (2017). The approach taken makes effective use of tools commonly used in terrestrial weather: notably, the generation of an "ensemble" forecast from a simple model, and the application of a "cost-loss" analysis to the resulting probabilistic information to explore the benefit of this forecast to users with different risk appetites. This commentary aims to highlight these useful techniques to the wider space weather audience and to briefly discuss the general context of application of terrestrial weather approaches to space weather.
Mortality risk score prediction in an elderly population using machine learning.
Rose, Sherri
2013-03-01
Standard practice for prediction often relies on parametric regression methods. Interesting new methods from the machine learning literature have been introduced in epidemiologic studies, such as random forest and neural networks. However, a priori, an investigator will not know which algorithm to select and may wish to try several. Here I apply the super learner, an ensembling machine learning approach that combines multiple algorithms into a single algorithm and returns a prediction function with the best cross-validated mean squared error. Super learning is a generalization of stacking methods. I used super learning in the Study of Physical Performance and Age-Related Changes in Sonomans (SPPARCS) to predict death among 2,066 residents of Sonoma, California, aged 54 years or more during the period 1993-1999. The super learner for predicting death (risk score) improved upon all single algorithms in the collection of algorithms, although its performance was similar to that of several algorithms. Super learner outperformed the worst algorithm (neural networks) by 44% with respect to estimated cross-validated mean squared error and had an R² value of 0.201. The improvement of super learner over random forest with respect to R² was approximately 2-fold. Alternatives for risk score prediction include the super learner, which can provide improved performance.
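The super learner idea described above (a generalization of stacking) can be sketched directly: out-of-fold predictions from several candidate learners are combined with non-negative weights chosen to minimize cross-validated squared error. The candidate learners and data below are illustrative assumptions, not the SPPARCS analysis.

```python
# Rough super learner sketch: cross-validated level-one predictions are
# combined via non-negative least squares weights.
import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=1000, n_features=20, noise=15.0, random_state=1)

learners = {
    "ols": LinearRegression(),
    "forest": RandomForestRegressor(n_estimators=200, random_state=1),
    "mlp": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=1),
}

# level-one data: out-of-fold predictions from each candidate learner
Z = np.column_stack([cross_val_predict(m, X, y, cv=10) for m in learners.values()])

# super learner weights: non-negative least squares on the level-one data
w, _ = nnls(Z, y)
w = w / w.sum()

# note: evaluating the weights on the same out-of-fold predictions is slightly
# optimistic; a full implementation would use an outer cross-validation layer
cv_mse = {name: np.mean((Z[:, j] - y) ** 2) for j, name in enumerate(learners)}
cv_mse["super_learner"] = np.mean((Z @ w - y) ** 2)
print("weights:", dict(zip(learners, np.round(w, 2))))
print("cross-validated MSE:", {k: round(v, 1) for k, v in cv_mse.items()})
```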
O'Neill, Liam; Dexter, Franklin
2005-11-01
We compare two techniques for increasing the transparency and face validity of Data Envelopment Analysis (DEA) results for managers at a single decision-making unit: multifactor efficiency (MFE) and non-radial super-efficiency (NRSE). Both methods incorporate the slack values from the super-efficient DEA model to provide a more robust performance measure than radial super-efficiency scores. MFE and NRSE are equivalent for unique optimal solutions and a single output. MFE incorporates the slack values from multiple output variables, whereas NRSE does not. MFE can be more transparent to managers since it involves no additional optimization steps beyond the DEA, whereas NRSE requires several. We compare results for operating room managers at an Iowa hospital evaluating its growth potential for multiple surgical specialties. In addition, we address the problem of upward bias of the slack values of the super-efficient DEA model.
NASA Astrophysics Data System (ADS)
Iglesias, Marco; Sawlan, Zaid; Scavino, Marco; Tempone, Raúl; Wood, Christopher
2018-07-01
In this work, we present the ensemble-marginalized Kalman filter (EnMKF), a sequential algorithm analogous to our previously proposed approach (Ruggeri et al 2017 Bayesian Anal. 12 407–33, Iglesias et al 2018 Int. J. Heat Mass Transfer 116 417–31), for estimating the state and parameters of linear parabolic partial differential equations in initial-boundary value problems when the boundary data are noisy. We apply EnMKF to infer the thermal properties of building walls and to estimate the corresponding heat flux from real and synthetic data. Compared with a modified ensemble Kalman filter (EnKF) that is not marginalized, EnMKF reduces the bias error, avoids the collapse of the ensemble without needing to add inflation, and converges to the mean field posterior using a fraction of the ensemble size required by EnKF. According to our results, the marginalization technique in EnMKF is key to performance improvement with smaller ensembles at any fixed time.
Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles
2016-01-01
Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource intense method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
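A greedy, O(N^2)-style selection of the kind described above can be sketched as follows. This is a hedged illustration on synthetic data, not the published methods: an ensemble scores a molecule by its best docking score across selected conformations, and conformations are added one at a time whenever they improve the ROC AUC.

```python
# Greedy forward selection of receptor conformations maximizing ensemble AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def greedy_ensemble(scores, active, max_size=5):
    """scores: (n_molecules, n_conformations), lower = better docking score."""
    n_conf = scores.shape[1]
    selected, best_auc = [], -np.inf
    while len(selected) < max_size:
        best_j = None
        for j in range(n_conf):
            if j in selected:
                continue
            trial = selected + [j]
            ens_score = scores[:, trial].min(axis=1)      # best score wins
            auc = roc_auc_score(active, -ens_score)       # lower score = active
            if auc > best_auc:
                best_auc, best_j = auc, j
        if best_j is None:                                 # no further improvement
            break
        selected.append(best_j)
    return selected, best_auc

# toy data: 300 molecules, 12 conformations
rng = np.random.default_rng(2)
active = rng.random(300) < 0.2
scores = rng.normal(0.0, 1.0, size=(300, 12)) - 1.5 * active[:, None] * rng.random(12)
print(greedy_ensemble(scores, active))
```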
NASA Astrophysics Data System (ADS)
Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.
2017-12-01
Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.
Importance of rotational adiabaticity in collisions of CO2 super rotors with Ar and He
NASA Astrophysics Data System (ADS)
Murray, Matthew J.; Ogden, Hannah M.; Mullin, Amy S.
2018-02-01
The collision dynamics of optically centrifuged CO2 with Ar and He are reported here. The optical centrifuge produces an ensemble of CO2 molecules in high rotational states (with J ~ 220) with oriented angular momentum. Polarization-dependent high-resolution transient IR absorption spectroscopy was used to measure the relaxation dynamics in the presence of Ar or He by probing the CO2 J = 76 and 100 states with Erot = 2306 and 3979 cm⁻¹, respectively. The data show that He relaxes the CO2 super rotors more quickly than Ar. Doppler-broadened line profiles show that He collisions induce substantially larger rotation-to-translation energy transfer. CO2 super rotors have greater orientational anisotropy with He collisions and the anisotropy from the He collisions persists longer than with Ar. Super rotor relaxation dynamics are discussed in terms of mass effects related to classical gyroscope physics and collisional rotational adiabaticity.
Clustering-Based Ensemble Learning for Activity Recognition in Smart Homes
Jurek, Anna; Nugent, Chris; Bi, Yaxin; Wu, Shengli
2014-01-01
Application of sensor-based technology within activity monitoring systems is becoming a popular technique within the smart environment paradigm. Nevertheless, the use of such an approach generates complex constructs of data, which subsequently requires the use of intricate activity recognition techniques to automatically infer the underlying activity. This paper explores a cluster-based ensemble method as a new solution for the purposes of activity recognition within smart environments. With this approach activities are modelled as collections of clusters built on different subsets of features. A classification process is performed by assigning a new instance to its closest cluster from each collection. Two different sensor data representations have been investigated, namely numeric and binary. Following the evaluation of the proposed methodology it has been demonstrated that the cluster-based ensemble method can be successfully applied as a viable option for activity recognition. Results following exposure to data collected from a range of activities indicated that the ensemble method had the ability to perform with accuracies of 94.2% and 97.5% for numeric and binary data, respectively. These results outperformed a range of single classifiers considered as benchmarks. PMID:25014095
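The cluster-based ensemble described above can be sketched under simplifying assumptions: each ensemble member clusters the training data on a different random feature subset, each cluster is labelled by the majority class of its members, and a test instance receives the label of its closest cluster from each member, combined by majority vote. The data and parameters below are illustrative, not the smart-home sensor data.

```python
# Minimal cluster-based ensemble classifier sketch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=3)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=3)

members = []
for _ in range(15):
    feats = rng.choice(X.shape[1], size=8, replace=False)
    km = KMeans(n_clusters=6, n_init=10, random_state=3).fit(Xtr[:, feats])
    # majority class of the training points in each cluster
    labels = np.array([np.bincount(ytr[km.labels_ == c]).argmax()
                       for c in range(km.n_clusters)])
    members.append((feats, km, labels))

# each member votes with the label of the closest cluster; majority vote wins
votes = np.stack([labels[km.predict(Xte[:, feats])]
                  for feats, km, labels in members])
pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble accuracy:", (pred == yte).mean())
```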
Clustering-based ensemble learning for activity recognition in smart homes.
Jurek, Anna; Nugent, Chris; Bi, Yaxin; Wu, Shengli
2014-07-10
Application of sensor-based technology within activity monitoring systems is becoming a popular technique within the smart environment paradigm. Nevertheless, the use of such an approach generates complex constructs of data, which subsequently requires the use of intricate activity recognition techniques to automatically infer the underlying activity. This paper explores a cluster-based ensemble method as a new solution for the purposes of activity recognition within smart environments. With this approach activities are modelled as collections of clusters built on different subsets of features. A classification process is performed by assigning a new instance to its closest cluster from each collection. Two different sensor data representations have been investigated, namely numeric and binary. Following the evaluation of the proposed methodology it has been demonstrated that the cluster-based ensemble method can be successfully applied as a viable option for activity recognition. Results following exposure to data collected from a range of activities indicated that the ensemble method had the ability to perform with accuracies of 94.2% and 97.5% for numeric and binary data, respectively. These results outperformed a range of single classifiers considered as benchmarks.
TEACHER-PRODUCED INSTRUCTIONAL FILMS IN CHEMISTRY, 8MM AND SUPER 8.
ERIC Educational Resources Information Center
O'CONNOR, ROD; SLABAUGH, WENDELL
TECHNIQUES FOR PRODUCING 8MM INSTRUCTIONAL FILMS IN CHEMISTRY ARE PRESENTED. IN PART I A PHILOSOPHY OF TEACHER-PRODUCED FILMS IS DEVELOPED, EMPHASIZING THE VALUE OF THE LOCAL SETTING, AND CUSTOM-MADE CONTENTS. APPLICATIONS SUGGESTED ARE (1) TECHNIQUE INSTRUCTION, (2) FILMED EXPERIMENTS, (3) INSTRUMENT FAMILIARIZATION, (4) LECTURE AIDS, AND (5)…
Super-Resolution Microscopy Techniques and Their Potential for Applications in Radiation Biophysics.
Eberle, Jan Philipp; Rapp, Alexander; Krufczik, Matthias; Eryilmaz, Marion; Gunkel, Manuel; Erfle, Holger; Hausmann, Michael
2017-01-01
Fluorescence microscopy is an essential tool for imaging tagged biological structures. Due to the wave nature of light, the resolution of a conventional fluorescence microscope is limited laterally to about 200 nm and axially to about 600 nm, which is often referred to as the Abbe limit. This hampers the observation of important biological structures and dynamics in the nano-scaled range ~10 nm to ~100 nm. Consequently, various methods have been developed to circumvent this limit of resolution. Super-resolution microscopy comprises several of those methods employing physical and/or chemical properties, such as optical/instrumental modifications and specific labeling of samples. In this article, we will give a brief insight into a variety of selected optical microscopy methods reaching super-resolution beyond the Abbe limit. We will survey three different concepts in connection to biological applications in radiation research without making a claim to be complete.
Simulation of an ensemble of future climate time series with an hourly weather generator
NASA Astrophysics Data System (ADS)
Caporali, E.; Fatichi, S.; Ivanov, V. Y.; Kim, J.
2010-12-01
There is evidence that climate change is occurring in many regions of the world. The necessity of climate change predictions at the local scale and fine temporal resolution is thus warranted for hydrological, ecological, geomorphological, and agricultural applications that can provide thematic insights into the corresponding impacts. Numerous downscaling techniques have been proposed to bridge the gap between the spatial scales adopted in General Circulation Models (GCM) and regional analyses. Nevertheless, the time and spatial resolutions obtained as well as the type of meteorological variables may not be sufficient for detailed studies of climate change effects at the local scales. In this context, this study presents a stochastic downscaling technique that makes use of an hourly weather generator to simulate time series of predicted future climate. Using a Bayesian approach, the downscaling procedure derives distributions of factors of change for several climate statistics from a multi-model ensemble of GCMs. Factors of change are sampled from their distributions using a Monte Carlo technique to entirely account for the probabilistic information obtained with the Bayesian multi-model ensemble. Factors of change are subsequently applied to the statistics derived from observations to re-evaluate the parameters of the weather generator. The weather generator can reproduce a wide set of climate variables and statistics over a range of temporal scales, from extremes, to the low-frequency inter-annual variability. The final result of such a procedure is the generation of an ensemble of hourly time series of meteorological variables that can be considered as representative of future climate, as inferred from GCMs. The generated ensemble of scenarios also accounts for the uncertainty derived from multiple GCMs used in downscaling. Applications of the procedure in reproducing present and future climates are presented for different locations world-wide: Tucson (AZ), Detroit (MI), and Firenze (Italy). The stochastic downscaling is carried out with eight GCMs from the CMIP3 multi-model dataset (IPCC 4AR, A1B scenario).
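The factor-of-change step described above lends itself to a short sketch. The example below is a simplified stand-in for the full Bayesian weather-generator workflow: per-model factors of change are computed from control and future GCM runs, a distribution over them is sampled by Monte Carlo (a normal fit here is an assumption), and the sampled factors rescale an observed statistic to yield an ensemble of future parameter values.

```python
# Sketch of Monte Carlo sampling of factors of change from a multi-model
# ensemble, applied to an observed climate statistic.
import numpy as np

rng = np.random.default_rng(4)

# monthly mean precipitation (mm/day) from 8 GCMs, control and future runs
gcm_control = rng.uniform(1.5, 3.0, size=8)
gcm_future = gcm_control * rng.uniform(0.8, 1.2, size=8)

factors = gcm_future / gcm_control                   # per-model factors of change

# simple multi-model distribution for the factor (normal fit is an assumption)
mu, sigma = factors.mean(), factors.std(ddof=1)
sampled_factors = rng.normal(mu, sigma, size=1000)   # Monte Carlo sampling

observed_mean = 2.1                                  # observed statistic (mm/day)
future_ensemble = observed_mean * sampled_factors    # ensemble of future values

lo, hi = np.percentile(future_ensemble, [5, 95])
print(f"future monthly mean precipitation: {future_ensemble.mean():.2f} "
      f"mm/day (5-95%: {lo:.2f}-{hi:.2f})")
```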
Super-resolution optical microscopy for studying membrane structure and dynamics.
Sezgin, Erdinc
2017-07-12
Investigation of cell membrane structure and dynamics requires high spatial and temporal resolution. The spatial resolution of conventional light microscopy is limited due to the diffraction of light. However, recent developments in microscopy enabled us to access the nano-scale regime spatially, thus to elucidate the nanoscopic structures in the cellular membranes. In this review, we will explain the resolution limit, address the working principles of the most commonly used super-resolution microscopy techniques and summarise their recent applications in the biomembrane field.
Quantifying selective alignment of ensemble nitrogen-vacancy centers in (111) diamond
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tahara, Kosuke; Ozawa, Hayato; Iwasaki, Takayuki
2015-11-09
Selective alignment of nitrogen-vacancy (NV) centers in diamond is an important technique towards its applications. Quantification of the alignment ratio is necessary to design optimized diamond samples. However, this is not a straightforward problem for a dense ensemble of NV centers. We estimate the alignment ratio of ensemble NV centers along the [111] direction in (111) diamond by optically detected magnetic resonance measurements. Diamond films deposited by N₂-doped chemical vapor deposition have NV center densities over 1 × 10¹⁵ cm⁻³ and alignment ratios over 75%. Although the spin coherence time (T₂) is limited to a few μs by electron spins of nitrogen impurities, the combination of the selective alignment and the high density can be a possible way to optimize NV-containing diamond samples for sensing applications.
Underwater video enhancement using multi-camera super-resolution
NASA Astrophysics Data System (ADS)
Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.
2017-12-01
Image spatial resolution is critical in several fields such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that makes it possible to enhance the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models
NASA Astrophysics Data System (ADS)
Keller, J. D.; Bach, L.; Hense, A.
2012-12-01
The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding is a development based on the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates for the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To avoid all ensemble perturbations converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
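The self-breeding cycle described above can be illustrated with a toy model. In the sketch below a fixed linear map stands in for the short-range mesoscale forecast (an assumption made purely for illustration): perturbations are propagated, orthogonalized in the ensemble subspace so they do not all collapse onto the leading growing mode, rescaled to a target norm, and re-added to the state.

```python
# Toy self-breeding cycle: propagate, orthogonalize (ensemble transform
# stand-in via QR), rescale, repeat.
import numpy as np

rng = np.random.default_rng(5)
n, k = 40, 6                                      # state dimension, ensemble size
M = np.eye(n) + 0.1 * rng.normal(size=(n, n))     # stand-in forecast operator
x0 = rng.normal(size=n)                           # reference initial state
norm0 = 0.1                                       # target perturbation amplitude

P = norm0 * rng.normal(size=(n, k))               # initial perturbations (columns)
for cycle in range(50):
    grown = M @ (x0[:, None] + P) - (M @ x0)[:, None]   # propagate perturbations
    Q, _ = np.linalg.qr(grown)                    # orthogonalize in ensemble subspace
    P = norm0 * Q[:, :k]                          # rescale to the target norm

# leading bred vectors approximate the fastest-growing local error structures
growth = np.linalg.norm(M @ P, axis=0) / np.linalg.norm(P, axis=0)
print("per-cycle growth factors of bred vectors:", np.round(growth, 2))
```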
Extensions and applications of ensemble-of-trees methods in machine learning
NASA Astrophysics Data System (ADS)
Bleich, Justin
Ensemble-of-trees algorithms have emerged to the forefront of machine learning due to their ability to generate high forecasting accuracy for a wide array of regression and classification problems. Classic ensemble methodologies such as random forests (RF) and stochastic gradient boosting (SGB) rely on algorithmic procedures to generate fits to data. In contrast, more recent ensemble techniques such as Bayesian Additive Regression Trees (BART) and Dynamic Trees (DT) focus on an underlying Bayesian probability model to generate the fits. These new probability model-based approaches show much promise versus their algorithmic counterparts, but also offer substantial room for improvement. The first part of this thesis focuses on methodological advances for ensemble-of-trees techniques with an emphasis on the more recent Bayesian approaches. In particular, we focus on extensions of BART in four distinct ways. First, we develop a more robust implementation of BART for both research and application. We then develop a principled approach to variable selection for BART as well as the ability to naturally incorporate prior information on important covariates into the algorithm. Next, we propose a method for handling missing data that relies on the recursive structure of decision trees and does not require imputation. Last, we relax the assumption of homoskedasticity in the BART model to allow for parametric modeling of heteroskedasticity. The second part of this thesis returns to the classic algorithmic approaches in the context of classification problems with asymmetric costs of forecasting errors. First we consider the performance of RF and SGB more broadly and demonstrate its superiority to logistic regression for applications in criminology with asymmetric costs. Next, we use RF to forecast unplanned hospital readmissions upon patient discharge with asymmetric costs taken into account. Finally, we explore the construction of stable decision trees for forecasts of violence during probation hearings in court systems.
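The asymmetric-cost classification setting mentioned above can be sketched with a random forest whose class weights encode the cost ratio. The synthetic data and the 5:1 ratio below are illustrative assumptions, not the thesis's criminology or readmission analyses.

```python
# Random forest under symmetric vs asymmetric misclassification costs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, weights=[0.85, 0.15], random_state=6)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=6)

for weights in (None, {0: 1, 1: 5}):          # symmetric vs 5:1 asymmetric costs
    rf = RandomForestClassifier(n_estimators=300, class_weight=weights,
                                random_state=6).fit(Xtr, ytr)
    tn, fp, fn, tp = confusion_matrix(yte, rf.predict(Xte)).ravel()
    print(f"class_weight={weights}: false negatives={fn}, false positives={fp}")
```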
Kopek, Benjamin G.; Paez-Segala, Maria G.; Shtengel, Gleb; Sochacki, Kem A.; Sun, Mei G.; Wang, Yalin; Xu, C. Shan; van Engelenburg, Schuyler B.; Taraska, Justin W.; Looger, Loren L.; Hess, Harald F.
2017-01-01
Our groups have recently developed related approaches for sample preparation for super-resolution imaging within endogenous cellular environments using correlative light and electron microscopy (CLEM). Four distinct techniques for preparing and acquiring super-resolution CLEM datasets on aldehyde-fixed specimens are provided, including Tokuyasu cryosectioning, whole-cell mount, cell unroofing and platinum replication, and resin embedding and sectioning. Choice of the best protocol for a given application depends on a number of criteria that are discussed in detail. Tokuyasu cryosectioning is relatively rapid but is limited to small, delicate specimens. Whole-cell mount has the simplest sample preparation but is restricted to surface structures. Cell unroofing and platinum replica creates high-contrast, 3-dimensional images of the cytoplasmic surface of the plasma membrane, but is more challenging than whole-cell mount. Resin embedding permits serial sectioning of large samples, but is limited to osmium-resistant probes, and is technically difficult. Expected results from these protocols include super-resolution localization (~10–50 nm) of fluorescent targets within the context of electron microscopy ultrastructure, which can help address cell biological questions. These protocols can be completed in 2–7 days, are compatible with a number of super-resolution imaging protocols, and are broadly applicable across biology. PMID:28384138
The 2015 super-resolution microscopy roadmap
NASA Astrophysics Data System (ADS)
Hell, Stefan W.; Sahl, Steffen J.; Bates, Mark; Zhuang, Xiaowei; Heintzmann, Rainer; Booth, Martin J.; Bewersdorf, Joerg; Shtengel, Gleb; Hess, Harald; Tinnefeld, Philip; Honigmann, Alf; Jakobs, Stefan; Testa, Ilaria; Cognet, Laurent; Lounis, Brahim; Ewers, Helge; Davis, Simon J.; Eggeling, Christian; Klenerman, David; Willig, Katrin I.; Vicidomini, Giuseppe; Castello, Marco; Diaspro, Alberto; Cordes, Thorben
2015-11-01
Far-field optical microscopy using focused light is an important tool in a number of scientific disciplines including chemical, (bio)physical and biomedical research, particularly with respect to the study of living cells and organisms. Unfortunately, the applicability of the optical microscope is limited, since the diffraction of light imposes limitations on the spatial resolution of the image. Consequently the details of, for example, cellular protein distributions, can be visualized only to a certain extent. Fortunately, recent years have witnessed the development of ‘super-resolution’ far-field optical microscopy (nanoscopy) techniques such as stimulated emission depletion (STED), ground state depletion (GSD), reversible saturated optical (fluorescence) transitions (RESOLFT), photoactivation localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), structured illumination microscopy (SIM) or saturated structured illumination microscopy (SSIM), all in one way or another addressing the problem of the limited spatial resolution of far-field optical microscopy. While SIM achieves a two-fold improvement in spatial resolution compared to conventional optical microscopy, STED, RESOLFT, PALM/STORM, or SSIM have all gone beyond, pushing the limits of optical image resolution to the nanometer scale. Consequently, all super-resolution techniques open new avenues of biomedical research. Because the field is so young, the potential capabilities of different super-resolution microscopy approaches have yet to be fully explored, and uncertainties remain when considering the best choice of methodology. Thus, even for experts, the road to the future is sometimes shrouded in mist. The super-resolution optical microscopy roadmap of Journal of Physics D: Applied Physics addresses this need for clarity. It provides guidance to the outstanding questions through a collection of short review articles from experts in the field, giving a thorough discussion on the concepts underlying super-resolution optical microscopy, the potential of different approaches, the importance of label optimization (such as reversible photoswitchable proteins) and applications in which these methods will have a significant impact. Mark Bates, Christian Eggeling
Ensemble Solute Transport in 2-D Operator-Stable Random Fields
NASA Astrophysics Data System (ADS)
Monnig, N. D.; Benson, D. A.
2006-12-01
The heterogeneous velocity field that exists at many scales in an aquifer will typically cause a dissolved solute plume to grow at a rate faster than Fick's Law predicts. Some statistical model must be adopted to account for the aquifer structure that engenders the velocity heterogeneity. A fractional Brownian motion (fBm) model has been shown to create the long-range correlation that can produce continually faster-than-Fickian plume growth. Previous fBm models have assumed isotropic scaling (defined here by a scalar Hurst coefficient). Motivated by field measurements of aquifer hydraulic conductivity, recent techniques were developed to construct random fields with anisotropic scaling with a self-similarity parameter that is defined by a matrix. The growth of ensemble plumes is analyzed for transport through 2-D "operator- stable" fBm hydraulic conductivity (K) fields. Both the longitudinal and transverse Hurst coefficients are important to both plume growth rates and the timing and duration of breakthrough. Smaller Hurst coefficients in the transverse direction lead to more "continuity" or stratification in the direction of transport. The result is continually faster-than-Fickian growth rates, highly non-Gaussian ensemble plumes, and a longer tail early in the breakthrough curve. Contrary to some analytic stochastic theories for monofractal K fields, the plume growth rate never exceeds Mercado's [1967] purely stratified aquifer growth rate of plume apparent dispersivity proportional to mean distance. Apparent super-Mercado growth must be the result of other factors, such as larger plumes corresponding to either a larger initial plume size or greater variance of the ln(K) field.
NASA Astrophysics Data System (ADS)
Shen, Feifei; Xu, Dongmei; Xue, Ming; Min, Jinzhong
2017-07-01
This study examines the impacts of assimilating radar radial velocity (Vr) data for the simulation of hurricane Ike (2008) with two different ensemble generation techniques in the framework of the hybrid ensemble-variational (EnVar) data assimilation system of Weather Research and Forecasting model. For the generation of ensemble perturbations we apply two techniques, the ensemble transform Kalman filter (ETKF) and the ensemble of data assimilation (EDA). For the ETKF-EnVar, the forecast ensemble perturbations are updated by the ETKF, while for the EDA-EnVar, the hybrid is employed to update each ensemble member with perturbed observations. The ensemble mean is analyzed by the hybrid method with flow-dependent ensemble covariance for both EnVar. The sensitivity of analyses and forecasts to the two applied ensemble generation techniques is investigated in our current study. It is found that the EnVar system is rather stable with different ensemble update techniques in terms of its skill on improving the analyses and forecasts. The EDA-EnVar-based ensemble perturbations are likely to include slightly less organized spatial structures than those in ETKF-EnVar, and the perturbations of the latter are constructed more dynamically. Detailed diagnostics reveal that both of the EnVar schemes not only produce positive temperature increments around the hurricane center but also systematically adjust the hurricane location with the hurricane-specific error covariance. On average, the analysis and forecast from the ETKF-EnVar have slightly smaller errors than that from the EDA-EnVar in terms of track, intensity, and precipitation forecast. Moreover, ETKF-EnVar yields better forecasts when verified against conventional observations.
Three-Dimensional Super-Resolution: Theory, Modeling, and Field Tests Results
NASA Technical Reports Server (NTRS)
Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Vincent E.; Hines, Glenn; Pierrottet, Diego; Reisse, Robert
2014-01-01
Many flash lidar applications continue to demand higher three-dimensional image resolution beyond the current state-of-the-art technology of the detector arrays and their associated readout circuits. Even with the available number of focal plane pixels, the required number of photons for illuminating all the pixels may impose impractical requirements on the laser pulse energy or the receiver aperture size. Therefore, image resolution enhancement by means of a super-resolution algorithm in near real time presents a very attractive solution for a wide range of flash lidar applications. This paper describes a superresolution technique and illustrates its performance and merits for generating three-dimensional image frames at a video rate.
The Fukushima-137Cs deposition case study: properties of the multi-model ensemble.
Solazzo, E; Galmarini, S
2015-01-01
In this paper we analyse the properties of an eighteen-member ensemble generated by the combination of five atmospheric dispersion modelling systems and six meteorological data sets. The models have been applied to the total deposition of (137)Cs, following the nuclear accident of the Fukushima power plant in March 2011. Analysis is carried out with the aim of determining whether the ensemble is reliable, sufficiently diverse and if its accuracy and precision can be improved. Although ensemble practice is becoming more and more popular in many geophysical applications, good practice guidelines are missing as to how models should be combined for the ensembles to offer an improvement over single model realisations. We show that the models in the ensemble share large portions of bias and variance and make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble mean with the advantage of being poorly correlated, making it possible to save computational resources and reduce noise (and thus improve accuracy). We further propose and discuss two methods for selecting subsets of skilful and diverse members, and prove that, in the context of the present analysis, their mean outscores the full ensemble mean in terms of both accuracy (error) and precision (variance). Copyright © 2014. Published by Elsevier Ltd.
Image quality improvement in cone-beam CT using the super-resolution technique.
Oyama, Asuka; Kumagai, Shinobu; Arai, Norikazu; Takata, Takeshi; Saikawa, Yusuke; Shiraishi, Kenshiro; Kobayashi, Takenori; Kotoku, Jun'ichi
2018-04-05
This study was conducted to improve cone-beam computed tomography (CBCT) image quality using the super-resolution technique, a method of inferring a high-resolution image from a low-resolution image. This technique is used with two matrices, so-called dictionaries, constructed respectively from high-resolution and low-resolution image bases. For this study, a CBCT image, as a low-resolution image, is represented as a linear combination of atoms, the image bases in the low-resolution dictionary. The corresponding super-resolution image was inferred by multiplying the coefficients and the high-resolution dictionary atoms extracted from planning CT images. To evaluate the proposed method, we computed the root mean square error (RMSE) and structural similarity (SSIM). The resulting RMSE and SSIM between the super-resolution images and the planning CT images were, respectively, as much as 0.81 and 1.29 times better than those obtained without using the super-resolution technique. We used the super-resolution technique to improve CBCT image quality.
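The coupled-dictionary step described above can be sketched under strong simplifications: paired low/high-resolution patches sampled from training images serve directly as dictionary atoms (in place of learned dictionaries), a new low-resolution patch is coded sparsely against the low-resolution dictionary, and the same coefficients are applied to the high-resolution atoms. The patch sizes, degradation model and data below are illustrative.

```python
# Toy sparse-coding super-resolution with paired low/high-resolution dictionaries.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(7)

def downsample(patch_hr):
    """Toy degradation: 2x2 block averaging of an 8x8 high-res patch."""
    return patch_hr.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# build paired dictionaries from random training patches (columns are atoms)
train_hr = rng.normal(size=(500, 8, 8))
D_hr = np.array([p.ravel() for p in train_hr]).T                 # (64, 500)
D_lr = np.array([downsample(p).ravel() for p in train_hr]).T     # (16, 500)

# a new low-resolution patch (its unknown high-res counterpart is the target)
target_hr = rng.normal(size=(8, 8))
y_lr = downsample(target_hr).ravel()

coefs = orthogonal_mp(D_lr, y_lr, n_nonzero_coefs=10)            # sparse code
patch_sr = (D_hr @ coefs).reshape(8, 8)                          # inferred HR patch

print("low-res residual:", np.linalg.norm(D_lr @ coefs - y_lr).round(3))
print("high-res RMSE vs truth:",
      np.sqrt(((patch_sr - target_hr) ** 2).mean()).round(3))
```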
Genetic programming based ensemble system for microarray data classification.
Liu, Kun-Hong; Tong, Muchenxuan; Xie, Shu-Tong; Yee Ng, Vincent To
2015-01-01
Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a genetic programming (GP) based new ensemble system (named GPES), which can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework with three operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and they become more and more accurate in the evolutionary process. The feature selection technique and balanced subsampling technique are applied to increase the diversity in each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting data automatically. The performance of GPES is evaluated using five binary class and six multiclass microarray datasets, and results show that the algorithm can achieve better results in most cases compared with some other ensemble systems. By using elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved.
Genetic Programming Based Ensemble System for Microarray Data Classification
Liu, Kun-Hong; Tong, Muchenxuan; Xie, Shu-Tong; Yee Ng, Vincent To
2015-01-01
Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a genetic programming (GP) based new ensemble system (named GPES), which can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework with three operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and they become more and more accurate in the evolutionary process. The feature selection technique and balanced subsampling technique are applied to increase the diversity in each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting data automatically. The performance of GPES is evaluated using five binary class and six multiclass microarray datasets, and results show that the algorithm can achieve better results in most cases compared with some other ensemble systems. By using elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved. PMID:25810748
NASA Astrophysics Data System (ADS)
Gelb, Lev D.; Chakraborty, Somendra Nath
2011-12-01
The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase.
Detection of chewing from piezoelectric film sensor signals using ensemble classifiers.
Farooq, Muhammad; Sazonov, Edward
2016-08-01
Selection and use of pattern recognition algorithms is application dependent. In this work, we explored the use of several ensembles of weak classifiers to classify signals captured from a wearable sensor system to detect food intake based on chewing. Three sensor signals (piezoelectric sensor, accelerometer, and hand-to-mouth gesture) were collected from 12 subjects in free-living conditions for 24 hrs. Sensor signals were divided into 10 second epochs, and for each epoch a combination of time- and frequency-domain features was computed. In this work, we present a comparison of three different ensemble techniques: boosting (AdaBoost), bootstrap aggregation (bagging) and stacking, each trained with 3 different weak classifiers (Decision Trees, Linear Discriminant Analysis (LDA) and Logistic Regression). The type of feature normalization used can also impact the classification results. For each ensemble method, three feature normalization techniques (no normalization, z-score normalization, and min-max normalization) were tested. A 12-fold cross-validation scheme was used to evaluate the performance of each model, where the performance was evaluated in terms of precision, recall, and accuracy. Best results achieved here show an improvement of about 4% over our previous algorithms.
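The comparison grid described above (three ensemble techniques crossed with three normalization schemes) can be sketched schematically. The features below are synthetic stand-ins for the piezoelectric and accelerometer epochs, and the classifier settings are illustrative.

```python
# Schematic comparison of boosting, bagging and stacking under three
# feature-normalization options.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1200, n_features=30, random_state=8)

ensembles = {
    "boosting": AdaBoostClassifier(DecisionTreeClassifier(max_depth=2), random_state=8),
    "bagging": BaggingClassifier(DecisionTreeClassifier(max_depth=4), random_state=8),
    "stacking": StackingClassifier(
        estimators=[("lda", LinearDiscriminantAnalysis()),
                    ("tree", DecisionTreeClassifier(max_depth=4)),
                    ("logreg", LogisticRegression(max_iter=1000))]),
}
scalers = {"none": None, "z-score": StandardScaler(), "min-max": MinMaxScaler()}

for ens_name, ens in ensembles.items():
    for sc_name, sc in scalers.items():
        model = make_pipeline(sc, ens) if sc is not None else ens
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{ens_name:9s} + {sc_name:8s} accuracy = {acc:.3f}")
```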
NASA Astrophysics Data System (ADS)
Khajehei, Sepideh; Moradkhani, Hamid
2015-04-01
Reliable and accurate hydrologic ensemble forecasts are subject to various sources of uncertainty, including meteorological forcing, initial conditions, model structure, and model parameters. Producing reliable and skillful precipitation ensemble forecasts is one approach to reduce the total uncertainty in hydrological applications. Currently, numerical weather prediction (NWP) models are developing ensemble forecasts for various temporal ranges. Raw products from NWP models are known to be biased in both mean and spread. Given this, there is a need for methods that are able to generate reliable ensemble forecasts for hydrological applications. One of the common techniques is to apply statistical procedures in order to generate an ensemble forecast from NWP-generated single-value forecasts. The procedure is based on the bivariate probability distribution between the observation and the single-value precipitation forecast. However, one of the assumptions of the current method is fitting a Gaussian distribution to the marginal distributions of the observed and modeled climate variables. Here, we have described and evaluated a Bayesian approach based on copula functions to develop an ensemble precipitation forecast from the conditional distribution of single-value precipitation forecasts. Copula functions are known as the multivariate joint distribution of univariate marginal distributions, which are presented as an alternative procedure in capturing the uncertainties related to meteorological forcing. Copulas are capable of modeling the joint distribution of two variables with any level of correlation and dependency. This study is conducted over a sub-basin in the Columbia River Basin in the USA using monthly precipitation forecasts from the Climate Forecast System (CFS) with 0.5° × 0.5° spatial resolution to reproduce the observations. The verification is conducted on a different period, and the performance of the procedure is compared with the Ensemble Pre-Processor approach currently used by National Weather Service River Forecast Centers in the USA.
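The copula-based idea described above can be condensed into a short sketch: the dependence between single-value forecasts and observations is modelled with a Gaussian copula on rank-transformed (normal-score) data, and an ensemble is generated by sampling the conditional distribution of the observation given a new forecast. The Gaussian copula, the empirical marginals and the synthetic data are simplifying assumptions, not the Bayesian procedure of the study.

```python
# Conditional ensemble generation with a Gaussian copula on normal scores.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(9)

# historical pairs of monthly forecast and observed precipitation (mm)
fcst = rng.gamma(2.0, 30.0, size=400)
obs = 0.8 * fcst + rng.gamma(2.0, 10.0, size=400)

def normal_scores(x):
    """Empirical probability integral transform followed by the normal quantile."""
    u = rankdata(x) / (len(x) + 1.0)
    return norm.ppf(u)

z_f, z_o = normal_scores(fcst), normal_scores(obs)
rho = np.corrcoef(z_f, z_o)[0, 1]                 # Gaussian copula parameter

def conditional_ensemble(new_fcst, n_members=50):
    """Sample an observation ensemble conditional on a new single-value forecast."""
    u_new = (np.searchsorted(np.sort(fcst), new_fcst) + 0.5) / (len(fcst) + 1.0)
    z_new = norm.ppf(np.clip(u_new, 1e-3, 1 - 1e-3))
    z_samples = rng.normal(rho * z_new, np.sqrt(1.0 - rho ** 2), size=n_members)
    return np.quantile(obs, norm.cdf(z_samples))  # back-transform via obs marginal

members = conditional_ensemble(new_fcst=90.0)
print(f"rho = {rho:.2f}, ensemble mean = {members.mean():.1f} mm, "
      f"spread = {members.std():.1f} mm")
```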
Lithium niobate ultrasonic transducer design for Enhanced Oil Recovery.
Wang, Zhenjun; Xu, Yuanming; Gu, Yuting
2015-11-01
Due to the strong piezoelectric effect of lithium niobate, a new idea of using lithium niobate to design a high-power ultrasonic transducer for Enhanced Oil Recovery technology is proposed. The purpose of this paper is to lay the foundation for further research and development of high-power ultrasonic oil production techniques. The main contents are as follows: first, the structural design technique and application of a new high-power ultrasonic transducer are introduced; second, an experiment on reducing the viscosity of super heavy oil with this transducer is performed, and the optimum ultrasonic parameters for reducing the viscosity of super heavy oil are given. Experimental results show that large heavy molecules in super heavy oil can be cracked into light hydrocarbon substances under the strong cavitation effect caused by high-intensity ultrasonic waves. The experiments show that it is feasible to design a high-power ultrasonic transducer for ultrasonic oil production technology using lithium niobate. Copyright © 2015 Elsevier B.V. All rights reserved.
Machine Learning Predictions of a Multiresolution Climate Model Ensemble
NASA Astrophysics Data System (ADS)
Anderson, Gemma J.; Lucas, Donald D.
2018-05-01
Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
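The multiresolution emulation idea described above can be sketched compactly: a random forest is trained on many cheap low-resolution runs plus a handful of high-resolution runs, with the model parameters and a resolution flag as inputs, and is then queried at high resolution for unseen parameter values. The "climate model" below is a synthetic stand-in for real simulations.

```python
# Random forest emulator trained on a mixed-resolution perturbed parameter ensemble.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(10)

def fake_climate_model(params, high_res):
    """Stand-in for top-of-atmosphere flux: resolution shifts the response."""
    base = 0.8 * params[:, 0] - 0.5 * params[:, 1] + 0.3 * params[:, 0] * params[:, 1]
    return base + (0.4 if high_res else 0.0) + rng.normal(0, 0.05, len(params))

p_lo = rng.uniform(0, 1, size=(300, 2))      # many low-resolution members
p_hi = rng.uniform(0, 1, size=(20, 2))       # few expensive high-resolution members
X = np.vstack([np.column_stack([p_lo, np.zeros(len(p_lo))]),
               np.column_stack([p_hi, np.ones(len(p_hi))])])
y = np.concatenate([fake_climate_model(p_lo, False),
                    fake_climate_model(p_hi, True)])

rf = RandomForestRegressor(n_estimators=500, random_state=10).fit(X, y)

p_new = rng.uniform(0, 1, size=(5, 2))
pred_hi = rf.predict(np.column_stack([p_new, np.ones(5)]))
print("emulated high-resolution responses:", np.round(pred_hi, 2))
```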
An experimental toolbox for the generation of cold and ultracold polar molecules
NASA Astrophysics Data System (ADS)
Zeppenfeld, Martin; Gantner, Thomas; Glöckner, Rosa; Ibrügger, Martin; Koller, Manuel; Prehn, Alexander; Wu, Xing; Chervenkov, Sotir; Rempe, Gerhard
2017-01-01
Cold and ultracold molecules enable fascinating applications in quantum science. We present our toolbox of techniques to generate the required molecule ensembles, including buffergas cooling, centrifuge deceleration and optoelectrical Sisyphus cooling. We obtain excellent control over both the motional and internal molecular degrees of freedom, allowing us to aim at various applications.
Wegel, Eva; Göhler, Antonia; Lagerholm, B Christoffer; Wainman, Alan; Uphoff, Stephan; Kaufmann, Rainer; Dobbie, Ian M
2016-06-06
Many biological questions require fluorescence microscopy with a resolution beyond the diffraction limit of light. Super-resolution methods such as Structured Illumination Microscopy (SIM), STimulated Emission Depletion (STED) microscopy and Single Molecule Localisation Microscopy (SMLM) enable an increase in image resolution beyond the classical diffraction-limit. Here, we compare the individual strengths and weaknesses of each technique by imaging a variety of different subcellular structures in fixed cells. We chose examples ranging from well separated vesicles to densely packed three dimensional filaments. We used quantitative and correlative analyses to assess the performance of SIM, STED and SMLM with the aim of establishing a rough guideline regarding the suitability for typical applications and to highlight pitfalls associated with the different techniques.
A Statistical Description of Neural Ensemble Dynamics
Long, John D.; Carmena, Jose M.
2011-01-01
The growing use of multi-channel neural recording techniques in behaving animals has produced rich datasets that hold immense potential for advancing our understanding of how the brain mediates behavior. One limitation of these techniques is that they do not provide important information about the underlying anatomical connections among the recorded neurons within an ensemble. Inferring these connections is often intractable because the set of possible interactions grows exponentially with ensemble size. This is a fundamental challenge one confronts when interpreting these data. Unfortunately, the combination of expert knowledge and ensemble data is often insufficient for selecting a unique model of these interactions. Our approach shifts away from modeling the network diagram of the ensemble toward analyzing changes in the dynamics of the ensemble as they relate to behavior. Our contribution consists of adapting techniques from signal processing and Bayesian statistics to track the dynamics of ensemble data on time-scales comparable with behavior. We employ a Bayesian estimator to weigh prior information against the available ensemble data, and use an adaptive quantization technique to aggregate poorly estimated regions of the ensemble data space. Importantly, our method is capable of detecting changes in both the magnitude and structure of correlations among neurons missed by firing rate metrics. We show that this method is scalable across a wide range of time-scales and ensemble sizes. Lastly, the performance of this method on both simulated and real ensemble data is used to demonstrate its utility.
NASA Astrophysics Data System (ADS)
Brown, James; Seo, Dong-Jun
2010-05-01
Operational forecasts of hydrometeorological and hydrologic variables often contain large uncertainties, for which ensemble techniques are increasingly used. However, the utility of ensemble forecasts depends on the unbiasedness of the forecast probabilities. We describe a technique for quantifying and removing biases from ensemble forecasts of hydrometeorological and hydrologic variables, intended for use in operational forecasting. The technique makes no a priori assumptions about the distributional form of the variables, which is often unknown or difficult to model parametrically. The aim is to estimate the conditional cumulative distribution function (ccdf) of the observed variable given a (possibly biased) real-time ensemble forecast from one or several forecasting systems (multi-model ensembles). The technique is based on Bayesian optimal linear estimation of indicator variables, and is analogous to indicator cokriging (ICK) in geostatistics. By developing linear estimators for the conditional expectation of the observed variable at many thresholds, ICK provides a discrete approximation of the full ccdf. Since ICK minimizes the conditional error variance of the indicator expectation at each threshold, it effectively minimizes the Continuous Ranked Probability Score (CRPS) when infinitely many thresholds are employed. However, the ensemble members used as predictors in ICK, and other bias-correction techniques, are often highly cross-correlated, both within and between models. Thus, we propose an orthogonal transform of the predictors used in ICK, which is analogous to using their principal components in the linear system of equations. This leads to a well-posed problem in which a minimum number of predictors are used to provide maximum information content in terms of the total variance explained. The technique is used to bias-correct precipitation ensemble forecasts from the NCEP Global Ensemble Forecast System (GEFS), for which independent validation results are presented. Extension to multimodel ensembles from the NCEP GFS and Short Range Ensemble Forecast (SREF) systems is also proposed.
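A minimal sketch of the indicator-regression idea described above is given below, assuming synthetic forecast-observation pairs: indicators are computed at a few thresholds, the cross-correlated ensemble predictors are orthogonalized with principal components, and one linear estimator per threshold approximates the conditional cdf. The thresholds, member counts and data are illustrative, not the GEFS configuration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic training set: 500 past cases, 20 highly cross-correlated ensemble members.
n_cases, n_members = 500, 20
signal = rng.gamma(2.0, 2.0, n_cases)                      # "true" precipitation signal
ensemble = signal[:, None] + rng.normal(0, 1.0, (n_cases, n_members))
obs = signal + rng.normal(0, 0.5, n_cases)

thresholds = [1.0, 5.0, 10.0]                              # mm, hypothetical

# Orthogonalize the predictors (analogous to using their principal components).
pcs = PCA(n_components=5).fit_transform(ensemble)

# One linear estimator per threshold for the indicator expectation,
# giving a discrete approximation of the conditional cdf.
models = {}
for t in thresholds:
    indicator = (obs <= t).astype(float)
    models[t] = LinearRegression().fit(pcs, indicator)

# Approximate P(obs <= t | forecast) for the first case, clipped to [0, 1].
ccdf = {t: float(np.clip(m.predict(pcs[:1])[0], 0, 1)) for t, m in models.items()}
print(ccdf)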
NASA Astrophysics Data System (ADS)
Cifelli, R.; Mahoney, K. M.; Webb, R. S.; McCormick, B.
2017-12-01
To ensure structural and operational safety of dams and other water management infrastructure, water resources managers and engineers require information about the potential for heavy precipitation. The methods and data used to estimate extreme rainfall amounts for managing risk are based on 40-year-old science and in need of improvement. The need to evaluate new approaches based on the best science available has led the states of Colorado and New Mexico to engage a body of scientists and engineers in an innovative "ensemble approach" to updating extreme precipitation estimates. NOAA is at the forefront of one of three technical approaches that make up the "ensemble study"; the three approaches are conducted concurrently and in collaboration with each other. One approach is the conventional deterministic, "storm-based" method, another is a risk-based regional precipitation frequency estimation tool, and the third is an experimental approach utilizing NOAA's state-of-the-art High Resolution Rapid Refresh (HRRR) physically-based dynamical weather prediction model. The goal of the overall project is to use the individual strengths of these different methods to define an updated and broadly acceptable state of the practice for evaluation and design of dam spillways. This talk will highlight the NOAA research and NOAA's role in the overarching goal to better understand and characterize extreme precipitation estimation uncertainty. The research led by NOAA explores a novel high-resolution dataset and post-processing techniques using a super-ensemble of hourly forecasts from the HRRR model. We also investigate how this rich dataset may be combined with statistical methods to optimally cast the data in probabilistic frameworks. NOAA expertise in the physical processes that drive extreme precipitation is also employed to develop careful testing and improved understanding of the limitations of older estimation methods and assumptions. The process of decision making in the midst of uncertainty is a major part of this study. We will speak to how the three approaches may be used in concert with one another to manage risk and enhance resiliency in the midst of uncertainty. Finally, the presentation will also address the implications of including climate change in future extreme precipitation estimation studies.
A comparison of breeding and ensemble transform vectors for global ensemble generation
NASA Astrophysics Data System (ADS)
Deng, Guo; Tian, Hua; Li, Xiaoli; Chen, Jing; Gong, Jiandong; Jiao, Meiyan
2012-02-01
To compare the initial perturbation techniques using breeding vectors and ensemble transform vectors, three ensemble prediction systems using both initial perturbation methods but with different ensemble member sizes based on the spectral model T213/L31 are constructed at the National Meteorological Center, China Meteorological Administration (NMC/CMA). A series of ensemble verification scores such as forecast skill of the ensemble mean, ensemble resolution, and ensemble reliability are introduced to identify the most important attributes of ensemble forecast systems. The results indicate that the ensemble transform technique is superior to the breeding vector method in light of the anomaly correlation coefficient (ACC), a deterministic attribute of the ensemble mean; the root-mean-square error (RMSE) and spread, which are probabilistic attributes; and the continuous ranked probability score (CRPS) and its decomposition. The advantage of the ensemble transform approach is attributed to the orthogonality among its ensemble perturbations as well as its consistency with the data assimilation system. Therefore, this study may serve as a reference for configuration of the best ensemble prediction system to be used in operation.
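The verification scores named above are standard; the short sketch below shows how RMSE of the ensemble mean, ensemble spread and the anomaly correlation coefficient might be computed for a toy ensemble. The data and member count are synthetic, and the climatology used for the anomalies is an assumption.

import numpy as np

def ensemble_scores(forecasts, obs, climatology):
    """forecasts: (n_members, n_points); obs, climatology: (n_points,)."""
    mean = forecasts.mean(axis=0)
    rmse = np.sqrt(np.mean((mean - obs) ** 2))
    spread = np.sqrt(np.mean(forecasts.var(axis=0, ddof=1)))
    fa, oa = mean - climatology, obs - climatology          # forecast and observed anomalies
    acc = np.sum(fa * oa) / np.sqrt(np.sum(fa ** 2) * np.sum(oa ** 2))
    return rmse, spread, acc

rng = np.random.default_rng(2)
truth = rng.normal(0, 1, 1000)
members = truth + rng.normal(0, 0.7, (15, 1000))            # 15-member synthetic ensemble
obs = truth + rng.normal(0, 0.3, 1000)
print(ensemble_scores(members, obs, np.zeros(1000)))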
Petersson, N. Anders; Sjogreen, Bjorn
2015-07-20
We develop a fourth order accurate finite difference method for solving the three-dimensional elastic wave equation in general heterogeneous anisotropic materials on curvilinear grids. The proposed method is an extension of the method for isotropic materials previously described by Sjögreen and Petersson (2012) [11]. The proposed method discretizes the anisotropic elastic wave equation in second order formulation, using a node centered finite difference method that satisfies the principle of summation by parts. The summation by parts technique results in a provably stable numerical method that is energy conserving. We also generalize and evaluate the super-grid far-field technique for truncating unbounded domains. Unlike the commonly used perfectly matched layers (PML), the super-grid technique is stable for general anisotropic materials, because it is based on a coordinate stretching combined with an artificial dissipation. Moreover, the discretization satisfies an energy estimate, proving that the numerical approximation is stable. We demonstrate by numerical experiments that sufficiently wide super-grid layers result in very small artificial reflections. Applications of the proposed method are demonstrated by three-dimensional simulations of anisotropic wave propagation in crystals.
Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina
2015-09-07
Antifreeze proteins (AFPs) play a pivotal role in the antifreeze effect of overwintering organisms. They have a wide range of applications in numerous fields, such as improving the production of crops and the quality of frozen foods. Accurate identification of AFPs may provide important clues to decipher the underlying mechanisms of AFPs in ice-binding and to facilitate the selection of the most appropriate AFPs for several applications. Based on an ensemble learning technique, this study proposes an AFP identification system called AFP-Ensemble. In this system, random forest classifiers are trained by different training subsets and then aggregated into a consensus classifier by majority voting. The resulting predictor yields a sensitivity of 0.892, a specificity of 0.940, an accuracy of 0.938 and a balanced accuracy of 0.916 on an independent dataset, which are far better than the results obtained by previous methods. These results reveal that AFP-Ensemble is an effective and promising predictor for large-scale determination of AFPs. The detailed feature analysis in this study may give useful insights into the molecular mechanisms of AFP-ice interactions and provide guidance for the related experimental validation. A web server has been designed to implement the proposed method.
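As a rough illustration of the ensemble scheme described above (random forests trained on different training subsets, aggregated by majority voting), here is a sketch on synthetic, imbalanced data; the subset construction, tree counts and dataset are assumptions and not AFP-Ensemble's actual features or settings.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=600, n_features=40, weights=[0.8, 0.2], random_state=3)

# Train several random forests on different balanced training subsets.
forests = []
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
for seed in range(5):
    sub_neg = rng.choice(neg, size=len(pos), replace=False)   # undersample the majority class
    idx = np.concatenate([pos, sub_neg])
    forests.append(RandomForestClassifier(n_estimators=200, random_state=seed).fit(X[idx], y[idx]))

# Aggregate the constituent classifiers into a consensus prediction by majority voting.
votes = np.array([f.predict(X) for f in forests])
consensus = (votes.mean(axis=0) >= 0.5).astype(int)
print("consensus accuracy:", (consensus == y).mean())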
Ensemble Clustering using Semidefinite Programming with Applications
Singh, Vikas; Mukherjee, Lopamudra; Peng, Jiming; Xu, Jinhui
2011-01-01
In this paper, we study the ensemble clustering problem, where the input is in the form of multiple clustering solutions. The goal of ensemble clustering algorithms is to aggregate the solutions into one solution that maximizes the agreement in the input ensemble. We obtain several new results for this problem. Specifically, we show that the notion of agreement under such circumstances can be better captured using a 2D string encoding rather than a voting strategy, which is common among existing approaches. Our optimization proceeds by first constructing a non-linear objective function which is then transformed into a 0–1 Semidefinite program (SDP) using novel convexification techniques. This model can be subsequently relaxed to a polynomial time solvable SDP. In addition to the theoretical contributions, our experimental results on standard machine learning and synthetic datasets show that this approach leads to improvements not only in terms of the proposed agreement measure but also the existing agreement measures based on voting strategies. In addition, we identify several new application scenarios for this problem. These include combining multiple image segmentations and generating tissue maps from multiple-channel Diffusion Tensor brain images to identify the underlying structure of the brain.
Arroyo-Camejo, Silvia; Adam, Marie-Pierre; Besbes, Mondher; Hugonin, Jean-Paul; Jacques, Vincent; Greffet, Jean-Jacques; Roch, Jean-François; Hell, Stefan W; Treussart, François
2013-12-23
Nitrogen-vacancy (NV) color centers in nanodiamonds are highly promising for bioimaging and sensing. However, resolving individual NV centers within nanodiamond particles and the controlled addressing and readout of their spin state has remained a major challenge. Spatially stochastic super-resolution techniques cannot provide this capability in principle, whereas coordinate-controlled super-resolution imaging methods, like stimulated emission depletion (STED) microscopy, have been predicted to fail in nanodiamonds. Here we show that, contrary to these predictions, STED can resolve single NV centers in 40-250 nm sized nanodiamonds with a resolution of ≈10 nm. Even multiple adjacent NVs located in single nanodiamonds can be imaged individually down to relative distances of ≈15 nm. Far-field optical super-resolution of NVs inside nanodiamonds is highly relevant for bioimaging applications of these fluorescent nanolabels. The targeted addressing and readout of individual NV(-) spins inside nanodiamonds by STED should also be of high significance for quantum sensing and information applications.
Robust isotropic super-resolution by maximizing a Laplace posterior for MRI volumes
NASA Astrophysics Data System (ADS)
Han, Xian-Hua; Iwamoto, Yutaro; Shiino, Akihiko; Chen, Yen-Wei
2014-03-01
Magnetic resonance imaging can only acquire volume data with finite resolution due to various factors. In particular, the resolution in one direction (such as the slice direction) is much lower than in the others (such as the in-plane directions), yielding unrealistic visualizations. This study explores the reconstruction of MRI isotropic-resolution volumes from three orthogonal scans. The proposed super-resolution reconstruction is formulated as a maximum a posteriori (MAP) problem, which relies on the generation model of the acquired scans from the unknown high-resolution volume. Generally, the ensemble of deviations of the reconstructed high-resolution (HR) volume from the available LR ones in the MAP is represented as a Gaussian distribution, which usually results in some noise and artifacts in the reconstructed HR volume. Therefore, this paper investigates a robust super-resolution approach by formulating the deviation set as a Laplace distribution, which assumes sparsity in the deviation ensemble based on the insight that large deviations appear only around some unexpected regions. In addition, in order to achieve a reliable HR MRI volume, we integrate priors such as bilateral total variation (BTV) and non-local means (NLM) into the proposed MAP framework for suppressing artifacts and enriching visual detail. We validate the proposed robust SR strategy using MRI mouse data with high resolution in two directions and low resolution in one direction, imaged in three orthogonal scans: axial, coronal and sagittal planes. Experiments verify that the proposed strategy can achieve much better HR MRI volumes than the conventional MAP method even with a very high magnification factor of 10.
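A compact way to state such a reconstruction is as a MAP minimisation; the formulation below is only a sketch with generic blur/downsampling operators and regularisation weights, which are notational assumptions rather than the paper's exact symbols:

\hat{X} = \arg\min_{X} \sum_{k=1}^{3} \left\| D_k H_k X - Y_k \right\|_{1} + \lambda\,\mathrm{BTV}(X) + \mu\,\mathrm{NLM}(X),

where Y_k are the three orthogonal low-resolution scans, H_k and D_k denote blur and downsampling along the corresponding slice direction, and the \ell_1 data term follows from the Laplace assumption on the deviation ensemble (a Gaussian assumption would yield an \ell_2 term instead).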
NASA Astrophysics Data System (ADS)
Santos, Léonard; Thirel, Guillaume; Perrin, Charles
2017-04-01
Errors made by hydrological models may come from a problem in parameter estimation, uncertainty on observed measurements, numerical problems and from the model conceptualization that simplifies the reality. Here we focus on this last issue of hydrological modeling. One of the solutions to reduce structural uncertainty is to use a multimodel method, taking advantage of the great number and the variability of existing hydrological models. In particular, because different models are not similarly good in all situations, using multimodel approaches can improve the robustness of modeled outputs. Traditionally, in hydrology, multimodel methods are based on the output of the model (the simulated flow series). The aim of this poster is to introduce a different approach based on the internal variables of the models. The method is inspired by the SUper MOdel (SUMO, van den Berge et al., 2011) developed for climatology. The idea of the SUMO method is to correct the internal variables of a model taking into account the values of the internal variables of (an)other model(s). This correction is made bilaterally between the different models. The ensemble of the different models constitutes a super model in which all the models exchange information on their internal variables with each other at each time step. Due to this continuity in the exchanges, this multimodel algorithm is more dynamic than traditional multimodel methods. The method will first be tested using two GR4J models (in a state-space representation) with different parameterizations. The results will be presented and compared to traditional multimodel methods that will serve as benchmarks. In the future, other rainfall-runoff models will be used in the super model. Reference: van den Berge, L. A., Selten, F. M., Wiegerinck, W., and Duane, G. S. (2011). A multi-model ensemble method that combines imperfect models through learning. Earth System Dynamics, 2(1):161-177.
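The bilateral exchange of internal states can be illustrated with a toy sketch like the one below, where two scalar "stores" are nudged toward each other after each time step; the step functions and coupling strengths are invented for illustration and are not the GR4J or SUMO equations.

import numpy as np

def supermodel_step(states, step_fns, coupling, dt=1.0):
    """One super-model step: each model advances, then its internal state is
    nudged toward the states of the other models (bilateral exchange).
    states: list of state vectors; step_fns: per-model one-step functions;
    coupling[i][j]: nudging strength of model j on model i (all hypothetical)."""
    new_states = [f(s) for f, s in zip(step_fns, states)]
    for i, si in enumerate(new_states):
        for j, sj in enumerate(new_states):
            if i != j:
                si = si + dt * coupling[i][j] * (sj - si)
        new_states[i] = si
    return new_states

# Two toy stores with different parameterizations (illustrative only).
f1 = lambda s: 0.95 * s + 1.0
f2 = lambda s: 0.90 * s + 1.2
states = [np.array([10.0]), np.array([12.0])]
for _ in range(5):
    states = supermodel_step(states, [f1, f2], coupling=[[0, 0.1], [0.1, 0]])
print(states)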
Super-resolution fluorescence microscopy by stepwise optical saturation
Zhang, Yide; Nallathamby, Prakash D.; Vigil, Genevieve D.; Khan, Aamir A.; Mason, Devon E.; Boerckel, Joel D.; Roeder, Ryan K.; Howard, Scott S.
2018-01-01
Super-resolution fluorescence microscopy is an important tool in biomedical research for its ability to discern features smaller than the diffraction limit. However, due to its difficult implementation and high cost, super-resolution microscopy is not feasible in many applications. In this paper, we propose and demonstrate a saturation-based super-resolution fluorescence microscopy technique that can be easily implemented and requires neither additional hardware nor complex post-processing. The method is based on the principle of stepwise optical saturation (SOS), where M steps of raw fluorescence images are linearly combined to generate an image with an M-fold increase in resolution compared with conventional diffraction-limited images. For example, linearly combining (scaling and subtracting) two images obtained at regular powers extends the resolution by a factor of 1.4 beyond the diffraction limit. The resolution improvement in SOS microscopy is theoretically infinite but practically is limited by the signal-to-noise ratio. We perform simulations and experimentally demonstrate super-resolution microscopy with both one-photon (confocal) and multiphoton excitation fluorescence. We show that with the multiphoton modality, SOS microscopy can provide super-resolution imaging deep in scattering samples.
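The two-step "scale and subtract" idea can be illustrated with a toy 1-D calculation, assuming a simple low-saturation expansion F(I) ≈ a1·I + a2·I² for the fluorescence response; the coefficients, powers and Gaussian profile below are illustrative, not the paper's values.

import numpy as np

x = np.linspace(-2, 2, 2001)
sigma = 0.5
psf = np.exp(-x**2 / (2 * sigma**2))                        # diffraction-limited profile

a1, a2, I0 = 1.0, -0.2, 1.0
image_P  = a1 * (I0 * psf) + a2 * (I0 * psf) ** 2           # acquired at power P
image_2P = a1 * (2 * I0 * psf) + a2 * (2 * I0 * psf) ** 2   # acquired at power 2P

# Linear combination that cancels the first-order term and keeps the psf^2 term,
# whose profile is narrower than the original psf.
sos = image_2P - 2 * image_P

def fwhm(profile):
    above = np.where(profile >= profile.max() / 2)[0]
    return x[above[-1]] - x[above[0]]

# The SOS profile is noticeably narrower (sqrt(2) for a pure Gaussian psf^2).
print(round(fwhm(image_P), 3), round(fwhm(np.abs(sos)), 3))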
Ensemble Pruning for Glaucoma Detection in an Unbalanced Data Set.
Adler, Werner; Gefeller, Olaf; Gul, Asma; Horn, Folkert K; Khan, Zardad; Lausen, Berthold
2016-12-07
Random forests are successful classifier ensemble methods consisting of typically 100 to 1000 classification trees. Ensemble pruning techniques reduce the computational cost, especially the memory demand, of random forests by reducing the number of trees without relevant loss of performance or even with increased performance of the sub-ensemble. The application to the problem of an early detection of glaucoma, a severe eye disease with low prevalence, based on topographical measurements of the eye background faces specific challenges. We examine the performance of ensemble pruning strategies for glaucoma detection in an unbalanced data situation. The data set consists of 102 topographical features of the eye background of 254 healthy controls and 55 glaucoma patients. We compare the area under the receiver operating characteristic curve (AUC), and the Brier score on the total data set, in the majority class, and in the minority class of pruned random forest ensembles obtained with strategies based on the prediction accuracy of greedily grown sub-ensembles, the uncertainty weighted accuracy, and the similarity between single trees. To validate the findings and to examine the influence of the prevalence of glaucoma in the data set, we additionally perform a simulation study with lower prevalences of glaucoma. In glaucoma classification all three pruning strategies lead to improved AUC and smaller Brier scores on the total data set with sub-ensembles as small as 30 to 80 trees compared to the classification results obtained with the full ensemble consisting of 1000 trees. In the simulation study, we were able to show that the prevalence of glaucoma is a critical factor and lower prevalence decreases the performance of our pruning strategies. The memory demand for glaucoma classification in an unbalanced data situation based on random forests could effectively be reduced by the application of pruning strategies without loss of performance in a population with increased risk of glaucoma.
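One of the pruning ideas mentioned above, greedily growing a small sub-ensemble by validation performance, might look like the sketch below; the synthetic imbalanced data, forest size and 30-tree target are assumptions and do not reproduce the glaucoma study's setup.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic unbalanced problem standing in for the glaucoma data.
X, y = make_classification(n_samples=800, n_features=30, weights=[0.85, 0.15], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
tree_probs = np.array([t.predict_proba(X_val)[:, 1] for t in forest.estimators_])

selected, remaining = [], list(range(len(forest.estimators_)))
for _ in range(30):                      # grow a 30-tree sub-ensemble greedily by AUC
    best_auc, best_t = -1.0, None
    for t in remaining:
        auc = roc_auc_score(y_val, tree_probs[selected + [t]].mean(axis=0))
        if auc > best_auc:
            best_auc, best_t = auc, t
    selected.append(best_t)
    remaining.remove(best_t)

full_auc = roc_auc_score(y_val, tree_probs.mean(axis=0))
print("full ensemble AUC:", full_auc, "pruned (30 trees) AUC:", best_auc)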
Deriving Global Convection Maps From SuperDARN Measurements
NASA Astrophysics Data System (ADS)
Gjerloev, J. W.; Waters, C. L.; Barnes, R. J.
2018-04-01
A new statistical modeling technique for determining the global ionospheric convection is described. The technique is based on principal component regression (PCR) of Super Dual Auroral Radar Network (SuperDARN) observations and is an advanced version of the PCR technique that Waters et al. (https://doi.org/10.1002/2015JA021596) applied to SuperMAG data. While SuperMAG ground magnetic field perturbations are vector measurements, SuperDARN provides line-of-sight measurements of the ionospheric convection flow. Each line-of-sight flow has a known azimuth (or direction), which must be converted into the actual vector flow; however, the component perpendicular to the azimuth direction is unknown. Our method uses historical data from the SuperDARN database and PCR to determine a fill-in model convection distribution for any given universal time. The fill-in data process is driven by a list of state descriptors (magnetic indices and the solar zenith angle). The final solution is then derived from a spherical cap harmonic fit to the SuperDARN measurements and the fill-in model. When compared with the standard SuperDARN fill-in model, we find that our fill-in model provides improved solutions, and the final solutions are in better agreement with the SuperDARN measurements. Our solutions are far less dynamic than the standard SuperDARN solutions, which we interpret as being due to magnetosphere-ionosphere inertia and communication delays that are neglected in the standard SuperDARN technique but inherently included in our approach. We argue that the magnetosphere-ionosphere system has inertia that prevents the global convection from changing abruptly in response to an interplanetary magnetic field change.
On hydrophilicity improvement of the porous anodic alumina film by hybrid nano/micro structuring
NASA Astrophysics Data System (ADS)
Wang, Weichao; Zhao, Wei; Wang, Kaige; Wang, Lei; Wang, Xuewen; Wang, Shuang; Zhang, Chen; Bai, Jintao
2017-09-01
In both laboratory and industry, tremendous attention is paid to discovering an effective technique to produce uniform, controllable and (super) hydrophilic surfaces over large areas that are useful in a wide range of applications. In this investigation, by combining porous anodic alumina (PAA) film with nano-structures and a microarray of aluminum, the hydrophilicity of the hybrid nano-micro structure has been significantly improved. It is found that several factors affect the hydrophilicity of the film, such as the size and aspect ratio of the microarray and the thickness of the nano-PAA film. Compared with pure nano-PAA films and the microarray alone, the hybrid nano-micro structure provides a uniform surface with significantly better hydrophilicity. The improvement can be up to 84%. Also, this technique exhibits good stability and repeatability for industrial production. By optimizing the thickness of the nano-PAA film and the aspect ratio of the micro-structures, super-hydrophilicity can be reached. This study has obvious prospects in the fields of the chemical industry, biomedical engineering and lab-on-a-chip applications.
NASA Astrophysics Data System (ADS)
Zhao, Liang; Xu, Shun; Tu, Yu-Song; Zhou, Xin
2017-06-01
Abstract not available. Project supported by the National Natural Science Foundation for Outstanding Young Scholars, China (Grant No. 11422542), the National Natural Science Foundation of China (Grant Nos. 11605151 and 11675138), and the Shanghai Supercomputer Center of China and the Special Program for Applied Research on Super Computation of the NSFC-Guangdong Joint Fund (second phase).
2007-04-01
Rixen, M.; Book, J. W.; Martin, P. J.; Pinardi, N.; Oddo, P.; Chiggiato, J.; Russo, N. [Report documentation fragment; only a partial title is recoverable: "... prediction: an operational example using a Kalman filter in the Adriatic Sea".]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petkov, Valeri; Prasai, Binay; Shastri, Sarvjit
Practical applications require the production and usage of metallic nanocrystals (NCs) in large ensembles. Besides, due to their cluster-bulk solid duality, metallic NCs exhibit a large degree of structural diversity. This poses the question as to what atomic-scale basis is to be used when the structure–function relationship for metallic NCs is to be quantified precisely. In this paper, we address the question by studying bi-functional Fe core-Pt skin type NCs optimized for practical applications. In particular, the cluster-like Fe core and skin-like Pt surface of the NCs exhibit superparamagnetic properties and a superb catalytic activity for the oxygen reduction reaction, respectively. We determine the atomic-scale structure of the NCs by non-traditional resonant high-energy X-ray diffraction coupled to atomic pair distribution function analysis. Using the experimental structure data we explain the observed magnetic and catalytic behavior of the NCs in a quantitative manner. Lastly, we demonstrate that NC ensemble-averaged 3D positions of atoms obtained by advanced X-ray scattering techniques are a very proper basis for not only establishing but also quantifying the structure–function relationship for the increasingly complex metallic NCs explored for practical applications.
NASA Technical Reports Server (NTRS)
Salik, J.
1984-01-01
The application of the ion beam technique to the nitriding of steels is described. It is indicated that the technique can be successfully applied to nitriding. Some of the structural changes obtained by this technique are similar to those obtained by ion nitriding. The main difference is the absence of the iron nitride diffraction lines. It is found that the dependence of the resultant microhardness on beam voltage for Super Nitralloy is different from that of 304 stainless steel.
NASA Astrophysics Data System (ADS)
Soltanzadeh, I.; Azadi, M.; Vakili, G. A.
2011-07-01
Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
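A minimal sketch of Gaussian BMA weight estimation by EM is shown below; it simplifies the full BMA formulation (no member bias correction, a single common variance), and the synthetic seven-member temperature ensemble and iteration count are illustrative assumptions.

import numpy as np

def bma_gaussian(forecasts, obs, n_iter=200):
    """forecasts: (n_times, n_members), obs: (n_times,).
    Returns member weights and a common predictive variance."""
    n_t, n_m = forecasts.shape
    w = np.full(n_m, 1.0 / n_m)
    var = np.var(obs - forecasts.mean(axis=1))
    for _ in range(n_iter):
        # E-step: responsibility of each member for each observation
        dens = np.exp(-0.5 * (obs[:, None] - forecasts) ** 2 / var) / np.sqrt(2 * np.pi * var)
        z = w * dens
        z /= z.sum(axis=1, keepdims=True)
        # M-step: update weights and variance
        w = z.mean(axis=0)
        var = np.sum(z * (obs[:, None] - forecasts) ** 2) / n_t
    return w, var

rng = np.random.default_rng(4)
truth = rng.normal(15, 5, 300)                       # e.g. 2-m temperature over a training window
members = truth[:, None] + rng.normal([0.5, -1.0, 2.0, 0.0, 1.5, -0.5, 0.8], 1.5, (300, 7))
w, var = bma_gaussian(members, truth)
print("BMA weights:", np.round(w, 3))
print("weighted ensemble mean (first case):", members[0] @ w)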
Gelb, Lev D; Chakraborty, Somendra Nath
2011-12-14
The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase.
Theory of nonlinear optical response of ensembles of double quantum dots
NASA Astrophysics Data System (ADS)
Sitek, Anna; Machnikowski, Paweł
2009-09-01
We study theoretically the time-resolved four-wave mixing (FWM) response of an ensemble of pairs of quantum dots undergoing radiative recombination. At short (picosecond) delay times, the response signal shows beats that may be dominated by the subensemble of resonant pairs, which gives access to the information on the interdot coupling. At longer delay times, the decay of the FWM signal is governed by two rates which result from the collective interaction between the two dots and the radiation modes. The two rates correspond to the subradiant and super-radiant components in the radiative decay. Coupling between the dots enhances the collective effects and makes them observable even when the average energy mismatch between the dots is relatively large.
Selecting a climate model subset to optimise key ensemble properties
NASA Astrophysics Data System (ADS)
Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.
2018-02-01
End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
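As a toy illustration of subset selection under a flexible cost function, the sketch below exhaustively searches small subsets that minimise present-day bias while penalising loss of spread; the cost weights, ensemble and observations are synthetic assumptions and not the paper's optimisation criteria or solver.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

# Synthetic "ensemble" of 10 model climatologies over 50 grid points plus an observation.
obs = rng.normal(0, 1, 50)
models = obs + rng.normal(rng.normal(0, 0.8, (10, 1)), 0.5, (10, 50))   # biased, correlated members

def cost(subset, alpha=1.0):
    """Toy cost: absolute bias of the subset mean plus a penalty if the subset
    spread collapses relative to the full ensemble (weights are illustrative)."""
    sub = models[list(subset)]
    bias = np.abs(sub.mean(axis=0) - obs).mean()
    spread_ratio = sub.std(axis=0).mean() / models.std(axis=0).mean()
    return bias + alpha * max(0.0, 1.0 - spread_ratio)

best = min(combinations(range(10), 4), key=cost)       # exhaustive search over 4-member subsets
print("selected subset:", best)
print("full-ensemble mean bias:", np.abs(models.mean(axis=0) - obs).mean())
print("subset mean bias:", np.abs(models[list(best)].mean(axis=0) - obs).mean())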
NASA Astrophysics Data System (ADS)
Newman, A. J.; Clark, M. P.; Nijssen, B.; Wood, A.; Gutmann, E. D.; Mizukami, N.; Longman, R. J.; Giambelluca, T. W.; Cherry, J.; Nowak, K.; Arnold, J.; Prein, A. F.
2016-12-01
Gridded precipitation and temperature products are inherently uncertain due to myriad factors. These include interpolation from a sparse observation network, measurement representativeness, and measurement errors. Despite this, uncertainty estimates are typically not included, or are added to a specific dataset in a way that is not generally applicable across different datasets. A lack of quantitative uncertainty estimates for hydrometeorological forcing fields limits their utility to support land surface and hydrologic modeling techniques such as data assimilation, probabilistic forecasting and verification. To address this gap, we have developed a first-of-its-kind gridded, observation-based ensemble of precipitation and temperature at a daily increment for the period 1980-2012 over the United States (including Alaska and Hawaii). A longer, higher resolution version (1970-present, 1/16th degree) has also been implemented to support real-time hydrologic monitoring and prediction in several regional US domains. We will present the development and evaluation of the dataset, along with initial applications of the dataset for ensemble data assimilation and probabilistic evaluation of high resolution regional climate model simulations. We will also present results on the new high resolution products for Alaska and Hawaii (2 km and 250 m respectively), to complete the first ensemble observation-based product suite for the entire 50 states. Finally, we will present plans to improve the ensemble dataset, focusing on efforts to improve the methods used for station interpolation and ensemble generation, as well as methods to fuse station data with numerical weather prediction model output.
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. Visual quality and precision of the tracked target are highly desired in modern tracking systems. The tracked object does not always appear clearly, which makes the tracking result less precise; the reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into a tracking approach. The first step applies super-resolution imaging to the frame sequence; this is done by cropping several frames or all of the frames. The second step tracks the object in the resulting super-resolution images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution images. In this research, a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation time. The method used for tracking is Camshift. The advantage of Camshift is its simple calculation based on the HSV color histogram, which handles conditions where the color of the object varies. The computational complexity and large memory requirements needed for the implementation of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and in good light conditions.
Repurposing a photosynthetic antenna protein as a super-resolution microscopy label.
Barnett, Samuel F H; Hitchcock, Andrew; Mandal, Amit K; Vasilev, Cvetelin; Yuen, Jonathan M; Morby, James; Brindley, Amanda A; Niedzwiedzki, Dariusz M; Bryant, Donald A; Cadby, Ashley J; Holten, Dewey; Hunter, C Neil
2017-12-01
Techniques such as Stochastic Optical Reconstruction Microscopy (STORM) and Structured Illumination Microscopy (SIM) have increased the achievable resolution of optical imaging, but few fluorescent proteins are suitable for super-resolution microscopy, particularly in the far-red and near-infrared emission range. Here we demonstrate the applicability of CpcA, a subunit of the photosynthetic antenna complex in cyanobacteria, for STORM and SIM imaging. The periodicity and width of fabricated nanoarrays of CpcA, with a covalently attached phycoerythrobilin (PEB) or phycocyanobilin (PCB) chromophore, matched the lines in reconstructed STORM images. SIM and STORM reconstructions of Escherichia coli cells harbouring CpcA-labelled cytochrome bd 1 ubiquinol oxidase in the cytoplasmic membrane show that CpcA-PEB and CpcA-PCB are suitable for super-resolution imaging in vivo. The stability, ease of production, small size and brightness of CpcA-PEB and CpcA-PCB demonstrate the potential of this largely unexplored protein family as novel probes for super-resolution microscopy.
Clustering cancer gene expression data by projective clustering ensemble
Yu, Xianxue; Yu, Guoxian
2017-01-01
Gene expression data analysis has paramount implications for gene treatments, cancer diagnosis and other domains. Clustering is an important and promising tool to analyze gene expression data. Gene expression data is often characterized by a large amount of genes but with limited samples, thus various projective clustering techniques and ensemble techniques have been suggested to combat these challenges. However, it is rather challenging to synergize these two kinds of techniques to avoid the curse of dimensionality problem and to boost the performance of gene expression data clustering. In this paper, we employ a projective clustering ensemble (PCE) to integrate the advantages of projective clustering and ensemble clustering, and to avoid the dilemma of combining multiple projective clusterings. Our experimental results on publicly available cancer gene expression data show PCE can improve the quality of clustering gene expression data by at least 4.5% (on average) over other related techniques, including dimensionality reduction based single clustering and ensemble approaches. The empirical study demonstrates that, to further boost the performance of clustering cancer gene expression data, it is necessary and promising to synergize projective clustering with ensemble clustering. PCE can serve as an effective alternative technique for clustering gene expression data.
Cluster-based analysis of multi-model climate ensembles
NASA Astrophysics Data System (ADS)
Hyde, Richard; Hossaini, Ryan; Leeson, Amber A.
2018-06-01
Clustering - the automated grouping of similar data - can provide powerful and unique insight into large and complex data sets, in a fast and computationally efficient manner. While clustering has been used in a variety of fields (from medical image processing to economics), its application within atmospheric science has been fairly limited to date, and the potential benefits of the application of advanced clustering techniques to climate data (both model output and observations) have yet to be fully realised. In this paper, we explore the specific application of clustering to a multi-model climate ensemble. We hypothesise that clustering techniques can provide (a) a flexible, data-driven method of testing model-observation agreement and (b) a mechanism with which to identify model development priorities. We focus our analysis on chemistry-climate model (CCM) output of tropospheric ozone - an important greenhouse gas - from the recent Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP). Tropospheric column ozone from the ACCMIP ensemble was clustered using the Data Density based Clustering (DDC) algorithm. We find that a multi-model mean (MMM) calculated using members of the most-populous cluster identified at each location offers a reduction of up to ˜ 20 % in the global absolute mean bias between the MMM and an observed satellite-based tropospheric ozone climatology, with respect to a simple, all-model MMM. On a spatial basis, the bias is reduced at ˜ 62 % of all locations, with the largest bias reductions occurring in the Northern Hemisphere - where ozone concentrations are relatively large. However, the bias is unchanged at 9 % of all locations and increases at 29 %, particularly in the Southern Hemisphere. The latter demonstrates that although cluster-based subsampling acts to remove outlier model data, such data may in fact be closer to observed values in some locations. We further demonstrate that clustering can provide a viable and useful framework in which to assess and visualise model spread, offering insight into geographical areas of agreement among models and a measure of diversity across an ensemble. Finally, we discuss caveats of the clustering techniques and note that while we have focused on tropospheric ozone, the principles underlying the cluster-based MMMs are applicable to other prognostic variables from climate models.
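The cluster-based multi-model mean can be sketched per grid point as below; DBSCAN is used here only as a stand-in for the DDC algorithm, and the ensemble, outlier structure and eps value are synthetic assumptions.

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(6)

# Synthetic ensemble: 12 models, 100 grid points, with two models as outliers.
ensemble = rng.normal(30, 2, (12, 100))            # e.g. tropospheric column ozone values
ensemble[10:] += 15                                # two outlier models

def cluster_based_mmm(values, eps=3.0):
    """Mean of the most-populous cluster at one location."""
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(values.reshape(-1, 1))
    valid = labels[labels >= 0]
    if valid.size == 0:
        return values.mean()
    biggest = np.bincount(valid).argmax()
    return values[labels == biggest].mean()

mmm_cluster = np.array([cluster_based_mmm(ensemble[:, i]) for i in range(ensemble.shape[1])])
mmm_all = ensemble.mean(axis=0)
print("all-model MMM (first point):", mmm_all[0], " cluster-based MMM:", mmm_cluster[0])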
Confident Surgical Decision Making in Temporal Lobe Epilepsy by Heterogeneous Classifier Ensembles
Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Jafari-Khouzani, Kourosh; Elisevich, Kost; Fotouhi, Farshad
2015-01-01
In medical domains with low tolerance for invalid predictions, classification confidence is highly important and traditional performance measures such as overall accuracy cannot provide adequate insight into classification reliability. In this paper, a confident-prediction rate (CPR), which measures the upper limit of confident predictions, is proposed based on receiver operating characteristic (ROC) curves. It is shown that a heterogeneous ensemble of classifiers improves this measure. This ensemble approach has been applied to the lateralization of focal epileptogenicity in temporal lobe epilepsy (TLE) and to the prediction of surgical outcomes. A goal of this study is to reduce the extraoperative electrocorticography (eECoG) requirement, i.e., the practice of using electrodes placed directly on the exposed surface of the brain. We show that such a goal is achievable with the application of data mining techniques. Furthermore, not all TLE surgical operations result in complete relief from seizures, and it is not always possible for human experts to identify such unsuccessful cases prior to surgery. This study demonstrates the capability of data mining techniques in predicting an undesirable outcome for a portion of such cases.
NASA Astrophysics Data System (ADS)
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2017-03-01
Digital holographic on-chip microscopy achieves large space-bandwidth-products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them to synthesize a color image. The data acquisition efficiency of this sequential illumination process can be improved by 3-fold using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, and using a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex these three wavelength channels. This demultiplexing step is conventionally used with interpolation-based Bayer demosaicing methods. However, because the pixels of different color channels on a Bayer image sensor chip are not at the same physical location, conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, which become even more pronounced through digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This novel technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves very similar color imaging performance compared to conventional sequential R,G,B illumination, with 3-fold improvement in image acquisition time and data-efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy techniques that can be used in resource-limited-settings and point-of-care offices.
Multivariate localization methods for ensemble Kalman filtering
NASA Astrophysics Data System (ADS)
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-05-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
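The Schur-product localization step itself can be illustrated in a few lines; the sketch below uses a single state variable, a 1-D periodic domain and a simple exponential taper in place of the commonly used Gaspari-Cohn function, with a cutoff length chosen only for illustration.

import numpy as np

rng = np.random.default_rng(7)

# Small ensemble on a 1-D periodic domain: sampling noise produces spurious
# long-range covariances that localization suppresses.
n_grid, n_ens = 40, 10
truth_cov = np.exp(-np.abs(np.subtract.outer(np.arange(n_grid), np.arange(n_grid))) / 3.0)
ens = rng.multivariate_normal(np.zeros(n_grid), truth_cov, n_ens)
sample_cov = np.cov(ens, rowvar=False)

# Distance-dependent localization matrix (exponential taper as a stand-in).
dist = np.abs(np.subtract.outer(np.arange(n_grid), np.arange(n_grid)))
dist = np.minimum(dist, n_grid - dist)                      # periodic distance
loc = np.exp(-(dist / 5.0) ** 2)

localized_cov = sample_cov * loc                            # Schur (entry-wise) product
print("far-field covariance before/after localization:",
      np.abs(sample_cov[0, 20]), np.abs(localized_cov[0, 20]))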
Landsgesell, Jonas; Holm, Christian; Smiatek, Jens
2017-02-14
We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
Super-pixel extraction based on multi-channel pulse coupled neural network
NASA Astrophysics Data System (ADS)
Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun
2018-04-01
Super-pixel extraction techniques group pixels to form over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, super-pixel-based image description requires less computation, is easier to perceive, and has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model, which stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel feature information and its spatial context. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) is proposed. The algorithm adopts the block-dividing idea of the SLIC algorithm: the image is first divided into blocks of the same size. Then, for each image block, adjacent pixels with color similar to each seed are classified into a group, called a super-pixel. Finally, post-processing is applied to those pixels or pixel blocks that have not been grouped. Experiments show that the proposed method can adjust the number of super-pixels and the segmentation precision by setting parameters, and has good potential for super-pixel extraction.
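For readers unfamiliar with super-pixel extraction, the SLIC-style block seeding that the above method builds on can be run with scikit-image as sketched below; the MPCNN grouping itself is not part of scikit-image, so this only illustrates standard super-pixel extraction on a sample image with illustrative parameters.

import numpy as np
from skimage import data, segmentation

image = data.astronaut()                                   # sample RGB image
labels = segmentation.slic(image, n_segments=250, compactness=10)

# Replace every super-pixel by its mean colour and report how many were extracted.
mean_image = np.zeros(image.shape, dtype=float)
for lab in np.unique(labels):
    mean_image[labels == lab] = image[labels == lab].mean(axis=0)
print("number of super-pixels:", len(np.unique(labels)))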
A short-term ensemble wind speed forecasting system for wind power applications
NASA Astrophysics Data System (ADS)
Baidya Roy, S.; Traiteur, J. J.; Callicutt, D.; Smith, M.
2011-12-01
This study develops an adaptive, blended forecasting system to provide accurate wind speed forecasts 1 hour ahead of time for wind power applications. The system consists of an ensemble of 21 forecasts with different configurations of the Weather Research and Forecasting Single Column Model (WRFSCM) and a persistence model. The ensemble is calibrated against observations for a 2 month period (June-July, 2008) at a potential wind farm site in Illinois using the Bayesian Model Averaging (BMA) technique. The forecasting system is evaluated against observations for August 2008 at the same site. The calibrated ensemble forecasts significantly outperform the forecasts from the uncalibrated ensemble while significantly reducing forecast uncertainty under all environmental stability conditions. The system also generates significantly better forecasts than persistence, autoregressive (AR) and autoregressive moving average (ARMA) models during the morning transition and the diurnal convective regimes. This forecasting system is computationally more efficient than traditional numerical weather prediction models and can generate a calibrated forecast, including model runs and calibration, in approximately 1 minute. Currently, hour-ahead wind speed forecasts are almost exclusively produced using statistical models. However, numerical models have several distinct advantages over statistical models including the potential to provide turbulence forecasts. Hence, there is an urgent need to explore the role of numerical models in short-term wind speed forecasting. This work is a step in that direction and is likely to trigger a debate within the wind speed forecasting community.
Papanikolaou, Yannis; Tsoumakas, Grigorios; Laliotis, Manos; Markantonatos, Nikos; Vlahavas, Ioannis
2017-09-22
In this paper we present the approach that we employed to deal with large scale multi-label semantic indexing of biomedical papers. This work was mainly implemented within the context of the BioASQ challenge (2013-2017), a challenge concerned with biomedical semantic indexing and question answering. Our main contribution is a MUlti-Label Ensemble method (MULE) that incorporates a McNemar statistical significance test in order to validate the combination of the constituent machine learning algorithms. Some secondary contributions include a study on the temporal aspects of the BioASQ corpus (observations that also apply to BioASQ's super-set, the PubMed articles collection) and the proper parametrization of the algorithms used to deal with this challenging classification task. The ensemble method that we developed is compared to other approaches in experimental scenarios with subsets of the BioASQ corpus, with positive results. In our participation in the BioASQ challenge we obtained the first place in 2013 and the second place in the four following years, steadily outperforming MTI, the indexing system of the National Library of Medicine (NLM). The results of our experimental comparisons suggest that employing a statistical significance test to validate the ensemble method's choices is the optimal approach for ensembling multi-label classifiers, especially in contexts with many rare labels.
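The McNemar test used to validate classifier combinations compares only the discordant predictions of two classifiers; a minimal, continuity-corrected sketch is shown below on synthetic labels (this is the generic test, not the MULE pipeline, and the accuracies are invented).

import numpy as np
from scipy.stats import chi2

def mcnemar_test(y_true, pred_a, pred_b):
    """McNemar test on the discordant pairs of two classifiers' predictions."""
    a_right = pred_a == y_true
    b_right = pred_b == y_true
    n_ab = np.sum(a_right & ~b_right)    # A correct, B wrong
    n_ba = np.sum(~a_right & b_right)    # A wrong, B correct
    if n_ab + n_ba == 0:
        return 0.0, 1.0
    stat = (abs(n_ab - n_ba) - 1) ** 2 / (n_ab + n_ba)
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(8)
y = rng.integers(0, 2, 500)
pred_a = np.where(rng.random(500) < 0.85, y, 1 - y)   # ~85% accurate classifier
pred_b = np.where(rng.random(500) < 0.80, y, 1 - y)   # ~80% accurate classifier
stat, p = mcnemar_test(y, pred_a, pred_b)
print("McNemar statistic:", round(stat, 2), "p-value:", round(p, 4))
# An ensemble could then accept or reject the weaker classifier depending on
# whether the difference is statistically significant.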
Wavelet Filter Banks for Super-Resolution SAR Imaging
NASA Technical Reports Server (NTRS)
Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess
2011-01-01
This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's Earth science fields such as deformation, ecosystem structure, and dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Due to the non-parametric features of these methods and their resolution limitations and observation time dependence, the use of spectral estimation and signal pre- and post-processing techniques based on wavelets to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.
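A multi-resolution wavelet analysis/synthesis step of the kind referred to above can be sketched with PyWavelets as below; the paper's filter banks are custom designs, so this uses an off-the-shelf Daubechies wavelet, a synthetic "SAR-like" scene and an arbitrary threshold purely for illustration.

import numpy as np
import pywt

rng = np.random.default_rng(9)
scene = np.zeros((128, 128))
scene[40:50, 60:70] = 1.0                                   # bright target
noisy = scene + rng.normal(0, 0.3, scene.shape)             # additive noise (Gaussian stand-in)

coeffs = pywt.wavedec2(noisy, wavelet='db4', level=3)       # analysis filter bank
# Simple hard threshold on the detail coefficients as a pre-processing step.
thr = 0.5
coeffs = [coeffs[0]] + [tuple(np.where(np.abs(d) > thr, d, 0) for d in lvl) for lvl in coeffs[1:]]
denoised = pywt.waverec2(coeffs, wavelet='db4')             # synthesis filter bank
print("background std before/after:", noisy[:20, :20].std(), denoised[:20, :20].std())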
Goodwyn, Pablo Perez; De Souza, Emerson; Fujisaki, Kenji; Gorb, Stanislav
2008-05-01
Water striders (Insecta, Heteroptera, Gerridae) have a complex three-dimensional waterproof hairy cover which renders them super-hydrophobic. This paper experimentally demonstrates for the first time the mechanism of the super-hydrophobicity of the cuticle of water striders. The complex two-level microstructure of the surface, including the smallest microtrichia (200-300 nm wide, 7-9 μm long), was successfully replicated using a two-step moulding technique. The mould surface exhibited super-hydrophobic properties similar to the original insect surface. The average water contact angle (CA) of the mould was 164.7 degrees, whereas the CA of the flat polymer was about 92 degrees. These results show that (i) in water striders, the topography of the surface plays a dominant role in super-hydrophobicity, (ii) very low surface energy bulk material (typically smaller than 0.020 N m(-1)) is not necessary to achieve super-hydrophobicity; and (iii) the two-step moulding technique may be used to mimic quite complex biological functional surfaces.
NASA Astrophysics Data System (ADS)
Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel
2016-04-01
Assimilation of Surface Soil Moisture (SSM) observations obtained from remote sensing techniques has been shown to improve streamflow prediction at different time scales of hydrological modeling. Different sensors and methods have been tested for their application in SSM estimation, especially in the microwave region of the electromagnetic spectrum. The available observation devices include passive microwave sensors such as the Advanced Microwave Scanning Radiometer - Earth Observation System (AMSR-E) onboard the Aqua satellite and the Soil Moisture and Ocean Salinity (SMOS) mission. On the other hand, active microwave systems include Scatterometers (SCAT) onboard the European Remote Sensing satellites (ERS-1/2) and the Advanced Scatterometer (ASCAT) onboard the MetOp-A satellite. Data assimilation (DA) includes different techniques that have been applied in hydrology and other fields for decades. These techniques include, among others, Kalman Filtering (KF), Variational Assimilation and Particle Filtering. From the initial KF method, different techniques were developed to suit its application to different systems. The Ensemble Kalman Filter (EnKF), extensively applied to improve hydrological modeling, has as its main advantage the capability to deal with nonlinear model dynamics without linearizing the model equations. The objective of this study was to investigate whether data assimilation of ASCAT SSM observations, through the EnKF method, could improve streamflow simulation in Mediterranean catchments with the TOPLATS hydrological model. The DA technique was programmed in FORTRAN and applied to hourly simulations of the TOPLATS catchment model. TOPLATS (TOPMODEL-based Land-Atmosphere Transfer Scheme) was applied in its lumped version to two Mediterranean catchments of similar size, located in northern Spain (Arga, 741 km2) and central Italy (Nestore, 720 km2). The model performs a separate computation of the energy and water balances. In those balances, the soil is divided into two layers, the upper Surface Zone (SZ) and the deeper Transmission Zone (TZ). In this study, the SZ depth was fixed to 5 cm for adequate assimilation of the observed data. The available data were distributed as follows: first, the model was calibrated for the 2001-2007 period; then the 2007-2010 period was used for satellite data rescaling purposes. Finally, data assimilation was applied during the validation (2010-2013) period. Application of the EnKF required the following steps: 1) rescaling of satellite data, 2) transformation of the rescaled data into the Soil Water Index (SWI) through a moving-average filter with a calibrated value of T = 9, 3) generation of a 50-member ensemble through perturbation of inputs (rainfall and temperature) and three selected parameters, 4) validation of the ensemble through the compliance of two criteria based on the ensemble's spread, mean square error and skill, and 5) Kalman gain calculation. In this work, a comparison of three satellite data rescaling techniques was also performed: 1) cumulative distribution function (CDF) matching, 2) variance matching and 3) linear least square regression. Results obtained in this study showed slight improvements of hourly Nash-Sutcliffe Efficiency (NSE) in both catchments with the different rescaling methods evaluated. Larger improvements were found in terms of seasonal simulated volume error reduction.
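The first rescaling option mentioned above, CDF matching of the satellite retrievals to the model climatology, can be sketched with empirical quantiles as below; the soil moisture series, bias structure and quantile resolution are synthetic assumptions used only to show the mechanics.

import numpy as np

def cdf_match(satellite, model):
    """Rescale satellite soil moisture to the model climatology by quantile (CDF) matching."""
    quantiles = np.linspace(0, 1, 101)
    sat_q = np.quantile(satellite, quantiles)
    mod_q = np.quantile(model, quantiles)
    return np.interp(satellite, sat_q, mod_q)

rng = np.random.default_rng(10)
model_ssm = np.clip(rng.normal(0.25, 0.06, 1000), 0, 0.45)                    # modeled SSM
sat_ssm = np.clip(0.5 * model_ssm + 0.15 + rng.normal(0, 0.02, 1000), 0, 1)   # biased retrieval
rescaled = cdf_match(sat_ssm, model_ssm)
print("means (model, raw satellite, rescaled):",
      model_ssm.mean().round(3), sat_ssm.mean().round(3), rescaled.mean().round(3))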
Concept and clinical application of the resin-coating technique for indirect restorations.
Nikaido, Toru; Tagami, Junji; Yatani, Hirofumi; Ohkubo, Chikahiro; Nihei, Tomotaro; Koizumi, Hiroyasu; Maseki, Toshio; Nishiyama, Yuichiro; Takigawa, Tomoyoshi; Tsubota, Yuji
2018-03-30
The resin-coating technique is one of the successful bonding techniques used for indirect restorations. The dentin surfaces exposed after cavity preparation are coated with a thin film of a coating material or a dentin bonding system combined with a flowable composite resin. Resin coating can minimize pulp irritation and improve the bond strength between a resin cement and tooth structures. The technique can also be applied to endodontically treated teeth, resulting in prevention of coronal leakage of the restorations. Application of a resin coating to the root surface provides the additional benefit of preventing root caries in elderly patients. Therefore, coating materials have the potential to reinforce sound tooth structure ("Super Tooth" formation), leading to preservation of maximum tooth structure.
Optimization of magnet end-winding geometry
NASA Astrophysics Data System (ADS)
Reusch, Michael F.; Weissenburger, Donald W.; Nearing, James C.
1994-03-01
A simple, almost entirely analytic, method for the optimization of stress-reduced magnet-end winding paths for ribbon-like superconducting cable is presented. This technique is based on characterization of these paths as developable surfaces, i.e., surfaces whose intrinsic geometry is flat. The method is applicable to winding mandrels of arbitrary geometry. Computational searches for optimal winding paths are easily implemented via the technique. Its application to the end configuration of cylindrical Superconducting Super Collider (SSC)-type magnets is discussed. The method may be useful for other engineering problems involving the placement of thin sheets of material.
Efficient data assimilation algorithm for bathymetry application
NASA Astrophysics Data System (ADS)
Ghorbanidehno, H.; Lee, J. H.; Farthing, M.; Hesser, T.; Kitanidis, P. K.; Darve, E. F.
2017-12-01
Information on the evolving state of the nearshore zone bathymetry is crucial to shoreline management, recreational safety, and naval operations. The high cost and complex logistics of using ship-based surveys for bathymetry estimation have encouraged the use of remote sensing techniques. Data assimilation methods combine the remote sensing data and nearshore hydrodynamic models to estimate the unknown bathymetry and the corresponding uncertainties. In particular, several recent efforts have combined Kalman filter-based techniques such as ensemble-based Kalman filters with indirect video-based observations to address the bathymetry inversion problem. However, these methods often suffer from ensemble collapse and uncertainty underestimation. Here, the Compressed State Kalman Filter (CSKF) method is used to estimate the bathymetry based on observed wave celerity. In order to demonstrate the accuracy and robustness of the CSKF method, we consider twin tests with synthetic observations of wave celerity, while the bathymetry profiles are chosen based on surveys taken by the U.S. Army Corps of Engineers Field Research Facility (FRF) in Duck, NC. The first test case is a bathymetry estimation problem for a spatially smooth and temporally constant bathymetry profile. The second test case is a bathymetry estimation problem for a bathymetry evolving in time from a smooth to a non-smooth profile. For both problems, we compare the results of CSKF with those obtained by the local ensemble transform Kalman filter (LETKF), a popular ensemble-based Kalman filter method.
Super-Hubble de Sitter fluctuations and the dynamical RG
NASA Astrophysics Data System (ADS)
Burgess, C. P.; Leblond, L.; Holman, R.; Shandera, S.
2010-03-01
Perturbative corrections to correlation functions for interacting theories in de Sitter spacetime often grow secularly with time, due to the properties of fluctuations on super-Hubble scales. This growth can lead to a breakdown of perturbation theory at late times. We argue that Dynamical Renormalization Group (DRG) techniques provide a convenient framework for interpreting and resumming these secularly growing terms. In the case of a massless scalar field in de Sitter with quartic self-interaction, the resummed result is also less singular in the infrared, in precisely the manner expected if a dynamical mass is generated. We compare this improved infrared behavior with large-N expansions when applicable.
Perspectives in Super-resolved Fluorescence Microscopy: What comes next?
NASA Astrophysics Data System (ADS)
Cremer, Christoph; Birk, Udo
2016-04-01
The Nobel Prize in Chemistry 2014 has been awarded to three scientists involved in the development of the STED and PALM super-resolution fluorescence microscopy (SRM) methods. They have proven that it is possible to overcome the hundred-year-old theoretical limit for the resolution potential of light microscopy (of about 200 nm for visible light), which for decades has precluded a direct glimpse of the molecular machinery of life. None of the present-day super-resolution techniques have invalidated the Abbe limit for light optical detection; however, they have found clever ways around it. In this report, we discuss some of the challenges still to be resolved before emerging SRM approaches are fit to bring about the envisaged revolution in Biology and Medicine. Some of the challenges discussed are the applicability to imaging live and/or large samples, the further enhancement of resolution, future developments of labels, and multi-spectral approaches.
Cluster ensemble based on Random Forests for genetic data.
Alhusain, Luluah; Hafez, Alaaeldin M
2017-01-01
Clustering plays a crucial role in several application domains, such as bioinformatics. In bioinformatics, clustering has been extensively used as an approach for detecting interesting patterns in genetic data. One application is population structure analysis, which aims to group individuals into subpopulations based on shared genetic variations, such as single nucleotide polymorphisms. Advances in DNA sequencing technology have made it possible to obtain exceptionally large genetic datasets. Genetic data usually contain hundreds of thousands of genetic markers genotyped for thousands of individuals, making an efficient means of handling such data desirable. Random Forests (RFs) has emerged as an efficient algorithm capable of handling high-dimensional data. RFs provides a proximity measure that can capture different levels of co-occurring relationships between variables. RFs has been widely considered a supervised learning method, although it can be converted into an unsupervised learning method. Therefore, an RF-derived proximity measure combined with a clustering technique may be well suited for determining the underlying structure of unlabeled data. This paper proposes RFcluE, a cluster ensemble approach for determining the underlying structure of genetic data based on RFs. The approach comprises a cluster ensemble framework to combine multiple runs of RF clustering. Experiments were conducted on a high-dimensional, real genetic dataset to evaluate the proposed approach. The experiments included an examination of the impact of parameter changes, a comparison of RFcluE performance against other clustering methods, and an assessment of the relationship between the diversity and quality of the ensemble and its effect on RFcluE performance. This paper proposes RFcluE, a cluster ensemble approach based on RF clustering, to address the problem of population structure analysis and demonstrates the effectiveness of the approach. The paper also illustrates that applying a cluster ensemble approach, combining multiple RF clusterings, produces more robust and higher-quality results as a consequence of feeding the ensemble with diverse views of the high-dimensional genetic data obtained through bagging and random subspace, the two key features of the RF algorithm.
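A minimal sketch of RF-based clustering in the spirit described above (not the authors' RFcluE code): an "unsupervised" forest is trained to separate real samples from a column-permuted copy, leaf co-occupancy gives a proximity matrix, and individuals are grouped by hierarchical clustering on 1 - proximity. The function name, tree count, and linkage choice are illustrative assumptions.

```python
# Illustrative only: Random Forest proximity followed by hierarchical clustering.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def rf_proximity_clusters(X, n_clusters=3, n_trees=200, seed=0):
    rng = np.random.default_rng(seed)
    # Shuffling each column independently destroys the joint structure of the data
    X_perm = np.column_stack([rng.permutation(col) for col in X.T])
    X_all = np.vstack([X, X_perm])
    y_all = np.r_[np.ones(len(X)), np.zeros(len(X))]
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(X_all, y_all)

    leaves = forest.apply(X)                      # (n_samples, n_trees) leaf indices
    prox = np.zeros((len(X), len(X)))
    for t in range(n_trees):
        prox += leaves[:, t][:, None] == leaves[:, t][None, :]
    prox /= n_trees                               # fraction of trees sharing a leaf

    dist = squareform(1.0 - prox, checks=False)   # condensed distance matrix
    return fcluster(linkage(dist, method="average"), n_clusters, criterion="maxclust")
```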
Instances selection algorithm by ensemble margin
NASA Astrophysics Data System (ADS)
Saidi, Meryem; Bechar, Mohammed El Amine; Settouti, Nesma; Chikh, Mohamed Amine
2018-05-01
The main limit of data mining algorithms is their inability to deal with the huge amount of available data in a reasonable processing time. One solution for producing fast and accurate results is instance and feature selection. This process eliminates noisy or redundant data in order to reduce storage and computational cost without performance degradation. In this paper, a new instance selection approach called the Ensemble Margin Instance Selection (EMIS) algorithm is proposed. This approach is based on the ensemble margin. To evaluate our approach, we have conducted several experiments on different real-world classification problems from the UCI Machine Learning repository. Pixel-based image segmentation is a field where the storage requirement and computational cost of the applied model become high. To address these limitations we conduct a study based on the application of EMIS and other instance selection techniques for the segmentation and automatic recognition of white blood cells (WBC) (nucleus and cytoplasm) in cytological images.
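A minimal sketch of the margin-based selection idea, under stated assumptions: a bagging ensemble supplies votes, the margin of each training instance is the normalized vote gap between its true class and the best other class, and only the highest-margin instances are kept. The exact EMIS criterion in the paper may differ; the keep fraction and integer labels are illustrative assumptions.

```python
# Illustrative only: selecting instances by their ensemble margin.
import numpy as np
from sklearn.ensemble import BaggingClassifier

def select_by_ensemble_margin(X, y, keep_fraction=0.6, n_estimators=50, seed=0):
    """Assumes y contains non-negative integer class labels."""
    ens = BaggingClassifier(n_estimators=n_estimators, random_state=seed).fit(X, y)
    votes = np.stack([est.predict(X) for est in ens.estimators_], axis=1)
    margins = np.empty(len(X))
    for i, true_label in enumerate(y):
        counts = np.bincount(votes[i].astype(int), minlength=int(y.max()) + 1)
        v_true = counts[int(true_label)]
        counts[int(true_label)] = -1              # exclude the true class
        margins[i] = (v_true - counts.max()) / n_estimators   # margin in [-1, 1]
    keep = np.argsort(margins)[::-1][: int(keep_fraction * len(X))]
    return X[keep], y[keep], margins
```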
Unbiased, scalable sampling of protein loop conformations from probabilistic priors.
Zhang, Yajia; Hauser, Kris
2013-01-01
Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.
Unbiased, scalable sampling of protein loop conformations from probabilistic priors
2013-01-01
Background Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Results Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Conclusion Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion. PMID:24565175
Imaging live cells at high spatiotemporal resolution for lab-on-a-chip applications.
Chin, Lip Ket; Lee, Chau-Hwang; Chen, Bi-Chang
2016-05-24
Conventional optical imaging techniques are limited by the diffraction limit and struggle to image biomolecular and sub-cellular processes in living specimens. Novel optical imaging techniques are constantly evolving, driven by the desire for an imaging tool capable of seeing sub-cellular processes in a biological system, especially in three dimensions (3D) over time, i.e. 4D imaging. For fluorescence imaging of live cells, the trade-offs among imaging depth, spatial resolution, temporal resolution and photo-damage are constrained by the limited photon budget of the emitters. The fundamental solution to this dilemma is to enlarge the photon bank, for example through the development of photostable and bright fluorophores, leading to innovations in optical imaging techniques such as super-resolution microscopy and light sheet microscopy. With the synergy of microfluidic technology, which is capable of manipulating biological cells and controlling their microenvironments to mimic in vivo physiological environments, studies of sub-cellular processes in various biological systems can be simplified and investigated systematically. In this review, we provide an overview of current state-of-the-art super-resolution and 3D live cell imaging techniques and their lab-on-a-chip applications, and finally discuss future research trends in new and breakthrough research areas of live specimen 4D imaging in controlled 3D microenvironments.
Brown, Philip S.; Bhushan, Bharat
2015-01-01
Coatings with specific surface wetting properties are of interest for anti-fouling, anti-fogging, anti-icing, self-cleaning, anti-smudge, and oil-water separation applications. Many previous bioinspired surfaces are of limited use due to a lack of mechanical durability. Here, a layer-by-layer technique is utilized to create coatings with four combinations of water and oil repellency and affinity. An adapted layer-by-layer approach is tailored to yield specific surface properties, resulting in a durable, functional coating. This technique provides necessary flexibility to improve substrate adhesion combined with desirable surface chemistry. Polyelectrolyte binder, SiO2 nanoparticles, and silane or fluorosurfactant layers are deposited, combining surface roughness and necessary chemistry to result in four different coatings: superhydrophilic/superoleophilic, superhydrophobic/superoleophilic, superhydrophobic/superoleophobic, and superhydrophilic/superoleophobic. The superoleophobic coatings display hexadecane contact angles >150° with tilt angles <5°, whilst the superhydrophobic coatings display water contact angles >160° with tilt angles <2°. One coating combines both oleophobic and hydrophobic properties, whilst others mix and match oil and water repellency and affinity. Coating durability was examined through the use of micro/macrowear experiments. These coatings display transparency acceptable for some applications. Fabrication via this novel combination of techniques results in durable, functional coatings displaying improved performance compared to existing work where either durability or functionality is compromised. PMID:26353971
Li, Qiang; Pan, Deng; Wei, Hong; Xu, Hongxing
2018-03-14
Hybrid systems composed of multiple quantum emitters coupled with plasmonic waveguides are promising building blocks for future integrated quantum nanophotonic circuits. Techniques that can super-resolve and selectively excite contiguous quantum emitters in a diffraction-limited area are of great importance for studying the plasmon-mediated interaction between quantum emitters and for manipulating single plasmon generation and propagation in plasmonic circuits. Here we show that multiple quantum dots coupled with a silver nanowire can be controllably excited by tuning the interference field of surface plasmons on the nanowire. Because the period of the interference pattern is much smaller than the diffraction limit, we demonstrate the selective excitation of two quantum dots separated by a distance as short as 100 nm. We also numerically demonstrate a new kind of super-resolution imaging method that combines the tunable surface plasmon interference pattern on the nanowire with the structured illumination microscopy technique. Our work provides a novel high-resolution optical excitation and imaging method for coupled systems of multiple quantum emitters and plasmonic waveguides, adding a new tool for studying and manipulating single quantum emitters and single plasmons for quantum plasmonic circuitry applications.
ANALYSIS OF SAMPLING TECHNIQUES FOR IMBALANCED DATA: AN N=648 ADNI STUDY
Dubey, Rashmi; Zhou, Jiayu; Wang, Yalin; Thompson, Paul M.; Ye, Jieping
2013-01-01
Many neuroimaging applications deal with imbalanced imaging data. For example, in the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly two times the Alzheimer’s disease (AD) patients for the structural magnetic resonance imaging (MRI) modality and six times the control cases for the proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task. Traditional classifiers that aim to maximize the overall prediction accuracy tend to classify all data into the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and a combination of over- and undersampling approaches. We thoroughly examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated using two different classifiers including Random Forest and Support Vector Machines based on classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity measures. Our extensive experimental results show that for various problem settings in ADNI, (1) a balanced training set obtained with K-Medoids technique-based undersampling gives the best overall performance among different data sampling techniques and no sampling approach; and (2) sparse logistic regression with stability selection achieves competitive performance among various feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results. PMID:24176869
Analysis of sampling techniques for imbalanced data: An n = 648 ADNI study.
Dubey, Rashmi; Zhou, Jiayu; Wang, Yalin; Thompson, Paul M; Ye, Jieping
2014-02-15
Many neuroimaging applications deal with imbalanced imaging data. For example, in Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly two times the Alzheimer's disease (AD) patients for structural magnetic resonance imaging (MRI) modality and six times the control cases for proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task. Traditional classifiers that aim to maximize the overall prediction accuracy tend to classify all data into the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and a combination of over and undersampling approaches. We thoroughly examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated using two different classifiers including Random Forest and Support Vector Machines based on classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity measures. Our extensive experimental results show that for various problem settings in ADNI, (1) a balanced training set obtained with K-Medoids technique based undersampling gives the best overall performance among different data sampling techniques and no sampling approach; and (2) sparse logistic regression with stability selection achieves competitive performance among various feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results. © 2013 Elsevier Inc. All rights reserved.
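A minimal sketch of the class-imbalance workflow described above, under stated assumptions: simple random undersampling of the majority class stands in for the K-Medoids-based undersampling used in the study, and a Random Forest is scored by AUC on a held-out set. Function names and parameter values are illustrative only.

```python
# Illustrative only: balance the training set by undersampling, then evaluate by AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def random_undersample(X, y, seed=0):
    """Keep an equal number of samples from every class (majority classes trimmed)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([rng.choice(np.where(y == c)[0], n_min, replace=False)
                           for c in classes])
    return X[keep], y[keep]

def evaluate(X, y, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=seed)
    X_bal, y_bal = random_undersample(X_tr, y_tr, seed)
    clf = RandomForestClassifier(n_estimators=500, random_state=seed).fit(X_bal, y_bal)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```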
A Diagnostics Tool to detect ensemble forecast system anomaly and guide operational decisions
NASA Astrophysics Data System (ADS)
Park, G. H.; Srivastava, A.; Shrestha, E.; Thiemann, M.; Day, G. N.; Draijer, S.
2017-12-01
The hydrologic community is moving toward using ensemble forecasts to take uncertainty into account during the decision-making process. The New York City Department of Environmental Protection (DEP) implements several types of ensemble forecasts in its decision-making process: ensemble products for a statistical model (Hirsch and enhanced Hirsch); the National Weather Service (NWS) Advanced Hydrologic Prediction Service (AHPS) forecasts based on the classical Ensemble Streamflow Prediction (ESP) technique; and the new NWS Hydrologic Ensemble Forecasting Service (HEFS) forecasts. To remove structural error and apply the forecasts to additional forecast points, the DEP post-processes both the AHPS and the HEFS forecasts. These ensemble forecasts provide large quantities of complex data, and drawing conclusions from them is time-consuming and difficult. The complexity of these forecasts also makes it difficult to identify system failures resulting from poor data, missing forecasts, and server breakdowns. To address these issues, we developed a diagnostic tool that summarizes ensemble forecasts and provides additional information such as historical forecast statistics, forecast skill, and model forcing statistics. This additional information highlights the key information that enables operators to evaluate the forecast in real time, dynamically interact with the data, and review additional statistics, if needed, to make better decisions. We used Bokeh, a Python interactive visualization library, and a multi-database management system to create this interactive tool. The tool compiles and stores data into HTML pages that allow operators to readily analyze the data with built-in user interaction features. This paper will present a brief description of the ensemble forecasts, forecast verification results, and the intended applications for the diagnostic tool.
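A minimal sketch of one panel of such a diagnostics page, under stated assumptions: a recent Bokeh version, ensemble members stored as rows of a 2D array, and a single standalone HTML output. The data layout, variable names, and styling are illustrative assumptions, not the DEP tool itself.

```python
# Illustrative only: plot ensemble traces, their mean, and observations with Bokeh.
import numpy as np
from bokeh.plotting import figure, output_file, save

def ensemble_panel(times, members, observed, filename="ensemble_diagnostics.html"):
    p = figure(title="Ensemble streamflow forecast", x_axis_label="time step",
               y_axis_label="flow", width=800, height=350)
    p.multi_line([list(times)] * len(members), [list(m) for m in members],
                 line_alpha=0.25, line_color="gray", legend_label="members")
    p.line(times, members.mean(axis=0), line_width=2, color="navy",
           legend_label="ensemble mean")
    p.scatter(times, observed, size=5, color="firebrick", legend_label="observed")
    output_file(filename)
    save(p)                                   # writes a standalone interactive HTML page

# Synthetic example data
times = np.arange(48)
members = 10 + np.cumsum(np.random.randn(30, 48), axis=1)
ensemble_panel(times, members, members.mean(axis=0) + np.random.randn(48))
```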
Gofton, Wade; Fitch, David A
2016-03-01
The purpose of this study was to compare the in-hospital costs associated with the tissue-sparing supercapsular percutaneously-assisted total hip (SuperPath) and traditional Lateral surgical techniques for total hip replacement (THR). Between April 2013 and January 2014, in-hospital costs were reviewed for all THRs performed using the SuperPath technique by a single surgeon and all THRs performed using the Lateral technique by another surgeon at the same institution. Overall, costs were 28.4% higher in the Lateral group. This was largely attributable to increased costs associated with transfusion (+92.5%), patient rooms (+60.4%), patient food (+62.8%), narcotics (+42.5%), physical therapy (+52.5%), occupational therapy (+88.6%), and social work (+92.9%). The only costs noticeably increased for SuperPath were for imaging (+105.9%), and this was because the SuperPath surgeon performed intraoperative radiographs on all patients while the Lateral surgeon did not. The use of the SuperPath technique resulted in in-hospital cost reductions of over 28%, suggesting that this tissue-sparing surgical technique can be cost-effective primarily by facilitating early mobilisation and patient discharge even during a surgeon's initial experience with the approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petkov, Valeri; Prasai, Binay; Shastri, Sarvjit
2017-09-12
Practical applications require the production and usage of metallic nanocrystals (NCs) in large ensembles. Besides, due to their cluster-bulk solid duality, metallic NCs exhibit a large degree of structural diversity. This poses the question as to what atomic-scale basis is to be used when the structure–function relationship for metallic NCs is to be quantified precisely. In this paper, we address the question by studying bi-functional Fe core-Pt skin type NCs optimized for practical applications. In particular, the cluster-like Fe core and skin-like Pt surface of the NCs exhibit superparamagnetic properties and a superb catalytic activity for the oxygen reduction reaction, respectively. We determine the atomic-scale structure of the NCs by non-traditional resonant high-energy X-ray diffraction coupled to atomic pair distribution function analysis. Using the experimental structure data we explain the observed magnetic and catalytic behavior of the NCs in a quantitative manner. Lastly, we demonstrate that NC ensemble-averaged 3D positions of atoms obtained by advanced X-ray scattering techniques are a very proper basis for not only establishing but also quantifying the structure–function relationship for the increasingly complex metallic NCs explored for practical applications.
Advanced Atmospheric Ensemble Modeling Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, R.; Chiswell, S.; Kurzeja, R.
Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension to work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied, a coastal release (SF6) and an inland release (Freon) which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce required computing resources for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release where the spatial and temporal differences due to interior valley heating lead to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data is assimilated into the simulation, and it enhances SRNL’s capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
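A minimal sketch of the kind of ensemble Kalman filter analysis step referred to above, under stated assumptions: a stochastic (perturbed-observation) EnKF with a linear observation operator, written generically; the SRNL configuration, state variables, and observation network are not specified in the abstract and are not represented here.

```python
# Illustrative only: stochastic EnKF analysis update for a generic state ensemble.
import numpy as np

def enkf_update(Xf, y, H, R, seed=0):
    """Xf: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) linear observation operator; R: (n_obs, n_obs) obs error cov."""
    rng = np.random.default_rng(seed)
    n_ens = Xf.shape[1]
    A = Xf - Xf.mean(axis=1, keepdims=True)           # ensemble anomalies
    Pf = A @ A.T / (n_ens - 1)                        # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return Xf + K @ (Y - H @ Xf)                      # analysis ensemble
```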
Hyper-Parallel Tempering Monte Carlo Method and Its Applications
NASA Astrophysics Data System (ADS)
Yan, Qiliang; de Pablo, Juan
2000-03-01
A new generalized hyper-parallel tempering Monte Carlo molecular simulation method is presented for study of complex fluids. The method is particularly useful for simulation of many-molecule complex systems, where rough energy landscapes and inherently long characteristic relaxation times can pose formidable obstacles to effective sampling of relevant regions of configuration space. The method combines several key elements from expanded ensemble formalisms, parallel-tempering, open ensemble simulations, configurational bias techniques, and histogram reweighting analysis of results. It is found to accelerate significantly the diffusion of a complex system through phase-space. In this presentation, we demonstrate the effectiveness of the new method by implementing it in grand canonical ensembles for a Lennard-Jones fluid, for the restricted primitive model of electrolyte solutions (RPM), and for polymer solutions and blends. Our results indicate that the new algorithm is capable of overcoming the large free energy barriers associated with phase transitions, thereby greatly facilitating the simulation of coexistence properties. It is also shown that the method can be orders of magnitude more efficient than previously available techniques. More importantly, the method is relatively simple and can be incorporated into existing simulation codes with minor efforts.
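A minimal sketch of the replica-exchange move that underlies parallel (and hyper-parallel) tempering, under stated assumptions: neighbouring replicas at inverse temperatures beta_i and beta_j swap configurations with the standard Metropolis acceptance probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]). The temperature ladder and the toy harmonic energies are illustrative only; the paper's expanded-ensemble and configurational-bias machinery is not shown.

```python
# Illustrative only: replica-swap move for parallel tempering.
import numpy as np

def attempt_swaps(configs, energies, betas, rng):
    """Try to swap each neighbouring pair of replicas once (in place)."""
    for i in range(len(betas) - 1):
        delta = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if np.log(rng.random()) < min(0.0, delta):    # Metropolis acceptance
            configs[i], configs[i + 1] = configs[i + 1], configs[i]
            energies[i], energies[i + 1] = energies[i + 1], energies[i]

rng = np.random.default_rng(0)
betas = 1.0 / np.array([1.0, 1.5, 2.25, 3.4])          # inverse temperature ladder
configs = [rng.standard_normal(10) for _ in betas]     # toy configurations
energies = np.array([0.5 * np.sum(c**2) for c in configs])
attempt_swaps(configs, energies, betas, rng)
```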
On the Effect of Sphere-Overlap on Super Coarse-Grained Models of Protein Assemblies
NASA Astrophysics Data System (ADS)
Degiacomi, Matteo T.
2018-05-01
Ion mobility mass spectrometry (IM/MS) can provide structural information on intact protein complexes. Such data, including connectivity and collision cross sections (CCS) of assemblies' subunits, can in turn be used as a guide to produce representative super coarse-grained models. These models consist of ensembles of overlapping spheres, each representing a protein subunit. A model is considered plausible if the CCS and sphere-overlap levels of its subunits fall within predetermined confidence intervals. While the former is determined by experimental error, the latter is based on a statistical analysis of a range of protein dimers. Here, we first propose a new expression to describe the overlap between two spheres. We then analyze the effect of specific overlap cutoff choices on the precision and accuracy of super coarse-grained models. Finally, we propose a method to determine overlap cutoff levels on a per-case basis, based on collected CCS data, and show that it can be applied to the characterization of the assembly topology of symmetrical homo-multimers.
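For context, a minimal sketch of the classical way to quantify sphere-sphere overlap (this is not the paper's new expression): the lens volume shared by two spheres, computed as the sum of two spherical caps and normalised by the smaller sphere's volume. The normalisation choice is an illustrative assumption.

```python
# Illustrative only: classical lens-volume overlap fraction of two spheres.
import numpy as np

def sphere_overlap_fraction(r1, r2, d):
    """Fraction of the smaller sphere's volume shared with the other sphere."""
    if d >= r1 + r2:                         # spheres are disjoint
        return 0.0
    if d <= abs(r1 - r2):                    # one sphere entirely inside the other
        return 1.0
    a = (d**2 - r2**2 + r1**2) / (2.0 * d)   # distance from centre 1 to cut plane
    h1, h2 = r1 - a, r2 - (d - a)            # spherical cap heights
    lens = (np.pi / 3.0) * (h1**2 * (3 * r1 - h1) + h2**2 * (3 * r2 - h2))
    v_small = (4.0 / 3.0) * np.pi * min(r1, r2) ** 3
    return lens / v_small

print(sphere_overlap_fraction(1.0, 1.0, 1.0))   # 0.3125 for equal unit spheres at d = r
```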
Cacha, L A; Parida, S; Dehuri, S; Cho, S-B; Poznanski, R R
2016-12-01
The huge number of voxels in fMRI over time poses a major challenge for effective analysis. Fast, accurate, and reliable classifiers are required for estimating the decoding accuracy of brain activities. Although machine-learning classifiers seem promising, individual classifiers have their own limitations. To address this limitation, the present paper proposes a method based on an ensemble of neural networks to analyze fMRI data for cognitive state classification for application across multiple subjects. Similarly, the fuzzy integral (FI) approach has been employed as an efficient tool for combining different classifiers. The FI approach led to the development of a classifier ensemble technique that performs better than any single classifier by reducing the misclassification, the bias, and the variance. The proposed method successfully classified the different cognitive states for multiple subjects with high classification accuracy. A comparison of the performance improvement of the ensemble neural network method with that of an individual neural network strongly points toward the usefulness of the proposed method.
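A minimal sketch of the ensemble structure only, under stated assumptions: several small neural networks are trained with different seeds and their class probabilities combined. A plain average stands in for the fuzzy-integral combiner used in the paper; network size and member count are illustrative assumptions.

```python
# Illustrative only: ensemble of neural networks with a simple probability-averaging combiner.
import numpy as np
from sklearn.neural_network import MLPClassifier

def nn_ensemble_predict(X_train, y_train, X_test, n_members=5):
    probas = []
    for seed in range(n_members):
        net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                            random_state=seed).fit(X_train, y_train)
        probas.append(net.predict_proba(X_test))
    mean_proba = np.mean(probas, axis=0)      # simple combiner (not the fuzzy integral)
    return mean_proba.argmax(axis=1), mean_proba
```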
Multivariate localization methods for ensemble Kalman filtering
NASA Astrophysics Data System (ADS)
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-12-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
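A minimal sketch of the single-variable Schur-product localization referred to above, under stated assumptions: a Gaussian distance-dependent correlation function stands in for the compactly supported functions (e.g. Gaspari-Cohn) typically used, and the state is a 1D periodic-free grid for simplicity. The multivariate constructions proposed in the paper are not shown.

```python
# Illustrative only: covariance localization via an element-wise (Schur) product.
import numpy as np

def localize_covariance(P_sample, coords, length_scale):
    """Taper a noisy sample covariance with a distance-dependent correlation matrix."""
    d = np.abs(coords[:, None] - coords[None, :])     # pairwise distances on a 1D grid
    C = np.exp(-0.5 * (d / length_scale) ** 2)        # localization weights in (0, 1]
    return P_sample * C                               # Schur product

# Example: sample covariance from a small ensemble on a 40-point grid
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 10))                     # 10 ensemble members
P_loc = localize_covariance(np.cov(X), np.arange(40.0), length_scale=4.0)
```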
Evaluation of TIGGE Ensemble Forecasts of Precipitation in Distinct Climate Regions in Iran
NASA Astrophysics Data System (ADS)
Aminyavari, Saleh; Saghafian, Bahram; Delavar, Majid
2018-04-01
The application of numerical weather prediction (NWP) products is increasing dramatically. Existing reports indicate that ensemble predictions have better skill than deterministic forecasts. In this study, numerical ensemble precipitation forecasts in the TIGGE database were evaluated using deterministic, dichotomous (yes/no), and probabilistic techniques over Iran for the period 2008-16. Thirteen rain gauges spread over eight homogeneous precipitation regimes were selected for evaluation. The Inverse Distance Weighting and Kriging methods were adopted for interpolation of the prediction values, downscaled to the stations at lead times of one to three days. To enhance the forecast quality, NWP values were post-processed via Bayesian Model Averaging. The results showed that ECMWF had better scores than the other products. However, the products of all centers underestimated precipitation in high-precipitation regions while overestimating precipitation in other regions. This points to a systematic bias in the forecasts and demands the application of bias correction techniques. Based on the dichotomous evaluation, NCEP did better at most stations, although all centers overpredicted the number of precipitation events. Compared to ECMWF and NCEP, UKMO yielded higher scores in mountainous regions but performed poorly at the other selected stations. Furthermore, the evaluations showed that all centers had better skill in wet than in dry seasons. The quality of the post-processed predictions was better than that of the raw predictions. In conclusion, the accuracy of the NWP predictions made by the selected centers can be classified as medium over Iran, and post-processing of the predictions is recommended to improve their quality.
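A minimal sketch of the Inverse Distance Weighting step used to bring gridded forecast values to the gauge locations, under stated assumptions: a power-2 weighting over all grid points, with no search radius. The power parameter, neighbourhood, and function name are illustrative choices, not those of the study.

```python
# Illustrative only: Inverse Distance Weighting (IDW) interpolation to station locations.
import numpy as np

def idw_interpolate(xy_grid, values, xy_target, power=2.0, eps=1e-12):
    """xy_grid: (n, 2) forecast grid points; values: (n,); xy_target: (m, 2) stations."""
    d = np.linalg.norm(xy_grid[None, :, :] - xy_target[:, None, :], axis=2)
    w = 1.0 / (d ** power + eps)                   # inverse-distance weights
    return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)
```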
ZnO deposition on metal substrates: Relating fabrication, morphology, and wettability
NASA Astrophysics Data System (ADS)
Beaini, Sara S.; Kronawitter, Coleman X.; Carey, Van P.; Mao, Samuel S.
2013-05-01
It is not common practice to deposit thin films on metal substrates, especially copper, which is a common heat exchanger metal and practical engineering material known for its heat transfer properties. While single crystal substrates offer ideal surfaces with uniform structure for compatibility with oxide deposition, metallic surfaces needed for industrial applications exhibit non-idealities that complicate the fabrication of oxide nanostructure arrays. The following study explored different ZnO fabrication techniques to deposit a (super)hydrophobic thin film of ZnO on a metal substrate, specifically copper, in order to explore its feasibility as an enhanced condensing surface. ZnO was selected for its non-toxicity, ability to be made (super)hydrophobic with hierarchical roughness, and its photoinduced hydrophilicity characteristic, which could be utilized to pattern it to have both hydrophobic-hydrophilic regions. We investigated the variation of ZnO's morphology and wetting state, using SEMs and sessile drop contact angle measurements, as a function of different fabrication techniques: sputtering, pulsed laser deposition (PLD), electrodeposition and annealing Zn. We successfully fabricated (super)hydrophobic ZnO on a mirror finish, commercially available copper substrate using the scalable electrodeposition technique. PLD for ZnO deposition did not prove viable, as the ZnO samples on metal substrates were hydrophilic and the process does not lend itself to scalability. The annealed Zn sheets did not exhibit consistent wetting state results.
An iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring
NASA Astrophysics Data System (ADS)
Li, J. Y.; Kitanidis, P. K.
2013-12-01
Reservoir forecasting and management are increasingly relying on an integrated reservoir monitoring approach, which involves data assimilation to calibrate the complex process of multi-phase flow and transport in the porous medium. The numbers of unknowns and measurements arising in such joint inversion problems are usually very large. The ensemble Kalman filter and other ensemble-based techniques are popular because they circumvent the computational barriers of computing Jacobian matrices and covariance matrices explicitly and allow nonlinear error propagation. These algorithms are very useful but their performance is not well understood and it is not clear how many realizations are needed for satisfactory results. In this presentation we introduce an iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring. It is intended for problems for which the posterior or conditional probability density function is not too different from a Gaussian, despite nonlinearity in the state transition and observation equations. The algorithm generates realizations that have the potential to adequately represent the conditional probability density function (pdf). Theoretical analysis sheds light on the conditions under which this algorithm should work well and explains why some applications require very few realizations while others require many. This algorithm is compared with the classical ensemble Kalman filter (Evensen, 2003) and with Gu and Oliver's (2007) iterative ensemble Kalman filter on a synthetic problem of monitoring a reservoir using wellbore pressure and flux data.
Image resolution enhancement via image restoration using neural network
NASA Astrophysics Data System (ADS)
Zhang, Shuangteng; Lu, Yihong
2011-04-01
Image super-resolution aims to obtain a high-quality image at a resolution that is higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is treated as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes into consideration point spread function blurring as well as additive noise, and therefore generates high-resolution images with more preserved or restored image details. Experimental results demonstrate that the high-resolution images obtained by this technique have very high quality in terms of PSNR and visually look more pleasant.
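A minimal sketch of the inverse-problem view described above, under stated assumptions: the low-resolution image is modelled as a blurred and decimated high-resolution image, and the estimate is obtained by plain gradient descent on the least-squares data misfit. This stands in for the Hopfield-network minimisation of the paper; the Gaussian blur width, decimation factor, and step size are illustrative assumptions.

```python
# Illustrative only: gradient descent on ||D B x - y||^2 for single-frame super-resolution.
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x, factor=2, sigma=1.0):
    return gaussian_filter(x, sigma)[::factor, ::factor]   # blur (B) then decimate (D)

def upsample(y, factor, shape):
    x = np.zeros(shape)
    x[::factor, ::factor] = y                               # adjoint of decimation
    return x

def super_resolve(y, factor=2, sigma=1.0, steps=200, lr=1.0):
    shape = (y.shape[0] * factor, y.shape[1] * factor)
    x = np.kron(y, np.ones((factor, factor)))               # simple initial guess
    for _ in range(steps):
        residual = degrade(x, factor, sigma) - y
        grad = gaussian_filter(upsample(residual, factor, shape), sigma)  # B^T D^T r
        x -= lr * grad                                      # descend on the misfit
    return x
```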
NASA Astrophysics Data System (ADS)
Chen, BinQiang; Zhang, ZhouSuo; Zi, YanYang; He, ZhengJia; Sun, Chuang
2013-10-01
Detecting transient vibration signatures is of vital importance for vibration-based condition monitoring and fault detection of rotating machinery. However, raw mechanical signals collected by vibration sensors are generally mixtures of the physical vibrations of the multiple mechanical components installed in the examined machinery. Fault-generated incipient vibration signatures masked by interfering contents are difficult to identify. The fast kurtogram (FK) is a concise and smart gadget for characterizing these vibration features. The multi-rate filter-bank (MRFB) and the spectral kurtosis (SK) indicator of the FK are less powerful when strong interfering vibration contents exist, especially when the FK is applied to vibration signals of short duration. Impulsive interfering contents that are not authentically induced by mechanical faults complicate the optimal analyzing process and lead to incorrect selection of the optimal analysis subband, so the original FK may leave out the essential fault signatures. To enhance the analyzing performance of the FK for industrial applications, an improved version of the fast kurtogram, named the "fast spatial-spectral ensemble kurtosis kurtogram", is presented. In the proposed technique, discrete quasi-analytic wavelet tight frame (QAWTF) expansion methods are incorporated as the detection filters. The QAWTF, constructed based on the dual-tree complex wavelet transform, possesses better vibration transient signature extracting ability and enhanced time-frequency localizability compared with conventional wavelet packet transforms (WPTs). Moreover, in the constructed QAWTF, a non-dyadic ensemble wavelet subband generating strategy is put forward to produce extra wavelet subbands that are capable of identifying fault features located in the transition band of the WPT. On the other hand, an enhanced signal impulsiveness evaluating indicator, named "spatial-spectral ensemble kurtosis" (SSEK), is put forward and utilized as the quantitative measure to select optimal analyzing parameters. The SSEK indicator is more robust in evaluating the impulsiveness intensity of vibration signals due to its better ability to suppress Gaussian noise, harmonics and sporadic impulsive shocks. Numerical validations, an experimental test and two engineering applications were used to verify the effectiveness of the proposed technique. The analyzing results of the numerical validations, experimental tests and engineering applications demonstrate that the proposed technique possesses more robust transient vibration content detecting performance in comparison with the original FK and the WPT-based FK method, especially when they are applied to the processing of vibration signals of relatively limited duration.
Nuclear Ensemble Approach with Importance Sampling.
Kossoski, Fábris; Barbatti, Mario
2018-06-12
We show that the importance sampling technique can effectively augment the range of problems where the nuclear ensemble approach can be applied. A sampling probability distribution function initially determines the collection of initial conditions for which calculations are performed, as usual. Then, results for a distinct target distribution are computed by introducing compensating importance sampling weights for each sampled point. This mapping between the two probability distributions can be performed whenever they are both explicitly constructed. Perhaps most notably, this procedure allows for the computation of temperature dependent observables. As a test case, we investigated the UV absorption spectra of phenol, which has been shown to have a marked temperature dependence. Application of the proposed technique to a range that covers 500 K provides results that converge to those obtained with conventional sampling. We further show that an overall improved rate of convergence is obtained when sampling is performed at intermediate temperatures. The comparison between calculated and the available measured cross sections is very satisfactory, as the main features of the spectra are correctly reproduced. As a second test case, one of Tully's classical models was revisited, and we show that the computation of dynamical observables also profits from the importance sampling technique. In summary, the strategy developed here can be employed to assess the role of temperature for any property calculated within the nuclear ensemble method, with the same computational cost as doing so for a single temperature.
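A minimal sketch of the reweighting idea described above, under stated assumptions: samples drawn from one Boltzmann-like distribution are reused at a target temperature by attaching importance weights proportional to the ratio of Boltzmann factors, and any ensemble-averaged observable becomes a weighted mean. The stabilisation by the maximum log-weight and the effective-sample-size check are standard practice, not details taken from the paper.

```python
# Illustrative only: temperature reweighting of an existing ensemble by importance sampling.
import numpy as np

def reweighted_average(observable, energies, beta_sample, beta_target):
    """observable, energies: per-sample arrays from the original (sampled) ensemble."""
    log_w = -(beta_target - beta_sample) * energies     # log of the Boltzmann-factor ratio
    w = np.exp(log_w - log_w.max())                     # numerically stabilised weights
    w /= w.sum()
    n_eff = 1.0 / np.sum(w**2)                          # effective sample size diagnostic
    return np.sum(w * observable), n_eff
```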
Inverse Regional Modeling with Adjoint-Free Technique
NASA Astrophysics Data System (ADS)
Yaremchuk, M.; Martin, P.; Panteleev, G.; Beattie, C.
2016-02-01
The ongoing parallelization trend in computer technologies facilitates the use of ensemble methods in geophysical data assimilation. Of particular interest are ensemble techniques which do not require the development of tangent linear numerical models and their adjoints for optimization. These "adjoint-free" methods minimize the cost function within a sequence of subspaces spanned by carefully chosen sets of perturbations of the control variables. In this presentation, an adjoint-free variational technique (a4dVar) is demonstrated in an application estimating the initial conditions of two numerical models: the Navy Coastal Ocean Model (NCOM) and the surface wave model (WAM). With the NCOM, the performance of the adjoint and adjoint-free 4dVar data assimilation techniques is compared in application to the hydrographic surveys and velocity observations collected in the Adriatic Sea in 2006. Numerical experiments have shown that a4dVar is capable of providing forecast skill similar to that of conventional 4dVar at comparable computational expense while being less susceptible to excitation of ageostrophic modes that are not supported by observations. The adjoint-free technique constrained by the WAM model is tested in a series of data assimilation experiments with synthetic observations in the southern Chukchi Sea. The types of observations considered are directional spectra estimated from point measurements by stationary buoys, significant wave height (SWH) observations by coastal high-frequency radars, and along-track SWH observations by satellite altimeters. The a4dVar forecast skill is shown to be 30-40% better than the skill of the sequential assimilation method based on optimal interpolation which is currently used in operations. Prospects for further development of a4dVar methods in regional applications are discussed.
Higher moments of multiplicity fluctuations in a hadron-resonance gas with exact conservation laws
NASA Astrophysics Data System (ADS)
Fu, Jing-Hua
2017-09-01
Higher moments of multiplicity fluctuations of hadrons produced in central nucleus-nucleus collisions are studied within the hadron-resonance gas model in the canonical ensemble. Exact conservation of three charges, baryon number, electric charge, and strangeness, is enforced in the large volume limit. Moments up to the fourth order of various particles are calculated at CERN Super Proton Synchrotron, BNL Relativistic Heavy Ion Collider (RHIC), and CERN Large Hadron Collider energies. The asymptotic fluctuations within a simplified model with only one conserved charge in the canonical ensemble are discussed, where simple analytical expressions for the moments of multiplicity distributions can be obtained. Moment products of net-proton, net-kaon, and net-charge distributions in Au + Au collisions at RHIC energies are calculated. The pseudorapidity coverage dependence of the net-charge fluctuations is discussed.
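For reference, the standard relations between the central moments (cumulants C_n) of a net-charge-type distribution and the commonly quoted moment products, written in generic notation; these are textbook definitions used in such analyses, not formulas reproduced from the paper itself.

```latex
% Standard moment/cumulant relations for a multiplicity distribution of N,
% with \delta N = N - \langle N \rangle (generic notation, assumed here).
\begin{align}
  M &= \langle N \rangle = C_1, \qquad
  \sigma^2 = \langle (\delta N)^2 \rangle = C_2, \\
  S &= \frac{\langle (\delta N)^3 \rangle}{\sigma^3} = \frac{C_3}{C_2^{3/2}}, \qquad
  \kappa = \frac{\langle (\delta N)^4 \rangle}{\sigma^4} - 3 = \frac{C_4}{C_2^{2}}, \\
  \frac{\sigma^2}{M} &= \frac{C_2}{C_1}, \qquad
  S\sigma = \frac{C_3}{C_2}, \qquad
  \kappa\sigma^2 = \frac{C_4}{C_2}.
\end{align}
```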
Tracking individual membrane proteins and their biochemistry: The power of direct observation.
Barden, Adam O; Goler, Adam S; Humphreys, Sara C; Tabatabaei, Samaneh; Lochner, Martin; Ruepp, Marc-David; Jack, Thomas; Simonin, Jonathan; Thompson, Andrew J; Jones, Jeffrey P; Brozik, James A
2015-11-01
The advent of single molecule fluorescence microscopy has allowed experimental molecular biophysics and biochemistry to transcend traditional ensemble measurements, where the behavior of individual proteins could not be precisely sampled. The recent explosion in popularity of new super-resolution and super-localization techniques coupled with technical advances in optical designs and fast highly sensitive cameras with single photon sensitivity and millisecond time resolution have made it possible to track key motions, reactions, and interactions of individual proteins with high temporal resolution and spatial resolution well beyond the diffraction limit. Within the purview of membrane proteins and ligand gated ion channels (LGICs), these outstanding advances in single molecule microscopy allow for the direct observation of discrete biochemical states and their fluctuation dynamics. Such observations are fundamentally important for understanding molecular-level mechanisms governing these systems. Examples reviewed here include the effects of allostery on the stoichiometry of ligand binding in the presence of fluorescent ligands; the observation of subdomain partitioning of membrane proteins due to microenvironment effects; and the use of single particle tracking experiments to elucidate characteristics of membrane protein diffusion and the direct measurement of thermodynamic properties, which govern the free energy landscape of protein dimerization. The review of such characteristic topics represents a snapshot of efforts to push the boundaries of fluorescence microscopy of membrane proteins to the absolute limit. This article is part of the Special Issue entitled 'Fluorescent Tools in Neuropharmacology'. Copyright © 2015 Elsevier Ltd. All rights reserved.
Internal Spin Control, Squeezing and Decoherence in Ensembles of Alkali Atomic Spins
NASA Astrophysics Data System (ADS)
Norris, Leigh Morgan
Large atomic ensembles interacting with light are one of the most promising platforms for quantum information processing. In the past decade, novel applications for these systems have emerged in quantum communication, quantum computing, and metrology. Essential to all of these applications is the controllability of the atomic ensemble, which is facilitated by a strong coupling between the atoms and light. Non-classical spin squeezed states are a crucial step in attaining greater ensemble control. The degree of entanglement present in these states, furthermore, serves as a benchmark for the strength of the atom-light interaction. Outside the broader context of quantum information processing with atomic ensembles, spin squeezed states have applications in metrology, where their quantum correlations can be harnessed to improve the precision of magnetometers and atomic clocks. This dissertation focuses upon the production of spin squeezed states in large ensembles of cold trapped alkali atoms interacting with optical fields. While most treatments of spin squeezing consider only the case in which the ensemble is composed of two level systems or qubits, we utilize the entire ground manifold of an alkali atom with hyperfine spin f greater than or equal to 1/2, a qudit. Spin squeezing requires non-classical correlations between the constituent atomic spins, which are generated through the atoms' collective coupling to the light. Either through measurement or multiple interactions with the atoms, the light mediates an entangling interaction that produces quantum correlations. Because the spin squeezing treated in this dissertation ultimately originates from the coupling between the light and atoms, conventional approaches of improving this squeezing have focused on increasing the optical density of the ensemble. The greater number of internal degrees of freedom and the controllability of the spin-f ground hyperfine manifold enable novel methods of enhancing squeezing. In particular, we find that state preparation using control of the internal hyperfine spin increases the entangling power of squeezing protocols when f>1/2. Post-processing of the ensemble using additional internal spin control converts this entanglement into metrologically useful spin squeezing. By employing a variation of the Holstein-Primakoff approximation, in which the collective spin observables of the atomic ensemble are treated as quadratures of a bosonic mode, we model entanglement generation, spin squeezing and the effects of internal spin control. The Holstein-Primakoff formalism also enables us to take into account the decoherence of the ensemble due to optical pumping. While most works ignore or treat optical pumping phenomenologically, we employ a master equation derived from first principles. Our analysis shows that state preparation and the hyperfine spin size have a substantial impact upon both the generation of spin squeezing and the decoherence of the ensemble. Through a numerical search, we determine state preparations that enhance squeezing protocols while remaining robust to optical pumping. Finally, most work on spin squeezing in atomic ensembles has treated the light as a plane wave that couples identically to all atoms. In the final part of this dissertation, we go beyond the customary plane wave approximation on the light and employ focused paraxial beams, which are more efficiently mode matched to the radiation pattern of the atomic ensemble. 
The mathematical formalism and the internal spin control techniques that we applied in the plane wave case are generalized to accommodate the non-homogeneous paraxial probe. We find the optimal geometries of the atomic ensemble and the probe for mode matching and generation of spin squeezing.
Internal Spin Control, Squeezing and Decoherence in Ensembles of Alkali Atomic Spins
NASA Astrophysics Data System (ADS)
Norris, Leigh Morgan
Large atomic ensembles interacting with light are one of the most promising platforms for quantum information processing. In the past decade, novel applications for these systems have emerged in quantum communication, quantum computing, and metrology. Essential to all of these applications is the controllability of the atomic ensemble, which is facilitated by a strong coupling between the atoms and light. Non-classical spin squeezed states are a crucial step in attaining greater ensemble control. The degree of entanglement present in these states, furthermore, serves as a benchmark for the strength of the atom-light interaction. Outside the broader context of quantum information processing with atomic ensembles, spin squeezed states have applications in metrology, where their quantum correlations can be harnessed to improve the precision of magnetometers and atomic clocks. This dissertation focuses upon the production of spin squeezed states in large ensembles of cold trapped alkali atoms interacting with optical fields. While most treatments of spin squeezing consider only the case in which the ensemble is composed of two level systems or qubits, we utilize the entire ground manifold of an alkali atom with hyperfine spin f greater or equal to 1/2, a qudit. Spin squeezing requires non-classical correlations between the constituent atomic spins, which are generated through the atoms' collective coupling to the light. Either through measurement or multiple interactions with the atoms, the light mediates an entangling interaction that produces quantum correlations. Because the spin squeezing treated in this dissertation ultimately originates from the coupling between the light and atoms, conventional approaches of improving this squeezing have focused on increasing the optical density of the ensemble. The greater number of internal degrees of freedom and the controllability of the spin-f ground hyperfine manifold enable novel methods of enhancing squeezing. In particular, we find that state preparation using control of the internal hyperfine spin increases the entangling power of squeezing protocols when f >1/2. Post-processing of the ensemble using additional internal spin control converts this entanglement into metrologically useful spin squeezing. By employing a variation of the Holstein-Primakoff approximation, in which the collective spin observables of the atomic ensemble are treated as quadratures of a bosonic mode, we model entanglement generation, spin squeezing and the effects of internal spin control. The Holstein-Primakoff formalism also enables us to take into account the decoherence of the ensemble due to optical pumping. While most works ignore or treat optical pumping phenomenologically, we employ a master equation derived from first principles. Our analysis shows that state preparation and the hyperfine spin size have a substantial impact upon both the generation of spin squeezing and the decoherence of the ensemble. Through a numerical search, we determine state preparations that enhance squeezing protocols while remaining robust to optical pumping. Finally, most work on spin squeezing in atomic ensembles has treated the light as a plane wave that couples identically to all atoms. In the final part of this dissertation, we go beyond the customary plane wave approximation on the light and employ focused paraxial beams, which are more efficiently mode matched to the radiation pattern of the atomic ensemble. 
The mathematical formalism and the internal spin control techniques that we applied in the plane wave case are generalized to accommodate the non-homogeneous paraxial probe. We find the optimal geometries of the atomic ensemble and the probe for mode matching and generation of spin squeezing.
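For reference, a minimal sketch of the standard Holstein-Primakoff mapping invoked above, written for a collective spin of length J polarized along x; the notation and normalization are illustrative and not taken from the dissertation:

```latex
% Lowest-order Holstein-Primakoff mapping for a collective spin of length J
% polarized along x; a, a^\dagger are bosonic mode operators.
\begin{align}
  J_x &= J - a^\dagger a, \\
  J_y + i J_z &= \sqrt{2J}\,\sqrt{1 - \tfrac{a^\dagger a}{2J}}\; a \;\approx\; \sqrt{2J}\, a, \\
  X &\equiv \frac{J_y}{\sqrt{J}}, \qquad P \equiv \frac{J_z}{\sqrt{J}}, \qquad [X, P] \approx i .
\end{align}
```

In this limit the transverse collective spin components behave as quadratures of a single bosonic mode, which is what allows squeezing, entanglement generation, and optical-pumping decoherence to be modeled with bosonic (Gaussian) techniques.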
Super-resolution reconstruction of hyperspectral images.
Akgun, Toygar; Altunbasak, Yucel; Mersereau, Russell M
2005-11-01
Hyperspectral images are used for aerial and space imagery applications, including target detection, tracking, and agricultural and natural-resource exploration. Unfortunately, atmospheric scattering, secondary illumination, changing viewing angles, and sensor noise degrade the quality of these images. Improving their resolution has a high payoff, but applying super-resolution techniques separately to every spectral band is problematic for two main reasons. First, the number of spectral bands can be in the hundreds, which increases the computational load excessively. Second, considering the bands separately does not make use of the information that is present across them. Furthermore, separate band super-resolution does not make use of the inherent low dimensionality of the spectral data, which can effectively be used to improve the robustness against noise. In this paper, we introduce a novel super-resolution method for hyperspectral images. An integral part of our work is to model the hyperspectral image acquisition process. We propose a model that enables us to represent the hyperspectral observations from different wavelengths as weighted linear combinations of a small number of basis image planes. Then, a method for applying super-resolution to hyperspectral images using this model is presented. The method fuses information from multiple observations and spectral bands to improve spatial resolution and reconstruct the spectrum of the observed scene as a combination of a small number of spectral basis functions.
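The central modeling step, representing the many observed bands as weighted combinations of a few basis image planes, can be illustrated with a truncated SVD. The sketch below is a generic low-rank decomposition, not the authors' acquisition model; the array names and the number of basis planes are assumptions:

```python
import numpy as np

def spectral_basis_decomposition(cube, n_basis=5):
    """Approximate a hyperspectral cube (bands x rows x cols) as weighted
    combinations of a small number of basis image planes via truncated SVD."""
    n_bands, n_rows, n_cols = cube.shape
    X = cube.reshape(n_bands, -1)              # each row: one band as a flat image
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    weights = U[:, :n_basis] * s[:n_basis]     # per-band weights (n_bands x n_basis)
    basis_planes = Vt[:n_basis].reshape(n_basis, n_rows, n_cols)
    return weights, basis_planes

# Usage sketch: reconstruct band k from the low-dimensional representation
# band_k_hat = np.tensordot(weights[k], basis_planes, axes=1)
```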
Measuring the performance of super-resolution reconstruction algorithms
NASA Astrophysics Data System (ADS)
Dijk, Judith; Schutte, Klamer; van Eekeren, Adam W. M.; Bijl, Piet
2012-06-01
For many military operations situational awareness is of great importance. This situational awareness and related tasks such as Target Acquisition can be acquired using cameras, for which resolution is an important characteristic. Super-resolution reconstruction algorithms can be used to improve the effective sensor resolution. In order to judge these algorithms and the conditions under which they operate best, performance evaluation methods are necessary. This evaluation, however, is not straightforward for several reasons. First of all, frequency-based evaluation techniques alone will not provide a correct answer, because they are unable to discriminate between structure-related and noise-related effects. Secondly, most super-resolution packages perform additional image enhancement techniques such as noise reduction and edge enhancement. As these algorithms improve the results, they cannot be evaluated separately. Thirdly, a single high-resolution ground truth is rarely available. Therefore, evaluating the differences between the estimated high-resolution image and its ground truth is not straightforward. Fourth, super-resolution reconstruction can introduce different artifacts, which are not known beforehand and hence are difficult to evaluate. In this paper we present a set of new evaluation techniques to assess super-resolution reconstruction algorithms. Some of these evaluation techniques are derived from processing on dedicated (synthetic) imagery. Other evaluation techniques can be evaluated on both synthetic and natural images (real camera data). The result is a balanced set of evaluation algorithms that can be used to assess the performance of super-resolution reconstruction algorithms.
5D Super Yang-Mills on Y^{p,q} Sasaki-Einstein Manifolds
NASA Astrophysics Data System (ADS)
Qiu, Jian; Zabzine, Maxim
2015-01-01
On any simply connected five-dimensional Sasaki-Einstein manifold one can construct a super Yang-Mills theory which preserves at least two supersymmetries. We study the special case of toric Sasaki-Einstein manifolds known as Y^{p,q} manifolds. We use the localisation technique to compute the full perturbative part of the partition function. The full equivariant result is expressed in terms of a certain special function which appears to be a curious generalisation of the triple sine function. As an application of our general result we study the large N behaviour for the case of a single hypermultiplet in the adjoint representation and we derive the N^3 behaviour in this case.
Demosaiced pixel super-resolution for multiplexed holographic color imaging
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2016-01-01
To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve the individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications, where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242
NASA Astrophysics Data System (ADS)
Wetterhall, F.; Cloke, H. L.; He, Y.; Freer, J.; Pappenberger, F.
2012-04-01
Evidence provided by modelled assessments of climate change impact on flooding is fundamental to water resource and flood risk decision making. Impact models usually rely on climate projections from Global and Regional Climate Models, and there is no doubt that these provide a useful assessment of future climate change. However, cascading ensembles of climate projections into impact models is not straightforward because of problems of coarse resolution in Global and Regional Climate Models (GCM/RCM) and the deficiencies in modelling high-intensity precipitation events. Thus decisions must be made on how to appropriately pre-process the meteorological variables from GCM/RCMs, such as selection of downscaling methods and application of Model Output Statistics (MOS). In this paper a grand ensemble of projections from several GCM/RCMs is used to drive a hydrological model and analyse the resulting future flood projections for the Upper Severn, UK. The impact and implications of applying MOS techniques to precipitation, as well as hydrological model parameter uncertainty, are taken into account. The resultant grand ensemble of future river discharge projections from the RCM/GCM-hydrological model chain is evaluated against a response surface technique combined with a perturbed physics experiment creating a probabilistic ensemble of climate model outputs. The ensemble distribution of results shows that the future risk of flooding in the Upper Severn increases compared to present conditions; however, the study highlights that the uncertainties are large and that strong assumptions were made in using Model Output Statistics to produce the estimates of future discharge. The importance of analysing on a seasonal basis rather than just annually is highlighted. The inability of the RCMs (and GCMs) to produce realistic precipitation patterns, even in present conditions, is a major caveat of local climate impact studies on flooding, and this should be a focus for future development.
NASA Astrophysics Data System (ADS)
Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.
2015-11-01
Earlier, a two-component pseudopotential plasma model, which we call the “shelf Coulomb” model, was developed. A Monte Carlo study of the canonical NVT ensemble with periodic boundary conditions has been undertaken to calculate equations of state, pair distribution functions, internal energies and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte Carlo technique to this model. First simulation results show qualitatively similar behaviour in the critical point region for both methods. The Gibbs ensemble technique lets us estimate the melting curve position and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10^-4.
NASA Astrophysics Data System (ADS)
Singh, Rajender; Sharma, Ramesh; Barman, P. B.; Sharma, Dheeraj
2017-11-01
A UV-shielding, superhydrophilic material is developed in the present formulation by in situ emulsion polymerization of poly(styrene-acrylonitrile) with ZnO nanoparticles. The ESI-MS technique confirms the structure of the polymer nanocomposite by its mass fragments. The XRD study confirms the presence of the ZnO phase in the polymer matrix. The PSAN/ZnO nanocomposite gives effective UV shielding (up to 375 nm) and visible luminescence with ZnO content in the polymer matrix. The FESEM and TEM studies confirm the symmetrical, controlled growth of the PNs. The incorporation of ZnO nanofillers into the PSAN matrix restructures the PN surfaces into superhydrophilic surfaces, reducing the water contact angle (WCA) from 70° to 10°. We believe our synthesized PSAN/ZnO nanocomposite could serve as a UV-shielding, luminescent and superhydrophilic material in related commercial applications.
HIPPI: highly accurate protein family classification with ensembles of HMMs.
Nguyen, Nam-Phuong; Nute, Michael; Mirarab, Siavash; Warnow, Tandy
2016-11-11
Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile Hidden Markov models can better represent multiple sequence alignments than a single profile Hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile Hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp .
Forced synchronization of large-scale circulation to increase predictability of surface states
NASA Astrophysics Data System (ADS)
Shen, Mao-Lin; Keenlyside, Noel; Selten, Frank; Wiegerinck, Wim; Duane, Gregory
2016-04-01
Numerical models are key tools in the projection of future climate change. The lack of perfect initial conditions and perfect knowledge of the laws of physics, as well as inherent chaotic behavior, limit predictions. Conceptually, the atmospheric variables can be decomposed into a predictable component (signal) and an unpredictable component (noise). In ensemble prediction the anomaly of the ensemble mean is regarded as the signal and the ensemble spread as the noise. Naturally the prediction skill will be higher if the signal-to-noise ratio (SNR) is larger in the initial conditions. We run two ensemble experiments in order to explore a way to increase the SNR of surface winds and temperature. One ensemble experiment is an AGCM with prescribed sea surface temperature (SST); the other is an AGCM with both prescribed SST and nudging of the high-level temperature and winds to ERA-Interim. Each ensemble has 30 members. A larger SNR is expected and found over the tropical ocean in the first experiment because the tropical circulation, the convection and the associated surface wind convergence are to a large extent driven by the SST. However, a small SNR is found over the high-latitude ocean and land surface due to the chaotic and non-synchronized atmospheric states. In the second experiment the higher-level temperature and winds are forced to be synchronized (nudged to reanalysis) and hence a larger SNR of surface winds and temperature is expected. Furthermore, different nudging coefficients are also tested in order to understand the limitations of both the synchronization of the large-scale circulation and the surface states. These experiments will be useful for developing strategies to synchronize the 3-D states of atmospheric models that can later be used to build a super model.
Decadal climate predictions improved by ocean ensemble dispersion filtering
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.
2017-06-01
Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, the decadal climate prediction falls in-between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.
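The filtering step itself can be written compactly. The sketch below is a schematic of relaxing each member's ocean state toward the ensemble mean at seasonal intervals; the relaxation weight and array layout are chosen for illustration rather than taken from the paper:

```python
import numpy as np

def ensemble_dispersion_filter(states, weight=0.5):
    """Shift each ensemble member's ocean state toward the ensemble mean.

    states: array of shape (n_members, n_state) holding each member's ocean
            state at the end of a season.
    weight: fraction of the member-minus-mean anomaly removed (1.0 would
            collapse all members onto the mean).
    """
    ens_mean = states.mean(axis=0)
    return ens_mean + (1.0 - weight) * (states - ens_mean)

# Applied at seasonal intervals between model integration segments:
# states = integrate_one_season(states)        # hypothetical model step
# states = ensemble_dispersion_filter(states)
```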
NASA Astrophysics Data System (ADS)
Cook, L. M.; Samaras, C.; Anderson, C.
2016-12-01
Engineers generally use historical precipitation trends to inform assumptions and parameters for long-lived infrastructure designs. However, resilient design calls for the adjustment of current engineering practice to incorporate a range of future climate conditions that are likely to be different than the past. Despite the availability of future projections from downscaled climate models, there remains a considerable mismatch between climate model outputs and the inputs needed in the engineering community to incorporate climate resiliency. These factors include differences in temporal and spatial scales, model uncertainties, and a lack of criteria for selection of an ensemble of models. This research addresses the limitations of working with climate data by providing a framework for the use of publicly available downscaled climate projections to inform engineering resiliency. The framework consists of five steps: 1) selecting the data source based on the engineering application, 2) extracting the data at a specific location, 3) validating for performance against observed data, 4) post-processing for bias or scale, and 5) selecting the ensemble and calculating statistics. The framework is illustrated with an example application to extreme precipitation-frequency statistics, the 25-year daily precipitation depth, using four publicly available climate data sources: NARCCAP, USGS, Reclamation, and MACA. The attached figure presents the results for step 5 from the framework, analyzing how the 24H25Y depth changes when the model ensemble is culled based on model performance against observed data, for both post-processing techniques: bias-correction and change factor. Culling the model ensemble increases both the mean and median values for all data sources, and reduces the range of the NARCCAP and MACA ensembles due to elimination of poorer-performing models, and in some cases, those that predict a decrease in future 24H25Y precipitation volumes. This result is especially relevant to engineers who wish to reduce the range of the ensemble and remove contradicting models; however, this result is not generalizable for all cases. Finally, this research highlights the need for the formation of an intermediate entity that is able to translate climate projections into relevant engineering information.
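Steps 4-5 of such a framework can be illustrated with the change-factor option mentioned above: scale the observed 25-year daily depth by the ratio of the model's future to historical return levels. The sketch below uses a GEV fit of annual maxima and is only a schematic of one possible post-processing choice, not the study's code; all names are illustrative:

```python
import numpy as np
from scipy.stats import genextreme

def depth_25yr(annual_maxima):
    """25-year return level from a series of annual maximum daily precipitation."""
    c, loc, scale = genextreme.fit(annual_maxima)
    return genextreme.ppf(1.0 - 1.0 / 25.0, c, loc=loc, scale=scale)

def change_factor_projection(obs_ann_max, model_hist_ann_max, model_fut_ann_max):
    """Scale the observed 25-year depth by the model's future/historical ratio."""
    factor = depth_25yr(model_fut_ann_max) / depth_25yr(model_hist_ann_max)
    return factor * depth_25yr(obs_ann_max)
```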
Regionalization of post-processed ensemble runoff forecasts
NASA Astrophysics Data System (ADS)
Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian
2016-05-01
For many years, meteorological models have been run with perturbed initial conditions or parameters to produce ensemble forecasts that are used as a proxy of the uncertainty of the forecasts. However, the ensembles are usually both biased (the mean is systematically too high or too low, compared with the observed weather), and have dispersion errors (the ensemble variance indicates too low or too high a confidence in the forecast, compared with the observed weather). The ensembles are therefore commonly post-processed to correct for these shortcomings. Here we look at one of these techniques, referred to as Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). Originally, the post-processing parameters were identified as a fixed set of parameters for a region. The application of our work is the European Flood Awareness System (http://www.efas.eu), where a distributed model is run with meteorological ensembles as input. We are therefore dealing with a considerably larger data set than previous analyses. We also want to regionalize the parameters themselves for locations other than the calibration gauges. The post-processing parameters are therefore estimated for each calibration station, but with a spatial penalty for deviations from neighbouring stations, depending on the expected semivariance between the calibration catchment and these stations. The estimated post-processing parameters can then be regionalized to uncalibrated locations using top-kriging in the rtop package (Skøien et al., 2006, 2014). We will show results from cross-validation of the methodology, and although our interest is mainly in identifying exceedance probabilities for certain return levels, we will also show how the rtop package can be used for creating a set of post-processed ensembles through simulations.
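For context, Gaussian EMOS fits a single predictive normal distribution whose mean and variance are affine functions of the ensemble mean and ensemble variance. The sketch below is a simplified stand-in, using least-squares and moment fits instead of the minimum-CRPS estimation used in the EMOS literature; variable names are illustrative:

```python
import numpy as np

def fit_emos(ens_mean, ens_var, obs):
    """Fit a simple Gaussian EMOS model: obs ~ N(a + b*ens_mean, c + d*ens_var).

    Mean coefficients come from least squares; variance coefficients from
    regressing squared residuals on the ensemble variance (a simplification
    of the usual minimum-CRPS estimation)."""
    A = np.column_stack([np.ones_like(ens_mean), ens_mean])
    (a, b), *_ = np.linalg.lstsq(A, obs, rcond=None)
    resid2 = (obs - (a + b * ens_mean)) ** 2
    (c, d), *_ = np.linalg.lstsq(np.column_stack([np.ones_like(ens_var), ens_var]),
                                 resid2, rcond=None)
    return a, b, c, d

def emos_predict(params, ens_mean, ens_var):
    a, b, c, d = params
    return a + b * ens_mean, np.maximum(c + d * ens_var, 1e-6)  # mean, variance
```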
A Builder's Guide to Super Good Cents Construction and Sales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
OSU Extension Energy Program; United States. Bonneville Power Administration.
This Builder's Guide describes the Super Good Cents® program and the benefits available to participating builders. It explains the program standards and the typical building techniques used by Super Good Cents builders. Finally, the guide tells how you can participate and answers many of the questions asked by builders about the Super Good Cents program.
Towards an improved ensemble precipitation forecast: A probabilistic post-processing approach
NASA Astrophysics Data System (ADS)
Khajehei, Sepideh; Moradkhani, Hamid
2017-03-01
Recently, ensemble post-processing (EPP) has become a commonly used approach for reducing the uncertainty in forcing data and hence in hydrologic simulation. The procedure was introduced to build ensemble precipitation forecasts based on the statistical relationship between observations and forecasts. More specifically, the approach relies on a transfer function that is developed based on a bivariate joint distribution between the observations and the simulations in the historical period. The transfer function is used to post-process the forecast. In this study, we propose a Bayesian EPP approach based on copula functions (COP-EPP) to improve the reliability of the precipitation ensemble forecast. Evaluation of the copula-based method is carried out by comparing the performance of the generated ensemble precipitation with the outputs from an existing procedure, i.e., the mixed-type meta-Gaussian distribution. Monthly precipitation from the Climate Forecast System Reanalysis (CFS) and gridded observations from the Parameter-Elevation Relationships on Independent Slopes Model (PRISM) have been employed to generate the post-processed ensemble precipitation. Deterministic and probabilistic verification frameworks are utilized in order to evaluate the outputs from the proposed technique. The distribution of seasonal precipitation for the generated ensemble from the copula-based technique is compared to the observations and raw forecasts for three sub-basins located in the Western United States. Results show that both techniques are successful in producing reliable and unbiased ensemble forecasts; however, the COP-EPP demonstrates considerable improvement in the ensemble forecast in both deterministic and probabilistic verification, in particular in characterizing the extreme events in wet seasons.
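The copula idea can be sketched with the simplest possible case, a Gaussian copula linking forecast and observation margins. This is a schematic of conditional simulation from a fitted copula, not the Bayesian COP-EPP algorithm of the paper; all names and the empirical transforms are illustrative:

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_postprocess(fcst_hist, obs_hist, fcst_new, n_members=20, seed=0):
    """Generate an ensemble of post-processed precipitation values for a new
    forecast by conditioning on a Gaussian copula fitted to historical
    forecast-observation pairs (a simplified stand-in for COP-EPP)."""
    rng = np.random.default_rng(seed)
    # Empirical probability integral transform to standard normal scores
    u_f = rankdata(fcst_hist) / (len(fcst_hist) + 1.0)
    u_o = rankdata(obs_hist) / (len(obs_hist) + 1.0)
    z_f, z_o = norm.ppf(u_f), norm.ppf(u_o)
    rho = np.corrcoef(z_f, z_o)[0, 1]
    # Normal score of the new forecast via interpolation of the historical CDF
    u_new = np.interp(fcst_new, np.sort(fcst_hist),
                      np.arange(1, len(fcst_hist) + 1) / (len(fcst_hist) + 1.0))
    z_new = norm.ppf(u_new)
    # Conditional Gaussian copula: z_obs | z_fcst ~ N(rho*z_fcst, 1 - rho^2)
    z_samp = rho * z_new + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_members)
    # Back-transform through the empirical observation quantile function
    return np.quantile(obs_hist, norm.cdf(z_samp))
```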
NASA Astrophysics Data System (ADS)
Cecinati, Francesca; Rico-Ramirez, Miguel Angel; Heuvelink, Gerard B. M.; Han, Dawei
2017-05-01
The application of radar quantitative precipitation estimation (QPE) to hydrology and water quality models can be preferred to interpolated rainfall point measurements because of the wide coverage that radars can provide, together with good spatio-temporal resolution. Nonetheless, it is often limited by the proneness of radar QPE to a multitude of errors. Although radar errors have been widely studied and techniques have been developed to correct most of them, residual errors are still intrinsic in radar QPE. An estimation of the uncertainty of radar QPE and an assessment of uncertainty propagation in modelling applications is important to quantify the relative importance of the uncertainty associated with the radar rainfall input in the overall modelling uncertainty. A suitable tool for this purpose is the generation of radar rainfall ensembles. An ensemble is the representation of the rainfall field and its uncertainty through a collection of possible alternative rainfall fields, produced according to the observed errors, their spatial characteristics, and their probability distribution. The errors are derived from a comparison between radar QPE and ground point measurements. The novelty of the proposed ensemble generator is that it is based on a geostatistical approach that assures a fast and robust generation of synthetic error fields, based on the time-variant characteristics of errors. The method is developed to meet the requirements of operational applications to large datasets. The method is applied to a case study in Northern England, using the UK Met Office NIMROD radar composites at 1 km resolution and at 1 h accumulation over an area of 180 km by 180 km. The errors are estimated using a network of 199 tipping bucket rain gauges from the Environment Agency. 183 of the rain gauges are used for the error modelling, while 16 are kept apart for validation. The validation is done by comparing the radar rainfall ensemble with the values recorded by the validation rain gauges. The validated ensemble is then tested on a hydrological case study, to show the advantage of probabilistic rainfall for uncertainty propagation. The ensemble spread only partially captures the mismatch between the modelled and the observed flow. The residual uncertainty can be attributed to other sources of uncertainty, in particular to model structural uncertainty, parameter identification uncertainty, uncertainty in other inputs, and uncertainty in the observed flow.
HEPEX - achievements and challenges!
NASA Astrophysics Data System (ADS)
Pappenberger, Florian; Ramos, Maria-Helena; Thielen, Jutta; Wood, Andy; Wang, Qj; Duan, Qingyun; Collischonn, Walter; Verkade, Jan; Voisin, Nathalie; Wetterhall, Fredrik; Vuillaume, Jean-Francois Emmanuel; Lucatero Villasenor, Diana; Cloke, Hannah L.; Schaake, John; van Andel, Schalk-Jan
2014-05-01
HEPEX is an international initiative bringing together hydrologists, meteorologists, researchers and end-users to develop advanced probabilistic hydrological forecast techniques for improved flood, drought and water management. HEPEX was launched in 2004 as an independent, cooperative international scientific activity. During the first meeting, the overarching goal was defined as: "to develop and test procedures to produce reliable hydrological ensemble forecasts, and to demonstrate their utility in decision making related to the water, environmental and emergency management sectors." The applications of hydrological ensemble predictions span across large spatio-temporal scales, ranging from short-term and localized predictions to global climate change and regional modeling. Within the HEPEX community, information is shared through its blog (www.hepex.org), meetings, testbeds and intercomparison experiments, as well as project reports. Key questions of HEPEX are: * What adaptations are required for meteorological ensemble systems to be coupled with hydrological ensemble systems? * How should the existing hydrological ensemble prediction systems be modified to account for all sources of uncertainty within a forecast? * What is the best way for the user community to take advantage of ensemble forecasts and to make better decisions based on them? This year HEPEX celebrates its 10th anniversary and this poster will present a review of the main operational and research achievements and challenges prepared by HEPEX contributors on data assimilation, post-processing of hydrologic predictions, forecast verification, communication and use of probabilistic forecasts in decision-making. Additionally, we will present the most recent activities implemented by HEPEX and illustrate how everyone can join the community and participate in the development of new approaches in hydrologic ensemble prediction.
Transition-Metal Chalcogenide/Graphene Ensembles for Light-Induced Energy Applications.
Kagkoura, Antonia; Skaltsas, Theodosis; Tagmatarchis, Nikos
2017-09-21
Recently, nanomaterials that harvest solar energy and convert it to other forms of energy have become of great interest. In this context, transition metal chalcogenides (TMCs) have recently been in the spotlight due to their optoelectronic properties that render them potential candidates mainly in energy conversion applications. Integration of TMCs onto a strong electron-accepting material, such as graphene, yielding novel TMC/graphene ensembles is of high significance, since photoinduced charge-transfer phenomena, leading to intra-ensemble charge separation, may occur. In this review, we highlight the utility of TMC/graphene ensembles, with a specific focus on the latest trends in applications, while their synthetic routes are also discussed. In fact, TMC/graphene ensembles are photocatalytically active and superior to intact TMC analogues when examined for photocatalytic H2 evolution, dye degradation and redox transformations of organic compounds. Moreover, TMC/graphene ensembles have shown excellent promise when employed in photovoltaic and biosensing applications. Finally, the future prospects of such materials are outlined. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Review of combined isotopic and optical nanoscopy
Richter, Katharina N.; Rizzoli, Silvio O.; Jähne, Sebastian; Vogts, Angela; Lovric, Jelena
2017-01-01
Abstract. Investigating the detailed substructure of the cell is beyond the ability of conventional optical microscopy. Electron microscopy, therefore, has been the only option for such studies for several decades. The recent implementation of several super-resolution optical microscopy techniques has rendered the investigation of cellular substructure easier and more efficient. Nevertheless, optical microscopy only provides an image of the present structure of the cell, without any information on its long-temporal changes. These can be investigated by combining super-resolution optics with a nonoptical imaging technique, nanoscale secondary ion mass spectrometry, which investigates the isotopic composition of the samples. The resulting technique, combined isotopic and optical nanoscopy, enables the investigation of both the structure and the “history” of the cellular elements. The age and the turnover of cellular organelles can be read by isotopic imaging, while the structure can be analyzed by optical (fluorescence) approaches. We present these technologies, and we discuss their implementation for the study of biological samples. We conclude that, albeit complex, this type of technology is reliable enough for mass application to cell biology. PMID:28466025
NASA Astrophysics Data System (ADS)
Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU-expensive compared to equation of state approaches. We have recently introduced an efficient, thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring those of the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. The system's potential energy, pressure, isochoric heat capacity and isothermal compressibility were extrapolated along isochores, isotherms and paths of changing temperature and density from the original simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide.
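The extrapolation of ensemble averages to neighboring conditions rests on reweighting the stored configurations. The single-histogram Boltzmann reweighting identity below (for an NVT ensemble at a shifted temperature) is the textbook version of that idea, shown here as a sketch rather than the authors' reconstruction scheme:

```python
import numpy as np

def reweighted_average(obs, energies, beta_sim, beta_new):
    """Estimate an ensemble average at inverse temperature beta_new from
    configurations sampled at beta_sim (single-histogram Boltzmann reweighting).

    obs:      observable evaluated on each stored configuration
    energies: potential energy of each stored configuration
    """
    # Subtract the mean energy before exponentiating for numerical stability
    w = np.exp(-(beta_new - beta_sim) * (energies - energies.mean()))
    return np.sum(w * obs) / np.sum(w)
```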
USDA-ARS's Scientific Manuscript database
The Ensemble Kalman Filter (EnKF), a popular data assimilation technique for non-linear systems was applied to the Root Zone Water Quality Model. Measured soil moisture data at four different depths (5cm, 20cm, 40cm and 60cm) from two agricultural fields (AS1 and AS2) in northeastern Indiana were us...
Leveraging Existing Mission Tools in a Re-Usable, Component-Based Software Environment
NASA Technical Reports Server (NTRS)
Greene, Kevin; Grenander, Sven; Kurien, James; O'Reilly, Taifun
2006-01-01
Emerging methods in component-based software development offer significant advantages but may seem incompatible with existing mission operations applications. In this paper we relate our positive experiences integrating existing mission applications into component-based tools we are delivering to three missions. In most operations environments, a number of software applications have been integrated together to form the mission operations software. In contrast, with component-based software development, chunks of related functionality and data structures, referred to as components, can be individually delivered, integrated and re-used. With the advent of powerful tools for managing component-based development, complex software systems can potentially see significant benefits in ease of integration, testability and reusability from these techniques. These benefits motivate us to ask how component-based development techniques can be relevant in a mission operations environment, where there is significant investment in software tools that are not component-based and may not be written in languages for which component-based tools even exist. Trusted and complex software tools for sequencing, validation, navigation, and other vital functions cannot simply be re-written or abandoned in order to gain the advantages offered by emerging component-based software techniques. Thus some middle ground must be found. We have faced exactly this issue, and have found several solutions. Ensemble is an open platform for development, integration, and deployment of mission operations software that we are developing. Ensemble itself is an extension of an open source, component-based software development platform called Eclipse. Due to the advantages of component-based development, we have been able to very rapidly develop mission operations tools for three surface missions by mixing and matching from a common set of mission operation components. We have also had to determine how to integrate existing mission applications for sequence development, sequence validation, high-level activity planning, and other functions into a component-based environment. For each of these, we used a somewhat different technique based upon the structure and usage of the existing application.
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Baetz, B. W.; Cai, X. M.; Ancell, B. C.; Fan, Y. R.
2017-11-01
The ensemble Kalman filter (EnKF) is recognized as a powerful data assimilation technique that generates an ensemble of model variables through stochastic perturbations of forcing data and observations. However, relatively little guidance exists with regard to the proper specification of the magnitude of the perturbation and the ensemble size, posing a significant challenge in optimally implementing the EnKF. This paper presents a robust data assimilation system (RDAS), in which a multi-factorial design of the EnKF experiments is first proposed for hydrologic ensemble predictions. A multi-way analysis of variance is then used to examine potential interactions among factors affecting the EnKF experiments, achieving optimality of the RDAS with maximized performance of hydrologic predictions. The RDAS is applied to the Xiangxi River watershed which is the most representative watershed in China's Three Gorges Reservoir region to demonstrate its validity and applicability. Results reveal that the pairwise interaction between perturbed precipitation and streamflow observations has the most significant impact on the performance of the EnKF system, and their interactions vary dynamically across different settings of the ensemble size and the evapotranspiration perturbation. In addition, the interactions among experimental factors vary greatly in magnitude and direction depending on different statistical metrics for model evaluation including the Nash-Sutcliffe efficiency and the Box-Cox transformed root-mean-square error. It is thus necessary to test various evaluation metrics in order to enhance the robustness of hydrologic prediction systems.
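At the core of such a system is the stochastic (perturbed-observation) EnKF analysis step. The sketch below shows that generic update for a linear observation operator and is not the RDAS implementation; the variable names and the observation-error treatment are assumptions:

```python
import numpy as np

def enkf_update(X, y, H, obs_err_std, rng=np.random.default_rng(0)):
    """Stochastic EnKF analysis step with perturbed observations.

    X: model state ensemble, shape (n_state, n_members)
    y: observation vector, shape (n_obs,)
    H: linear observation operator, shape (n_obs, n_state)
    """
    n_obs, n_mem = y.size, X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)                  # ensemble anomalies
    HXp = H @ Xp
    R = np.eye(n_obs) * obs_err_std ** 2
    Pyy = HXp @ HXp.T / (n_mem - 1) + R
    Pxy = Xp @ HXp.T / (n_mem - 1)
    K = Pxy @ np.linalg.inv(Pyy)                            # Kalman gain
    Y = y[:, None] + obs_err_std * rng.standard_normal((n_obs, n_mem))  # perturbed obs
    return X + K @ (Y - H @ X)
```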
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blum, Suzanne A.
2016-05-24
The reactive behavior of individual molecules is seldom observed, because we usually measure the average properties of billions of molecules. What we miss is important: the catalytic activity of less than 1% of the molecules under observation can dominate the outcome of a chemical reaction seen at a macroscopic level. Currently available techniques to examine reaction mechanisms (such as nuclear magnetic resonance spectroscopy and mass spectrometry) study molecules as an averaged ensemble. These ensemble techniques are unable to detect minor components (under ~1%) in mixtures or determine which components in the mixture are responsible for reactivity and catalysis. In the field of mechanistic chemistry, there is a resulting heuristic device that if an intermediate is very reactive in catalysis, it often cannot be observed (termed “Halpern's Rule”). Ultimately, the development of single-molecule imaging technology could be a powerful tool to observe these “unobservable” intermediates and active catalysts. Single-molecule techniques have already transformed biology and the understanding of biochemical processes. The potential of single-molecule fluorescence microscopy to address diverse chemical questions, such as the chemical reactivity of organometallic or inorganic systems with discrete metal complexes, however, has not yet been realized. In this respect, its application to chemical systems lags significantly behind its application to biophysical systems. This transformative imaging technique has broad, multidisciplinary impact with the potential to change the way the chemistry community studies reaction mechanisms and reactivity distributions, especially in the core area of catalysis.
Zheng, Wenjing; Balzer, Laura; van der Laan, Mark; Petersen, Maya
2018-01-30
Binary classification problems are ubiquitous in health and social sciences. In many cases, one wishes to balance two competing optimality considerations for a binary classifier. For instance, in resource-limited settings, a human immunodeficiency virus prevention program based on offering pre-exposure prophylaxis (PrEP) to select high-risk individuals must balance the sensitivity of the binary classifier in detecting future seroconverters (and hence offering them PrEP regimens) with the total number of PrEP regimens that is financially and logistically feasible for the program. In this article, we consider a general class of constrained binary classification problems wherein the objective function and the constraint are both monotonic with respect to a threshold. These include the minimization of the rate of positive predictions subject to a minimum sensitivity, the maximization of sensitivity subject to a maximum rate of positive predictions, and the Neyman-Pearson paradigm, which minimizes the type II error subject to an upper bound on the type I error. We propose an ensemble approach to these binary classification problems based on the Super Learner methodology. This approach linearly combines a user-supplied library of scoring algorithms, with combination weights and a discriminating threshold chosen to minimize the constrained optimality criterion. We then illustrate the application of the proposed classifier to develop an individualized PrEP targeting strategy in a resource-limited setting, with the goal of minimizing the number of PrEP offerings while achieving a minimum required sensitivity. This proof of concept data analysis uses baseline data from the ongoing Sustainable East Africa Research in Community Health study. Copyright © 2017 John Wiley & Sons, Ltd.
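The constrained ensemble idea can be illustrated with a brute-force stand-in: combine the library scores with convex weights and pick the weight/threshold pair that optimizes one criterion subject to the other. The sketch below maximizes sensitivity under a cap on the rate of positive predictions, omits the cross-validation that the Super Learner methodology relies on, and uses illustrative names and a crude grid search:

```python
import itertools
import numpy as np

def constrained_ensemble(scores, y, max_positive_rate=0.1, n_grid=11):
    """Pick convex weights over candidate scores and a threshold that maximize
    sensitivity subject to a cap on the rate of positive predictions.

    scores: array (n_samples, n_algorithms) of predicted risks from the library
    y:      binary outcomes (1 = event)
    """
    best = (-np.inf, None, None)
    grid = np.linspace(0.0, 1.0, n_grid)
    for w in itertools.product(grid, repeat=scores.shape[1]):
        if not np.isclose(sum(w), 1.0):
            continue                                   # keep only convex weights
        combined = scores @ np.asarray(w)
        for thr in np.unique(combined):
            pred = combined >= thr
            if pred.mean() <= max_positive_rate and (y == 1).any():
                sens = pred[y == 1].mean()
                if sens > best[0]:
                    best = (sens, np.asarray(w), thr)
    return best  # (sensitivity, weights, threshold)
```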
Resolving ultrafast exciton migration in organic solids at the nanoscale
NASA Astrophysics Data System (ADS)
Ginsberg, Naomi
The migration of Frenkel excitons, tightly-bound electron-hole pairs, in photosynthesis and in organic semiconducting films is critical to the efficiency of natural and artificial light harvesting. While these materials exhibit a high degree of structural heterogeneity on the nanoscale, traditional measurements of exciton migration lengths are performed on bulk samples. Since both the characteristic length scales of structural heterogeneity and the reported bulk diffusion lengths are smaller than the optical diffraction limit, we adapt far-field super-resolution fluorescence imaging to uncover the correlations between the structural and energetic landscapes that the excitons explore. By combining the ultrafast super-resolved measurements with exciton hopping simulations we furthermore specify the nature (in addition to the extent) of exciton migration as a function of the intrinsic and ensemble chromophore energy scales that determine a spatio-energetic landscape for migration. In collaboration with: Samuel Penwell, Lucas Ginsberg, University of California, Berkeley and Rodrigo Noriega University of Utah.
Limit Theorems and Their Relation to Solute Transport in Simulated Fractured Media
NASA Astrophysics Data System (ADS)
Reeves, D. M.; Benson, D. A.; Meerschaert, M. M.
2003-12-01
Solute particles that travel through fracture networks are subject to wide velocity variations along a restricted set of directions. This may result in super-Fickian dispersion along a few primary scaling directions. The fractional advection-dispersion equation (FADE), a modification of the original advection-dispersion equation in which a fractional derivative replaces the integer-order dispersion term, has the ability to model rapid, non-Gaussian solute transport. The FADE assumes that solute particle motions converge to either α-stable or operator-stable densities, which are modeled by spatial fractional derivatives. In multiple dimensions, the multi-fractional dispersion derivative dictates the order and weight of differentiation in all directions, which correspond to the statistics of large particle motions in all directions. This study numerically investigates the presence of super-Fickian solute transport through simulated two-dimensional fracture networks. An ensemble of networks is gen
Magnetofermionic condensate in two dimensions
Kulik, L. V.; Zhuravlev, A. S.; Dickmann, S.; Gorbunov, A. V.; Timofeev, V. B.; Kukushkin, I. V.; Schmult, S.
2016-01-01
Coherent condensate states of particles obeying either Bose or Fermi statistics are in the focus of interest in modern physics. Here we report on condensation of collective excitations with Bose statistics, cyclotron magnetoexcitons, in a high-mobility two-dimensional electron system in a magnetic field. At low temperatures, the dense non-equilibrium ensemble of long-lived triplet magnetoexcitons exhibits both a drastic reduction in the viscosity and a steep enhancement in the response to the external electromagnetic field. The observed effects are related to formation of a super-absorbing state interacting coherently with the electromagnetic field. Simultaneously, the electrons below the Fermi level form a super-emitting state. The effects are explicable from the viewpoint of a coherent condensate phase in a non-equilibrium system of two-dimensional fermions with a fully quantized energy spectrum. The condensation occurs in the space of vectors of magnetic translations, a property providing a completely new landscape for future physical investigations. PMID:27848969
Propagation phasor approach for holographic image reconstruction
Luo, Wei; Zhang, Yibo; Göröcs, Zoltán; Feizi, Alborz; Ozcan, Aydogan
2016-01-01
To achieve high-resolution and wide field-of-view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were separately addressed using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. Although existing holographic imaging methods can achieve large space-bandwidth-products by performing pixel super-resolution and phase retrieval sequentially, they require large amounts of data, which might be a limitation in high-speed or cost-effective imaging applications. Here we report a propagation phasor approach, which for the first time combines phase retrieval and pixel super-resolution into a unified mathematical framework and enables the synthesis of new holographic image reconstruction methods with significantly improved data efficiency. In this approach, twin image and spatial aliasing signals, along with other digital artifacts, are interpreted as noise terms that are modulated by phasors that analytically depend on the lateral displacement between hologram and sensor planes, sample-to-sensor distance, wavelength, and the illumination angle. Compared to previous holographic reconstruction techniques, this new framework results in five- to seven-fold reduced number of raw measurements, while still achieving a competitive resolution and space-bandwidth-product. We also demonstrated the success of this approach by imaging biological specimens including Papanicolaou and blood smears. PMID:26964671
Supermodeling With A Global Atmospheric Model
NASA Astrophysics Data System (ADS)
Wiegerinck, Wim; Burgers, Willem; Selten, Frank
2013-04-01
In weather and climate prediction studies it often turns out to be the case that the multi-model ensemble mean prediction has the best prediction skill scores. One possible explanation is that the major part of the model error is random and is averaged out in the ensemble mean. In the standard multi-model ensemble approach, the models are integrated in time independently and the predicted states are combined a posteriori. Recently an alternative ensemble prediction approach has been proposed in which the models exchange information during the simulation and synchronize on a common solution that is closer to the truth than any of the individual model solutions in the standard multi-model ensemble approach or a weighted average of these. This approach is called the super modeling approach (SUMO). The potential of the SUMO approach has been demonstrated in the context of simple, low-order, chaotic dynamical systems. The information exchange takes the form of linear nudging terms in the dynamical equations that nudge the solution of each model to the solution of all other models in the ensemble. With a suitable choice of the connection strengths the models synchronize on a common solution that is indeed closer to the true system than any of the individual model solutions without nudging. This approach is called connected SUMO. An alternative approach is to integrate a weighted averaged model, weighted SUMO. At each time step all models in the ensemble calculate the tendency, these tendencies are weighted averaged and the state is integrated one time step into the future with this weighted averaged tendency. It was shown that in case the connected SUMO synchronizes perfectly, the connected SUMO follows the weighted averaged trajectory and both approaches yield the same solution. In this study we pioneer both approaches in the context of a global, quasi-geostrophic, three-level atmosphere model that is capable of simulating quite realistically the extra-tropical circulation in the Northern Hemisphere winter.
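The "weighted SUMO" variant is simple to state: at every time step each imperfect model supplies a tendency, and the state is advanced with the weighted mean of those tendencies. A toy sketch with two perturbed Lorenz-63 models, placeholders for the quasi-geostrophic models of the study, is given below:

```python
import numpy as np

def lorenz63(state, sigma, rho, beta):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def weighted_sumo_step(state, models, weights, dt=0.01):
    """One forward-Euler step of a weighted supermodel: each imperfect model
    supplies a tendency, and the state is advanced with their weighted mean."""
    tendency = sum(w * f(state) for w, f in zip(weights, models))
    return state + dt * tendency

# Two "imperfect" model versions with perturbed parameters (illustrative only)
models = [lambda s: lorenz63(s, 10.0, 28.5, 8.0 / 3.0),
          lambda s: lorenz63(s, 9.5, 27.5, 8.0 / 3.0)]
state = np.array([1.0, 1.0, 1.0])
for _ in range(1000):
    state = weighted_sumo_step(state, models, weights=[0.5, 0.5])
```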
Impact of ensemble learning in the assessment of skeletal maturity.
Cunha, Pedro; Moura, Daniel C; Guevara López, Miguel Angel; Guerra, Conceição; Pinto, Daniela; Ramos, Isabel
2014-09-01
The assessment of the bone age, or skeletal maturity, is an important task in pediatrics that measures the degree of maturation of children's bones. Nowadays, there is no standard clinical procedure for assessing bone age and the most widely used approaches are the Greulich and Pyle and the Tanner and Whitehouse methods. Computer methods have been proposed to automatize the process; however, there is a lack of exploration about how to combine the features of the different parts of the hand, and how to take advantage of ensemble techniques for this purpose. This paper presents a study where the use of ensemble techniques for improving bone age assessment is evaluated. A new computer method was developed that extracts descriptors for each joint of each finger, which are then combined using different ensemble schemes for obtaining a final bone age value. Three popular ensemble schemes are explored in this study: bagging, stacking and voting. Best results were achieved by bagging with a rule-based regression (M5P), scoring a mean absolute error of 10.16 months. Results show that ensemble techniques improve the prediction performance of most of the evaluated regression algorithms, always achieving best or comparable-to-best results. Therefore, the success of the ensemble methods allows us to conclude that their use may improve computer-based bone age assessment, offering a scalable option for utilizing multiple regions of interest and combining their output.
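As a point of reference, the best-performing configuration, bagging around a rule-based regressor, can be reproduced schematically in scikit-learn with a decision tree standing in for M5P; the data below are synthetic placeholders for the per-joint descriptors:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for per-joint descriptors (X) and bone age in
# months (y); in the study these come from hand-radiograph feature extraction.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = 120 + 10 * X[:, 0] + rng.normal(scale=5, size=200)

# Bagging around a tree regressor as a stand-in for the paper's M5P base
# learner ("estimator" is the keyword in scikit-learn >= 1.2; older versions
# use "base_estimator").
bagged = BaggingRegressor(estimator=DecisionTreeRegressor(), n_estimators=50,
                          random_state=0)
mae = -cross_val_score(bagged, X, y, scoring="neg_mean_absolute_error", cv=5).mean()
print(f"cross-validated MAE: {mae:.1f} months")
```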
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
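A minimal stand-in for the modality classification step is to count the modes of the ensemble distribution at each location, e.g. as local maxima of a kernel density estimate; the sketch below is illustrative and is not the authors' classifier or their confidence metrics:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelextrema

def count_modes(samples, grid_size=256):
    """Rough modality estimate for the ensemble values at one grid location:
    count local maxima of a Gaussian KDE evaluated on a regular grid."""
    grid = np.linspace(samples.min(), samples.max(), grid_size)
    density = gaussian_kde(samples)(grid)
    return len(argrelextrema(density, np.greater)[0])

# e.g. classify a location as unimodal, bimodal or multimodal:
# n_modes = count_modes(ensemble_values_at_location)
```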
An ensemble framework for clustering protein-protein interaction networks.
Asur, Sitaram; Ucar, Duygu; Parthasarathy, Srinivasan
2007-07-01
Protein-Protein Interaction (PPI) networks are believed to be important sources of information related to biological processes and complex metabolic functions of the cell. The presence of biologically relevant functional modules in these networks has been theorized by many researchers. However, the application of traditional clustering algorithms for extracting these modules has not been successful, largely due to the presence of noisy false positive interactions as well as specific topological challenges in the network. In this article, we propose an ensemble clustering framework to address this problem. For base clustering, we introduce two topology-based distance metrics to counteract the effects of noise. We develop a PCA-based consensus clustering technique, designed to reduce the dimensionality of the consensus problem and yield informative clusters. We also develop a soft consensus clustering variant to assign multifaceted proteins to multiple functional groups. We conduct an empirical evaluation of different consensus techniques using topology-based, information theoretic and domain-specific validation metrics and show that our approaches can provide significant benefits over other state-of-the-art approaches. Our analysis of the consensus clusters obtained demonstrates that ensemble clustering can (a) produce improved biologically significant functional groupings; and (b) facilitate soft clustering by discovering multiple functional associations for proteins. Supplementary data are available at Bioinformatics online.
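The consensus step can be sketched generically: accumulate a co-association matrix from the base clusterings, reduce its dimensionality with PCA, and cluster in the reduced space. The code below is such a generic pipeline, not the authors' implementation or their topology-based distance metrics:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def consensus_cluster(partitions, n_clusters, n_components=10):
    """Ensemble (consensus) clustering from several base partitions.

    partitions: array (n_base_clusterings, n_proteins) of cluster labels
    """
    n = partitions.shape[1]
    # Co-association matrix: fraction of base clusterings grouping i and j together
    coassoc = np.zeros((n, n))
    for labels in partitions:
        coassoc += (labels[:, None] == labels[None, :])
    coassoc /= len(partitions)
    # PCA-reduce the co-association rows, then cluster in the reduced space
    reduced = PCA(n_components=min(n_components, n)).fit_transform(coassoc)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(reduced)
```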
NASA Astrophysics Data System (ADS)
Boon, Choong S.; Guleryuz, Onur G.; Kawahara, Toshiro; Suzuki, Yoshinori
2006-08-01
We consider the mobile service scenario where video programming is broadcast to low-resolution wireless terminals. In such a scenario, broadcasters utilize simultaneous data services and bi-directional communications capabilities of the terminals in order to offer substantially enriched viewing experiences to users by allowing user participation and user tuned content. While users immediately benefit from this service when using their phones in mobile environments, the service is less appealing in stationary environments where a regular television provides competing programming at much higher display resolutions. We propose a fast super-resolution technique that allows the mobile terminals to show a much enhanced version of the broadcast video on nearby high-resolution devices, extending the appeal and usefulness of the broadcast service. The proposed single frame super-resolution algorithm uses recent sparse recovery results to provide high quality and high-resolution video reconstructions based solely on individual decoded frames provided by the low-resolution broadcast.
Dynamic placement of plasmonic hotspots for super-resolution surface-enhanced Raman scattering.
Ertsgaard, Christopher T; McKoskey, Rachel M; Rich, Isabel S; Lindquist, Nathan C
2014-10-28
In this paper, we demonstrate dynamic placement of locally enhanced plasmonic fields using holographic laser illumination of a silver nanohole array. To visualize these focused "hotspots", the silver surface was coated with various biological samples for surface-enhanced Raman spectroscopy (SERS) imaging. Due to the large field enhancements, blinking behavior of the SERS hotspots was observed and processed using a stochastic optical reconstruction microscopy algorithm enabling super-resolution localization of the hotspots to within 10 nm. These hotspots were then shifted across the surface in subwavelength (<100 nm for a wavelength of 660 nm) steps using holographic illumination from a spatial light modulator. This created a dynamic imaging and sensing surface, whereas static illumination would only have produced stationary hotspots. Using this technique, we also show that such subwavelength shifting and localization of plasmonic hotspots has potential for imaging applications. Interestingly, illuminating the surface with randomly shifting SERS hotspots was sufficient to completely fill in a wide field of view for super-resolution chemical imaging.
Super-resolution for scanning light stimulation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bitzer, L. A.; Neumann, K.; Benson, N., E-mail: niels.benson@uni-due.de
Super-resolution (SR) is a technique used in digital image processing to overcome the resolution limitation of imaging systems. In this process, a single high resolution image is reconstructed from multiple low resolution images. SR is commonly used for CCD and CMOS (Complementary Metal-Oxide-Semiconductor) sensor images, as well as for medical applications, e.g., magnetic resonance imaging. Here, we demonstrate that super-resolution can be applied with scanning light stimulation (LS) systems, which are common to obtain space-resolved electro-optical parameters of a sample. For our purposes, the Projection Onto Convex Sets (POCS) was chosen and modified to suit the needs of LS systems. To demonstrate the SR adaption, an Optical Beam Induced Current (OBIC) LS system was used. The POCS algorithm was optimized by means of OBIC short circuit current measurements on a multicrystalline solar cell, resulting in a mean square error reduction of up to 61% and improved image quality.
Plasmonics and metamaterials based super-resolution imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Liu, Zhaowei
2017-05-01
In recent years, surface imaging of various biological dynamics and biomechanical phenomena has seen a surge of interest. Imaging of processes such as exocytosis and kinesin motion is most effective when depth is limited to a very thin region of interest at the edge of the cell or specimen. However, many objects and processes of interest are of size scales below the diffraction limit for safe, visible wavelength illumination. Super-resolution imaging methods such as structured illumination microscopy and others have offered various compromises between resolution, imaging speed, and bio-compatibility. In this talk, I will present our most recent progress in plasmonic structured illumination microscopy (PSIM) and localized plasmonic structured illumination microscopy (LPSIM), and their applications in bio-imaging. We have achieved wide-field surface imaging with resolution down to 75 nm while maintaining reasonable speed and compatibility with biological specimens. These plasmonic-enhanced super-resolution techniques offer unique solutions to obtain 50 nm spatial resolution and 50 frames per second wide-field imaging speed at the same time.
NASA Astrophysics Data System (ADS)
Khade, Vikram; Kurian, Jaison; Chang, Ping; Szunyogh, Istvan; Thyng, Kristen; Montuoro, Raffaele
2017-05-01
This paper demonstrates the potential of ocean ensemble forecasting in the Gulf of Mexico (GoM). The Bred Vector (BV) technique with one week rescaling frequency is implemented on a 9 km resolution version of the Regional Ocean Modelling System (ROMS). Numerical experiments are carried out by using the HYCOM analysis products to define the initial conditions and the lateral boundary conditions. The growth rates of the forecast uncertainty are estimated to be about 10% of initial amplitude per week. By carrying out ensemble forecast experiments with and without perturbed surface forcing, it is demonstrated that in the coastal regions accounting for uncertainties in the atmospheric forcing is more important than accounting for uncertainties in the ocean initial conditions. In the Loop Current region, the initial condition uncertainties are the dominant source of the forecast uncertainty. The root-mean-square error of the Lagrangian track forecasts at the 15-day forecast lead time can be reduced by about 10 - 50 km using the ensemble mean Eulerian forecast of the oceanic flow for the computation of the tracks, instead of the single-initial-condition Eulerian forecast.
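A minimal sketch of the bred vector idea is given below, using the Lorenz-63 system purely as a stand-in for the ocean model: a perturbed run is integrated alongside a control run, and at the end of every rescaling interval the difference is scaled back to a fixed amplitude so that the perturbation "breeds" toward the fastest-growing directions. The model, amplitude and rescaling interval are illustrative assumptions only.

```python
import numpy as np

def lorenz63(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (toy stand-in model)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y,
                                  x * y - beta * z])

def breed(x0, amplitude=0.5, rescale_every=100, n_cycles=20):
    """Breed a perturbation: rescale the perturbed-minus-control difference
    back to a fixed amplitude at the end of each rescaling interval."""
    control = x0.copy()
    perturbed = x0 + amplitude * np.random.default_rng(3).standard_normal(3)
    growth_rates = []
    for _ in range(n_cycles):
        for _ in range(rescale_every):
            control = lorenz63(control)
            perturbed = lorenz63(perturbed)
        diff = perturbed - control
        growth_rates.append(np.linalg.norm(diff) / amplitude)
        perturbed = control + amplitude * diff / np.linalg.norm(diff)
    return np.array(growth_rates)

rates = breed(np.array([1.0, 1.0, 1.0]))
print("mean growth per rescaling interval:", rates.mean())
```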
Gibbs Ensemble Simulations of the Solvent Swelling of Polymer Films
NASA Astrophysics Data System (ADS)
Gartner, Thomas; Epps, Thomas, III; Jayaraman, Arthi
Solvent vapor annealing (SVA) is a useful technique to tune the morphology of block polymer, polymer blend, and polymer nanocomposite films. Despite SVA's utility, standardized SVA protocols have not been established, partly due to a lack of fundamental knowledge regarding the interplay between the polymer(s), solvent, substrate, and free-surface during solvent annealing and evaporation. An understanding of how to tune polymer film properties in a controllable manner through SVA processes is needed. Herein, the thermodynamic implications of the presence of solvent in the swollen polymer film are explored through two alternative Gibbs ensemble simulation methods that we have developed and extended: Gibbs ensemble molecular dynamics (GEMD) and hybrid Monte Carlo (MC)/molecular dynamics (MD). In this poster, we will describe these simulation methods and demonstrate their application to polystyrene films swollen by toluene and n-hexane. Polymer film swelling experiments, Gibbs ensemble molecular simulations, and polymer reference interaction site model (PRISM) theory are combined to calculate an effective Flory-Huggins χ (χeff) for polymer-solvent mixtures. The effects of solvent chemistry, solvent content, polymer molecular weight, and polymer architecture on χeff are examined, providing a platform to control and understand the thermodynamics of polymer film swelling.
Robust video super-resolution with registration efficiency adaptation
NASA Astrophysics Data System (ADS)
Zhang, Xinfeng; Xiong, Ruiqin; Ma, Siwei; Zhang, Li; Gao, Wen
2010-07-01
Super-Resolution (SR) is a technique to construct a high-resolution (HR) frame by fusing a group of low-resolution (LR) frames describing the same scene. The effectiveness of the conventional super-resolution techniques, when applied on video sequences, strongly relies on the efficiency of motion alignment achieved by image registration. Unfortunately, such efficiency is limited by the motion complexity in the video and the capability of adopted motion model. In image regions with severe registration errors, annoying artifacts usually appear in the produced super-resolution video. This paper proposes a robust video super-resolution technique that adapts itself to the spatially-varying registration efficiency. The reliability of each reference pixel is measured by the corresponding registration error and incorporated into the optimization objective function of SR reconstruction. This makes the SR reconstruction highly immune to the registration errors, as outliers with higher registration errors are assigned lower weights in the objective function. In particular, we carefully design a mechanism to assign weights according to registration errors. The proposed superresolution scheme has been tested with various video sequences and experimental results clearly demonstrate the effectiveness of the proposed method.
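A simplified sketch of the weighting idea follows: registered low-resolution frames are fused on the high-resolution grid, with each frame's contribution down-weighted according to its registration error so that poorly registered pixels contribute little. The shift-and-add fusion, the exponential weighting and all numbers are illustrative assumptions; the paper solves a full regularized reconstruction objective rather than a weighted average.

```python
import numpy as np

def fuse_weighted(lr_frames, shifts, reg_errors, scale=2, sigma=1.0):
    """Shift-and-add fusion where each frame's contribution is down-weighted
    by exp(-(reg_error/sigma)**2), mimicking robustness to misregistration."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight_sum = np.zeros_like(acc)
    for frame, (dy, dx), err in zip(lr_frames, shifts, reg_errors):
        weight = np.exp(-(err / sigma) ** 2)
        # Place LR samples on the HR grid at their (assumed known) offsets.
        up = np.zeros_like(acc)
        up[dy::scale, dx::scale] = frame
        mask = np.zeros_like(acc)
        mask[dy::scale, dx::scale] = 1.0
        acc += weight * up
        weight_sum += weight * mask
    return acc / np.maximum(weight_sum, 1e-12)

rng = np.random.default_rng(4)
lr_frames = [rng.random((8, 8)) for _ in range(5)]
shifts = [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0)]   # offsets on the 2x HR grid
reg_errors = [0.1, 0.2, 0.1, 0.15, 2.0]             # last frame poorly registered
hr = fuse_weighted(lr_frames, shifts, reg_errors)
print(hr.shape)  # (16, 16)
```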
Marzullo, Timothy Charles; Lehmkuhle, Mark J; Gage, Gregory J; Kipke, Daryl R
2010-04-01
Closed-loop neural interface technology that combines neural ensemble decoding with simultaneous electrical microstimulation feedback is hypothesized to improve deep brain stimulation techniques, neuromotor prosthetic applications, and epilepsy treatment. Here we describe our iterative results in a rat model of a sensory and motor neurophysiological feedback control system. Three rats were chronically implanted with microelectrode arrays in both the motor and visual cortices. The rats were subsequently trained over a period of weeks to modulate their motor cortex ensemble unit activity upon delivery of intra-cortical microstimulation (ICMS) of the visual cortex in order to receive a food reward. Rats were given continuous feedback via visual cortex ICMS during the response periods that was representative of the motor cortex ensemble dynamics. Analysis revealed that the feedback provided the animals with indicators of the behavioral trials. At the hardware level, this preparation provides a tractable test model for improving the technology of closed-loop neural devices.
SSAGES: Software Suite for Advanced General Ensemble Simulations.
Sidky, Hythem; Colón, Yamil J; Helfferich, Julian; Sikora, Benjamin J; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S; Reid, Daniel R; Sevgen, Emre; Thapar, Vikram; Webb, Michael A; Whitmer, Jonathan K; de Pablo, Juan J
2018-01-28
Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulations packages. SSAGES allows facile application of a variety of enhanced sampling techniques-including adaptive biasing force, string methods, and forward flux sampling-that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.
NASA Astrophysics Data System (ADS)
Tao, Y.; Muller, J.-P.
2017-09-01
In this paper, we demonstrate novel super-resolution restoration (SRR) and 3D reconstruction tools developed within the EU FP7 projects and their applications to advanced dynamic feature tracking through HiRISE repeat stereo. We show an example for one of the RSL sites in the Palikir Crater, where 8 repeat-pass 25 cm HiRISE images were used to generate a 5 cm RSL-free SRR image with GPT-SRR. Together with repeat 3D modelling of the same area, this allows us to overlay tracked dynamic features onto the reconstructed "original" surface, providing a much more comprehensive interpretation of the surface formation processes in 3D.
NASA Astrophysics Data System (ADS)
Li, N.; Kinzelbach, W.; Li, H.; Li, W.; Chen, F.; Wang, L.
2017-12-01
Data assimilation techniques are widely used in hydrology to improve the reliability of hydrological models and to reduce model predictive uncertainties. This provides critical information for decision makers in water resources management. This study aims to evaluate a data assimilation system for the Guantao groundwater flow model coupled with a one-dimensional soil column simulation (Hydrus 1D) using an Unbiased Ensemble Square Root Filter (UnEnSRF) originating from the Ensemble Kalman Filter (EnKF) to update parameters and states, separately or simultaneously. To simplify the coupling between the unsaturated and saturated zones, a linear relationship obtained from analyzing inputs to and outputs from Hydrus 1D is applied in the data assimilation process. Unlike the EnKF, the UnEnSRF updates the parameter ensemble mean and the ensemble perturbations separately. In order to keep the ensemble filter working well during the data assimilation, two factors are introduced in the study. One, called the damping factor, dampens the update amplitude of the posterior ensemble mean to avoid unrealistic values. The other, called the inflation factor, relaxes the posterior ensemble perturbations back toward the prior ones to avoid filter inbreeding problems. The sensitivities of the two factors are studied and their favorable values for the Guantao model are determined. The appropriate observation error and ensemble size were also determined to facilitate further analysis. This study demonstrated that the data assimilation of both model parameters and states gives a smaller model prediction error but with larger uncertainty, while the data assimilation of only model states provides a smaller predictive uncertainty but with a larger model prediction error. Data assimilation in a groundwater flow model will improve model prediction and at the same time make the model converge to the true parameters, which provides a successful basis for applications in real-time modelling or real-time control strategies in groundwater resources management.
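The sketch below illustrates, in a generic ensemble-filter setting, the role of the two factors described above: a damping factor applied to the update of the ensemble mean, and a relaxation of the posterior perturbations back toward the prior ones (an inflation-like step). It is a conceptual stand-in, not the UnEnSRF used in the study; the observation operator, factor values and toy ensemble are assumptions.

```python
import numpy as np

def damped_relaxed_update(ens, obs, obs_err, H, damping=0.5, relax=0.5):
    """Generic ensemble update with (i) a damping factor on the mean increment
    and (ii) relaxation of posterior perturbations toward the prior ones.
    ens: (n_state, n_members); obs: (n_obs,); H: (n_obs, n_state)."""
    n_members = ens.shape[1]
    mean = ens.mean(axis=1, keepdims=True)
    A = ens - mean                                  # prior perturbations
    HA = H @ A
    P_yy = HA @ HA.T / (n_members - 1) + np.diag(obs_err ** 2)
    P_xy = A @ HA.T / (n_members - 1)
    K = P_xy @ np.linalg.inv(P_yy)                  # Kalman gain
    # Damped update of the ensemble mean.
    mean_post = mean + damping * (K @ (obs - H @ mean[:, 0]))[:, None]
    # Simple perturbation update (not the exact square-root form),
    # then relaxation back toward the prior perturbations.
    A_post = A - K @ HA
    A_post = relax * A + (1.0 - relax) * A_post
    return mean_post + A_post

# Toy example: 3 state variables, 20 members, observe the first variable.
rng = np.random.default_rng(5)
ens = rng.normal(1.0, 0.5, size=(3, 20))
H = np.array([[1.0, 0.0, 0.0]])
updated = damped_relaxed_update(ens, obs=np.array([1.8]),
                                obs_err=np.array([0.2]), H=H)
print(updated.mean(axis=1))
```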
Sensitivity of worst-case storm surge considering influence of climate change
NASA Astrophysics Data System (ADS)
Takayabu, Izuru; Hibino, Kenshi; Sasaki, Hidetaka; Shiogama, Hideo; Mori, Nobuhito; Shibutani, Yoko; Takemi, Tetsuya
2016-04-01
There are two standpoints when assessing risk caused by climate change. One is disaster prevention, for which we need probabilistic information on meteorological elements obtained from a sufficiently large number of ensemble simulations. The other is disaster mitigation, for which we have to use a very high resolution, sophisticated model to represent a worst-case event in detail. If we could use enough computing resources to drive many ensemble runs with a very high resolution model, we could address both themes at once; however, resources are unfortunately limited in most cases, and we have to trade off the resolution against the number of simulations when designing the experiment. Applying the PGWD (Pseudo Global Warming Downscaling) method is one solution for analyzing a worst-case event in detail. Here we introduce an example that estimates the influence of climate change on worst-case storm surge by applying PGWD to super typhoon Haiyan (Takayabu et al., 2015). A 1 km grid WRF model could represent both the intensity and the structure of a super typhoon. With the PGWD method we can only estimate the influence of climate change on the development process of the typhoon; changes in genesis could not be estimated. Finally, we drove the SU-WAT model (which includes a shallow water equation model) to obtain the storm surge height signal. The result indicates that the height of the storm surge increased by up to 20% owing to 150 years of climate change.
Protecting exposed tissues with external ultrasonic super-hydration.
Silberg, Barry Neil
2006-01-01
The author contends that a technique preventing dehydration of exposed tissues, such as external ultrasonic super-hydration, will result in a lower morbidity rate by decreasing deep tissue pain, susceptibility to infection, fat necrosis, and wound dehiscence, and by improving recovery times. He discusses how he uses this technique in his aesthetic surgery practice.
Model Independence in Downscaled Climate Projections: a Case Study in the Southeast United States
NASA Astrophysics Data System (ADS)
Gray, G. M. E.; Boyles, R.
2016-12-01
Downscaled climate projections are used to deduce how the climate will change in future decades at local and regional scales. It is important to use multiple models to characterize part of the future uncertainty given the impact on adaptation decision making. This is traditionally employed through an equally-weighted ensemble of multiple GCMs downscaled using one technique. Newer practices include several downscaling techniques in an effort to increase the ensemble's representation of future uncertainty. However, this practice may be adding statistically dependent models to the ensemble. Previous research has shown a dependence problem in the GCM ensemble across multiple generations, but this has not been shown for the downscaled ensemble. In this case study, seven downscaled climate projections on the daily time scale are considered: CLAREnCE10, SERAP, BCCA (CMIP5 and CMIP3 versions), Hostetler, CCR, and MACA-LIVNEH. These data represent 83 ensemble members, 44 GCMs, and two generations of GCMs. Baseline periods are compared against the University of Idaho's METDATA gridded observation dataset. Hierarchical agglomerative clustering is applied to the correlated errors to determine dependent clusters. Redundant GCMs across different downscaling techniques show the most dependence, while smaller dependence signals are detected within downscaling datasets and across generations of GCMs. These results indicate that using additional downscaled projections to increase the ensemble size must be done with care to avoid redundant GCMs, and that the process of downscaling may increase the dependence of those downscaled GCMs. The two climate model generations do not appear dissimilar enough to be treated as separate statistical populations for ensemble building at the local and regional scales.
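A compact sketch of the dependence analysis described above: model errors against a reference are correlated pairwise, the correlations are converted to distances, and hierarchical agglomerative clustering groups members that share an error pattern. The synthetic error matrix and the clustering threshold are illustrative assumptions, not the study's data or settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Synthetic errors: rows are ensemble members, columns are grid cells.
# Members 0-2 share one error pattern, members 3-5 another (illustrative only).
rng = np.random.default_rng(6)
base_a = rng.standard_normal(500)
base_b = rng.standard_normal(500)
errors = np.vstack([base_a + 0.3 * rng.standard_normal(500) for _ in range(3)] +
                   [base_b + 0.3 * rng.standard_normal(500) for _ in range(3)])

# Dependence measured by correlation of errors; distance = 1 - correlation.
corr = np.corrcoef(errors)
dist = 1.0 - corr
np.fill_diagonal(dist, 0.0)

# Agglomerative clustering on the condensed distance matrix.
Z = linkage(squareform(dist, checks=False), method="average")
clusters = fcluster(Z, t=0.5, criterion="distance")
print(clusters)  # members sharing an error pattern fall in the same cluster
```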
Superresolution microscopy for microbiology
Coltharp, Carla; Xiao, Jie
2014-01-01
This review provides a practical introduction to superresolution microscopy from the perspective of microbiological research. Because of the small sizes of bacterial cells, superresolution methods are particularly powerful and suitable for revealing details of cellular structures that are not resolvable under conventional fluorescence light microscopy. Here we describe the methodological concepts behind three major categories of super-resolution light microscopy: photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), structured illumination microscopy (SIM) and stimulated emission-depletion (STED) microscopy. We then present recent applications of each of these techniques to microbial systems, which have revealed novel conformations of cellular structures and described new properties of in vivo protein function and interactions. Finally, we discuss the unique issues related to implementing each of these superresolution techniques with bacterial specimens and suggest avenues for future development. The goal of this review is to provide the necessary technical background for interested microbiologists to choose the appropriate super-resolution method for their biological systems, and to introduce the practical considerations required for designing and analysing superresolution imaging experiments. PMID:22947061
Chowdhury, Shwetadwip; Eldridge, Will J.; Wax, Adam; Izatt, Joseph A.
2017-01-01
Though structured illumination (SI) microscopy is a popular imaging technique conventionally associated with fluorescent super-resolution, recent works have suggested its applicability towards sub-diffraction resolution coherent imaging with quantitative endogenous biological contrast. Here, we demonstrate that SI can efficiently integrate the principles of fluorescent super-resolution and coherent synthetic aperture to achieve 3D dual-modality sub-diffraction resolution, fluorescence and refractive-index (RI) visualizations of biological samples. We experimentally demonstrate this framework by introducing an SI microscope capable of 3D sub-diffraction resolution fluorescence and RI imaging, and verify its biological visualization capabilities by experimentally reconstructing 3D RI/fluorescence visualizations of fluorescent calibration microspheres as well as alveolar basal epithelial adenocarcinoma (A549) and human colorectal adenocarcinoma (HT-29) cells, fluorescently stained for F-actin. This demonstration may suggest SI as an especially promising imaging technique to enable future biological studies that explore synergistically operating biophysical/biochemical and molecular mechanisms at sub-diffraction resolutions. PMID:29296504
Factorization approach to superintegrable systems: Formalism and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballesteros, Á., E-mail: angelb@ubu.es; Herranz, F. J., E-mail: fjherranz@ubu.es; Kuru, Ş., E-mail: kuru@science.ankara.edu.tr
2017-03-15
The factorization technique for superintegrable Hamiltonian systems is revisited and applied in order to obtain additional (higher-order) constants of the motion. In particular, the factorization approach to the classical anisotropic oscillator on the Euclidean plane is reviewed, and new classical (super) integrable anisotropic oscillators on the sphere are constructed. The Tremblay–Turbiner–Winternitz system on the Euclidean plane is also studied from this viewpoint.
Emergent 1d Ising Behavior in AN Elementary Cellular Automaton Model
NASA Astrophysics Data System (ADS)
Kassebaum, Paul G.; Iannacchione, Germano S.
The fundamental nature of an evolving one-dimensional (1D) Ising model is investigated with an elementary cellular automaton (CA) simulation. The emergent CA simulation employs an ensemble of cells in one spatial dimension, each cell capable of two microstates interacting with simple nearest-neighbor rules and incorporating an external field. The behavior of the CA model provides insight into the dynamics of coupled two-state systems not expressible by exact analytical solutions. For instance, state progression graphs show the causal dynamics of a system through time in relation to the system's entropy. Unique graphical analysis techniques are introduced through difference patterns, diffusion patterns, and state progression graphs of the 1D ensemble that visualize the evolution. All analyses are consistent with the known behavior of the 1D Ising system. The CA simulation and new pattern recognition techniques are scalable (in dimension, complexity, and size) and have many potential applications such as complex design of materials, control of agent systems, and evolutionary mechanism design.
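A minimal sketch of the kind of two-state cellular automaton the abstract describes is given below: each cell holds a value of +1 or -1 and synchronously aligns with the sign of its two nearest neighbours plus an external field. The specific update rule, field strength and lattice size are illustrative assumptions rather than the authors' exact model.

```python
import numpy as np

def step(cells, field=0.0):
    """One synchronous update: each +/-1 cell aligns with the sign of
    (left neighbour + right neighbour + external field); ties keep the old state."""
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    drive = left + right + field
    return np.where(drive > 0, 1, np.where(drive < 0, -1, cells))

rng = np.random.default_rng(7)
cells = rng.choice([-1, 1], size=40)
history = [cells]
for _ in range(20):
    cells = step(cells, field=0.1)
    history.append(cells)

# Crude "magnetization" (spatial-average state) over time.
print([row.mean().round(2) for row in history])
```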
Amplified Sensitivity of Nitrogen-Vacancy Spins in Nanodiamonds Using All-Optical Charge Readout.
Hopper, David A; Grote, Richard R; Parks, Samuel M; Bassett, Lee C
2018-04-23
Nanodiamonds containing nitrogen-vacancy (NV) centers offer a versatile platform for sensing applications spanning from nanomagnetism to in vivo monitoring of cellular processes. In many cases, however, weak optical signals and poor contrast demand long acquisition times that prevent the measurement of environmental dynamics. Here, we demonstrate the ability to perform fast, high-contrast optical measurements of charge distributions in ensembles of NV centers in nanodiamonds and use the technique to improve the spin-readout signal-to-noise ratio through spin-to-charge conversion. A study of 38 nanodiamonds with sizes ranging between 20 and 70 nm, each hosting a small ensemble of NV centers, uncovers complex, multiple time scale dynamics due to radiative and nonradiative ionization and recombination processes. Nonetheless, the NV-containing nanodiamonds universally exhibit charge-dependent photoluminescence contrasts and the potential for enhanced spin readout using spin-to-charge conversion. We use the technique to speed up a T1 relaxometry measurement by a factor of 5.
Distillation of Greenberger-Horne-Zeilinger states by selective information manipulation.
Cohen, O; Brun, T A
2000-06-19
Methods for distilling Greenberger-Horne-Zeilinger (GHZ) states from arbitrary entangled tripartite pure states are described. These techniques work for virtually any input state. Each technique has two stages which we call primary and secondary distillations. Primary distillation produces a GHZ state with some probability, so that when applied to an ensemble of systems a certain percentage is discarded. Secondary distillation produces further GHZs from the discarded systems. These protocols are developed with the help of an approach to quantum information theory based on absolutely selective information, which has other potential applications.
NASA Astrophysics Data System (ADS)
Perraud, Jean-Michel; Bennett, James C.; Bridgart, Robert; Robertson, David E.
2016-04-01
Research undertaken through the Water Information Research and Development Alliance (WIRADA) has laid the foundations for continuous deterministic and ensemble short-term forecasting services. One output of this research is the software Short-term Water Information Forecasting Tools version 2 (SWIFT2). SWIFT2 is developed for use in research on short term streamflow forecasting techniques as well as operational forecasting services at the Australian Bureau of Meteorology. The variety of uses in research and operations requires a modular software system whose components can be arranged in applications that are fit for each particular purpose, without unnecessary software duplication. SWIFT2 modelling structures consist of sub-areas of hydrologic models, nodes and links with in-stream routing and reservoirs. While this modelling structure is customary, SWIFT2 is built from the ground up for computational and data intensive applications such as ensemble forecasts necessary for the estimation of the uncertainty in forecasts. Support for parallel computation on multiple processors or on a compute cluster is a primary use case. A convention is defined to store large multi-dimensional forecasting data and its metadata using the netCDF library. SWIFT2 is written in modern C++ with state of the art software engineering techniques and practices. A salient technical feature is a well-defined application programming interface (API) to facilitate access from different applications and technologies. SWIFT2 is already seamlessly accessible on Windows and Linux via packages in R, Python, Matlab and .NET languages such as C# and F#. Command line or graphical front-end applications are also feasible. This poster gives an overview of the technology stack, and illustrates the resulting features of SWIFT2 for users. Research and operational uses share the same common core C++ modelling shell for consistency, but augmented by different software modules suitable for each context. The accessibility via interactive modelling languages is particularly amenable to using SWIFT2 in exploratory research, with a dynamic and versatile experimental modelling workflow. This does not come at the expense of the stability and reliability required for use in operations, where only mature and stable components are used.
Decision tree and ensemble learning algorithms with their applications in bioinformatics.
Che, Dongsheng; Liu, Qi; Rasheed, Khaled; Tao, Xiuping
2011-01-01
Machine learning approaches have wide applications in bioinformatics, and decision tree is one of the successful approaches applied in this field. In this chapter, we briefly review decision tree and related ensemble algorithms and show the successful applications of such approaches on solving biological problems. We hope that by learning the algorithms of decision trees and ensemble classifiers, biologists can get the basic ideas of how machine learning algorithms work. On the other hand, by being exposed to the applications of decision trees and ensemble algorithms in bioinformatics, computer scientists can get better ideas of which bioinformatics topics they may work on in their future research directions. We aim to provide a platform to bridge the gap between biologists and computer scientists.
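As a concrete, minimal illustration of the single-tree-versus-ensemble comparison the chapter reviews, the snippet below evaluates a decision tree and a random forest on a small public biomedical dataset with scikit-learn; the dataset and settings are placeholders, not examples from the chapter.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# A small biomedical classification task as a stand-in for a bioinformatics dataset.
X, y = load_breast_cancer(return_X_y=True)

tree = DecisionTreeClassifier(random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0)

# The ensemble of trees typically generalizes better than a single tree.
print("single tree  :", cross_val_score(tree, X, y, cv=5).mean().round(3))
print("random forest:", cross_val_score(forest, X, y, cv=5).mean().round(3))
```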
Coherent Spin Control at the Quantum Level in an Ensemble-Based Optical Memory.
Jobez, Pierre; Laplane, Cyril; Timoney, Nuala; Gisin, Nicolas; Ferrier, Alban; Goldner, Philippe; Afzelius, Mikael
2015-06-12
Long-lived quantum memories are essential components of a long-standing goal of remote distribution of entanglement in quantum networks. These can be realized by storing the quantum states of light as single-spin excitations in atomic ensembles. However, spin states are often subjected to different dephasing processes that limit the storage time, which in principle could be overcome using spin-echo techniques. Theoretical studies suggest this to be challenging due to unavoidable spontaneous emission noise in ensemble-based quantum memories. Here, we demonstrate spin-echo manipulation of a mean spin excitation of 1 in a large solid-state ensemble, generated through storage of a weak optical pulse. After a storage time of about 1 ms, we optically read out the spin excitation with a high signal-to-noise ratio. Our results pave the way for long-duration optical quantum storage using spin-echo techniques for any ensemble-based memory.
Li, Xi; He, Ji-Zheng; Zheng, Yuan-Ming; Zheng, Ming-Lan
2014-02-01
Super absorbent polymers (SAPs), a new water-retention material, have potential for application in water-saving agricultural production. In this study, we investigated the effects of SAPs, synthesized from natural plant extracts, on Chinese cabbage fresh weight, soil water content, soil water stable aggregates, soil microbial biomass (carbon) and soil microbial respiration under three water conditions (excessive, normal and deficient) and two SAP application strategies (bulk treatment and spraying treatment). The results showed that the SAPs significantly increased the soil water content, water-stable aggregates (> 0.25 mm) and soil microbial activities, especially under the water-deficient conditions. Meanwhile, the SAP application strategy had a significant influence on the effects on Chinese cabbage and soil properties. Compared with the control treatment under the normal water condition, the spraying treatment of Jaguar C (S-JC) could reduce the irrigation water amount by about 25% without reducing crop production. Furthermore, compared with the control treatment under the same water condition as S-JC (deficient), it could increase Chinese cabbage production by 287%. Thus, SAPs are an environmentally friendly water-saving technique in agricultural production.
Recent developments in machine learning applications in landslide susceptibility mapping
NASA Astrophysics Data System (ADS)
Lun, Na Kai; Liew, Mohd Shahir; Matori, Abdul Nasir; Zawawi, Noor Amila Wan Abdullah
2017-11-01
While the prediction of the spatial distribution of potential landslide occurrences is a primary interest in landslide hazard mitigation, it remains a challenging task. To overcome the scarcity of complete, sufficiently detailed geomorphological attributes and environmental conditions, various machine-learning techniques are increasingly applied to effectively map landslide susceptibility for large regions. Nevertheless, few review papers are devoted to this field, particularly to the various domain-specific applications of machine learning techniques. The available literature often reports relatively good predictive performance; however, papers discussing the limitations of each approach are quite uncommon. The foremost aim of this paper is to narrow these gaps in the literature and to review up-to-date machine learning and ensemble learning techniques applied in landslide susceptibility mapping. It provides new readers with an introductory understanding of the subject matter and researchers with a contemporary review of machine learning advancements, alongside the future direction of these techniques in the landslide mitigation field.
NASA Astrophysics Data System (ADS)
Zhao, Bin
2015-02-01
Temperature-pressure coupled field analysis of a liquefied petroleum gas (LPG) tank under jet fire can offer theoretical guidance for preventing fire accidents of LPG tanks; the application of the super wavelet finite element method to this problem is studied here in depth. First, related research on heat transfer analysis of LPG tanks under fire and on super wavelets is reviewed. Second, the basic theory of the super wavelet transform is studied. Third, the temperature-pressure coupled model of gas-phase and liquid LPG under jet fire is established based on the equation of state, the VOF model and the RNG k-ɛ model. Then the super wavelet finite element formulation is constructed using the super wavelet scale function as the interpolating function. Finally, the simulation is carried out, and the results show that the super wavelet finite element method has higher computing precision than the wavelet finite element method.
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.
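A toy sketch of the windowed weight-smoothing idea is given below: particles carry both a state and a parameter of a simple bucket-type water balance model, weights are accumulated over a smoothing window of several observations, and both states and parameters are then resampled together. The model, noise levels, window length and jitter are illustrative assumptions, not the authors' lysimeter setup.

```python
import numpy as np

rng = np.random.default_rng(8)

def bucket_step(storage, k, rain):
    """Toy soil-water bucket: storage gains rain and drains at rate k."""
    return storage + rain - k * storage

# Synthetic truth and noisy observations.
T, k_true = 60, 0.3
rain = rng.random(T)
truth = np.zeros(T)
for t in range(1, T):
    truth[t] = bucket_step(truth[t - 1], k_true, rain[t])
obs = truth + rng.normal(0, 0.05, T)

# Particles carry both a state and a parameter; weights are accumulated
# over a smoothing window of W steps before states AND parameters are resampled.
N, W, obs_sigma = 500, 5, 0.05
states = rng.random(N)
params = rng.uniform(0.05, 0.8, N)
log_w = np.zeros(N)

for t in range(1, T):
    states = bucket_step(states, params, rain[t]) + rng.normal(0, 0.02, N)
    log_w += -0.5 * ((obs[t] - states) / obs_sigma) ** 2
    if t % W == 0:
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)        # resample on the window's weight
        states, params = states[idx], params[idx]
        params += rng.normal(0, 0.01, N)        # small jitter keeps diversity
        log_w[:] = 0.0

print("estimated drainage rate k:", params.mean().round(3), "(truth 0.3)")
```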
Diffusion, Dispersion, and Uncertainty in Anisotropic Fractal Porous Media
NASA Astrophysics Data System (ADS)
Monnig, N. D.; Benson, D. A.
2007-12-01
Motivated by field measurements of aquifer hydraulic conductivity (K), recent techniques were developed to construct anisotropic fractal random fields, in which the scaling, or self-similarity parameter, varies with direction and is defined by a matrix. Ensemble numerical results are analyzed for solute transport through these 2-D "operator-scaling" fractional Brownian motion (fBm) ln(K) fields. Contrary to some analytic stochastic theories for monofractal K fields, the plume growth rates never exceed Mercado's (1967) purely stratified aquifer growth rate of plume apparent dispersivity proportional to mean distance. Apparent super-stratified growth must be the result of other demonstrable factors, such as initial plume size. The addition of large local dispersion and diffusion does not significantly change the effective longitudinal dispersivity of the plumes. In the presence of significant local dispersion or diffusion, the concentration coefficient of variation CV = σ_c/⟨c⟩ remains large at the leading edge of the plumes. This indicates that even with considerable mixing due to dispersion or diffusion, there is still substantial uncertainty in the leading edge of a plume moving in fractal porous media.
Single-cell epigenomics: techniques and emerging applications.
Schwartzman, Omer; Tanay, Amos
2015-12-01
Epigenomics is the study of the physical modifications, associations and conformations of genomic DNA sequences, with the aim of linking these with epigenetic memory, cellular identity and tissue-specific functions. While current techniques in the field are characterizing the average epigenomic features across large cell ensembles, the increasing interest in the epigenetics within complex and heterogeneous tissues is driving the development of single-cell epigenomics. We review emerging single-cell methods for capturing DNA methylation, chromatin accessibility, histone modifications, chromosome conformation and replication dynamics. Together, these techniques are rapidly becoming a powerful tool in studies of cellular plasticity and diversity, as seen in stem cells and cancer.
Spatio-temporal behaviour of medium-range ensemble forecasts
NASA Astrophysics Data System (ADS)
Kipling, Zak; Primo, Cristina; Charlton-Perez, Andrew
2010-05-01
Using the recently-developed mean-variance of logarithms (MVL) diagram, together with the TIGGE archive of medium-range ensemble forecasts from nine different centres, we present an analysis of the spatio-temporal dynamics of their perturbations, and show how the differences between models and perturbation techniques can explain the shape of their characteristic MVL curves. We also consider the use of the MVL diagram to compare the growth of perturbations within the ensemble with the growth of the forecast error, showing that there is a much closer correspondence for some models than others. We conclude by looking at how the MVL technique might assist in selecting models for inclusion in a multi-model ensemble, and suggest an experiment to test its potential in this context.
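For reference, one point of an MVL diagram can be computed as the spatial mean and variance of the logarithm of the absolute ensemble perturbation at a given lead time; a minimal sketch follows, using synthetic log-normal perturbation fields as placeholders for real ensemble differences.

```python
import numpy as np

def mvl_point(perturbation_field, eps=1e-12):
    """Mean and variance (over grid points) of the log of |perturbation|,
    i.e. one point of the MVL diagram for a given lead time."""
    logs = np.log(np.abs(perturbation_field) + eps)
    return logs.mean(), logs.var()

# Synthetic stand-in: perturbations that grow with lead time and become
# spatially more heterogeneous (log-normal fields with increasing spread).
rng = np.random.default_rng(9)
for lead in range(1, 6):
    field = np.exp(rng.normal(loc=-3 + 0.5 * lead, scale=0.2 * lead,
                              size=(50, 50)))
    m, v = mvl_point(field)
    print(f"lead {lead}: mean(log) = {m:.2f}, var(log) = {v:.2f}")
```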
Double Beta Decays and Neutrinos - Experiments and MOON
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ejiri, H.; National Institute of Radiological Sciences, Chiba, 263-8555
2008-01-24
This is a brief review of the present and future experiments of neutrino-less double beta decays (0νββ) and the MOON (Mo Observatory Of Neutrinos) project. High sensitivity 0νββ experiments are unique and realistic probes for studying the Majorana nature of neutrinos and the absolute mass scale as suggested by neutrino oscillation experiments. MOON aims at spectroscopic 0νββ studies with the ν-mass sensitivity of 100-30 meV by means of a super ensemble of multilayer modules of scintillator plates and tracking detector planes.
A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy.
S K, Somasundaram; P, Alli
2017-11-09
The main complication of diabetes is diabetic retinopathy (DR), a retinal vascular disease that leads to blindness. Regular screening for early DR detection is a labor- and resource-intensive task, so automatic detection of DR using computational techniques is an attractive solution. An automatic method is more reliable for determining the presence of an abnormality in fundus images (FI), but the classification process is often performed poorly. Recently, a few research works have analyzed the texture discrimination capacity of FI to distinguish healthy images; however, the feature extraction (FE) process was not performed well because of the high dimensionality. Therefore, a Machine Learning Bagging Ensemble Classifier (ML-BEC) is designed to identify retinal features for DR diagnosis and early detection. The ML-BEC method comprises two stages. The first stage comprises extraction of candidate objects from retinal images (RI). The candidate objects, or features, for DR diagnosis include blood vessels, optic nerve, neural tissue, neuroretinal rim, optic disc size, thickness and variance. These features are initially extracted by applying a machine learning technique called t-distributed Stochastic Neighbor Embedding (t-SNE). t-SNE generates a probability distribution across the high-dimensional images, in which the images are separated into similar and dissimilar pairs, and then constructs a similar probability distribution across the points in a low-dimensional map. This lessens the Kullback-Leibler divergence between the two distributions with respect to the locations of the points on the map. The second stage comprises application of ensemble classifiers to the extracted features to provide accurate analysis of digital FI using machine learning. In this stage, automatic detection for a DR screening system using a Bagging Ensemble Classifier (BEC) is investigated. With the help of the voting process in ML-BEC, bagging minimizes the error due to the variance of the base classifier. With publicly available retinal image databases, our classifier is trained with 25% of the RI. Results show that the ensemble classifier can achieve better classification accuracy (CA) than single classification models. Empirical experiments suggest that the machine learning-based ensemble classifier is efficient for further reducing DR classification time (CT).
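The two-stage structure described above can be sketched schematically with scikit-learn, using t-SNE for the embedding and a bagging ensemble of trees for classification; the digits dataset stands in for fundus image features, and because t-SNE cannot embed unseen samples, the embedding is computed on the full dataset before splitting, purely for illustration.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import BaggingClassifier
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split

# Stage 1 (schematic): embed high-dimensional image features with t-SNE.
# Note: t-SNE has no transform for unseen samples, so for this toy
# illustration the embedding is computed on the full dataset before splitting.
X, y = load_digits(return_X_y=True)
X_embedded = TSNE(n_components=2, random_state=0).fit_transform(X)

# Stage 2: a bagging ensemble of tree classifiers votes on the embedded
# features; 25% of the data is used for training, echoing the paper's setup.
X_train, X_test, y_train, y_test = train_test_split(
    X_embedded, y, train_size=0.25, random_state=0, stratify=y)
bec = BaggingClassifier(n_estimators=50, random_state=0)
bec.fit(X_train, y_train)
print("test accuracy:", bec.score(X_test, y_test).round(3))
```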
ERIC Educational Resources Information Center
Mages, Wendy K.
2013-01-01
This research analyzes the techniques, strategies, and philosophical foundations that contributed to the quality and maintenance of a strong theatre-in-education ensemble. This study details how the company selected ensemble members and describes the work environment the company developed to promote collaboration and encourage actor-teacher…
NASA Astrophysics Data System (ADS)
Deka, Gitanjal; Nishida, Kentaro; Mochizuki, Kentaro; Ding, Hou-Xian; Fujita, Katsumasa; Chu, Shi-Wei
2018-03-01
Recently, many resolution-enhancing techniques have been demonstrated, but most of them are severely limited for deep-tissue applications. For example, wide-field localization techniques lack the ability to perform optical sectioning, and structured-light-based techniques are susceptible to beam distortion due to scattering/aberration. Saturated excitation (SAX) microscopy, which relies on temporal modulation that is less affected when penetrating into tissues, should be the best candidate for deep-tissue resolution enhancement. Nevertheless, although fluorescence saturation has been successfully adopted in SAX, it is limited by photobleaching, and its practical resolution enhancement is less than two-fold. Recently, we demonstrated plasmonic SAX, which provides bleaching-free imaging with three-fold resolution enhancement. Here we show that the three-fold resolution enhancement is sustained throughout the whole working distance of an objective, i.e., 200 μm, which is the deepest super-resolution record to our knowledge, and is expected to extend into deeper tissues. In addition, SAX offers the advantage of background-free imaging by rejecting unwanted scattering background from biological tissues. This study provides an inspirational direction toward deep-tissue super-resolution imaging and has potential in tumor monitoring and beyond.
Correlative Super-Resolution Microscopy: New Dimensions and New Opportunities.
Hauser, Meghan; Wojcik, Michal; Kim, Doory; Mahmoudi, Morteza; Li, Wan; Xu, Ke
2017-06-14
Correlative microscopy, the integration of two or more microscopy techniques performed on the same sample, produces results that emphasize the strengths of each technique while offsetting their individual weaknesses. Light microscopy has historically been a central method in correlative microscopy due to its widespread availability, compatibility with hydrated and live biological samples, and excellent molecular specificity through fluorescence labeling. However, conventional light microscopy can only achieve a resolution of ∼300 nm, undercutting its advantages in correlations with higher-resolution methods. The rise of super-resolution microscopy (SRM) over the past decade has drastically improved the resolution of light microscopy to ∼10 nm, thus creating exciting new opportunities and challenges for correlative microscopy. Here we review how these challenges are addressed to effectively correlate SRM with other microscopy techniques, including light microscopy, electron microscopy, cryomicroscopy, atomic force microscopy, and various forms of spectroscopy. Though we emphasize biological studies, we also discuss the application of correlative SRM to materials characterization and single-molecule reactions. Finally, we point out current limitations and discuss possible future improvements and advances. We thus demonstrate how a correlative approach adds new dimensions of information and provides new opportunities in the fast-growing field of SRM.
Photon-efficient super-resolution laser radar
NASA Astrophysics Data System (ADS)
Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.
2017-08-01
The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and through-the-skin biomedical imaging applications.
Multiple signal classification algorithm for super-resolution fluorescence microscopy
Agarwal, Krishna; Macháň, Radek
2016-01-01
Single-molecule localization techniques are restricted by long acquisition and computational times, or the need for special fluorophores or biologically toxic photochemical environments. Here we propose a statistical super-resolution technique for wide-field fluorescence microscopy, which we call the multiple signal classification algorithm, that has several advantages. It provides resolution down to at least 50 nm, requires fewer frames and lower excitation power and works even at high fluorophore concentrations. Further, it works with any fluorophore that exhibits blinking on the timescale of the recording. The multiple signal classification algorithm shows comparable or better performance in comparison with single-molecule localization techniques and four contemporary statistical super-resolution methods for experiments on in vitro actin filaments and other independently acquired experimental data sets. We also demonstrate super-resolution at timescales of 245 ms (using 49 frames acquired at 200 frames per second) in samples of live-cell microtubules and live-cell actin filaments imaged without imaging buffers. PMID:27934858
A Study of a Super-Cooling Technique for Removal of Rubber from Solid-Rubber Tires.
environmental pollution. In answering these questions, an experiment is conducted to validate the concept and to determine liquid... is performed to compare the costs of the super-cooling technique with those of the brake drum lathe method of rubber removal. Safety and environmental pollution factors are also investigated and
Super-resolution processing for multi-functional LPI waveforms
NASA Astrophysics Data System (ADS)
Li, Zhengzheng; Zhang, Yan; Wang, Shang; Cai, Jingxiao
2014-05-01
Super-resolution (SR) is a radar processing technique closely related to pulse compression (or the correlation receiver). Many super-resolution algorithms have been developed for improved range resolution and reduced sidelobe contamination. Traditionally, the waveforms used for SR have been based either on phase coding (such as the LKP3 or Barker codes) or on frequency modulation (chirp, or nonlinear frequency modulation). There is, however, an important class of waveforms which are either random in nature (such as random noise waveforms) or randomly modulated for multi-function operations (such as the ADS-B radar signals in [1]). These waveforms have the advantage of low probability of intercept (LPI). If the existing SR techniques can be applied to these waveforms, there will be much more flexibility for using them in actual sensing missions. SR also usually has the great advantage that the final output (as an estimate of ground truth) is largely independent of the waveform. Such benefits are attractive to many important primary radar applications. In this paper a general introduction to SR algorithms is provided first, and some implementation considerations are discussed. The selected algorithms are then applied to typical LPI waveforms and the results are discussed. It is observed that SR algorithms can be reliably used for LPI waveforms; on the other hand, practical considerations should be kept in mind in order to obtain optimal estimation results.
Greene, Richard N; Sutherland, Douglas E; Tausch, Timothy J; Perez, Deo S
2014-03-01
Super-selective vascular control prior to robotic partial nephrectomy (also known as 'zero-ischemia') is a novel surgical technique that promises to reduce warm ischemia time. The technique has been shown to be feasible but adds substantial technical complexity and cost to the procedure. We present a simplified retrograde dissection of the renal hilum to achieve selective vascular control during robotic partial nephrectomy. Consecutive patients with stage 1 solid and complex cystic renal masses underwent robotic partial nephrectomies with selective vascular control using a modification of the previously described super-selective robotic partial nephrectomy. In each case, the renal arterial branch supplying the mass and surrounding parenchyma was dissected in a retrograde fashion from the tumor. Intra-renal dissection of the interlobular artery was not performed. Intra-operative immunofluorescence was not utilized, as assessment of parenchymal ischemia was documented before partial nephrectomy. Data were prospectively collected in an IRB-approved partial nephrectomy database. Operative variables between patients undergoing super-selective versus standard robotic partial nephrectomy were compared. Super-selective partial nephrectomy with retrograde hilar dissection was successfully completed in five consecutive patients. There were no complications or conversions to traditional partial nephrectomy. All were diagnosed with renal cell carcinoma and surgical margins were all negative. Estimated blood loss, warm ischemia time, operative time and length of stay were all comparable between patients undergoing super-selective and standard robotic partial nephrectomy. Retrograde hilar dissection appears to be a feasible and safe approach to super-selective partial nephrectomy without adding complex renovascular surgical techniques or cost to the procedure.
NASA Astrophysics Data System (ADS)
Wood, Andy; Clark, Elizabeth; Mendoza, Pablo; Nijssen, Bart; Newman, Andy; Clark, Martyn; Nowak, Kenneth; Arnold, Jeffrey
2017-04-01
Many if not most national operational streamflow prediction systems rely on a forecaster-in-the-loop approach that requires the hands-on effort of an experienced human forecaster. This approach evolved from the need to correct for long-standing deficiencies in the models and datasets used in forecasting, and the practice often leads to skillful flow predictions despite the use of relatively simple, conceptual models. Yet the 'in-the-loop' forecast process is not reproducible, which limits opportunities to assess and incorporate new techniques systematically, and the effort required to make forecasts in this way is an obstacle to expanding forecast services - e.g., through adding new forecast locations or more frequent forecast updates, running more complex models, or producing forecasts and hindcasts that can support verification. In the last decade, the hydrologic forecasting community has begun to develop more centralized, 'over-the-loop' systems. The quality of these new forecast products will depend on their ability to leverage research in areas including earth system modeling, parameter estimation, data assimilation, statistical post-processing, weather and climate prediction, verification, and uncertainty estimation through the use of ensembles. Currently, many national operational streamflow forecasting and water management communities have little experience with the strengths and weaknesses of over-the-loop approaches, even as such systems are beginning to be deployed operationally in centers such as ECMWF. There is thus a need both to evaluate these forecasting advances and to demonstrate their potential in a public arena, raising awareness in forecast user communities and development programs alike. To address this need, the US National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the US Army Corps of Engineers, using the NCAR 'System for Hydromet Analysis Research and Prediction Applications' (SHARP) to implement, assess and demonstrate real-time over-the-loop ensemble flow forecasts in a range of US watersheds. The system relies on fully ensemble-based techniques, including: a 100-member ensemble of meteorological model forcings and an ensemble particle filter data assimilation for initializing watershed states; analog/regression-based downscaling of ensemble weather forecasts from GEFS; and statistical post-processing of ensemble forecast outputs, all of which run in real time within a workflow managed by ECMWF's ecFlow libraries over large US regional domains. We describe SHARP and present early hindcast and verification results for short to seasonal range streamflow forecasts in a number of US case study watersheds.
Cornick, Matthew; Hunt, Brian; Ott, Edward; Kurtuldu, Huseyin; Schatz, Michael F
2009-03-01
Data assimilation refers to the process of estimating a system's state from a time series of measurements (which may be noisy or incomplete) in conjunction with a model for the system's time evolution. Here we demonstrate the applicability of a recently developed data assimilation method, the local ensemble transform Kalman filter, to nonlinear, high-dimensional, spatiotemporally chaotic flows in Rayleigh-Bénard convection experiments. Using this technique we are able to extract the full temperature and velocity fields from a time series of shadowgraph measurements. In addition, we describe extensions of the algorithm for estimating model parameters. Our results suggest the potential usefulness of our data assimilation technique to a broad class of experimental situations exhibiting spatiotemporal chaos.
Lynch, Chip M; Abdollahi, Behnaz; Fuqua, Joshua D; de Carlo, Alexandra R; Bartholomai, James A; Balgemann, Rayeanne N; van Berkel, Victor H; Frieboes, Hermann B
2017-12-01
Outcomes for cancer patients have been previously estimated by applying various machine learning techniques to large datasets such as the Surveillance, Epidemiology, and End Results (SEER) program database. In particular for lung cancer, it is not well understood which types of techniques would yield more predictive information, and which data attributes should be used in order to determine this information. In this study, a number of supervised learning techniques are applied to the SEER database to classify lung cancer patients in terms of survival, including linear regression, Decision Trees, Gradient Boosting Machines (GBM), Support Vector Machines (SVM), and a custom ensemble. Key data attributes in applying these methods include tumor grade, tumor size, gender, age, stage, and number of primaries, with the goal of enabling comparison of predictive power between the various methods. The prediction is treated as a continuous target, rather than a classification into categories, as a first step towards improving survival prediction. The results show that the predicted values agree with actual values for low to moderate survival times, which constitute the majority of the data. The best performing technique was the custom ensemble with a Root Mean Square Error (RMSE) value of 15.05. The most influential model within the custom ensemble was GBM, while Decision Trees may be inapplicable as they had too few discrete outputs. The results further show that among the five individual models generated, the most accurate was GBM with an RMSE value of 15.32. Although SVM underperformed with an RMSE value of 15.82, statistical analysis singles out the SVM as the only model that generated a distinctive output. The results of the models are consistent with a classical Cox proportional hazards model used as a reference technique. We conclude that application of these supervised learning techniques to lung cancer data in the SEER database may be of use to estimate patient survival time with the ultimate goal of informing patient care decisions, and that the performance of these techniques with this particular dataset may be on par with that of classical methods. Copyright © 2017 Elsevier B.V. All rights reserved.
Yang, Shan; Al-Hashimi, Hashim M.
2016-01-01
A growing number of studies employ time-averaged experimental data to determine dynamic ensembles of biomolecules. While it is well known that different ensembles can satisfy experimental data to within error, the extent and nature of these degeneracies, and their impact on the accuracy of the ensemble determination remains poorly understood. Here, we use simulations and a recently introduced metric for assessing ensemble similarity to explore degeneracies in determining ensembles using NMR residual dipolar couplings (RDCs) with specific application to A-form helices in RNA. Various target ensembles were constructed representing different domain-domain orientational distributions that are confined to a topologically restricted (<10%) conformational space. Five independent sets of ensemble averaged RDCs were then computed for each target ensemble and a ‘sample and select’ scheme used to identify degenerate ensembles that satisfy RDCs to within experimental uncertainty. We find that ensembles with different ensemble sizes and that can differ significantly from the target ensemble (by as much as ΣΩ ~ 0.4 where ΣΩ varies between 0 and 1 for maximum and minimum ensemble similarity, respectively) can satisfy the ensemble averaged RDCs. These deviations increase with the number of unique conformers and breadth of the target distribution, and result in significant uncertainty in determining conformational entropy (as large as 5 kcal/mol at T = 298 K). Nevertheless, the RDC-degenerate ensembles are biased towards populated regions of the target ensemble, and capture other essential features of the distribution, including the shape. Our results identify ensemble size as a major source of uncertainty in determining ensembles and suggest that NMR interactions such as RDCs and spin relaxation, on their own, do not carry the necessary information needed to determine conformational entropy at a useful level of precision. The framework introduced here provides a general approach for exploring degeneracies in ensemble determination for different types of experimental data. PMID:26131693
NASA Astrophysics Data System (ADS)
Kidon, Lyran; Wilner, Eli Y.; Rabani, Eran
2015-12-01
The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on the Nakajima-Zwanzig-Mori time-convolution (TC) formulation and the other on the Tokuyama-Mori time-convolutionless (TCL) formulation, provide a starting point to describe the time-evolution of the reduced density matrix. A key step in both approaches is to obtain the so-called "memory kernel" or "generator," going beyond second- or fourth-order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform; thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green's function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.
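The propagator-based construction mentioned above can be written schematically as follows; the notation is an assumption for illustration and is not taken verbatim from the paper.

```latex
% Schematic forms (notation assumed): \sigma is the reduced density matrix,
% \Phi(t) the reduced system propagator, \mathcal{R}(t) the TCL generator,
% and \mathcal{K} the TC memory kernel.
\sigma(t) = \Phi(t)\,\sigma(0), \qquad
\frac{d\sigma(t)}{dt} = \mathcal{R}(t)\,\sigma(t)
\;\;\Rightarrow\;\;
\mathcal{R}(t) = \dot{\Phi}(t)\,\Phi^{-1}(t),
\qquad \text{versus} \qquad
\frac{d\sigma(t)}{dt} = \int_0^t \mathcal{K}(t-\tau)\,\sigma(\tau)\,d\tau .
```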
NASA Astrophysics Data System (ADS)
Montero-Martinez, M. J.; Colorado, G.; Diaz-Gutierrez, D. E.; Salinas-Prieto, J. A.
2017-12-01
It is well known that the North American Monsoon (NAM) region is already very dry and is under considerable stress due to the lack of water resources in multiple locations across the area. It is therefore notable that, even under those conditions, the Mexican part of the NAM region is the most agriculturally productive in Mexico. Thus, it is very important to have realistic climate scenarios for climate variables such as temperature, precipitation, relative humidity, radiation, etc. This study tackles that problem by generating probabilistic climate scenarios using a weighted CMIP5-GCM ensemble approach based on the Xu et al. (2010) technique, which is itself an improved method derived from the better-known Reliability Ensemble Averaging algorithm of Giorgi and Mearns (2002). In addition, the individual performances of the 20-plus GCMs and the weighted ensemble are compared against observed data (CRU TS2.1) using different metrics and Taylor diagrams. This study focuses on probabilistic results for reaching a certain threshold, given that those types of products could be of potential use for agricultural applications.
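The following is a minimal, hedged sketch of a reliability-style weighted multi-model ensemble, in which each GCM's weight shrinks with its control-period bias against observations. It illustrates the general idea only and is not the exact Xu et al. (2010) or REA formulation; function and variable names are assumptions.

```python
# Hedged sketch of a bias-weighted multi-model ensemble (illustration only).
import numpy as np

def weighted_ensemble(models, obs, eps=1e-6):
    """models: dict name -> control-period array on the observation grid/times.
    obs: observed array (e.g. CRU data) with the same shape.
    Returns (normalized weights, weighted ensemble mean field)."""
    bias = {name: np.abs(np.nanmean(m - obs)) for name, m in models.items()}
    raw = {name: 1.0 / (b + eps) for name, b in bias.items()}   # low bias -> high weight
    total = sum(raw.values())
    weights = {name: r / total for name, r in raw.items()}
    combined = sum(weights[name] * models[name] for name in models)
    return weights, combined
```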
Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.
Lee, Soojeong; Chang, Joon-Hyuk
2017-11-01
This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease uncertainty for oscillometric blood pressure (BP) measurements using the bootstrap and Monte-Carlo approach. While the former is used to estimate SBP and DBP, the latter attempts to determine confidence intervals (CIs) for SBP and DBP based on oscillometric BP measurements. This work originally employs deep belief networks (DBN)-deep neural networks (DNN) to effectively estimate BPs based on oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and worthy information might be omitted when selecting one DBN-DNN estimator and discarding the others. Additionally, our input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (based on combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. Afterward, the AdaBoost approach is employed for the second-stage SBP and DBP estimation. We then use the bootstrap and Monte-Carlo techniques in order to determine the CIs based on the target BP estimated using the DBN-DNN ensemble regression estimator with the asymptotic technique in the third stage. The proposed method mitigates estimation uncertainty such as a large standard deviation of error (SDE): comparing the proposed DBN-DNN ensemble regression estimator with the DBN-DNN single regression estimator, we find that the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively. These results indicate that the proposed method enhances performance by 9.18% and 10.88% compared with the DBN-DNN single estimator. The proposed methodology improves the accuracy of BP estimation and reduces the uncertainty for BP estimation. Copyright © 2017 Elsevier B.V. All rights reserved.
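A minimal sketch of the bootstrap-aggregation plus confidence-interval idea is given below, with a generic neural-network regressor standing in for the DBN-DNN estimator described in the abstract. The network architecture, number of bootstrap resamples, and interval level are assumptions.

```python
# Sketch: bootstrap ensemble of regressors + percentile confidence interval.
import numpy as np
from sklearn.neural_network import MLPRegressor

def bootstrap_ensemble_bp(X, y, x_new, n_boot=50, alpha=0.05, seed=0):
    """Return a mean BP estimate and a (1 - alpha) percentile interval at x_new."""
    rng = np.random.default_rng(seed)
    estimates = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                      # bootstrap resample
        model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                             random_state=int(rng.integers(1_000_000)))
        model.fit(X[idx], y[idx])
        estimates.append(model.predict(x_new.reshape(1, -1))[0])
    estimates = np.asarray(estimates)
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return estimates.mean(), (lo, hi)
```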
Visualizing Uncertainty for Probabilistic Weather Forecasting based on Reforecast Analogs
NASA Astrophysics Data System (ADS)
Pelorosso, Leandro; Diehl, Alexandra; Matković, Krešimir; Delrieux, Claudio; Ruiz, Juan; Gröeller, M. Eduard; Bruckner, Stefan
2016-04-01
Numerical weather forecasts are prone to uncertainty coming from inaccuracies in the initial and boundary conditions and lack of precision in numerical models. Ensembles of forecasts partially address these problems by considering several runs of the numerical model. Each forecast is generated with different initial and boundary conditions and different model configurations [GR05]. The ensembles can be expressed as probabilistic forecasts, which have proven to be very effective in decision-making processes [DE06]. The ensemble of forecasts represents only some of the possible future atmospheric states, usually underestimating the degree of uncertainty in the predictions [KAL03, PH06]. Hamill and Whitaker [HW06] introduced the "Reforecast Analog Regression" (RAR) technique to overcome the limitations of ensemble forecasting. This technique produces probabilistic predictions based on the analysis of historical forecasts and observations. Visual analytics provides tools for processing, visualizing, and exploring data to gain new insights and discover hidden information patterns in an interactive exchange between the user and the application [KMS08]. In this work, we introduce Albero, a visual analytics solution for probabilistic weather forecasting based on the RAR technique. Albero targets at least two different types of users: "forecasters", who are meteorologists working in operational weather forecasting, and "researchers", who work in the construction of numerical prediction models. Albero is an efficient tool for analyzing precipitation forecasts, allowing forecasters to make and communicate quick decisions. Our solution facilitates the analysis of a set of probabilistic forecasts, associated statistical data, observations and uncertainty. A dashboard with small-multiples of probabilistic forecasts allows the forecasters to analyze at a glance the distribution of probabilities as a function of time, space, and magnitude. It provides the user with a more accurate measure of forecast uncertainty that could result in better decision-making. It offers different levels of abstraction to help with the recalibration of the RAR method. It also has an inspection tool that displays the selected analogs, their observations and statistical data. It gives the users access to inner parts of the method, unveiling hidden information. References [GR05] GNEITING T., RAFTERY A. E.: Weather forecasting with ensemble methods. Science 310, 5746, 248-249, 2005. [KAL03] KALNAY E.: Atmospheric modeling, data assimilation and predictability. Cambridge University Press, 2003. [PH06] PALMER T., HAGEDORN R.: Predictability of weather and climate. Cambridge University Press, 2006. [HW06] HAMILL T. M., WHITAKER J. S.: Probabilistic quantitative precipitation forecasts based on reforecast analogs: Theory and application. Monthly Weather Review 134, 11, 3209-3229, 2006. [DE06] DEITRICK S., EDSALL R.: The influence of uncertainty visualization on decision making: An empirical evaluation. Springer, 2006. [KMS08] KEIM D. A., MANSMANN F., SCHNEIDEWIND J., THOMAS J., ZIEGLER H.: Visual analytics: Scope and challenges. Springer, 2008.
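A toy illustration of the reforecast-analog idea underlying RAR is sketched below: for a given grid point, the historical forecasts most similar to the current forecast are located, and the probability of exceeding a precipitation threshold is estimated from their verifying observations. The feature representation, analog count, and threshold are assumptions, not the Hamill and Whitaker implementation.

```python
# Toy analog-based exceedance probability (illustration only).
import numpy as np

def analog_probability(current_fcst, past_fcsts, past_obs,
                       n_analogs=50, threshold=1.0):
    """current_fcst: 1-D feature vector of the current forecast at a grid point.
    past_fcsts: (n_days, n_features) reforecast archive at that point.
    past_obs: (n_days,) verifying observations (e.g. 24-h precipitation, mm).
    Returns an estimate of P(obs > threshold) from the closest analogs."""
    dist = np.linalg.norm(past_fcsts - current_fcst, axis=1)
    analog_idx = np.argsort(dist)[:n_analogs]          # most similar past days
    return float(np.mean(past_obs[analog_idx] > threshold))
```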
Super-resolution Microscopy in Plant Cell Imaging.
Komis, George; Šamajová, Olga; Ovečka, Miroslav; Šamaj, Jozef
2015-12-01
Although the development of super-resolution microscopy methods dates back to 1994, relevant applications in plant cell imaging only started to emerge in 2010. Since then, the principal super-resolution methods, including structured-illumination microscopy (SIM), photoactivation localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), and stimulated emission depletion microscopy (STED), have been implemented in plant cell research. However, progress has been limited due to the challenging properties of plant material. Here we summarize the basic principles of existing super-resolution methods and provide examples of applications in plant science. The limitations imposed by the nature of plant material are reviewed and the potential for future applications in plant cell imaging is highlighted. Copyright © 2015 Elsevier Ltd. All rights reserved.
Several air quality forecasting ensembles were created from seven models, running in real-time during the 2006 Texas Air Quality (TEXAQS-II) experiment. These multi-model ensembles incorporated a diverse set of meteorological models, chemical mechanisms, and emission inventories...
Interagency Report: Astrogeology 58, television cartography
Batson, Raymond M.
1973-01-01
The purpose of this paper is to illustrate the processing of digital television pictures into base maps. In this context, a base map is defined as a pictorial representation of planetary surface morphology accurately reproduced on standard map projections. Topographic contour lines, albedo or geologic overprints may be superimposed on these base maps. The compilation of geodetic map controls, the techniques of mosaic compilation, computer processing and airbrush enhancement, and the compilation of contour lines are discussed elsewhere by the originators of these techniques. A bibliography of applicable literature is included for readers interested in more detailed discussions.
Porting Ordinary Applications to Blue Gene/Q Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy
2015-08-31
Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques, sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
An Experimental Study for Effectiveness of Super-Learning Technique at Elementary Level in Pakistan
ERIC Educational Resources Information Center
Shafqat, Hussain; Muhammad, Sarwar; Imran, Yousaf; Naemullah; Inamullah
2010-01-01
The objective of the study was to examine the effectiveness of the super-learning technique of teaching at the elementary level. The study was conducted with 8th grade students at a public sector school. Pre-test and post-test control group designs were used. Experimental and control groups were formed randomly; the experimental group (N = 62),…
Bayesian ensemble refinement by replica simulations and reweighting.
Hummer, Gerhard; Köfinger, Jürgen
2015-12-28
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
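A minimal sketch of the maximum-entropy-style reweighting idea underlying the EROS/Bayesian formulation is given below: weights over discrete conformers are optimized to fit ensemble-averaged observables while penalizing departure from reference weights. The confidence parameter theta and variable names are assumptions, and this is not the authors' code.

```python
# Hedged sketch of ensemble reweighting against ensemble-averaged data.
import numpy as np
from scipy.optimize import minimize

def reweight(obs_calc, obs_exp, sigma, w0, theta=1.0):
    """obs_calc: (n_conf, n_obs) per-conformer observables.
    obs_exp, sigma: (n_obs,) experimental averages and uncertainties.
    w0: (n_conf,) reference (prior) weights.  Returns optimized weights."""
    def neg_posterior(logw):
        w = np.exp(logw - logw.max())
        w /= w.sum()
        avg = w @ obs_calc                               # ensemble averages
        chi2 = np.sum(((avg - obs_exp) / sigma) ** 2)    # fit to data
        rel_ent = np.sum(w * np.log(w / w0 + 1e-300))    # distance from prior
        return 0.5 * chi2 + theta * rel_ent
    res = minimize(neg_posterior, np.log(w0), method="L-BFGS-B")
    w = np.exp(res.x - res.x.max())
    return w / w.sum()
```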
Cormary, Benoit; Li, Tao; Liakakos, Nikos; ...
2016-06-14
The molecular and ensemble dynamics for the growth of hierarchical supercrystals of cobalt nanorods have been studied by in situ tandem X-ray Absorption Spectroscopy – Small Angle X-ray Scattering (XAS-SAXS). The super-crystals were obtained by reducing a Co(II) precursor under H2 in the presence of a long chain amine and a long chain carboxylic acid. Complementary time-dependent ex situ TEM studies were also performed. The experimental data provide critical insights into the nanorod growth mechanism, and unequivocal evidence for a concerted growth-organization process. Nanorod formation involves cobalt nucleation, a fast atom-by-atom anisotropic growth and a slower oriented attachment process that continues well after cobalt reduction is complete. As a result, smectic-like ordering of the nanorods appears very early in the process, as soon as nanoparticle elongation appears, and nanorod growth takes place inside organized super-lattices, which can be regarded as mesocrystals.
Exploring the calibration of a wind forecast ensemble for energy applications
NASA Astrophysics Data System (ADS)
Heppelmann, Tobias; Ben Bouallegue, Zied; Theis, Susanne
2015-04-01
In the German research project EWeLiNE, Deutscher Wetterdienst (DWD) and the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES) are collaborating with three German Transmission System Operators (TSO) in order to provide the TSOs with improved probabilistic power forecasts. Probabilistic power forecasts are derived from probabilistic weather forecasts, themselves derived from ensemble prediction systems (EPS). Since the considered raw ensemble wind forecasts suffer from underdispersiveness and bias, calibration methods are developed for the correction of the model bias and the ensemble spread bias. The overall aim is to improve the ensemble forecasts such that the uncertainty of the possible weather development is depicted by the ensemble spread from the first forecast hours. Additionally, the ensemble members after calibration should remain physically consistent scenarios. We focus on probabilistic hourly wind forecasts with a horizon of 21 h delivered by the convection-permitting high-resolution ensemble system COSMO-DE-EPS, which became operational at DWD in 2012. The ensemble consists of 20 ensemble members driven by four different global models. The model area includes the whole of Germany and parts of Central Europe with a horizontal resolution of 2.8 km and a vertical resolution of 50 model levels. For verification we use wind mast measurements at around 100 m height, which corresponds to the hub height of wind energy plants that belong to wind farms within the model area. Calibration of the ensemble forecasts can be performed by different statistical methods applied to the raw ensemble output. Here, we explore local bivariate Ensemble Model Output Statistics at individual sites and quantile regression with different predictors. Applying different methods, we already show an improvement of ensemble wind forecasts from COSMO-DE-EPS for energy applications. In addition, an ensemble copula coupling approach transfers the time-dependencies of the raw ensemble to the calibrated ensemble. The calibrated wind forecasts are evaluated first with univariate probabilistic scores and additionally with diagnostics of wind ramps in order to assess the time-consistency of the calibrated ensemble members.
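As a simplified illustration of the Ensemble Model Output Statistics idea mentioned above, the sketch below fits a univariate nonhomogeneous Gaussian regression for hub-height wind speed: the predictive distribution is N(a + b·ens_mean, c + d·ens_var), fitted by minimizing the negative log-likelihood. This is not the project's bivariate implementation; parameters and starting values are assumptions.

```python
# Simple univariate EMOS-style calibration sketch (illustration only).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_emos(ens_mean, ens_var, obs):
    """Fit coefficients (a, b, c, d) of N(a + b*ens_mean, c + d*ens_var)."""
    def nll(params):
        a, b, c, d = params
        mu = a + b * ens_mean
        var = np.maximum(c + d * ens_var, 1e-6)          # keep variance positive
        return -np.sum(norm.logpdf(obs, loc=mu, scale=np.sqrt(var)))
    return minimize(nll, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

def emos_predict(params, ens_mean, ens_var):
    a, b, c, d = params
    return a + b * ens_mean, np.sqrt(np.maximum(c + d * ens_var, 1e-6))
```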
Re-scan confocal microscopy: scanning twice for better resolution.
De Luca, Giulia M R; Breedijk, Ronald M P; Brandt, Rick A J; Zeelenberg, Christiaan H C; de Jong, Babette E; Timmermans, Wendy; Azar, Leila Nahidi; Hoebe, Ron A; Stallinga, Sjoerd; Manders, Erik M M
2013-01-01
We present a new super-resolution technique, Re-scan Confocal Microscopy (RCM), based on standard confocal microscopy extended with an optical (re-scanning) unit that projects the image directly onto a CCD camera. This new microscope has improved lateral resolution and strongly improved sensitivity while maintaining the sectioning capability of a standard confocal microscope. This simple technology is typically useful for biological applications where the combination of high resolution and high sensitivity is required.
Ensemble codes involving hippocampal neurons are at risk during delayed performance tests.
Hampson, R E; Deadwyler, S A
1996-11-26
Multielectrode recording techniques were used to record ensemble activity from 10 to 16 simultaneously active CA1 and CA3 neurons in the rat hippocampus during performance of a spatial delayed-nonmatch-to-sample task. Extracted sources of variance were used to assess the nature of two different types of errors that accounted for 30% of total trials. The two types of errors included ensemble "miscodes" of sample phase information and errors associated with delay-dependent corruption or disappearance of sample information at the time of the nonmatch response. Statistical assessment of trial sequences and associated "strength" of hippocampal ensemble codes revealed that miscoded error trials always followed delay-dependent error trials in which encoding was "weak," indicating that the two types of errors were "linked." It was determined that the occurrence of weakly encoded, delay-dependent error trials initiated an ensemble encoding "strategy" that increased the chances of being correct on the next trial and avoided the occurrence of further delay-dependent errors. Unexpectedly, the strategy involved "strongly" encoding response position information from the prior (delay-dependent) error trial and carrying it forward to the sample phase of the next trial. This produced a miscode type error on trials in which the "carried over" information obliterated encoding of the sample phase response on the next trial. Application of this strategy, irrespective of outcome, was sufficient to reorient the animal to the proper between trial sequence of response contingencies (nonmatch-to-sample) and boost performance to 73% correct on subsequent trials. The capacity for ensemble analyses of strength of information encoding combined with statistical assessment of trial sequences therefore provided unique insight into the "dynamic" nature of the role hippocampus plays in delay type memory tasks.
Social Validity of Behavioral Practices in the Treatment of Autism--A Review of the "Super Nanny"
ERIC Educational Resources Information Center
King, Melissa J.; Valdovinos, Maria G.
2009-01-01
This study assessed the social validity of behavioral techniques (i.e., pivotal response treatment) used with a child diagnosed with autism as viewed on an episode of the "Super Nanny" [Frost, J. (Host). (2005). Facente family [television series episode]. In N. Powell (Producer), "Super Nanny". New York: American Broadcasting Companies, Inc.].…
EFS: an ensemble feature selection tool implemented as R-package and web-application.
Neumann, Ursula; Genze, Nikita; Heider, Dominik
2017-01-01
Feature selection methods aim at identifying a subset of features that improve the prediction performance of subsequent classification models and thereby also simplify their interpretability. Preceding studies demonstrated that single feature selection methods can have specific biases, whereas an ensemble feature selection has the advantage to alleviate and compensate for these biases. The software EFS (Ensemble Feature Selection) makes use of multiple feature selection methods and combines their normalized outputs to a quantitative ensemble importance. Currently, eight different feature selection methods have been integrated in EFS, which can be used separately or combined in an ensemble. EFS identifies relevant features while compensating specific biases of single methods due to an ensemble approach. Thereby, EFS can improve the prediction accuracy and interpretability in subsequent binary classification models. EFS can be downloaded as an R-package from CRAN or used via a web application at http://EFS.heiderlab.de.
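The following is a small sketch of the ensemble feature selection idea (not the EFS R code itself): importances from several selectors are normalized to [0, 1] and summed into a quantitative ensemble importance. The choice of three selectors and the min-max normalization are assumptions.

```python
# Sketch: combine normalized importances from several feature selectors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

def ensemble_feature_importance(X, y, seed=0):
    scores = []
    scores.append(np.abs(LogisticRegression(max_iter=1000).fit(X, y).coef_[0]))
    scores.append(RandomForestClassifier(random_state=seed).fit(X, y)
                  .feature_importances_)
    scores.append(mutual_info_classif(X, y, random_state=seed))
    # Min-max normalize each method's scores, then aggregate.
    norm = [(s - s.min()) / (s.max() - s.min() + 1e-12) for s in scores]
    return np.sum(norm, axis=0)       # higher = more consistently important
```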
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Anagnostou, E. N.; Hartman, B.; Kallos, G. B.
2015-12-01
Weather prediction accuracy has become very important for the Northeast U.S. given the devastating effects of extreme weather events in recent years. Weather forecasting systems are used to build strategies that prevent catastrophic losses for human lives and the environment. Concurrently, weather forecast tools and techniques have evolved with improved forecast skill as numerical prediction techniques are strengthened by increased super-computing resources. In this study, we examine the combination of two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) by utilizing a Bayesian regression approach to improve the prediction of extreme weather events for the Northeast U.S. The basic concept behind the Bayesian regression approach is to take advantage of the strengths of the two atmospheric modeling systems and, similar to the multi-model ensemble approach, limit their weaknesses, which are related to systematic and random errors in the numerical prediction of physical processes. The first part of this study is focused on retrospective simulations of seventeen storms that affected the region in the period 2004-2013. Optimal variances are estimated by minimizing the root mean square error and are applied to out-of-sample weather events. The applicability and usefulness of this approach are demonstrated by conducting an error analysis based on in-situ observations from meteorological stations of the National Weather Service (NWS) for wind speed and wind direction, and on NCEP Stage IV precipitation data mosaicked from the regional multi-sensor analyses. The preliminary results indicate a significant improvement in the statistical metrics of the modeled-observed pairs for meteorological variables using various combinations of sixteen of the events as predictors of the seventeenth. This presentation will illustrate the implemented methodology and the obtained results for wind speed, wind direction and precipitation, as well as set out the research steps that will be followed in the future.
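A much-simplified sketch of combining two models with error-variance-based weights estimated on training storms is shown below. This is an illustration of the general principle, not the Bayesian regression used in the study; function names are assumptions.

```python
# Hedged sketch: weight two models by inverse error variance from training events.
import numpy as np

def inverse_variance_weights(errors_a, errors_b):
    """errors_*: model-minus-observation arrays over the training storms."""
    var_a, var_b = np.nanvar(errors_a), np.nanvar(errors_b)
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    return w_a, 1.0 - w_a

def combine(forecast_a, forecast_b, w_a, w_b):
    """Apply the training-derived weights to a new (out-of-sample) event."""
    return w_a * forecast_a + w_b * forecast_b
```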
NASA Astrophysics Data System (ADS)
Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.
2015-12-01
A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25~5 km horizontal grid spacings. The main advantage of the CRM is that it can allow explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products due to large data volume (~10TB) and complexity of CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes (1) A SCL data model enables various CRM simulation outputs in NetCDF, including the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) model, to be accessed and processed by Hadoop, (2) A parallel NetCDF-to-CSV converter supports NU-WRF and GCE model outputs, (3) A technique visualizes Hadoop-resident data with IDL, (4) A technique subsets Hadoop-resident data, compliant to the SCL data model, with HIVE or Impala via HUE's Web interface, (5) A prototype enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a local computer, and inter-compare CRM output and data with GCE and NU-WRF.
New developments in super-resolution for GaoFen-4
NASA Astrophysics Data System (ADS)
Li, Feng; Fu, Jie; Xin, Lei; Liu, Yuhong; Liu, Zhijia
2017-10-01
In this paper, the application of super resolution (SR, restoring a high spatial resolution image from a series of low resolution images of the same scene) techniques to remote sensing images from GaoFen(GF)-4, the most advanced geostationary-orbit earth observing satellite in China, is investigated and tested. SR has been a hot research area for decades, but one of the barriers to applying SR in the remote sensing community is the time slot between the acquisitions of the low resolution (LR) images. In general, the longer the time slot, the less reliable the reconstruction. GF-4 has the unique advantage of capturing a sequence of LR images of the same region within minutes, i.e. working as a staring camera from the point of view of SR. This is the first experiment in applying super resolution to a sequence of low resolution images captured by GF-4 within a short time period. In this paper, we use Maximum a Posteriori (MAP) estimation to solve the ill-conditioned problem of SR. Both the wavelet transform and the curvelet transform are used to set up a sparse prior for remote sensing images. By combining several images of both the BeiJing and DunHuang regions captured by GF-4, our method can improve spatial resolution both visually and numerically. Experimental tests show that much detail that cannot be observed in the captured LR images can be seen in the super-resolved high resolution (HR) images. To help the evaluation, Google Earth imagery can also be referenced. Moreover, our experimental tests also show that the higher the temporal resolution, the better the HR images can be resolved. The study illustrates that the application of SR to geostationary-orbit based earth observation data is feasible and worthwhile, and it holds potential application for all other geostationary-orbit based earth observing systems.
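A schematic form of a MAP objective with a sparsity prior, as described above, can be written as follows. The notation (y_k for the registered LR frames, D for decimation, H for blur, W for the wavelet/curvelet analysis operator, lambda for the regularization weight) is an assumption; the exact degradation model used for GF-4 may differ.

```latex
% Schematic MAP objective with a sparse transform-domain prior (notation assumed):
\hat{x} = \arg\min_{x} \; \sum_{k} \left\| y_k - D\,H\,x \right\|_2^2
          \;+\; \lambda \left\| W x \right\|_1 .
```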
Improved vocal tract reconstruction and modeling using an image super-resolution technique.
Zhou, Xinhui; Woo, Jonghye; Stone, Maureen; Prince, Jerry L; Espy-Wilson, Carol Y
2013-06-01
Magnetic resonance imaging has been widely used in speech production research. Often only one image stack (sagittal, axial, or coronal) is used for vocal tract modeling. As a result, complementary information from other available stacks is not utilized. To overcome this, a recently developed super-resolution technique was applied to integrate three orthogonal low-resolution stacks into one isotropic volume. The results on vowels show that the super-resolution volume produces better vocal tract visualization than any of the low-resolution stacks. Its derived area functions generally produce formant predictions closer to the ground truth, particularly for those formants sensitive to area perturbations at constrictions.
Super-spiral structures of bi-stable spiral waves and a new instability of spiral waves
NASA Astrophysics Data System (ADS)
Gao, Jian; Wang, Qun; Lü, Huaping
2017-10-01
A new type of super-spiral structure and a new instability of spiral waves are investigated in numerical simulations. Before the period-doubling bifurcation of this system, the super-spiral structure occurs, caused by phase-trajectory selection. This type of super-spiral structure is totally different from the super-spiral structures observed earlier. Although the spiral rotates, the super-spiral structure is stationary. Notably, full turbulence of the system sets in suddenly, without a gradual process of instability. The forming principle of this instability may have applications in cardiology.
Improving wave forecasting by integrating ensemble modelling and machine learning
NASA Astrophysics Data System (ADS)
O'Donncha, F.; Zhang, Y.; James, S. C.
2017-12-01
Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
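The learning-aggregation step described above can be illustrated with a simple exponentially weighted average: each ensemble member's weight decays with its accumulated past error, and the aggregated forecast is the weighted mean. The learning-rate parameter eta and the loss are assumptions; the study's aggregation rule may differ.

```python
# Illustrative exponentially weighted forecast aggregation.
import numpy as np

def aggregate(member_forecasts, past_member_errors, eta=0.5):
    """member_forecasts: (n_members,) current forecasts (e.g. wave height, m).
    past_member_errors: (n_members,) cumulative squared errors so far.
    Returns (aggregated forecast, member weights)."""
    w = np.exp(-eta * np.asarray(past_member_errors))    # worse history -> less weight
    w /= w.sum()
    return float(np.dot(w, member_forecasts)), w
```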
Example-Based Super-Resolution Fluorescence Microscopy.
Jia, Shu; Han, Boran; Kutz, J Nathan
2018-04-23
Capturing biological dynamics with high spatiotemporal resolution demands the advancement in imaging technologies. Super-resolution fluorescence microscopy offers spatial resolution surpassing the diffraction limit to resolve near-molecular-level details. While various strategies have been reported to improve the temporal resolution of super-resolution imaging, all super-resolution techniques are still fundamentally limited by the trade-off associated with the longer image acquisition time that is needed to achieve higher spatial information. Here, we demonstrated an example-based, computational method that aims to obtain super-resolution images using conventional imaging without increasing the imaging time. With a low-resolution image input, the method provides an estimate of its super-resolution image based on an example database that contains super- and low-resolution image pairs of biological structures of interest. The computational imaging of cellular microtubules agrees approximately with the experimental super-resolution STORM results. This new approach may offer potential improvements in temporal resolution for experimental super-resolution fluorescence microscopy and provide a new path for large-data aided biomedical imaging.
Impact of laser power density on tribological properties of Pulsed Laser Deposited DLC films
NASA Astrophysics Data System (ADS)
Gayathri, S.; Kumar, N.; Krishnan, R.; AmirthaPandian, S.; Ravindran, T. R.; Dash, S.; Tyagi, A. K.; Sridharan, M.
2013-12-01
Fabrication of wear-resistant and low-friction carbon films on engineered substrates is considered a challenging task for expanding the applications of diamond-like carbon (DLC) films. In this paper, the pulsed laser deposition (PLD) technique is used to deposit DLC films on two technologically important classes of substrates, silicon and AISI 304 stainless steel. Laser power density is one of the important parameters used to tailor the fraction of sp2 bonded amorphous carbon (a-C) and tetrahedral amorphous carbon (ta-C) made up of sp3 domains in the DLC film. The I(D)/I(G) ratio decreases with increasing laser power density, which is associated with a decrease in the a-C/ta-C ratio. The fraction of these chemical components is quantitatively analyzed by EELS, which is well supported by the data obtained from Raman spectroscopy. The tribological properties of the DLC are associated with the chemical structure of the film. A super-low friction coefficient of 0.003 is obtained when the film is predominantly constituted by a-C and sp2 fractions embedded within clusters of ta-C. Such a film with a super-low friction coefficient was measured when deposited on steel at a low laser power density of 2 GW/cm2. The super-low friction mechanism is explained by the low sliding resistance of a-C/sp2 and ta-C clusters. The combination of excellent physical and mechanical properties, wear resistance and a super-low friction coefficient makes DLC films desirable for engineering applications. Moreover, the high friction coefficient of DLC films deposited at 9 GW/cm2 is related to widening of the intergrain distance caused by transformation from the sp2 to the sp3 hybridized structure.
Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics
NASA Technical Reports Server (NTRS)
Zhu, Yanqui; Cohn, Stephen E.; Todling, Ricardo
1999-01-01
The Kalman filter is the optimal filter in the presence of known Gaussian error statistics and linear dynamics. Filter extension to nonlinear dynamics is nontrivial in the sense of appropriately representing high order moments of the statistics. Monte Carlo, ensemble-based, methods have been advocated as the methodology for representing high order moments without any questionable closure assumptions. Investigation along these lines has been conducted for highly idealized dynamics such as the strongly nonlinear Lorenz model as well as more realistic models of the ocean and atmosphere. A few relevant issues in this context are related to the necessary number of ensemble members to properly represent the error statistics and the necessary modifications of the usual filter equations to allow for correct update of the ensemble members. The ensemble technique has also been applied to the problem of smoothing, for which similar questions apply. Ensemble smoother examples, however, seem to be quite puzzling in that the resulting state estimates are worse than for their filter analogue. In this study, we use concepts in probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use the Lorenz model to test and compare the behavior of a variety of implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of the feasibility of these techniques for large data assimilation problems will be given at the time of the conference.
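For reference, the sketch below shows a minimal stochastic (perturbed-observation) ensemble Kalman filter analysis step of the kind commonly tested on the Lorenz model; it is a generic illustration of ensemble filtering, not the specific implementations compared in the study.

```python
# Minimal stochastic EnKF analysis step (perturbed observations).
import numpy as np

def enkf_analysis(ensemble, obs, H, obs_err_std, rng):
    """ensemble: (n_members, n_state) forecast ensemble.
    obs: (n_obs,) observations; H: (n_obs, n_state) linear observation operator."""
    n_members = ensemble.shape[0]
    Xf = ensemble - ensemble.mean(axis=0)               # forecast anomalies
    Yf = Xf @ H.T                                       # anomalies in obs space
    Pyy = Yf.T @ Yf / (n_members - 1) + np.eye(len(obs)) * obs_err_std**2
    Pxy = Xf.T @ Yf / (n_members - 1)
    K = Pxy @ np.linalg.inv(Pyy)                        # Kalman gain
    perturbed = obs + rng.normal(0.0, obs_err_std, (n_members, len(obs)))
    innovations = perturbed - ensemble @ H.T
    return ensemble + innovations @ K.T                 # analysis ensemble
```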
Gavrishchaka, Valeriy; Senyukova, Olga; Davis, Kristina
2015-01-01
Previously, we have proposed to use complementary complexity measures discovered by boosting-like ensemble learning for the enhancement of quantitative indicators dealing with necessarily short physiological time series. We have confirmed robustness of such multi-complexity measures for heart rate variability analysis with the emphasis on detection of emerging and intermittent cardiac abnormalities. Recently, we presented preliminary results suggesting that such ensemble-based approach could be also effective in discovering universal meta-indicators for early detection and convenient monitoring of neurological abnormalities using gait time series. Here, we argue and demonstrate that these multi-complexity ensemble measures for gait time series analysis could have significantly wider application scope ranging from diagnostics and early detection of physiological regime change to gait-based biometrics applications.
Adiabatic passage in photon-echo quantum memories
NASA Astrophysics Data System (ADS)
Demeter, Gabor
2013-11-01
Photon-echo-based quantum memories use inhomogeneously broadened, optically thick ensembles of absorbers to store a weak optical signal and employ various protocols to rephase the atomic coherences for information retrieval. We study the application of two consecutive, frequency-chirped control pulses for coherence rephasing in an ensemble with a “natural” inhomogeneous broadening. Although propagation effects distort the two control pulses differently, chirped pulses that drive adiabatic passage can rephase atomic coherences in an optically thick storage medium. Combined with spatial phase-mismatching techniques to prevent primary echo emission, coherences can be rephased around the ground state to achieve secondary echo emission with close to unit efficiency. Potential advantages over similar schemes working with π pulses include greater potential signal fidelity, reduced noise due to spontaneous emission, and better capability for the storage of multiple memory channels.
The Super Tuesday Outbreak: Forecast Sensitivities to Single-Moment Microphysics Schemes
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.; Lapenta, William M.
2008-01-01
Forecast precipitation and radar characteristics are used by operational centers to guide the issuance of advisory products. As operational numerical weather prediction is performed at increasingly finer spatial resolution, convective precipitation traditionally represented by sub-grid scale parameterization schemes is now being determined explicitly through single- or multi-moment bulk water microphysics routines. Gains in forecasting skill are expected through improved simulation of clouds and their microphysical processes. High resolution model grids and advanced parameterizations are now available through steady increases in computer resources. As with any parameterization, their reliability must be measured through performance metrics, with errors noted and targeted for improvement. Furthermore, the use of these schemes within an operational framework requires an understanding of limitations and an estimate of biases so that forecasters and model development teams can be aware of potential errors. The National Severe Storms Laboratory (NSSL) Spring Experiments have produced daily, high resolution forecasts used to evaluate forecast skill among an ensemble with varied physical parameterizations and data assimilation techniques. In this research, high resolution forecasts of the 5-6 February 2008 Super Tuesday Outbreak are replicated using the NSSL configuration in order to evaluate two components of simulated convection on a large domain: sensitivities of quantitative precipitation forecasts to assumptions within a single-moment bulk water microphysics scheme, and to determine if these schemes accurately depict the reflectivity characteristics of well-simulated, organized, cold frontal convection. As radar returns are sensitive to the amount of hydrometeor mass and the distribution of mass among variably sized targets, radar comparisons may guide potential improvements to a single-moment scheme. In addition, object-based verification metrics are evaluated for their utility in gauging model performance and QPF variability.
Online breakage detection of multitooth tools using classifier ensembles for imbalanced data
NASA Astrophysics Data System (ADS)
Bustillo, Andrés; Rodríguez, Juan J.
2014-12-01
Cutting tool breakage detection is an important task, due to its economic impact on mass production lines in the automobile industry. This task presents a central limitation: real data-sets are extremely imbalanced because breakage occurs in very few cases compared with normal operation of the cutting process. In this paper, we present an analysis of different data-mining techniques applied to the detection of insert breakage in multitooth tools. The analysis applies only one experimental variable: the electrical power consumption of the tool drive. This restriction profiles real industrial conditions more accurately than other physical variables, such as acoustic or vibration signals, which are not so easily measured. Many efforts have been made to design a method that is able to identify breakages with a high degree of reliability within a short period of time. The solution is based on classifier ensembles for imbalanced data-sets. Classifier ensembles are combinations of classifiers, which in many situations are more accurate than individual classifiers. Six different base classifiers are tested: Decision Trees, Rules, Naïve Bayes, Nearest Neighbour, Multilayer Perceptrons and Logistic Regression. Three different balancing strategies are tested with each of the classifier ensembles and compared to their performance with the original data-set: Synthetic Minority Over-Sampling Technique (SMOTE), undersampling and a combination of SMOTE and undersampling. To identify the most suitable data-mining solution, Receiver Operating Characteristics (ROC) graph and Recall-precision graph are generated and discussed. The performance of logistic regression ensembles on the balanced data-set using the combination of SMOTE and undersampling turned out to be the most suitable technique. Finally a comparison using industrial performance measures is presented, which concludes that this technique is also more suited to this industrial problem than the other techniques presented in the bibliography.
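One of the compared configurations can be sketched as below: SMOTE followed by random undersampling, feeding a bagged logistic-regression ensemble, evaluated with ROC AUC (via imbalanced-learn and scikit-learn). Parameter values are assumptions, not those tuned in the study.

```python
# Sketch: SMOTE + undersampling feeding a bagged logistic-regression ensemble.
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def build_pipeline(seed=0):
    return Pipeline(steps=[
        ("smote", SMOTE(random_state=seed)),            # oversample minority class
        ("under", RandomUnderSampler(random_state=seed)),
        ("ensemble", BaggingClassifier(LogisticRegression(max_iter=1000),
                                       n_estimators=50, random_state=seed)),
    ])

# Usage sketch: auc = cross_val_score(build_pipeline(), X, y, scoring="roc_auc", cv=5)
```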
Muhlestein, Whitney E; Akagi, Dallin S; Kallos, Justiss A; Morone, Peter J; Weaver, Kyle D; Thompson, Reid C; Chambless, Lola B
2018-04-01
Objective: Machine learning (ML) algorithms are powerful tools for predicting patient outcomes. This study pilots a novel approach to algorithm selection and model creation using prediction of discharge disposition following meningioma resection as a proof of concept. Materials and Methods: A diversity of ML algorithms were trained on a single-institution database of meningioma patients to predict discharge disposition. Algorithms were ranked by predictive power and top performers were combined to create an ensemble model. The final ensemble was internally validated on never-before-seen data to demonstrate generalizability. The predictive power of the ensemble was compared with a logistic regression. Further analyses were performed to identify how important variables impact the ensemble. Results: Our ensemble model predicted disposition significantly better than a logistic regression (area under the curve of 0.78 and 0.71, respectively, p = 0.01). Tumor size, presentation at the emergency department, body mass index, convexity location, and preoperative motor deficit most strongly influence the model, though the independent impact of individual variables is nuanced. Conclusion: Using a novel ML technique, we built a guided ML ensemble model that predicts discharge destination following meningioma resection with greater predictive power than a logistic regression, and that provides greater clinical insight than a univariate analysis. These techniques can be extended to predict many other patient outcomes of interest.
Analyses and forecasts of a tornadic supercell outbreak using a 3DVAR system ensemble
NASA Astrophysics Data System (ADS)
Zhuang, Zhaorong; Yussouf, Nusrat; Gao, Jidong
2016-05-01
As part of NOAA's "Warn-On-Forecast" initiative, a convective-scale data assimilation and prediction system was developed using the WRF-ARW model and ARPS 3DVAR data assimilation technique. The system was then evaluated using retrospective short-range ensemble analyses and probabilistic forecasts of the tornadic supercell outbreak event that occurred on 24 May 2011 in Oklahoma, USA. A 36-member multi-physics ensemble system provided the initial and boundary conditions for a 3-km convective-scale ensemble system. Radial velocity and reflectivity observations from four WSR-88Ds were assimilated into the ensemble using the ARPS 3DVAR technique. Five data assimilation and forecast experiments were conducted to evaluate the sensitivity of the system to data assimilation frequencies, in-cloud temperature adjustment schemes, and fixed- and mixed-microphysics ensembles. The results indicated that the experiment with 5-min assimilation frequency quickly built up the storm and produced a more accurate analysis compared with the 10-min assimilation frequency experiment. The predicted vertical vorticity from the moist-adiabatic in-cloud temperature adjustment scheme was larger in magnitude than that from the latent heat scheme. Cycled data assimilation yielded good forecasts, where the ensemble probability of high vertical vorticity matched reasonably well with the observed tornado damage path. Overall, the results of the study suggest that the 3DVAR analysis and forecast system can provide reasonable forecasts of tornadic supercell storms.
SVM and SVM Ensembles in Breast Cancer Prediction.
Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong
2017-01-01
Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers.
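The two ensemble configurations highlighted above can be sketched in scikit-learn as bagging of linear-kernel SVMs and boosting of RBF-kernel SVMs; the hyperparameters are illustrative assumptions.

```python
# Sketch: SVM ensembles via bagging (linear kernel) and boosting (RBF kernel).
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier

def linear_svm_bagging(seed=0):
    return BaggingClassifier(SVC(kernel="linear", probability=True),
                             n_estimators=10, random_state=seed)

def rbf_svm_boosting(seed=0):
    # SAMME boosting works with any classifier supporting sample weights, as SVC does.
    return AdaBoostClassifier(SVC(kernel="rbf", probability=True),
                              n_estimators=10, algorithm="SAMME",
                              random_state=seed)
```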
Prediction of drug synergy in cancer using ensemble-based machine learning techniques
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder
2018-04-01
Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can be developed as a pre-processing tool for therapeutic successes. Examination of different drug-drug interactions can be done via the drug synergy score. Efficient regression-based machine learning approaches are needed to minimize the prediction errors. Numerous machine learning techniques such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to meet the requirement mentioned above. However, these techniques individually do not provide significant accuracy in drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques have been implemented by considering the drug synergy data. Based on the accuracy of each model, four techniques with high accuracy are selected to develop an ensemble-based machine learning model. These models are Random forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning method (GFS.GCCL), Adaptive-Network-Based Fuzzy Inference System (ANFIS) and Dynamic Evolving Neural-Fuzzy Inference System method (DENFIS). Ensembling is achieved by evaluating the biased weighted aggregation (i.e. assigning more weight to models with a higher prediction score) of the data predicted by the selected models. The proposed and existing machine learning techniques have been evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms others in terms of accuracy, root mean square error and coefficient of correlation.
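The biased weighted aggregation step can be sketched as follows: each selected model's predictions are weighted by its validation accuracy (here, inverse RMSE is used as a stand-in score) so that better models contribute more. Variable names and the exact weighting rule are assumptions.

```python
# Sketch: bias the ensemble towards models with better validation scores.
import numpy as np

def biased_weighted_aggregation(predictions, val_rmse):
    """predictions: dict model_name -> array of predicted synergy scores.
    val_rmse: dict model_name -> validation RMSE of that model."""
    weights = {m: 1.0 / val_rmse[m] for m in predictions}   # lower error -> more weight
    total = sum(weights.values())
    weights = {m: w / total for m, w in weights.items()}
    return sum(weights[m] * predictions[m] for m in predictions)
```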
New learning based super-resolution: use of DWT and IGMRF prior.
Gajjar, Prakash P; Joshi, Manjunath V
2010-05-01
In this paper, we propose a new learning-based approach for super-resolving an image captured at low spatial resolution. Given the low spatial resolution test image and a database consisting of low and high spatial resolution images, we obtain super-resolution for the test image. We first obtain an initial high-resolution (HR) estimate by learning the high-frequency details from the available database. A new discrete wavelet transform (DWT) based approach is proposed for learning that uses a set of low-resolution (LR) images and their corresponding HR versions. Since the super-resolution is an ill-posed problem, we obtain the final solution using a regularization framework. The LR image is modeled as the aliased and noisy version of the corresponding HR image, and the aliasing matrix entries are estimated using the test image and the initial HR estimate. The prior model for the super-resolved image is chosen as an Inhomogeneous Gaussian Markov random field (IGMRF) and the model parameters are estimated using the same initial HR estimate. A maximum a posteriori (MAP) estimation is used to arrive at the cost function which is minimized using a simple gradient descent approach. We demonstrate the effectiveness of the proposed approach by conducting the experiments on gray scale as well as on color images. The method is compared with the standard interpolation technique and also with existing learning-based approaches. The proposed approach can be used in applications such as wildlife sensor networks, remote surveillance where the memory, the transmission bandwidth, and the camera cost are the main constraints.
Exploring diversity in ensemble classification: Applications in large area land cover mapping
NASA Astrophysics Data System (ADS)
Mellor, Andrew; Boukir, Samia
2017-07-01
Ensemble classifiers, such as random forests, are now commonly applied in the field of remote sensing, and have been shown to perform better than single classifier systems, resulting in reduced generalisation error. Diversity across the members of ensemble classifiers is known to have a strong influence on classification performance - whereby classifier errors are uncorrelated and more uniformly distributed across ensemble members. The relationship between ensemble diversity and classification performance has not yet been fully explored in the fields of information science and machine learning and has never been examined in the field of remote sensing. This study is a novel exploration of ensemble diversity and its link to classification performance, applied to a multi-class canopy cover classification problem using random forests and multisource remote sensing and ancillary GIS data, across seven million hectares of diverse dry-sclerophyll dominated public forests in Victoria Australia. A particular emphasis is placed on analysing the relationship between ensemble diversity and ensemble margin - two key concepts in ensemble learning. The main novelty of our work is on boosting diversity by emphasizing the contribution of lower margin instances used in the learning process. Exploring the influence of tree pruning on diversity is also a new empirical analysis that contributes to a better understanding of ensemble performance. Results reveal insights into the trade-off between ensemble classification accuracy and diversity, and through the ensemble margin, demonstrate how inducing diversity by targeting lower margin training samples is a means of achieving better classifier performance for more difficult or rarer classes and reducing information redundancy in classification problems. Our findings inform strategies for collecting training data and designing and parameterising ensemble classifiers, such as random forests. This is particularly important in large area remote sensing applications, for which training data is costly and resource intensive to collect.
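As an illustration of the ensemble margin concept discussed above, the sketch below computes an unsupervised margin for a fitted random forest: the normalized difference between the two largest per-tree vote counts for each sample, where low-margin samples correspond to the "difficult" instances. This is a generic formulation, not the authors' specific margin definition.

```python
# Sketch: unsupervised ensemble margin from per-tree votes of a random forest.
import numpy as np

def ensemble_margin(forest, X):
    """forest: a fitted sklearn RandomForestClassifier; X: samples."""
    votes = np.stack([tree.predict(X) for tree in forest.estimators_], axis=1)
    n_trees = votes.shape[1]
    margins = []
    for row in votes:
        counts = np.sort(np.bincount(row.astype(int)))[::-1]
        second = counts[1] if len(counts) > 1 else 0
        margins.append((counts[0] - second) / n_trees)   # in [0, 1]
    return np.asarray(margins)
```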
Operating Spin Echo in the Quantum Regime for an Atomic-Ensemble Quantum Memory
NASA Astrophysics Data System (ADS)
Rui, Jun; Jiang, Yan; Yang, Sheng-Jun; Zhao, Bo; Bao, Xiao-Hui; Pan, Jian-Wei
2015-09-01
Spin echo is a powerful technique to extend atomic or nuclear coherence times by overcoming the dephasing due to inhomogeneous broadenings. However, there are disputes about the feasibility of applying this technique to an ensemble-based quantum memory at the single-quanta level. In this experimental study, we find that noise due to imperfections of the rephasing pulses has both intense superradiant and weak isotropic parts. By properly arranging the beam directions and optimizing the pulse fidelities, we successfully manage to operate the spin echo technique in the quantum regime by observing nonclassical photon-photon correlations as well as the quantum behavior of retrieved photons. Our work for the first time demonstrates the feasibility of harnessing the spin echo method to extend the lifetime of ensemble-based quantum memories at the single-quanta level.
Real-Time Fourier Synthesis of Ensembles with Timbral Interpolation
NASA Astrophysics Data System (ADS)
Haken, Lippold
1990-01-01
In Fourier synthesis, natural musical sounds are produced by summing time-varying sinusoids. Sounds are analyzed to find the amplitude and frequency characteristics for their sinusoids; interpolation between the characteristics of several sounds is used to produce intermediate timbres. An ensemble can be synthesized by summing all the sinusoids for several sounds, but in practice it is difficult to perform such computations in real time. To solve this problem on inexpensive hardware, it is useful to take advantage of the masking effects of the auditory system. By avoiding the computations for perceptually unimportant sinusoids, and by employing other computation reduction techniques, a large ensemble may be synthesized in real time on the Platypus signal processor. Unlike existing computation reduction techniques, the techniques described in this thesis do not sacrifice independent fine control over the amplitude and frequency characteristics of each sinusoid.
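A small NumPy sketch of the additive synthesis and timbral interpolation described above, with static partials for brevity (the actual analysis data would be time-varying) and a crude amplitude threshold standing in for perceptual masking; this is not the Platypus implementation, and all values are illustrative.

import numpy as np

def synthesize(partials_a, partials_b, alpha, duration=1.0, sr=44100):
    # Sum sinusoids; interpolate amplitude/frequency between two analysed timbres.
    # partials_*: list of (amplitude, frequency_hz) pairs; alpha in [0, 1] blends a -> b.
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for (a0, f0), (a1, f1) in zip(partials_a, partials_b):
        amp = (1 - alpha) * a0 + alpha * a1
        freq = (1 - alpha) * f0 + alpha * f1
        if amp < 1e-4:          # crude pruning of perceptually weak partials
            continue
        out += amp * np.sin(2 * np.pi * freq * t)
    return out / max(1, len(partials_a))

# e.g. blend two 5-partial spectra halfway between the source timbres
timbre_a = [(1.0, 220 * k) for k in range(1, 6)]
timbre_b = [(1.0 / k, 330 * k) for k in range(1, 6)]
wave = synthesize(timbre_a, timbre_b, alpha=0.5)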
NASA Astrophysics Data System (ADS)
Protopopescu, V.; D'Helon, C.; Barhen, J.
2003-06-01
A constant-time solution of the continuous global optimization problem (GOP) is obtained by using an ensemble algorithm. We show that under certain assumptions, the solution can be guaranteed by mapping the GOP onto a discrete unsorted search problem, whereupon Brüschweiler's ensemble search algorithm is applied. For adequate sensitivities of the measurement technique, the query complexity of the ensemble search algorithm depends linearly on the size of the function's domain. Advantages and limitations of an eventual NMR implementation are discussed.
An introduction to optical super-resolution microscopy for the adventurous biologist
NASA Astrophysics Data System (ADS)
Vangindertael, J.; Camacho, R.; Sempels, W.; Mizuno, H.; Dedecker, P.; Janssen, K. P. F.
2018-04-01
Ever since the inception of light microscopy, the laws of physics have seemingly thwarted every attempt to visualize the processes of life at its most fundamental, sub-cellular, level. The diffraction limit has restricted our view to length scales well above 250 nm and in doing so, severely compromised our ability to gain true insights into many biological systems. Fortunately, continuous advancements in optics, electronics and mathematics have since provided the means to once again make physics work to our advantage. Even though some of the fundamental concepts enabling super-resolution light microscopy have been known for quite some time, practically feasible implementations have long remained elusive. It should therefore not come as a surprise that the 2014 Nobel Prize in Chemistry was awarded to the scientists who, each in their own way, contributed to transforming super-resolution microscopy from a technological tour de force to a staple of the biologist’s toolkit. By overcoming the diffraction barrier, light microscopy could once again be established as an indispensable tool in an age where the importance of understanding life at the molecular level cannot be overstated. This review strives to provide the aspiring life science researcher with an introduction to optical microscopy, starting from the fundamental concepts governing compound and fluorescent confocal microscopy to the current state-of-the-art of super-resolution microscopy techniques and their applications.
Re-scan confocal microscopy: scanning twice for better resolution
De Luca, Giulia M.R.; Breedijk, Ronald M.P.; Brandt, Rick A.J.; Zeelenberg, Christiaan H.C.; de Jong, Babette E.; Timmermans, Wendy; Azar, Leila Nahidi; Hoebe, Ron A.; Stallinga, Sjoerd; Manders, Erik M.M.
2013-01-01
We present a new super-resolution technique, Re-scan Confocal Microscopy (RCM), based on standard confocal microscopy extended with an optical (re-scanning) unit that projects the image directly onto a CCD camera. This new microscope has improved lateral resolution and strongly improved sensitivity while maintaining the sectioning capability of a standard confocal microscope. This simple technology is typically useful for biological applications where the combination of high resolution and high sensitivity is required. PMID:24298422
Schierle, Gabriele S Kaminski; Michel, Claire H; Gasparini, Laura
2016-08-01
Alzheimer's disease (AD) is the main cause of dementia in the elderly population. Over 30 million people worldwide are living with dementia and AD prevalence is projected to increase dramatically in the next two decades. In terms of neuropathology, AD is characterized by two major cerebral hallmarks: extracellular β-amyloid (Aβ) plaques and intracellular Tau inclusions, which start accumulating in the brain 15-20 years before the onset of symptoms. Within this context, the scientific community worldwide is undertaking a wide research effort to detect AD pathology at its earliest, before symptoms appear. Neuroimaging of Aβ by positron emission tomography (PET) is clinically available and is a promising modality for early detection of Aβ pathology and AD diagnosis. Substantive efforts are ongoing to develop advanced imaging techniques for early detection of Tau pathology. Here, we will briefly describe the key features of Tau pathology and its heterogeneity across various neurodegenerative diseases bearing cerebral Tau inclusions (i.e., tauopathies). We will outline the current status of research on Tau-specific PET tracers and their clinical development. Finally, we will discuss the potential application of novel super-resolution and label-free techniques for investigating Tau pathology at the experimental level and their potential application for AD diagnosis. Microsc. Res. Tech. 79:677-683, 2016. © 2016 Wiley Periodicals, Inc.
Supporting lander and rover operation: a novel super-resolution restoration technique
NASA Astrophysics Data System (ADS)
Tao, Yu; Muller, Jan-Peter
2015-04-01
Higher resolution imaging data is always desirable for critical rover engineering operations, such as landing site selection, path planning, and optical localisation. For current Mars missions, 25 cm HiRISE images have been widely used by the MER & MSL engineering teams for rover path planning and location registration/adjustment. However, 25 cm is not a high enough resolution to view individual rocks (≤2 m in size) or visualise the types of sedimentary features that rover onboard cameras might observe. Nevertheless, due to various physical constraints (e.g. telescope size and mass) of the imaging instruments themselves, one needs to trade off spatial resolution and bandwidth. This means that future imaging systems are likely to be limited to resolving features larger than 25 cm. We have developed a novel super-resolution algorithm/pipeline able to restore a higher resolution image from the non-redundant sub-pixel information contained in multiple lower resolution raw images [Tao & Muller 2015]. We demonstrate, with experiments performed using 5-10 overlapping 25 cm HiRISE images for MER-A, MER-B & MSL, the restoration of 5-10 cm super-resolution images that can be directly compared to rover imagery at a range of 5 metres from the rover cameras, but which in our case can be used to visualise features many kilometres away from the actual rover traverse. We show how these super-resolution images, together with image understanding software, can be used to quantify rock size-frequency distributions and measure sedimentary rock layers for several critical sites, with comparisons against rover orthorectified image mosaics demonstrating the value of the super-resolved images for supporting future lander and rover operations. We present the potential of super-resolution for virtual exploration of the ~400 HiRISE areas which have been viewed 5 or more times, and the potential application of this technique to all of the ESA ExoMars Trace Gas Orbiter CaSSIS stereo, multi-angle and colour camera images from 2017 onwards. Acknowledgements: The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 312377 PRoViDE.
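The published pipeline (Tao & Muller 2015) uses a considerably more sophisticated restoration; the following is only a naive shift-and-add illustration of how non-redundant sub-pixel offsets between overlapping frames can populate a finer grid. Frame data, shift values, and the handling of unfilled cells are simplified placeholders.

import numpy as np

def shift_and_add(frames, shifts, factor=4):
    # Naive shift-and-add SR: place LR pixels on a finer grid at their sub-pixel positions
    # (nearest-grid placement; unobserved fine-grid cells are left at zero).
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h)[:, None] + dy) * factor
        xs = (np.arange(w)[None, :] + dx) * factor
        yi = np.clip(np.round(ys).astype(int), 0, h * factor - 1)
        xi = np.clip(np.round(xs).astype(int), 0, w * factor - 1)
        acc[yi, xi] += frame
        weight[yi, xi] += 1.0
    return np.where(weight > 0, acc / np.maximum(weight, 1), 0.0)

# shape demo only: in practice each frame samples the scene at its own sub-pixel offset
rng = np.random.default_rng(0)
scene = rng.random((16, 16))
frames = [scene, scene, scene, scene]
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
sr = shift_and_add(frames, shifts, factor=2)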
Benchmarking Deep Learning Models on Large Healthcare Datasets.
Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan
2018-06-04
Deep learning models (aka Deep Neural Networks) have revolutionized many fields including computer vision, natural language processing, and speech recognition, and are being increasingly used in clinical healthcare applications. However, few works exist which have benchmarked the performance of deep learning models against state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present benchmarking results for several clinical prediction tasks such as mortality prediction, length of stay prediction, and ICD-9 code group prediction using Deep Learning models, an ensemble of machine learning models (the Super Learner algorithm), and the SAPS II and SOFA scores. We used the Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4) publicly available dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches, especially when the 'raw' clinical time series data is used as input features to the models. Copyright © 2018 Elsevier Inc. All rights reserved.
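A generic stacked-generalisation sketch in the spirit of the Super Learner idea (out-of-fold base-learner predictions feeding a meta-learner), written with scikit-learn; the base learners, fold count, meta-learner, and synthetic data are placeholders, not the benchmark's configuration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=1000, random_state=0)

base_learners = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=100, random_state=0),
    GradientBoostingClassifier(random_state=0),
]

# out-of-fold class probabilities from each base learner become the meta-learner's features
meta_features = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1] for m in base_learners
])
meta_learner = LogisticRegression(max_iter=1000).fit(meta_features, y)

for m in base_learners:        # refit base learners on all data before deployment
    m.fit(X, y)

def predict(X_new):
    z = np.column_stack([m.predict_proba(X_new)[:, 1] for m in base_learners])
    return meta_learner.predict(z)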
Tantra, Ratna; Knight, Alex
2011-09-01
The use of imaging tools to probe nanoparticle-cell interactions will be crucial to elucidating the mechanisms of nanoparticle-induced toxicity. Of particular interest are mechanisms associated with cell penetration, translocation and subsequent accumulation inside the cell, or in cellular compartments. The objective of the present paper is to review imaging techniques that have been previously used in order to assess such interactions, and new techniques with the potential to be useful in this area. In order to identify the most suitable techniques, they were evaluated and matched against a list of evaluation criteria. We conclude that limitations exist with all of the techniques and the ultimate choice will thus depend on the needs of end users, and their particular application. The state-of-the-art techniques appear to have the least limitations, despite the fact that they are not so well established and still far from being routine. For example, super-resolution microscopy techniques appear to have many advantages for understanding the details of the interactions between nanoparticles and cells. Future research should concentrate on further developing or improving such novel techniques, to include the development of standardized methods and appropriate reference materials.
Multi-Model Ensemble Wake Vortex Prediction
NASA Technical Reports Server (NTRS)
Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.
2015-01-01
Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
Marino, Ricardo; Majumdar, Satya N; Schehr, Grégory; Vivo, Pierpaolo
2016-09-01
Let P_{β}^{(V)}(N_{I}) be the probability that an N×N β-ensemble of random matrices with confining potential V(x) has N_{I} eigenvalues inside an interval I=[a,b] on the real line. We introduce a general formalism, based on the Coulomb gas technique and the resolvent method, to compute analytically P_{β}^{(V)}(N_{I}) for large N. We show that this probability scales for large N as P_{β}^{(V)}(N_{I})≈exp[-βN^{2}ψ^{(V)}(N_{I}/N)], where β is the Dyson index of the ensemble. The rate function ψ^{(V)}(k_{I}), independent of β, is computed in terms of single integrals that can be easily evaluated numerically. The general formalism is then applied to the classical β-Gaussian (I=[-L,L]), β-Wishart (I=[1,L]), and β-Cauchy (I=[-L,L]) ensembles. Expanding the rate function around its minimum, we find that generically the number variance var(N_{I}) exhibits a nonmonotonic behavior as a function of the size of the interval, with a maximum that can be precisely characterized. These analytical results, corroborated by numerical simulations, provide the full counting statistics of many systems where random matrix models apply. In particular, we present results for the full counting statistics of zero-temperature one-dimensional spinless fermions in a harmonic trap.
Characterization and improvement of highly inclined optical sheet microscopy
NASA Astrophysics Data System (ADS)
Vignolini, T.; Curcio, V.; Gardini, L.; Capitanio, M.; Pavone, F. S.
2018-02-01
Highly Inclined and Laminated Optical sheet (HILO) microscopy is an optical technique that employs a highly inclined laser beam to illuminate the sample with a thin sheet of light that can be scanned through the sample volume [1]. HILO is an efficient illumination technique when applied to fluorescence imaging of thick samples owing to the confined illumination volume that allows high contrast imaging while retaining deep scanning capability in a wide-field configuration. The restricted illumination volume is crucial to limit background fluorescence originating from portions of the sample far from the focal plane, especially in applications such as single molecule localization and super-resolution imaging [2-4]. Despite its widespread use, current literature lacks comprehensive reports of the actual advantages of HILO in these kinds of microscopies. Here, we thoroughly characterize the propagation of a highly inclined beam through fluorescently labeled samples and implement appropriate beam shaping for optimal application to single molecule and super-resolution imaging. We demonstrate that, by reducing the beam size along the refracted axis only, the excitation volume is consequently reduced while maintaining a field of view suitable for single cell imaging. We quantify the enhancement in signal-to-background ratio with respect to the standard HILO technique and apply our illumination method to dSTORM super-resolution imaging of the actin and vimentin cytoskeleton. We define the conditions to achieve localization precisions comparable to state-of-the-art reports, obtain a significant improvement in image contrast, and achieve enhanced plane selectivity within the sample volume due to the further confinement of the inclined beam.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kidon, Lyran; The Sackler Center for Computational Molecular and Materials Science, Tel Aviv University, Tel Aviv 69978; Wilner, Eli Y.
2015-12-21
The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on the Nakajima–Zwanzig–Mori time-convolution (TC) and the other on the Tokuyama–Mori time-convolutionless (TCL) formulation, provide a starting point to describe the time-evolution of the reduced density matrix. A key step in both approaches is to obtain the so-called “memory kernel” or “generator,” going beyond second or fourth order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform and thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green’s function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.
SSAGES: Software Suite for Advanced General Ensemble Simulations
NASA Astrophysics Data System (ADS)
Sidky, Hythem; Colón, Yamil J.; Helfferich, Julian; Sikora, Benjamin J.; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z.; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J.; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S.; Reid, Daniel R.; Sevgen, Emre; Thapar, Vikram; Webb, Michael A.; Whitmer, Jonathan K.; de Pablo, Juan J.
2018-01-01
Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulations packages. SSAGES allows facile application of a variety of enhanced sampling techniques—including adaptive biasing force, string methods, and forward flux sampling—that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.
2013-01-01
Background Many problems in protein modeling require obtaining a discrete representation of the protein conformational space as an ensemble of conformations. In ab-initio structure prediction, in particular, where the goal is to predict the native structure of a protein chain given its amino-acid sequence, the ensemble needs to satisfy energetic constraints. Given the thermodynamic hypothesis, an effective ensemble contains low-energy conformations which are similar to the native structure. The high-dimensionality of the conformational space and the ruggedness of the underlying energy surface currently make it very difficult to obtain such an ensemble. Recent studies have proposed that Basin Hopping is a promising probabilistic search framework to obtain a discrete representation of the protein energy surface in terms of local minima. Basin Hopping performs a series of structural perturbations followed by energy minimizations with the goal of hopping between nearby energy minima. This approach has been shown to be effective in obtaining conformations near the native structure for small systems. Recent work by us has extended this framework to larger systems through employment of the molecular fragment replacement technique, resulting in rapid sampling of large ensembles. Methods This paper investigates the algorithmic components in Basin Hopping to both understand and control their effect on the sampling of near-native minima. Realizing that such an ensemble is reduced before further refinement in full ab-initio protocols, we take an additional step and analyze the quality of the ensemble retained by ensemble reduction techniques. We propose a novel multi-objective technique based on the Pareto front to filter the ensemble of sampled local minima. Results and conclusions We show that controlling the magnitude of the perturbation allows directly controlling the distance between consecutively-sampled local minima and, in turn, steering the exploration towards conformations near the native structure. For the minimization step, we show that the addition of Metropolis Monte Carlo-based minimization is no more effective than a simple greedy search. Finally, we show that the size of the ensemble of sampled local minima can be effectively and efficiently reduced by a multi-objective filter to obtain a simpler representation of the probed energy surface. PMID:24564970
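A domain-free sketch of the Basin Hopping loop discussed above: random structural perturbation followed by a local minimisation, with greedy acceptance mirroring the finding that a Metropolis step adds little. A rugged 2-D test function and a finite-difference gradient descent stand in for the protein energy surface and the fragment-replacement minimiser; all names and values are illustrative.

import numpy as np

def basin_hopping(energy, x0, n_hops=200, step=0.5, seed=None):
    rng = np.random.default_rng(seed)

    def local_minimise(x, n_steps=100, lr=0.05, eps=1e-5):
        # simple finite-difference gradient descent as the "energy minimization" stage
        for _ in range(n_steps):
            grad = np.array([(energy(x + eps * e) - energy(x - eps * e)) / (2 * eps)
                             for e in np.eye(len(x))])
            x = x - lr * grad
        return x

    x_min = local_minimise(np.asarray(x0, dtype=float))
    e_min = energy(x_min)
    minima = [(e_min, x_min.copy())]          # the sampled ensemble of local minima
    for _ in range(n_hops):
        trial = local_minimise(x_min + rng.normal(scale=step, size=x_min.shape))
        e_trial = energy(trial)
        minima.append((e_trial, trial.copy()))
        if e_trial < e_min:                   # greedy acceptance (no Metropolis criterion)
            x_min, e_min = trial, e_trial
    return e_min, x_min, minima

# e.g. a rugged 2-D test surface; the perturbation scale `step` controls hop distance
rugged = lambda x: np.sum(x**2) + 2.0 * np.sin(5 * x).sum()
best_e, best_x, ensemble = basin_hopping(rugged, x0=[3.0, -2.0], seed=0)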
Screen-level data assimilation of observations and pseudo-observations in COSMO-I2
NASA Astrophysics Data System (ADS)
Milelli, Dr.; Turco, Dr.; Cane, Dr.; Oberto, Dr.; Pelosini, Dr.
2009-09-01
The COSMO model has been developed by the COnsortium for Small-scale MOdelling, an over-national consortium coordinating the cooperation of the national and regional weather services of Germany, Italy, Switzerland, Greece, Poland and Romania. Its operational version does not make use of the 2m temperature, since it has been shown to have potentially adverse effects on the stability of the planetary boundary layer. Moreover, in pre-operational tests, it has been shown to degrade the low-tropospheric thermal structure of the model. The 2m temperature is at the moment only used in the soil moisture analysis, where it has the potential to modify the surface fluxes and to improve the prediction of 2m temperature during the forecast time. Despite these facts, there is an option in the model for the inclusion of 2m temperature in the assimilation cycle. For this reason, considering the great number of non-GTS stations in the ARPA Piemonte ground network, it has been decided to try the assimilation of 2m temperature in the COSMO-I2 version of the model, which has a horizontal resolution of about 3 km, more similar to the average resolution of the thermometers. Two different test periods have been considered, from 1 to 15 September 2008 (summer-like weather) and from 3 to 17 January 2009 (winter-like weather). Every day we ran two simulations up to +24h, starting at 00UTC and 12UTC, in order also to investigate the dependence on the initial state of the PBL. The aim of the work is to investigate the assimilation of the non-GTS data in the first 12h of the simulations in order to create an operational very high-resolution analysis, but also to test the option of running in the future a very short-range forecast (+12h to +18h) starting from these analyses. The results, in terms of RMSE, Mean Error (ME) and diurnal cycle of some surface variables such as 2m temperature, 2m relative humidity and 10m wind intensity, and in terms of vertical profile of temperature, show, in general, a positive impact during the assimilation cycle and below 1000-1500 m, respectively, and a neutral impact elsewhere, because the effect of the nudging vanishes a few hours after the end of the assimilation. As a second step, we introduced the assimilation of the 2 m temperature forecasts given by the Multimodel SuperEnsemble technique for all the available stations of the ARPA Piemonte network into the model, as if they were observations (we call them pseudo-observations), from +12h to +24h. The Multimodel SuperEnsemble technique is a powerful post-processing method for the estimation of weather forecast parameters. Several model outputs are combined, using weights calculated during a so-called training period. This technique has already been tested and implemented in many works on limited-area models in order to obtain reliable forecasts in complex orography regions. Also in this case we observe a positive impact mainly on the surface variables, but the effect lasts up to +24h.
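A minimal single-station sketch of the training-period weighting idea mentioned above: regress observations on model anomalies over a training window, then apply the weights to new forecasts. The window length, number of models, and synthetic data are illustrative only, not the operational post-processing.

import numpy as np

def superensemble_weights(train_models, train_obs):
    # train_models: (n_times, n_models) forecasts at one station; train_obs: (n_times,)
    anomalies = train_models - train_models.mean(axis=0)
    w, *_ = np.linalg.lstsq(anomalies, train_obs - train_obs.mean(), rcond=None)
    return w, train_models.mean(axis=0), train_obs.mean()

def superensemble_forecast(models, w, model_means, obs_mean):
    # weighted combination of model anomalies added back to the observed climatology
    return obs_mean + (models - model_means) @ w

# toy example: 3 models over a 15-day training period
rng = np.random.default_rng(1)
truth = 15 + rng.normal(size=15)
train = np.column_stack([truth + rng.normal(0.5, 1.0, 15) for _ in range(3)])
w, mm, om = superensemble_weights(train, truth)
new_models = np.array([16.2, 15.8, 17.0])
print(superensemble_forecast(new_models, w, mm, om))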
High-resolution magnetic resonance spectroscopy using a solid-state spin sensor
NASA Astrophysics Data System (ADS)
Glenn, David R.; Bucher, Dominik B.; Lee, Junghyun; Lukin, Mikhail D.; Park, Hongkun; Walsworth, Ronald L.
2018-03-01
Quantum systems that consist of solid-state electronic spins can be sensitive detectors of nuclear magnetic resonance (NMR) signals, particularly from very small samples. For example, nitrogen–vacancy centres in diamond have been used to record NMR signals from nanometre-scale samples, with sensitivity sufficient to detect the magnetic field produced by a single protein. However, the best reported spectral resolution for NMR of molecules using nitrogen–vacancy centres is about 100 hertz. This is insufficient to resolve the key spectral identifiers of molecular structure that are critical to NMR applications in chemistry, structural biology and materials research, such as scalar couplings (which require a resolution of less than ten hertz) and small chemical shifts (which require a resolution of around one part per million of the nuclear Larmor frequency). Conventional, inductively detected NMR can provide the necessary high spectral resolution, but its limited sensitivity typically requires millimetre-scale samples, precluding applications that involve smaller samples, such as picolitre-volume chemical analysis or correlated optical and NMR microscopy. Here we demonstrate a measurement technique that uses a solid-state spin sensor (a magnetometer) consisting of an ensemble of nitrogen–vacancy centres in combination with a narrowband synchronized readout protocol to obtain NMR spectral resolution of about one hertz. We use this technique to observe NMR scalar couplings in a micrometre-scale sample volume of approximately ten picolitres. We also use the ensemble of nitrogen–vacancy centres to apply NMR to thermally polarized nuclear spins and resolve chemical-shift spectra from small molecules. Our technique enables analytical NMR spectroscopy at the scale of single cells.
NASA Astrophysics Data System (ADS)
Ramos, Maria-Helena; Wetterhall, Fredrik; Wood, Andy; Wang, Qj; Pappenberger, Florian; Verkade, Jan
2017-04-01
Since 2004, HEPEX (Hydrologic Ensemble Prediction Experiment) has been fostering a community of researchers and practitioners around the world. Through the years, it has contributed to establish a more integrative view of hydrological forecasting, where data assimilation, hydro-meteorological modelling chains, post-processing techniques, expert knowledge, and decision support systems are connected to enhance operational systems and water management applications. Here we present the community activities in HEPEX that have contributed to strengthening this unfunded/volunteer effort for more than a decade. It includes the organization of workshops, conference sessions, testbeds and inter-comparison experiments. More recently, HEPEX has also prompted the development of several publicly available role-play games and, since 2013, it has been running a blog portal (www.hepex.org), which is used as an intersection point for members. Through this website, members can continuously share their research, make announcements, report on workshops, projects and meetings, and hear about related research and operational challenges. It also creates a platform for early career scientists to become increasingly involved in hydrological forecasting science and applications.
NASA Astrophysics Data System (ADS)
Re, Matteo; Valentini, Giorgio
2012-03-01
Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different “experts” to obtain an overall “ensemble” decision is rooted in our culture at least since the classical age of ancient Greece, and it has been formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is represented by the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of “votes” (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, above all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers in this area [14,62,85,149,173]. Several theories have been proposed to explain the characteristics and the successful application of ensembles to different application domains. For instance, Allwein, Schapire, and Singer interpreted the improved generalization capabilities of ensembles of learning machines in the framework of large margin classifiers [4,177], Kleinberg in the context of stochastic discrimination theory [112], and Breiman and Friedman in the light of the bias-variance analysis borrowed from classical statistics [21,70]. Empirical studies showed that both in classification and regression problems, ensembles improve on single learning machines, and moreover large experimental studies compared the effectiveness of different ensemble methods on benchmark data sets [10,11,49,188]. The interest in this research area is motivated also by the availability of very fast computers and networks of workstations at a relatively low cost that allow the implementation and the experimentation of complex ensemble methods using off-the-shelf computer platforms. However, as explained in Section 26.2, there are deeper reasons to use ensembles of learning machines, motivated by the intrinsic characteristics of the ensemble methods. The main aim of this chapter is to introduce ensemble methods and to provide an overview and a bibliography of the main areas of research, without pretending to be exhaustive or to explain the detailed characteristics of each ensemble method. The paper is organized as follows.
In the next section, the main theoretical and practical reasons for combining multiple learners are introduced. Section 26.3 depicts the main taxonomies of ensemble methods proposed in the literature. In Sections 26.4 and 26.5, we present an overview of the main supervised ensemble methods reported in the literature, adopting a simple taxonomy originally proposed in Ref. [201]. Applications of ensemble methods are only marginally considered, but a specific section on some relevant applications of ensemble methods in astronomy and astrophysics has been added (Section 26.6). The conclusion (Section 26.7) ends this paper and lists some issues not covered in this work.
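A small sketch of the majority-vote ensemble described in the abstract above, using three off-the-shelf scikit-learn classifiers as the "experts"; the dataset and member choice are placeholders for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# three heterogeneous learning machines
members = [LogisticRegression(max_iter=1000), GaussianNB(), DecisionTreeClassifier(random_state=0)]
preds = np.stack([m.fit(X_tr, y_tr).predict(X_te) for m in members])     # (n_members, n_test)

# the class receiving the majority of votes is the ensemble prediction
vote = np.apply_along_axis(lambda col: np.bincount(col.astype(int)).argmax(), 0, preds)
print("member accuracies:", [float((p == y_te).mean()) for p in preds])
print("ensemble accuracy:", float((vote == y_te).mean()))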
NASA Astrophysics Data System (ADS)
Liu, Di; Mishra, Ashok K.; Yu, Zhongbo
2016-07-01
This paper examines the combination of support vector machines (SVM) and the dual ensemble Kalman filter (EnKF) technique to estimate root zone soil moisture at different soil layers up to 100 cm depth. Multiple experiments are conducted in a data-rich environment to construct and validate the SVM model and to explore the effectiveness and robustness of the EnKF technique. It was observed that the performance of the SVM relies more on the initial length of the training set than on other factors (e.g., cost function, regularization parameter, and kernel parameters). The dual EnKF technique proved to be efficient in improving the SVM with observed data either at each time step or at flexible time steps. The EnKF technique can reach its maximum efficiency when the updating ensemble size approaches a certain threshold. It was observed that the SVM model performance for multi-layer soil moisture estimation can be influenced by the rainfall magnitude (e.g., dry and wet spells).
NASA Astrophysics Data System (ADS)
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2014-11-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty for estimating irrigation water requirements. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit of 400 mm due to water rights, would be exceeded less frequently in the case of the REA ensemble average (45%) than for the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
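A toy sketch loosely following the REA idea (weights rewarding small bias in a control period and convergence toward the ensemble consensus), together with a weighted exceedance probability for a threshold such as the 400 mm limit mentioned above. The weighting formula, the synthetic numbers, and the member count are simplifications for illustration, not the SPARE:WATER implementation.

import numpy as np

def rea_weights(model_ctrl, obs_ctrl, model_future, eps=1e-6):
    # performance factor: small control-period bias; convergence factor: closeness to the consensus
    bias = np.abs(model_ctrl.mean(axis=0) - obs_ctrl.mean())
    r_perf = 1.0 / (bias + eps)
    consensus = model_future.mean()
    r_conv = 1.0 / (np.abs(model_future - consensus) + eps)
    w = r_perf * r_conv
    return w / w.sum()

def exceedance_probability(ensemble_values, weights, threshold):
    # weighted probability that the ensemble exceeds a threshold (e.g. a 400 mm irrigation limit)
    return float(weights[ensemble_values > threshold].sum())

# toy: 5 member estimates of irrigation water requirement (mm)
rng = np.random.default_rng(0)
ctrl = rng.normal(380.0, 20.0, size=(30, 5))
obs = rng.normal(385.0, 15.0, size=30)
future = np.array([420.0, 390.0, 450.0, 405.0, 380.0])
w = rea_weights(ctrl, obs, future)
print(exceedance_probability(future, w, threshold=400.0))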
The Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics
NASA Technical Reports Server (NTRS)
Zhu, Yanqiu; Cohn, Stephen E.; Todling, Ricardo
1999-01-01
The Kalman filter is the optimal filter in the presence of known Gaussian error statistics and linear dynamics. Filter extension to nonlinear dynamics is nontrivial in the sense of appropriately representing high order moments of the statistics. Monte Carlo, ensemble-based, methods have been advocated as the methodology for representing high order moments without any questionable closure assumptions (e.g., Miller 1994). Investigation along these lines has been conducted for highly idealized dynamics such as the strongly nonlinear Lorenz (1963) model as well as more realistic models of the oceans (Evensen and van Leeuwen 1996) and atmosphere (Houtekamer and Mitchell 1998). A few relevant issues in this context are related to the necessary number of ensemble members to properly represent the error statistics and the necessary modifications in the usual filter equations to allow for a correct update of the ensemble members (Burgers 1998). The ensemble technique has also been applied to the problem of smoothing, for which similar questions apply. Ensemble smoother examples, however, seem quite puzzling in that the state estimates are worse than for their filter analogues (Evensen 1997). In this study, we use concepts in probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use the Lorenz (1963) model to test and compare the behavior of a variety of implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of the feasibility of these techniques for large data assimilation problems will be given at the conference.
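A compact stochastic ensemble Kalman filter on the Lorenz (1963) model, with perturbed observations in the spirit of Burgers et al. (1998). The integration scheme, ensemble size, observation operator, and noise levels are arbitrary illustrative choices, not the study's configuration.

import numpy as np

def lorenz63(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx                                  # forward Euler, adequate for a sketch

def enkf_step(ensemble, y_obs, H, obs_var, rng):
    # stochastic EnKF analysis: sample covariance, Kalman gain, perturbed-observation update
    n = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)                # anomalies (n_members, n_state)
    P = A.T @ A / (n - 1)
    R = obs_var * np.eye(len(y_obs))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    perturbed = y_obs + rng.normal(0.0, np.sqrt(obs_var), size=(n, len(y_obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
H = np.array([[1.0, 0.0, 0.0]])                         # observe the x-component only
truth = np.array([1.0, 1.0, 1.0])
ens = truth + rng.normal(0.0, 1.0, size=(20, 3))
for t in range(500):
    truth = lorenz63(truth)
    ens = np.array([lorenz63(m) for m in ens])
    if t % 25 == 0:                                     # assimilate every 25 steps
        y = H @ truth + rng.normal(0.0, 0.5, size=1)
        ens = enkf_step(ens, y, H, obs_var=0.25, rng=rng)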
A Theoretical Analysis of Why Hybrid Ensembles Work.
Hsu, Kuo-Wei
2017-01-01
Inspired by the group decision-making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose using a mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles of classifiers created with the decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm and is often used to create non-hybrid ensembles. Therefore, through this paper, we provide a complement to the theoretical foundation of creating and using hybrid ensembles.
Tatavarty, Vedakumar; Kim, Eun-Ji; Rodionov, Vladimir; Yu, Ji
2009-11-09
Morphological changes in dendritic spines represent an important mechanism for synaptic plasticity which is postulated to underlie the vital cognitive phenomena of learning and memory. These morphological changes are driven by the dynamic actin cytoskeleton that is present in dendritic spines. The study of actin dynamics in these spines traditionally has been hindered by the small size of the spine. In this study, we utilize a photo-activation localization microscopy (PALM)-based single-molecule tracking technique to analyze F-actin movements with approximately 30-nm resolution in cultured hippocampal neurons. We were able to observe the kinematic (physical motion of actin filaments, i.e., retrograde flow) and kinetic (F-actin turn-over) dynamics of F-actin at the single-filament level in dendritic spines. We found that F-actin in dendritic spines exhibits highly heterogeneous kinematic dynamics at the individual filament level, with simultaneous actin flows in both retrograde and anterograde directions. At the ensemble level, movements of filaments integrate into a net retrograde flow of approximately 138 nm/min. These results suggest a weakly polarized F-actin network that consists of mostly short filaments in dendritic spines.
A variational ensemble scheme for noisy image data assimilation
NASA Astrophysics Data System (ADS)
Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne
2014-05-01
Data assimilation techniques aim at recovering a system's state trajectory, denoted X, over time from partially observed noisy measurements of the system, denoted Y. These procedures, which couple the dynamics and noisy measurements of the system, fulfill a twofold objective. On the one hand, they provide a denoising - or reconstruction - procedure for the data through a given model framework, and on the other hand, they provide estimation procedures for unknown parameters of the dynamics. A standard variational data assimilation problem can be formulated as the minimization of the following objective function with respect to the initial discrepancy, η, from the background initial guess: J(η(x)) = (1/2)∥X_{b}(x) - X(t_{0},x)∥_{B}^{2} + (1/2)∫_{t_{0}}^{t_{f}}∥H(X(t,x)) - Y(t,x)∥_{R}^{2} dt (1), where the observation operator H links the state variable and the measurements. The cost function can be interpreted as the log-likelihood function associated with the a posteriori distribution of the state given the past history of measurements and the background. In this work, we aim at studying ensemble-based optimal control strategies for data assimilation. Such a formulation nicely combines the ingredients of ensemble Kalman filters and variational data assimilation (4DVar). It is also formulated as the minimization of the objective function (1), but, similarly to the ensemble filter, it introduces in its objective function an empirical ensemble-based background-error covariance defined as: B ≡ <(Xb -
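A toy, discrete evaluation of the objective (1): a background misfit at the initial time plus observation misfits accumulated along a model trajectory, with the time integral replaced by a sum. The linear toy model, the operators, and the weights below are illustrative placeholders, not the ensemble-based scheme described in the abstract.

import numpy as np

def cost_4dvar(eta, x_b, model, H, y_series, B_inv, R_inv):
    # background term ||eta||_B^2 plus observation misfits along the trajectory from x0 = x_b - eta
    x = x_b - eta                           # eta is the initial discrepancy from the background
    J = 0.5 * eta @ B_inv @ eta
    for y in y_series:                      # one model step per observation time (sum ~ integral)
        x = model(x)
        d = H @ x - y
        J += 0.5 * d @ R_inv @ d
    return J

# toy linear dynamics and identity observation operator
model = lambda x: 0.95 * x
H = np.eye(2)
B_inv = np.eye(2)
R_inv = 4.0 * np.eye(2)
x_b = np.array([1.0, 2.0])
y_series = [np.array([0.9, 1.8]), np.array([0.8, 1.7])]
print(cost_4dvar(np.zeros(2), x_b, model, H, y_series, B_inv, R_inv))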
Super-radiant effects in electron oscillators with near-cutoff operating waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bandurkin, I. V.; Savilov, A. V.; Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod
2015-06-15
Super-radiant regimes in electron oscillators can be attractive for applications requiring powerful and relatively short pulses of microwave radiation, since the peak power of the super-radiant pulse can exceed the power of the operating electron beam. In this paper, possibilities for the realization of super-radiant regimes are studied in various schemes of electron oscillators based on the excitation of near-cutoff operating waves (gyrotron and orotron).
Enhancing multi-spot structured illumination microscopy with fluorescence difference
NASA Astrophysics Data System (ADS)
Ward, Edward N.; Torkelsen, Frida H.; Pal, Robert
2018-03-01
Structured illumination microscopy is a super-resolution technique used extensively in biological research. However, this technique is limited in the maximum possible resolution increase. Here we report the results of simulations of a novel enhanced multi-spot structured illumination technique. This method combines the super-resolution technique of difference microscopy with structured illumination deconvolution. Initial results give at minimum a 1.4-fold increase in resolution over conventional structured illumination in a low-noise environment. This new technique also has the potential to be expanded to further enhance axial resolution with three-dimensional difference microscopy. The requirement for precise pattern determination in this technique also led to the development of a new pattern estimation algorithm which proved more efficient and reliable than other methods tested.
NASA Astrophysics Data System (ADS)
Fuji, Hiroshi; Kikukawa, Takashi; Tominaga, Junji
2004-07-01
Pit-edge recording at a density of 150 nm pits and spaces is carried out on a super-resolution near-field structure (super-RENS) disk with a platinum oxide layer. Pits are recorded and read using a 635-nm-wavelength laser and an objective lens with a 0.6 numerical aperture. We arrange laser pulses to correctly record the pits on the disk by a write-strategy technique. The laser-pulse figure includes a unit time of 0.25 T and intensities of Pw1, Pw2 and Pw3. After recording pits of various lengths, the observation of an eye pattern is achieved despite a pit smaller than the resolution limit. Furthermore, the eye pattern maintains its shape even though other pits fill the adjacent tracks at a track density of 600 nm. The disk can be used as a pit-edge recording system through a write-strategy technique.
NASA Astrophysics Data System (ADS)
Granero, Luis; Ferreira, Carlos; Zalevsky, Zeev; García, Javier; Micó, Vicente
2016-07-01
Single-Exposure Super-Resolved Interferometric Microscopy (SESRIM) reports on a way to achieve one-dimensional (1-D) super-resolved imaging in digital holographic microscopy (DHM) with a single illumination shot and digital recording. SESRIM provides color-coded angular multiplexing of the accessible range of the sample's spatial frequencies and allows their recording in a single CCD (color or monochrome) snapshot by adding 3 RGB coherent reference beams at the output plane. In this manuscript, we extend the applicability of SESRIM to the field of digital in-line holographic microscopy (DIHM), that is, working without lenses. As a consequence of the in-line configuration, an additional restriction concerning the object field of view (FOV) must be imposed on the technique. Experimental results are reported for both a synthetic object (USAF resolution test target) and a biological sample (swine sperm sample), validating this new kind of super-resolution imaging method, named lensless SESRIM (L-SESRIM).
Super resolution for astronomical observations
NASA Astrophysics Data System (ADS)
Li, Zhan; Peng, Qingyu; Bhanu, Bir; Zhang, Qingfeng; He, Haifeng
2018-05-01
In order to obtain detailed information from multiple telescope observations, a general blind super-resolution (SR) reconstruction approach for astronomical images is proposed in this paper. A pixel-reliability-based SR reconstruction algorithm is described and implemented, where the developed process incorporates flat field correction, automatic star searching and centering, iterative star matching, and sub-pixel image registration. Images captured by the 1-m telescope at Yunnan Observatory are used to test the proposed technique. The results of these experiments indicate that, following SR reconstruction, faint stars are more distinct, bright stars have sharper profiles, and the backgrounds show more detail; these results benefit from the high-precision star centering and image registration provided by the developed method. Application of the proposed approach not only provides more opportunities for new discoveries from astronomical image sequences, but will also contribute to enhancing the capabilities of most space-based or ground-based telescopes.
Field-Portable Pixel Super-Resolution Colour Microscope
Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan
2013-01-01
Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of e.g., >20 mm2. This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source shifting based multi-height pixel super-resolution technique to mitigate ‘rainbow’ like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource poor settings. PMID:24086742
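The colour-artefact mitigation described above hinges on separating brightness from chrominance; a standard BT.601-style RGB to YUV conversion (not the authors' code, shown here only to illustrate the colour-space step) looks like this:

import numpy as np

def rgb_to_yuv(rgb):
    # rgb: array of shape (..., 3) with floats in [0, 1]; Y carries brightness, U/V carry colour
    m = np.array([[ 0.299,    0.587,    0.114  ],
                  [-0.14713, -0.28886,  0.436  ],
                  [ 0.615,   -0.51499, -0.10001]])
    return rgb @ m.T

def yuv_to_rgb(yuv):
    m_inv = np.array([[1.0,  0.0,      1.13983],
                      [1.0, -0.39465, -0.58060],
                      [1.0,  2.03211,  0.0    ]])
    return yuv @ m_inv.T

# e.g. denoise or fuse only the U/V planes while leaving the high-resolution Y plane untouched
img = np.random.rand(64, 64, 3)
yuv = rgb_to_yuv(img)
restored = yuv_to_rgb(yuv)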
Fabrication of nano-structured super-hydrophobic film on aluminum by controllable immersing method
NASA Astrophysics Data System (ADS)
Wu, Ruomei; Liang, Shuquan; Pan, Anqiang; Yuan, Zhiqing; Tang, Yan; Tan, Xiaoping; Guan, Dikai; Yu, Ya
2012-06-01
Aluminum alloy surfaces can be etched easily in an acid environment, but the microstructure of the etched surface hardly meets customers' demands. In this work, a facile acid-assisted surface oxidation technique has been employed to form reproducible super-hydrophobic surfaces on aluminum alloy plates. Samples immersed in three different acid solutions at ambient temperature are studied, and the results demonstrate that an aqueous mixture of oxalic and hydrochloric acids more readily produces better surfaces with better stability. Scanning electron microscopy (SEM), X-ray diffraction (XRD), Raman spectroscopy, X-ray photoelectron spectroscopy (XPS) and water contact angle measurements are used to investigate the morphologies, microstructures, chemical compositions and hydrophobicity of the produced films on aluminum substrates. The surfaces, consisting of a labyrinth structure of convexities and concavities, differ in roughness and gloss depending on the acid solution recipe used. Better surface roughness can be obtained by adjusting the concentration of Cl⁻ and oxalate ions in the acid solutions. The present work provides a new strategy for the controllable preparation of super-hydrophobic films on aluminum alloy and other materials for practical industrial applications.
Galaxy power-spectrum responses and redshift-space super-sample effect
NASA Astrophysics Data System (ADS)
Li, Yin; Schmittfull, Marcel; Seljak, Uroš
2018-02-01
As a major source of cosmological information, galaxy clustering is susceptible to long-wavelength density and tidal fluctuations. These long modes modulate the growth and expansion rate of local structures, shifting them in both amplitude and scale. These effects are often named the growth and dilation effects, respectively. In particular the dilation shifts the baryon acoustic oscillation (BAO) peak and breaks the assumption of the Alcock-Paczynski (AP) test. This cannot be removed with reconstruction techniques because the effect originates from long modes outside the survey. In redshift space, the long modes generate a large-scale radial peculiar velocity that affects the redshift-space distortion (RSD) signal. We compute the redshift-space response functions of the galaxy power spectrum to long density and tidal modes at leading order in perturbation theory, including both the growth and dilation terms. We validate these response functions against measurements from simulated galaxy mock catalogs. As one application, long density and tidal modes beyond the scale of a survey correlate various observables leading to an excess error known as the super-sample covariance, and thus weaken their constraining power. We quantify the super-sample effect on BAO, AP, and RSD measurements, and study its impact on current and future surveys.
Nanoscopy for nanoscience: how super-resolution microscopy extends imaging for nanotechnology.
Johnson, Sam A
2015-01-01
Imaging methods have presented scientists with powerful means of investigation for centuries. The ability to resolve structures using light microscopes is, however, limited to around 200 nm. Fluorescence-based super-resolution light microscopy techniques based on several distinct principles have emerged in recent years and offer great potential to extend the capabilities of microscopy. This resolution improvement is especially promising for nanoscience, where the imaging of nanoscale structures is inherently restricted by the resolution limit of standard forms of light microscopy. Resolution can be improved by several distinct approaches including structured illumination microscopy, stimulated emission depletion, and single-molecule positioning methods such as photoactivated localization microscopy and stochastic optical reconstruction microscopy, as well as several derivative variations of each of these. These methods involve substantial differences in the resolutions achievable in the different axes, speed of acquisition, compatibility with different labels, ease of use, hardware complexity, and compatibility with live biological samples. The field of super-resolution imaging and its application to nanotechnology is relatively new and still rapidly developing. An overview of how these methods may be used with nanomaterials is presented with some examples of pioneering uses of these approaches. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Kasiviswanathan, K.; Sudheer, K.
2013-05-01
Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows as compared to conceptual or physics based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that ANN point predictions lack reliability since the uncertainty of the predictions is not quantified, which limits their use in practical applications. A major concern in the application of traditional uncertainty analysis techniques to the neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases, and during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the 2nd stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) the maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived prediction interval for a selected hydrograph in the validation data set is presented in Fig. 1. It is noted that most of the observed flows lie within the constructed prediction interval, which therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that when the ensemble mean value is considered as the forecast, peak flows are predicted with improved accuracy by this method compared to traditional single-point-forecast ANNs. Fig. 1: Prediction interval for the selected hydrograph.
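A small sketch of the two interval quality measures quoted above (the percentage of observations falling inside the prediction interval, and its average width), computed here from a synthetic ensemble of flow forecasts; the data and the min/max interval construction are placeholders, not the study's GA-based procedure.

import numpy as np

def interval_metrics(obs, lower, upper):
    # coverage: % of observations inside [lower, upper]; width: mean interval width
    obs, lower, upper = map(np.asarray, (obs, lower, upper))
    inside = (obs >= lower) & (obs <= upper)
    coverage = 100.0 * inside.mean()
    mean_width = float((upper - lower).mean())
    return coverage, mean_width

# toy check with a synthetic ensemble of flow forecasts (m3/s)
rng = np.random.default_rng(0)
ens = rng.normal(loc=100.0, scale=10.0, size=(50, 200))   # 50 members x 200 time steps
lo, hi = ens.min(axis=0), ens.max(axis=0)
obs = rng.normal(100.0, 10.0, size=200)
print(interval_metrics(obs, lo, hi))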
Synaptic Ensemble Underlying the Selection and Consolidation of Neuronal Circuits during Learning.
Hoshiba, Yoshio; Wada, Takeyoshi; Hayashi-Takagi, Akiko
2017-01-01
Memories are crucial to the cognitive essence of who we are as human beings. Accumulating evidence has suggested that memories are stored as a subset of neurons that probably fire together in the same ensemble. Such formation of cell ensembles must meet the contradictory requirements of being plastic and responsive during learning, but also stable in order to maintain the memory. Although synaptic potentiation is presumed to be the cellular substrate for this process, the link between the two remains correlational. With the application of the latest optogenetic tools, it has been possible to collect direct evidence of the contribution of synaptic potentiation to the formation and consolidation of cell ensembles in a learning-task-specific manner. In this review, we summarize the current view of the causative role of synaptic plasticity as the cellular mechanism underlying the encoding of memory and the recall of learned memories. In particular, we focus on the latest optoprobe developed for the visualization of such "synaptic ensembles." We further discuss how a new synaptic ensemble could contribute to the formation of cell ensembles during learning and memory. With the development and application of novel research tools in the future, studies on synaptic ensembles will pioneer new discoveries, eventually leading to a comprehensive understanding of how the brain works.
NASA Astrophysics Data System (ADS)
Tallapragada, V.
2017-12-01
NOAA's Next Generation Global Prediction System (NGGPS) has provided the unique opportunity to develop and implement a non-hydrostatic global model based on the Geophysical Fluid Dynamics Laboratory (GFDL) Finite Volume Cubed Sphere (FV3) dynamic core at the National Centers for Environmental Prediction (NCEP), representing a major step forward in seamless prediction capabilities across all spatial and temporal scales. Model development efforts are centralized with unified model development in the NOAA Environmental Modeling System (NEMS) infrastructure, based on the Earth System Modeling Framework (ESMF). A more sophisticated coupling among various earth system components is being enabled within NEMS following National Unified Operational Prediction Capability (NUOPC) standards. The eventual unification of global and regional models will enable operational global models to run at convection-resolving scales. Apart from the advanced non-hydrostatic dynamic core and coupling to various earth system components, advanced physics and data assimilation techniques are essential for improved forecast skill. NGGPS is spearheading ambitious physics and data assimilation strategies, concentrating on the creation of a Common Community Physics Package (CCPP) and the Joint Effort for Data Assimilation Integration (JEDI). Both initiatives are expected to be community developed, with emphasis on research transitioning to operations (R2O). The unified modeling system is being built to support the needs of both operations and research. Different layers of community partners are also established with specific roles/responsibilities for researchers, core development partners, trusted super-users, and operations. Stakeholders are engaged at all stages to help drive the direction of development, resource allocation and prioritization. This talk presents the current and future plans of unified model development at NCEP for weather, sub-seasonal, and seasonal climate prediction applications, with special emphasis on the implementation of the NCEP FV3 Global Forecast System (GFS) and Global Ensemble Forecast System (GEFS) into operations by 2019.
Capturing the Surface Texture and Shape of Pollen: A Comparison of Microscopy Techniques
Sivaguru, Mayandi; Mander, Luke; Fried, Glenn; Punyasena, Surangi W.
2012-01-01
Research on the comparative morphology of pollen grains depends crucially on the application of appropriate microscopy techniques. Information on the performance of microscopy techniques can be used to inform that choice. We compared the ability of several microscopy techniques to provide information on the shape and surface texture of three pollen types with differing morphologies. These techniques are: widefield, apotome, confocal and two-photon microscopy (reflected light techniques), and brightfield and differential interference contrast microscopy (DIC) (transmitted light techniques). We also provide a first view of pollen using super-resolution microscopy. The three pollen types used to contrast the performance of each technique are: Croton hirtus (Euphorbiaceae), Mabea occidentalis (Euphorbiaceae) and Agropyron repens (Poaceae). No single microscopy technique provided an adequate picture of both the shape and surface texture of any of the three pollen types investigated here. The wavelength of incident light, photon-collection ability of the optical technique, signal-to-noise ratio, and the thickness and light absorption characteristics of the exine profoundly affect the recovery of morphological information by a given optical microscopy technique. Reflected light techniques, particularly confocal and two-photon microscopy, best capture pollen shape but provide limited information on very fine surface texture. In contrast, transmitted light techniques, particularly differential interference contrast microscopy, can resolve very fine surface texture but provide limited information on shape. Texture comprising sculptural elements that are spaced near the diffraction limit of light (∼250 nm; NDL) presents an acute challenge to optical microscopy. Super-resolution structured illumination microscopy provides data on the NDL texture of A. repens that is more comparable to textural data from scanning electron microscopy than any other optical microscopy technique investigated here. Maximizing the recovery of morphological information from pollen grains should lead to more robust classifications, and an increase in the taxonomic precision with which ancient vegetation can be reconstructed. PMID:22720050
NASA Astrophysics Data System (ADS)
Javier Romualdez, Luis
Scientific balloon-borne instrumentation offers an attractive, competitive, and effective alternative to space-borne missions when considering the overall scope, cost, and development timescale required to design and launch scientific instruments. In particular, the balloon-borne environment provides a near-space regime that is suitable for a number of modern astronomical and cosmological experiments, where the atmospheric interference suffered by ground-based instrumentation is negligible at stratospheric altitudes. This work is centered around the analytical strategies and implementation considerations for the attitude determination and control of SuperBIT, a scientific balloon-borne payload capable of meeting the strict sub-arcsecond pointing and image stability requirements demanded by modern cosmological experiments. Broadly speaking, the designed stability specifications of SuperBIT coupled with its observational efficiency, image quality, and accessibility rivals state-of-the-art astronomical observatories such as the Hubble Space Telescope. To this end, this work presents an end-to-end design methodology for precision pointing balloon-borne payloads such as SuperBIT within an analytical yet implementationally grounded context. Simulation models of SuperBIT are analytically derived to aid in pre-assembly trade-off and case studies that are pertinent to the dynamic balloon-borne environment. From these results, state estimation techniques and control methodologies are extensively developed, leveraging the analytical framework of simulation models and design studies. This pre-assembly design phase is physically validated during assembly, integration, and testing through implementation in real-time hardware and software, which bridges the gap between analytical results and practical application. SuperBIT attitude determination and control is demonstrated throughout two engineering test flights that verify pointing and image stability requirements in flight, where the post-flight results close the overall design loop by suggesting practical improvements to pre-design methodologies. Overall, the analytical and practical results presented in this work, though centered around the SuperBIT project, provide generically useful and implementationally viable methodologies for high precision balloon-borne instrumentation, all of which are validated, justified, and improved both theoretically and practically. As such, the continuing development of SuperBIT, built from the work presented in this thesis, strives to further the potential for scientific balloon-borne astronomy in the near future.
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for the conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
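As a rough illustration of the core idea of ranking candidate parameter sets by Pareto dominance and accepting moves with a simulated-annealing rule, consider the Python sketch below. It is a conceptual outline only, not the JuPOETs API or its Julia implementation; all function names are hypothetical.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_rank(objective_list):
    """Rank each candidate by the number of others that dominate it (0 = non-dominated)."""
    return np.array([sum(dominates(other, this)
                         for j, other in enumerate(objective_list) if j != i)
                     for i, this in enumerate(objective_list)])

def accept(delta_rank, temperature, rng=None):
    """Simulated-annealing-style acceptance based on the change in Pareto rank."""
    rng = rng or np.random.default_rng()
    return delta_rank <= 0 or rng.random() < np.exp(-delta_rank / temperature)
```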
Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin
2015-01-01
The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
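For readers unfamiliar with replica exchange, the sketch below shows the standard Metropolis criterion for swapping configurations between two temperature replicas; the paper's 2-dimensional WTE-H-REMC scheme builds on this basic ingredient but is not reproduced here.

```python
import math
import random

def swap_accept(energy_i, energy_j, beta_i, beta_j):
    """Metropolis criterion for exchanging configurations between two temperature
    replicas (beta = 1/kT). Returns True if the swap is accepted."""
    delta = (beta_i - beta_j) * (energy_j - energy_i)
    return delta <= 0 or random.random() < math.exp(-delta)
```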
Life under the Microscope: Single-Molecule Fluorescence Highlights the RNA World.
Ray, Sujay; Widom, Julia R; Walter, Nils G
2018-04-25
The emergence of single-molecule (SM) fluorescence techniques has opened up a vast new toolbox for exploring the molecular basis of life. The ability to monitor individual biomolecules in real time enables complex, dynamic folding pathways to be interrogated without the averaging effect of ensemble measurements. In parallel, modern biology has been revolutionized by our emerging understanding of the many functions of RNA. In this comprehensive review, we survey SM fluorescence approaches and discuss how the application of these tools to RNA and RNA-containing macromolecular complexes in vitro has yielded significant insights into the underlying biology. Topics covered include the three-dimensional folding landscapes of a plethora of isolated RNA molecules, their assembly and interactions in RNA-protein complexes, and the relation of these properties to their biological functions. In all of these examples, the use of SM fluorescence methods has revealed critical information beyond the reach of ensemble averages.
Protein binding hot spots prediction from sequence only by a new ensemble learning method.
Hu, Shan-Shan; Chen, Peng; Wang, Bing; Li, Jinyan
2017-10-01
Hot spots are interfacial core areas of binding proteins, which have been used as targets in drug design. Experimental methods for locating hot spot areas are costly in both time and expense. Recently, in-silico computational methods have been widely used for hot spot prediction through sequence or structure characterization. As the structural information of proteins is not always available, hot spot identification from amino acid sequences alone is more useful for real-life applications. This work proposes a new sequence-based model that combines physicochemical features with the relative accessible surface area of amino acid sequences for hot spot prediction. The model consists of 83 classifiers involving the IBk (instance-based k-nearest neighbour) algorithm, where instances are encoded by important properties extracted from a total of 544 properties in the AAindex1 (Amino Acid Index) database. Top-performing classifiers are then selected to form an ensemble by a majority voting technique. The ensemble classifier outperforms state-of-the-art computational methods, yielding an F1 score of 0.80 on the benchmark binding interface database (BID) test set. http://www2.ahu.edu.cn/pchen/web/HotspotEC.htm.
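A minimal sketch of the select-and-vote stage is given below, assuming one k-NN (IBk-style) classifier per feature column and a toy random data set in place of the AAindex-derived encodings; the feature-per-classifier split and the number of retained members are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: rows are residues, columns are candidate property encodings.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 83))
y = rng.integers(0, 2, size=200)          # 1 = hot spot, 0 = non-hot spot

# One k-NN (IBk-style) classifier per property column, scored by cross-validation.
classifiers, scores = [], []
for j in range(X.shape[1]):
    clf = KNeighborsClassifier(n_neighbors=3)
    scores.append(cross_val_score(clf, X[:, [j]], y, cv=5).mean())
    classifiers.append(clf.fit(X[:, [j]], y))

# Keep the top-performing members and combine them by simple majority voting.
top = np.argsort(scores)[-11:]
votes = np.array([classifiers[j].predict(X[:, [j]]) for j in top])
majority_prediction = (votes.mean(axis=0) >= 0.5).astype(int)
```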
Creating ensembles of decision trees through sampling
Kamath, Chandrika; Cantu-Paz, Erick
2005-08-30
A system for decision tree ensembles that includes a module to read the data, a module to sort the data, a module to evaluate a potential split of the data according to some criterion using a random sample of the data, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method is based on statistical sampling techniques and includes the steps of reading the data; sorting the data; evaluating a potential split according to some criterion using a random sample of the data, splitting the data, and combining multiple decision trees in ensembles.
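The sampling-plus-combination idea can be illustrated with ordinary bagging of decision trees, as in the scikit-learn sketch below; this is a generic stand-in for the concept, not the patented module pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier

# Toy data standing in for the "read" and "sort" stages of the pipeline.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Each tree is grown on a random sample of the rows and the trees are then
# combined by voting; BaggingClassifier's default base learner is a decision tree.
ensemble = BaggingClassifier(n_estimators=25, max_samples=0.5, random_state=0).fit(X, y)
print(ensemble.score(X, y))
```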
A Fractional Cartesian Composition Model for Semi-Spatial Comparative Visualization Design.
Kolesar, Ivan; Bruckner, Stefan; Viola, Ivan; Hauser, Helwig
2017-01-01
The study of spatial data ensembles leads to substantial visualization challenges in a variety of applications. In this paper, we present a model for comparative visualization that supports the design of corresponding ensemble visualization solutions through partial automation. We focus on applications where the user is interested in preserving selected spatial characteristics of the data as much as possible, even when many ensemble members should be jointly studied using comparative visualization. In our model, we separate the design challenge into a minimal set of user-specified parameters and an optimization component for the automatic configuration of the remaining design variables. We provide an illustrated formal description of our model and exemplify our approach in the context of several application examples from different domains in order to demonstrate its generality within the class of comparative visualization problems for spatial data ensembles.
Supercritical fluid extraction. Principles and practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
McHugh, M.A.; Krukonis, V.J.
This book is a presentation of the fundamentals and application of super-critical fluid solvents (SCF). The authors cover virtually every facet of SCF technology: the history of SCF extraction, its underlying thermodynamic principles, process principles, industrial applications, and analysis of SCF research and development efforts. The thermodynamic principles governing SCF extraction are covered in depth. The often complex three-dimensional pressure-temperature composition (PTx) phase diagrams for SCF-solute mixtures are constructed in a coherent step-by-step manner using the more familiar two-dimensional Px diagrams. The experimental techniques used to obtain high pressure phase behavior information are described in detail and the advantages and disadvantages of each technique are explained. Finally, the equations used to model SCF-solute mixtures are developed, and modeling results are presented to highlight the correlational strengths of a cubic equation of state.
NASA Technical Reports Server (NTRS)
Mittman, David S.
2011-01-01
Ensemble is an open architecture for the development, integration, and deployment of mission operations software. Fundamentally, it is an adaptation of the Eclipse Rich Client Platform (RCP), a widespread, stable, and supported framework for component-based application development. By capitalizing on the maturity and availability of the Eclipse RCP, Ensemble offers a low-risk, politically neutral path towards a tighter integration of operations tools. The Ensemble project is a highly successful, ongoing collaboration among NASA Centers. Since 2004, the Ensemble project has supported the development of mission operations software for NASA's Exploration Systems, Science, and Space Operations Directorates.
On the structure and phase transitions of power-law Poissonian ensembles
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Oshanin, Gleb
2012-10-01
Power-law Poissonian ensembles are Poisson processes that are defined on the positive half-line, and that are governed by power-law intensities. Power-law Poissonian ensembles are stochastic objects of fundamental significance; they uniquely display an array of fractal features and they uniquely generate a span of important applications. In this paper we apply three different methods—oligarchic analysis, Lorenzian analysis and heterogeneity analysis—to explore power-law Poissonian ensembles. The amalgamation of these analyses, combined with the topology of power-law Poissonian ensembles, establishes a detailed and multi-faceted picture of the statistical structure and the statistical phase transitions of these elemental ensembles.
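One concrete way to generate a realization of such an ensemble is to sample a Poisson process with a power-law intensity by thinning, as sketched below; the intensity form lambda(x) = c * x**(-alpha) and the truncation of the positive half-line to a finite interval are illustrative assumptions.

```python
import numpy as np

def powerlaw_poisson_realisation(c=10.0, alpha=1.5, x_min=0.1, x_max=100.0, rng=None):
    """One realisation of a Poisson process on [x_min, x_max] with power-law
    intensity lambda(x) = c * x**(-alpha), sampled by thinning."""
    rng = rng or np.random.default_rng()
    lam_max = c * x_min ** (-alpha)                 # intensity is maximal at x_min
    n = rng.poisson(lam_max * (x_max - x_min))      # candidate points from a homogeneous process
    x = rng.uniform(x_min, x_max, size=n)
    keep = rng.random(n) < c * x ** (-alpha) / lam_max
    return np.sort(x[keep])

points = powerlaw_poisson_realisation()
print(len(points), points[:5])
```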
Gholipour, Ali; Afacan, Onur; Aganj, Iman; Scherrer, Benoit; Prabhu, Sanjay P; Sahin, Mustafa; Warfield, Simon K
2015-12-01
The purpose of this study was to compare and evaluate the use of super-resolution reconstruction (SRR) in the frequency, image, and wavelet domains to reduce through-plane partial voluming effects in magnetic resonance imaging. The reconstruction of an isotropic high-resolution image from multiple thick-slice scans has been investigated through techniques in the frequency, image, and wavelet domains. Experiments were carried out with a thick-slice T2-weighted fast spin echo sequence on the American College of Radiology (ACR) MRI phantom, where the reconstructed images were compared to a reference high-resolution scan using the peak signal-to-noise ratio (PSNR), structural similarity image metric (SSIM), mutual information (MI), and the mean absolute error (MAE) of image intensity profiles. The application of super-resolution reconstruction was then examined in retrospective processing of clinical neuroimages of ten pediatric patients with tuberous sclerosis complex (TSC) to reduce through-plane partial voluming for improved 3D delineation and visualization of thin radial bands of white matter abnormalities. Quantitative evaluation results show improvements in all evaluation metrics through super-resolution reconstruction in the frequency, image, and wavelet domains, with the highest values obtained from SRR in the image domain. The metric values for image-domain SRR versus the original axial, coronal, and sagittal images were PSNR = 32.26 vs 32.22, 32.16, 30.65; SSIM = 0.931 vs 0.922, 0.924, 0.918; MI = 0.871 vs 0.842, 0.844, 0.831; and MAE = 5.38 vs 7.34, 7.06, 6.19. All similarity metrics showed high correlations with expert ranking of image resolution, with MI showing the highest correlation at 0.943. Qualitative assessment of the neuroimages of ten TSC patients through in-plane and out-of-plane visualization of structures showed the extent of the partial voluming effect in a real clinical scenario and its reduction using SRR. Blinded expert evaluation of image resolution in resampled out-of-plane views consistently showed the superiority of SRR compared to the original axial and coronal image acquisitions. Thick-slice 2D T2-weighted MRI scans are part of many routine clinical protocols due to their high signal-to-noise ratio, but are often severely affected by through-plane partial voluming effects. This study shows that while radiologic assessment is performed in 2D on thick-slice scans, super-resolution MRI reconstruction techniques can be used to fuse those scans to generate a high-resolution image with reduced partial voluming for improved postacquisition processing. Qualitative and quantitative evaluation showed the efficacy of all SRR techniques, with the best results obtained from SRR in the image domain. The limitations of SRR techniques are uncertainties in modeling the slice profile, density compensation, quantization in resampling, and uncompensated motion between scans.
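A minimal sketch of the quantitative comparison step, assuming 2-D arrays with a shared shape and intensity range and using scikit-image's implementations of PSNR and SSIM, is shown below; it is not the authors' evaluation code, and the mutual-information metric is omitted.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare_to_reference(reconstruction, reference):
    """Evaluate a super-resolution reconstruction against a reference
    high-resolution scan (2-D arrays with the same shape and intensity range)."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=data_range)
    ssim = structural_similarity(reference, reconstruction, data_range=data_range)
    mae = float(np.mean(np.abs(reference - reconstruction)))
    return psnr, ssim, mae
```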
Ensembler: Enabling High-Throughput Molecular Simulations at the Superfamily Scale.
Parton, Daniel L; Grinaway, Patrick B; Hanson, Sonya M; Beauchamp, Kyle A; Chodera, John D
2016-06-01
The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences-from a single sequence to an entire superfamily-and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics-such as Markov state models (MSMs)-which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates. Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.
NASA Astrophysics Data System (ADS)
Subramanian, Aneesh C.; Palmer, Tim N.
2017-06-01
Stochastic schemes to represent model uncertainty in the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system have helped improve its probabilistic forecast skill over the past decade by both improving its reliability and reducing the ensemble mean error. The largest uncertainties in the model arise from the model physics parameterizations. In the tropics, the parameterization of moist convection presents a major challenge for the accurate prediction of weather and climate. Superparameterization is a promising alternative strategy for including the effects of moist convection through explicit turbulent fluxes calculated from a cloud-resolving model (CRM) embedded within a global climate model (GCM). In this paper, we compare the impact of initial random perturbations in embedded CRMs, within the ECMWF ensemble prediction system, with the stochastically perturbed physical tendency (SPPT) scheme as a way to represent model uncertainty in medium-range tropical weather forecasts. We especially focus on forecasts of tropical convection and dynamics during MJO events in October-November 2011. These are well-studied events for MJO dynamics as they were also heavily observed during the DYNAMO field campaign. We show that a multiscale ensemble modeling approach helps improve forecasts of certain aspects of tropical convection during the MJO events, while it also tends to deteriorate certain large-scale dynamic fields with respect to the SPPT approach that is used operationally at ECMWF.
NASA Astrophysics Data System (ADS)
Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert
2016-05-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ˜ 20 000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
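The simple averaging approach can be sketched as score-weighted ensemble statistics, as below; treating the aggregate scores directly as weights and the particular quantile levels are assumptions made for illustration, not the study's exact weighting scheme.

```python
import numpy as np

def weighted_envelope(runs, scores, quantiles=(0.05, 0.5, 0.95)):
    """Score-weighted ensemble statistics for a large model ensemble.

    runs   : (n_runs, n_times) equivalent sea-level-rise time series
    scores : (n_runs,) aggregate model-data fit scores (higher = better fit)
    """
    w = np.asarray(scores, dtype=float)
    w = w / w.sum()
    mean = w @ runs                                   # weighted ensemble mean
    envelope = np.empty((runs.shape[1], len(quantiles)))
    for t in range(runs.shape[1]):
        order = np.argsort(runs[:, t])
        cdf = np.cumsum(w[order])
        for k, q in enumerate(quantiles):
            idx = min(np.searchsorted(cdf, q), len(order) - 1)
            envelope[t, k] = runs[order[idx], t]      # weighted quantile of the ensemble
    return mean, envelope
```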
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta
2009-07-01
Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and a Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
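For orientation, the sketch below shows a single stochastic ensemble Kalman filter analysis step with perturbed observations; the paper's grid-based localization and GMM clustering operate on top of this basic update and are not reproduced here, and a linear observation operator is assumed.

```python
import numpy as np

def enkf_update(ensemble, H, obs, obs_cov, rng=None):
    """One stochastic EnKF analysis step with perturbed observations.

    ensemble : (n_state, n_members) prior ensemble of parameters/states
    H        : (n_obs, n_state) linear observation operator
    obs      : (n_obs,) observed data
    obs_cov  : (n_obs, n_obs) observation error covariance R
    """
    rng = rng or np.random.default_rng()
    n_members = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = anomalies @ anomalies.T / (n_members - 1)            # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_cov)       # Kalman gain
    perturbed_obs = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), obs_cov, size=n_members).T       # perturbed observations
    return ensemble + K @ (perturbed_obs - H @ ensemble)     # analysis ensemble
```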
Super-sensing technology: industrial applications and future challenges of electrical tomography.
Wei, Kent Hsin-Yu; Qiu, Chang-Hua; Primrose, Ken
2016-06-28
Electrical tomography is a relatively new imaging technique that can image the distribution of the passive electrical properties of an object. Since electrical tomography technology was proposed in the 1980s, the technique has evolved rapidly because of its low cost, easy scale-up and non-invasive features. The technique itself can be sensitive to all passive electrical properties, such as conductivity, permittivity and permeability. Hence, it has a huge potential to be applied in many applications. Owing to its ill-posed nature and low image resolution, electrical tomography attracts more attention in industrial fields than biomedical fields. In the past decades, there have been many research developments and industrial implementations of electrical tomography; nevertheless, the awareness of this technology in industrial sectors is still one of the biggest limitations for technology implementation. In this paper, the authors have summarized several representative applications that use electrical tomography. Some of the current tomography research activities will also be discussed. This article is part of the themed issue 'Supersensing through industrial process tomography'. © 2016 The Author(s).
A Theoretical Analysis of Why Hybrid Ensembles Work
2017-01-01
Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use a mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles of classifiers created with decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm and often used to create non-hybrid ensembles. Therefore, through this paper, we provide a complement to the theoretical foundation of creating and using hybrid ensembles. PMID:28255296
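A hybrid ensemble of the two algorithm families can be assembled in a few lines with scikit-learn, as sketched below; the soft-voting combination and the toy data set are illustrative choices, not the paper's experimental setup.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Hybrid ensemble mixing the two algorithm families discussed above.
hybrid = VotingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
                ("nb", GaussianNB())],
    voting="soft",
)
print(cross_val_score(hybrid, X, y, cv=5).mean())
```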
Enhancing Analytical Separations Using Super-Resolution Microscopy
NASA Astrophysics Data System (ADS)
Moringo, Nicholas A.; Shen, Hao; Bishop, Logan D. C.; Wang, Wenxiao; Landes, Christy F.
2018-04-01
Super-resolution microscopy is becoming an invaluable tool to investigate structure and dynamics driving protein interactions at interfaces. In this review, we highlight the applications of super-resolution microscopy for quantifying the physics and chemistry that occur between target proteins and stationary-phase supports during chromatographic separations. Our discussion concentrates on the newfound ability of super-resolved single-protein spectroscopy to inform theoretical parameters via quantification of adsorption-desorption dynamics, protein unfolding, and nanoconfined transport.
Application of Diamond Nanoparticles in Low-Energy Neutron Physics
Nesvizhevsky, Valery; Cubitt, Robert; Lychagin, Egor; Muzychka, Alexei; Nekhaev, Grigory; Pignol, Guillaume; Protasov, Konstantin; Strelkov, Alexander
2010-01-01
Diamond, with its exceptionally high optical nuclear potential and low absorption cross-section, is a unique material for a series of applications in VCN (very cold neutron) physics and techniques. In particular, a powder of diamond nanoparticles provides the best reflector for neutrons over the complete VCN energy range. It also allowed the first observation of quasi-specular reflection of cold neutrons (CN) from a disordered medium. The effective critical velocity for such quasi-specular reflection is higher than that of the best super-mirror. Nano-diamonds survive in high radiation fluxes; therefore they could be used, under certain conditions, in the vicinity of intense neutron sources.
NASA Astrophysics Data System (ADS)
Nikitin, S. Yu.; Priezzhev, A. V.; Lugovtsov, A. E.; Ustinov, V. D.; Razgulin, A. V.
2014-10-01
The paper is devoted to the development of the laser ektacytometry technique for evaluating the statistical characteristics of inhomogeneous ensembles of red blood cells (RBCs). We have theoretically analyzed laser beam scattering by inhomogeneous ensembles of elliptical discs modeling red blood cells in the ektacytometer. The analysis shows that the laser ektacytometry technique allows quantitative evaluation of such population characteristics of RBCs as the mean cell shape, the variance of cell deformability, and the asymmetry of the cell distribution over deformability. Moreover, we show that the deformability distribution itself can be retrieved by solving a specific Fredholm integral equation of the first kind. At this stage we do not take into account the scatter in RBC sizes.
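Retrieving a distribution from a first-kind Fredholm equation is ill-posed, so some regularization is needed; the sketch below shows a generic Tikhonov-regularized least-squares inversion of the discretized equation. The scattering kernel is left as a placeholder, since its specific form is not reproduced here, and the non-negativity clipping is an illustrative choice.

```python
import numpy as np

def tikhonov_invert(A, g, reg=1e-2):
    """Regularised solution of the discretised first-kind Fredholm equation A @ p = g.

    A   : (n_s, n_d) kernel matrix, A[i, j] = K(s_i, d_j) * delta_d (problem-specific)
    g   : (n_s,) measured scattering signal
    reg : Tikhonov regularisation parameter
    """
    n = A.shape[1]
    # Solve min ||A p - g||^2 + reg^2 ||p||^2 via an augmented least-squares system.
    A_aug = np.vstack([A, reg * np.eye(n)])
    g_aug = np.concatenate([g, np.zeros(n)])
    p, *_ = np.linalg.lstsq(A_aug, g_aug, rcond=None)
    return np.clip(p, 0.0, None)       # a deformability distribution should be non-negative
```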
Human health impacts avoided under the Paris Agreement on climate change
NASA Astrophysics Data System (ADS)
Mitchell, Dann
2017-04-01
This analysis makes use of the experiments and model data from the Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI; www.happimip.org) project (Mitchell et al, 2016a). HAPPI is unique in that it is specifically designed to address the Paris Agreement priorities on climate impacts, by using equilibrated climates and super-ensembles, thereby enabling robust analysis of extremes. Here we first look at extreme hot and cold spells, and then make use of the most recent heat-mortality models and heat stress metrics to examine differences between 1.5 °C and 2 °C worlds compared to normal conditions.
Merdasa, Aboma; Tian, Yuxi; Camacho, Rafael; Dobrovolsky, Alexander; Debroye, Elke; Unger, Eva L; Hofkens, Johan; Sundström, Villy; Scheblykin, Ivan G
2017-06-27
Organo-metal halide perovskites are some of the most promising materials for the new generation of low-cost photovoltaic and light-emitting devices. Their solution processability is a beneficial trait, although it leads to a spatial inhomogeneity of perovskite films with a variation of the trap state density at the nanoscale. Comprehending their properties using traditional spectroscopy therefore becomes difficult, calling for a combination with microscopy in order to see beyond the ensemble-averaged response. We studied photoluminescence (PL) blinking of micrometer-sized individual methylammonium lead iodide (MAPbI3) perovskite polycrystals, as well as monocrystalline microrods up to 10 μm long. We correlated their PL dynamics with structure, employing scanning electron and optical super-resolution microscopy. Combining super-resolution localization imaging and super-resolution optical fluctuation imaging (SOFI), we could detect and quantify preferential emitting regions in polycrystals exhibiting different types of blinking. We propose that blinking in MAPbI3 occurs by the activation/passivation of a "supertrap", which is presumably a donor-acceptor pair able to trap both electrons and holes. As such, nonradiative recombination via supertraps, despite being present at rather low concentrations (10^12-10^15 cm^-3), is much more efficient than via all other defect states present in the material at higher concentrations (10^16-10^18 cm^-3). We speculate that activation/deactivation of a supertrap occurs by its temporary dissociation into free donor and acceptor impurities. We found that supertraps are most efficient in structurally homogeneous and large MAPbI3 crystals where carrier diffusion is efficient, which may therefore pose limitations on the efficiency of perovskite-based devices.
Institute for Sustained Performance, Energy, and Resilience (SuPER)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jagode, Heike; Bosilca, George; Danalis, Anthony
The University of Tennessee (UTK) and University of Texas at El Paso (UTEP) partnership supported the three main thrusts of the SUPER project---performance, energy, and resilience. The UTK-UTEP effort thus helped advance the main goal of SUPER, which was to ensure that DOE's computational scientists can successfully exploit the emerging generation of high performance computing (HPC) systems. This goal is being met by providing application scientists with strategies and tools to productively maximize performance, conserve energy, and attain resilience. The primary vehicle through which UTK provided performance measurement support to SUPER and the larger HPC community is the Performance Application Programming Interface (PAPI). PAPI is an ongoing project that provides a consistent interface and methodology for collecting hardware performance information from various hardware and software components, including most major CPUs, GPUs and accelerators, interconnects, I/O systems, and power interfaces, as well as virtual cloud environments. The PAPI software is widely used for performance modeling of scientific and engineering applications---for example, the HOMME (High Order Methods Modeling Environment) climate code, and the GAMESS and NWChem computational chemistry codes---on DOE supercomputers. PAPI is widely deployed as middleware for use by higher-level profiling, tracing, and sampling tools (e.g., CrayPat, HPCToolkit, Scalasca, Score-P, TAU, Vampir, PerfExpert), making it the de facto standard for hardware counter analysis. PAPI has established itself as fundamental software infrastructure in every application domain (spanning academia, government, and industry), where improving performance can be mission critical. Ultimately, as more application scientists migrate their applications to HPC platforms, they will benefit from the extended capabilities this grant brought to PAPI to analyze and optimize performance in these environments, whether they use PAPI directly, or via third-party performance tools. Capabilities added to PAPI through this grant include support for new architectures such as the latest GPU and Xeon Phi accelerators, and advanced power measurement and management features. Another important topic for the UTK team was providing support for a rich ecosystem of different fault management strategies in the context of parallel computing. Our long term efforts have been oriented toward proposing flexible strategies and providing building blocks that application developers can use to build the most efficient fault management technique for their application. These efforts span the entire software spectrum, from theoretical models of existing strategies to easily assess their performance, to algorithmic modifications to take advantage of specific mathematical properties for data redundancy, and to extensions to widely used programming paradigms to empower application developers to deal with all types of faults. We have also continued our tight collaborations with users to help them adopt these technologies and to ensure their applications always deliver meaningful scientific data. Large supercomputer systems are becoming more and more power and energy constrained, and future systems and applications running on them will need to be optimized to run under power caps and/or minimize energy consumption. The UTEP team contributed to the SUPER energy thrust by developing power modeling methodologies and investigating power management strategies.
Scalability modeling results showed that some applications can scale better with respect to an increasing power budget than with respect to only the number of processors. Power management, in particular shifting power to processors on the critical path of an application execution, can reduce perturbation due to system noise and other sources of runtime variability, which are growing problems on large-scale power-constrained computer systems.
Fabrication of super-hydrophobic duo-structures
NASA Astrophysics Data System (ADS)
Zhang, X. Y.; Zhang, F.; Jiang, Y. J.; Wang, Y. Y.; Shi, Z. W.; Peng, C. S.
2015-04-01
Recently, super-hydrophobicity has attracted increasing attention due to its huge potential in practical applications. In this paper, we present a duo-structure combining a micro-dot-matrix with nano-candle-soot. Polydimethylsiloxane (PDMS) was used as a combination layer between the dot-matrix and the soot particles. Firstly, a dot-matrix with a period of 9 μm was easily fabricated on K9 glass using a simple and mature photolithography process. Secondly, the dot-matrix surface was coated with a thin film of PDMS (elastomer:hardener = 10:1) diluted with methylbenzene at a volume ratio of 1:8. Thirdly, we held the PDMS-modified surface over a candle flame to deposit a soot layer, followed by a gentle water rinse to remove the non-adhered particles. Finally, the samples were baked at 85°C for 2 hours, yielding a duo-structure surface with both a micro-scale dot-matrix and nano-scale soot particles. SEM indicated that this surface morphology closely resembles the well-known micro-nano binary structure of a lotus leaf. Contact angle measurements demonstrated that the surface exhibits excellent super-hydrophobicity, with a water contact angle of 153° and a sliding angle of 3°. Moreover, as outlined above, the fabrication process for our structure is simpler, smarter and lower-cost than other production techniques for super-hydrophobic surfaces such as phase separation, electrochemical deposition and chemical vapor deposition. Hence, the super-hydrophobic duo-structure reported in this letter is a promising candidate for wide and rapid commercialization in the future.
Evaluation of fluorophores for optimal performance in localization-based super-resolution imaging
Dempsey, Graham T.; Vaughan, Joshua C.; Chen, Kok Hao; Bates, Mark; Zhuang, Xiaowei
2011-01-01
One approach to super-resolution fluorescence imaging uses sequential activation and localization of individual fluorophores to achieve high spatial resolution. Essential to this technique is the choice of fluorescent probes — the properties of the probes, including photons per switching event, on/off duty cycle, photostability, and number of switching cycles, largely dictate the quality of super-resolution images. While many probes have been reported, a systematic characterization of the properties of these probes and their impact on super-resolution image quality has been described in only a few cases. Here, we quantitatively characterized the switching properties of 26 organic dyes and directly related these properties to the quality of super-resolution images. This analysis provides a set of guidelines for characterization of super-resolution probes and a resource for selecting probes based on performance. Our evaluation identified several photoswitchable dyes with good to excellent performance in four independent spectral ranges, with which we demonstrated low crosstalk, four-color super-resolution imaging. PMID:22056676
NASA Astrophysics Data System (ADS)
Bellier, Joseph; Bontron, Guillaume; Zin, Isabella
2017-12-01
Meteorological ensemble forecasts are nowadays widely used as input to hydrological models for probabilistic streamflow forecasting. These forcings are frequently biased and have to be statistically postprocessed, most of the time using univariate techniques that apply independently to individual locations, lead times and weather variables. Postprocessed ensemble forecasts therefore need to be reordered so as to reconstruct suitable multivariate dependence structures. The Schaake shuffle and ensemble copula coupling are the two most popular methods for this purpose. This paper proposes two adaptations of them that make use of meteorological analogues for reconstructing spatiotemporal dependence structures of precipitation forecasts. The performances of the original and adapted techniques are compared through a multistep verification experiment using real forecasts from the European Centre for Medium-Range Weather Forecasts. This experiment evaluates not only multivariate precipitation forecasts but also the corresponding streamflow forecasts that derive from hydrological modeling. Results show that the relative performances of the different reordering methods vary depending on the verification step. In particular, the standard Schaake shuffle is found to perform poorly when evaluated on streamflow. This emphasizes the crucial role of the precipitation spatiotemporal dependence structure in hydrological ensemble forecasting.
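For reference, the core of the standard Schaake shuffle can be written in a few lines: independently postprocessed ensemble values are re-ranked so that their rank structure matches that of a historical template, as in the sketch below (a generic illustration, not the adapted analogue-based variants proposed in the paper).

```python
import numpy as np

def schaake_shuffle(forecast, template):
    """Reorder a postprocessed ensemble so that, at each location/lead time/variable,
    its rank structure matches that of an observed historical template.

    forecast : (n_members, n_vars) independently postprocessed ensemble values
    template : (n_members, n_vars) historical observations (same shape)
    """
    shuffled = np.empty_like(forecast)
    for j in range(forecast.shape[1]):
        ranks = np.argsort(np.argsort(template[:, j]))     # rank of each template member
        shuffled[:, j] = np.sort(forecast[:, j])[ranks]    # assign sorted forecasts by rank
    return shuffled
```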
Akbar, Shahid; Hayat, Maqsood; Iqbal, Muhammad; Jan, Mian Ahmad
2017-06-01
Cancer is a fatal disease, responsible for one-quarter of all deaths in developed countries. Traditional anticancer therapies such as chemotherapy and radiation are highly expensive, error-prone and often ineffective, and these conventional techniques induce severe side-effects on human cells. Given the perilous impact of cancer, the development of an accurate and highly efficient intelligent computational model for the identification of anticancer peptides is desirable. In this paper, an evolutionary intelligent genetic algorithm-based ensemble model, 'iACP-GAEnsC', is proposed for the identification of anticancer peptides. In this model, the protein sequences are formulated using three different discrete feature representation methods, i.e., amphiphilic pseudo amino acid composition, g-Gap dipeptide composition, and reduced amino acid alphabet composition. The performance of the extracted feature spaces is investigated separately and then merged to exhibit the significance of hybridization. In addition, the predicted results of the individual classifiers are combined using an optimized genetic algorithm and a simple majority-voting technique in order to enhance the true classification rate. It is observed that genetic algorithm-based ensemble classification outperforms the individual classifiers as well as the simple majority-voting ensemble. The performance of genetic algorithm-based ensemble classification is highest on the hybrid feature space, with an accuracy of 96.45%. In comparison to existing techniques, the 'iACP-GAEnsC' model achieves remarkable improvement in terms of various performance metrics. Based on the simulation results, the 'iACP-GAEnsC' model might become a leading tool for researchers in the fields of drug design and proteomics. Copyright © 2017 Elsevier B.V. All rights reserved.
Barlag, Britta; Beutel, Oliver; Janning, Dennis; Czarniak, Frederik; Richter, Christian P.; Kommnick, Carina; Göser, Vera; Kurre, Rainer; Fabiani, Florian; Erhardt, Marc; Piehler, Jacob; Hensel, Michael
2016-01-01
The investigation of the subcellular localization, dynamics and interaction of proteins and protein complexes in prokaryotes is complicated by the small size of the cells. Super-resolution microscopy (SRM) comprises various new techniques that allow light microscopy with a resolution that can be up to ten-fold higher than conventional light microscopy. Application of SRM techniques to living prokaryotes demands the introduction of suitable fluorescent probes, usually by fusion of proteins of interest to fluorescent proteins with properties compatible with SRM. Here we describe an approach that is based on the genetically encoded self-labelling enzymes HaloTag and SNAP-tag. Proteins of interest are fused to HaloTag or SNAP-tag, and cell-permeable substrates can be labelled with various SRM-compatible fluorochromes. Fusions of the enzyme tags to subunits of a type I secretion system (T1SS), a T3SS, the flagellar rotor and a transcription factor were generated and analysed in living Salmonella enterica. The new approach is versatile in tagging proteins of interest in bacterial cells and allows determination of the number, relative subcellular localization and dynamics of protein complexes in living cells. PMID:27534893
Mode locking of electron spin coherences in singly charged quantum dots.
Greilich, A; Yakovlev, D R; Shabaev, A; Efros, Al L; Yugova, I A; Oulton, R; Stavarache, V; Reuter, D; Wieck, A; Bayer, M
2006-07-21
The fast dephasing of electron spins in an ensemble of quantum dots is detrimental for applications in quantum information processing. We show here that dephasing can be overcome by using a periodic train of light pulses to synchronize the phases of the precessing spins, and we demonstrate this effect in an ensemble of singly charged (In,Ga)As/GaAs quantum dots. This mode locking leads to constructive interference of contributions to Faraday rotation and presents potential applications based on robust quantum coherence within an ensemble of dots.
DNA-based construction at the nanoscale: emerging trends and applications
NASA Astrophysics Data System (ADS)
Lourdu Xavier, P.; Chandrasekaran, Arun Richard
2018-02-01
The field of structural DNA nanotechnology has evolved remarkably—from the creation of artificial immobile junctions to the recent DNA-protein hybrid nanoscale shapes—in a span of about 35 years. It is now possible to create complex DNA-based nanoscale shapes and large hierarchical assemblies with greater stability and predictability, thanks to the development of computational tools and advances in experimental techniques. Although it started with the original goal of DNA-assisted structure determination of difficult-to-crystallize molecules, DNA nanotechnology has found its applications in a myriad of fields. In this review, we cover some of the basic and emerging assembly principles: hybridization, base stacking/shape complementarity, and protein-mediated formation of nanoscale structures. We also review various applications of DNA nanostructures, with special emphasis on some of the biophysical applications that have been reported in recent years. In the outlook, we discuss further improvements in the assembly of such structures, and explore possible future applications involving super-resolved fluorescence, single-particle cryo-electron (cryo-EM) and x-ray free electron laser (XFEL) nanoscopic imaging techniques, and in creating new synergistic designer materials.
Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series
Vicente, Raul; Díaz-Pernas, Francisco J.; Wibral, Michael
2014-01-01
Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption however, is a major obstacle to the application of these estimators in neuroscience as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems. PMID:25068489
Enhancing multi-spot structured illumination microscopy with fluorescence difference
Torkelsen, Frida H.
2018-01-01
Structured illumination microscopy is a super-resolution technique used extensively in biological research. However, this technique is limited in the maximum possible resolution increase. Here we report the results of simulations of a novel enhanced multi-spot structured illumination technique. This method combines the super-resolution technique of difference microscopy with structured illumination deconvolution. Initial results give at minimum a 1.4-fold increase in resolution over conventional structured illumination in a low-noise environment. This new technique also has the potential to be expanded to further enhance axial resolution with three-dimensional difference microscopy. The requirement for precise pattern determination in this technique also led to the development of a new pattern estimation algorithm which proved more efficient and reliable than other methods tested. PMID:29657751
Optimal averaging of soil moisture predictions from ensemble land surface model simulations
USDA-ARS's Scientific Manuscript database
The correct interpretation of ensemble information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance. Here we propose a new technique for obtaining such information using an instrumental variabl...
Crossover ensembles of random matrices and skew-orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Santosh, E-mail: skumar.physics@gmail.com; Pandey, Akhilesh, E-mail: ap0700@mail.jnu.ac.in
2011-08-15
Highlights: > We study crossover ensembles of Jacobi family of random matrices. > We consider correlations for orthogonal-unitary and symplectic-unitary crossovers. > We use the method of skew-orthogonal polynomials and quaternion determinants. > We prove universality of spectral correlations in crossover ensembles. > We discuss applications to quantum conductance and communication theory problems. - Abstract: In a recent paper (S. Kumar, A. Pandey, Phys. Rev. E, 79, 2009, p. 026211) we considered Jacobi family (including Laguerre and Gaussian cases) of random matrix ensembles and reported exact solutions of crossover problems involving time-reversal symmetry breaking. In the present paper we give details of the work. We start with Dyson's Brownian motion description of random matrix ensembles and obtain universal hierarchic relations among the unfolded correlation functions. For arbitrary dimensions we derive the joint probability density (jpd) of eigenvalues for all transitions leading to unitary ensembles as equilibrium ensembles. We focus on the orthogonal-unitary and symplectic-unitary crossovers and give generic expressions for jpd of eigenvalues, two-point kernels and n-level correlation functions. This involves generalization of the theory of skew-orthogonal polynomials to crossover ensembles. We also consider crossovers in the circular ensembles to show the generality of our method. In the large dimensionality limit, correlations in spectra with arbitrary initial density are shown to be universal when expressed in terms of a rescaled symmetry breaking parameter. Applications of our crossover results to communication theory and quantum conductance problems are also briefly discussed.
Evaluation of super-water reducers for highway applications
NASA Astrophysics Data System (ADS)
Whiting, D.
1981-03-01
Super-water reducers were characterized and evaluated as potential candidates for production of low water to cement ratio, high strength concretes for highway construction applications. Admixtures were composed of either naphthalene or melamine sulfonated formaldehyde condensates. A mini-slump procedure was used to assess dosage requirements and behavior of workability with time of cement pastes. Required dosage was found to be a function of tricalcium aluminate content, alkali content, and fineness of the cement. Concretes exhibited high rates of slump loss when super-water reducers were used. The most promising area of application of these products appears to be in production of dense, high cement content concrete using mobile concrete mixer/transporters.
An, Yi; Wang, Jiawei; Li, Chen; Leier, André; Marquez-Lago, Tatiana; Wilksch, Jonathan; Zhang, Yang; Webb, Geoffrey I; Song, Jiangning; Lithgow, Trevor
2018-01-01
Bacterial effector proteins secreted by various protein secretion systems play crucial roles in host-pathogen interactions. In this context, computational tools capable of accurately predicting effector proteins of the various types of bacterial secretion systems are highly desirable. Existing computational approaches use different machine learning (ML) techniques and heterogeneous features derived from protein sequences and/or structural information. These predictors differ not only in the ML methods used but also in the curated data sets, the feature selection, and their prediction performance. Here, we provide a comprehensive survey and benchmarking of currently available tools for the prediction of effector proteins of bacterial types III, IV and VI secretion systems (T3SS, T4SS and T6SS, respectively). We review core algorithms, feature selection techniques, tool availability and applicability, and evaluate the prediction performance based on carefully curated independent test data sets. In an effort to improve predictive performance, we constructed three ensemble models based on ML algorithms by integrating the output of all individual predictors reviewed. Our benchmarks demonstrate that these ensemble models outperform all the reviewed tools for the prediction of effector proteins of T3SS and T4SS. The webserver of the proposed ensemble methods for T3SS and T4SS effector protein prediction is freely available at http://tbooster.erc.monash.edu/index.jsp. We anticipate that this survey will serve as a useful guide for interested users and that the new ensemble predictors will stimulate research into host-pathogen relationships and inspire the development of new bioinformatics tools for predicting effector proteins of T3SS, T4SS and T6SS.
NASA Astrophysics Data System (ADS)
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2015-04-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural uncertainty among the reference ET models is far more important than parametric uncertainty introduced by the crop coefficients. These crop coefficients are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit of 400 mm due to water rights, would be less frequently exceeded for the REA ensemble average (45%) than for the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
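The abstract names the reliability ensemble averaging (REA) technique without detail. The following is a minimal sketch in the spirit of REA: weights that combine a bias criterion (distance to an observed reference) and a convergence criterion (distance to the weighted consensus). The exponents, numbers and variable names are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def rea_weights(model_values, obs_value, eps=1e-6, m=1.0, n=1.0, n_iter=50):
    """Reliability-ensemble-averaging-style weights (sketch).
    model_values: one estimate per model (e.g. irrigation requirement, mm).
    obs_value: observed/reference value used for the bias criterion."""
    bias_term = 1.0 / np.maximum(np.abs(model_values - obs_value), eps)
    weights = bias_term.copy()
    for _ in range(n_iter):
        consensus = np.sum(weights * model_values) / np.sum(weights)
        conv_term = 1.0 / np.maximum(np.abs(model_values - consensus), eps)
        weights = (bias_term ** m) * (conv_term ** n)   # combine both criteria
    return weights / np.sum(weights)

# Hypothetical per-model irrigation requirement estimates (mm) and a reference value.
models = np.array([380.0, 420.0, 405.0, 520.0, 395.0])
w = rea_weights(models, obs_value=400.0)
print("weights:", w, "REA average:", np.sum(w * models))
```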
Zerbino, Daniel R.; Johnson, Nathan; Juetteman, Thomas; Sheppard, Dan; Wilder, Steven P.; Lavidas, Ilias; Nuhn, Michael; Perry, Emily; Raffaillac-Desfosses, Quentin; Sobral, Daniel; Keefe, Damian; Gräf, Stefan; Ahmed, Ikhlak; Kinsella, Rhoda; Pritchard, Bethan; Brent, Simon; Amode, Ridwan; Parker, Anne; Trevanion, Steven; Birney, Ewan; Dunham, Ian; Flicek, Paul
2016-01-01
New experimental techniques in epigenomics allow researchers to assay a diversity of highly dynamic features such as histone marks, DNA modifications or chromatin structure. The study of their fluctuations should provide insights into gene expression regulation, cell differentiation and disease. The Ensembl project collects and maintains the Ensembl regulation data resources on epigenetic marks, transcription factor binding and DNA methylation for human and mouse, as well as microarray probe mappings and annotations for a variety of chordate genomes. From this data, we produce a functional annotation of the regulatory elements along the human and mouse genomes with plans to expand to other species as data becomes available. Starting from well-studied cell lines, we will progressively expand our library of measurements to a greater variety of samples. Ensembl’s regulation resources provide a central and easy-to-query repository for reference epigenomes. As with all Ensembl data, it is freely available at http://www.ensembl.org, from the Perl and REST APIs and from the public Ensembl MySQL database server at ensembldb.ensembl.org. Database URL: http://www.ensembl.org PMID:26888907
Super-resolution study of polymer mobility fluctuations near c*.
King, John T; Yu, Changqian; Wilson, William L; Granick, Steve
2014-09-23
Nanoscale dynamic heterogeneities in synthetic polymer solutions are detected using super-resolution optical microscopy. To this end, we map concentration fluctuations in polystyrene-toluene solutions with spatial resolution below the diffraction limit, focusing on critical fluctuations near the polymer overlap concentration, c*. Two-photon super-resolution microscopy was adapted to be applicable in an organic solvent, and a home-built STED-FCS system with stimulated emission depletion (STED) was used to perform fluorescence correlation spectroscopy (FCS). The polystyrene serving as the tracer probe (670 kg mol(-1), radius of gyration RG ≈ 35 nm, end-labeled with a bodipy derivative chromophore) was dissolved in toluene at room temperature (good solvent) and mixed with matrix polystyrene (3,840 kg mol(-1), RG ≈ 97 nm, Mw/Mn = 1.04) whose concentration was varied from dilute to more than 10c*. Whereas for dilute solutions the intensity-intensity correlation function follows a single diffusion process, it splits starting at c* to imply an additional relaxation process provided that the experimental focal area does not greatly exceed the polymer blob size. We identify the slower mode as self-diffusion and the increasingly rapid mode as correlated segment fluctuations that reflect the cooperative diffusion coefficient, Dcoop. These real-space measurements find quantitative agreement between correlation lengths inferred from dynamic measurements and those from determining the limit below which diffusion coefficients are independent of spot size. This study is considered to illustrate the potential of importing into polymer science the techniques of super-resolution imaging.
A study of fuzzy logic ensemble system performance on face recognition problem
NASA Astrophysics Data System (ADS)
Polyakova, A.; Lipinskiy, L.
2017-02-01
Some problems are difficult to solve using a single intelligent information technology (IIT). An ensemble of various data mining (DM) techniques is a set of models, each of which can solve the problem on its own, but whose combination increases the efficiency of the system as a whole. Using IIT ensembles can improve the reliability and efficiency of the final decision, since the approach exploits the diversity of its components. A new method for designing ensembles of intelligent information technologies is considered in this paper. It is based on fuzzy logic and is designed to solve classification and regression problems. The ensemble consists of several data mining algorithms: an artificial neural network, a support vector machine and decision trees. These algorithms and their ensemble have been tested on face recognition problems. Principal component analysis (PCA) is used for feature selection.
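As a rough illustration of the ensemble layout described (PCA feature reduction feeding a neural network, an SVM and a decision tree), the sketch below uses scikit-learn with soft voting as a simple stand-in for the paper's fuzzy-logic combiner, and synthetic data as a stand-in for a face database.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for face feature vectors (no face database is shipped here).
X, y = make_classification(n_samples=600, n_features=200, n_informative=30,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = make_pipeline(
    PCA(n_components=40),                      # feature reduction step
    VotingClassifier(
        estimators=[("mlp", MLPClassifier(max_iter=1000, random_state=0)),
                    ("svm", SVC(probability=True, random_state=0)),
                    ("tree", DecisionTreeClassifier(random_state=0))],
        voting="soft"),                        # average class probabilities
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```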
Entanglement distillation for quantum communication network with atomic-ensemble memories.
Li, Tao; Yang, Guo-Jian; Deng, Fu-Guo
2014-10-06
Atomic ensembles are effective memory nodes for quantum communication network due to the long coherence time and the collective enhancement effect for the nonlinear interaction between an ensemble and a photon. Here we investigate the possibility of achieving the entanglement distillation for nonlocal atomic ensembles by the input-output process of a single photon as a result of cavity quantum electrodynamics. We give an optimal entanglement concentration protocol (ECP) for two-atomic-ensemble systems in a partially entangled pure state with known parameters and an efficient ECP for the systems in an unknown partially entangled pure state with a nondestructive parity-check detector (PCD). For the systems in a mixed entangled state, we introduce an entanglement purification protocol with PCDs. These entanglement distillation protocols have high fidelity and efficiency with current experimental techniques, and they are useful for quantum communication network with atomic-ensemble memories.
Multimodel Ensemble Methods for Prediction of Wake-Vortex Transport and Decay Originating NASA
NASA Technical Reports Server (NTRS)
Korner, Stephan; Ahmad, Nashat N.; Holzapfel, Frank; VanValkenburg, Randal L.
2017-01-01
Several multimodel ensemble methods are selected and further developed to improve the deterministic and probabilistic prediction skills of individual wake-vortex transport and decay models. The different multimodel ensemble methods are introduced, and their suitability for wake applications is demonstrated. The selected methods include direct ensemble averaging, Bayesian model averaging, and Monte Carlo simulation. The different methodologies are evaluated employing data from wake-vortex field measurement campaigns conducted in the United States and Germany.
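For readers unfamiliar with the ensemble methods listed, the sketch below illustrates direct ensemble averaging and a simple Bayesian-model-averaging-style weighting (weights proportional to each model's Gaussian likelihood on training cases). The wake-vortex numbers are invented for illustration and do not come from the measurement campaigns mentioned.

```python
import numpy as np

def bma_weights(predictions, observations, sigma=1.0):
    """Bayesian-model-averaging-style weights (sketch): each model's weight is
    proportional to its Gaussian likelihood over a set of training cases.
    predictions: shape (n_models, n_cases); observations: shape (n_cases,)."""
    log_lik = -0.5 * np.sum((predictions - observations) ** 2, axis=1) / sigma ** 2
    log_lik -= log_lik.max()                   # numerical stability
    w = np.exp(log_lik)
    return w / w.sum()

# Hypothetical normalized circulation predictions from three wake-vortex models.
preds = np.array([[0.9, 0.7, 0.5],
                  [1.0, 0.8, 0.6],
                  [0.7, 0.5, 0.3]])
obs = np.array([0.95, 0.75, 0.55])

direct_average = preds.mean(axis=0)              # direct ensemble averaging
w = bma_weights(preds, obs)
bma_average = (w[:, None] * preds).sum(axis=0)   # weighted (BMA-style) average
print(direct_average, w, bma_average)
```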
The Super-TIGER Instrument to Probe Galactic Cosmic Ray Origins
NASA Technical Reports Server (NTRS)
Mitchell, John W.; Binns, W. R.; Bose, R. G.; Braun, D. L.; Christian, E. R.; Daniels, W. M.; DeNolfo, G. A.; Dowkontt, P. F.; Hahne, D. J.; Hams, T.;
2011-01-01
Super-TIGER (Super Trans-Iron Galactic Element Recorder) is under construction for the first of two planned Antarctic long-duration balloon flights in December 2012. This new instrument will measure the abundances of ultra-heavy elements (30Zn and heavier), with individual element resolution, to provide sensitive tests of the emerging model of cosmic-ray origins in OB associations and models of the mechanism for selection of nuclei for acceleration. Super-TIGER builds on the techniques of TIGER, which produced the first well-resolved measurements of elemental abundances of the elements 31Ga, 32Ge, and 34Se. Plastic scintillators together with acrylic and silica-aerogel Cherenkov detectors measure particle charge. Scintillating-fiber hodoscopes track particle trajectories. Super-TIGER has an active area of 5.4 sq m, divided into two independent modules. With reduced material thickness to decrease interactions, its effective geometry factor is approx. 6.4 times larger than TIGER, allowing it to measure elements up to 42Mo with high statistical precision, and make exploratory measurements up to 56Ba. Super-TIGER will also accurately determine the energy spectra of the more abundant elements from 10Ne to 28Ni between 0.8 and 10 GeV/nucleon to test the hypothesis that microquasars or other sources could superpose spectral features. We will discuss the implications of Super-TIGER measurements for the study of cosmic-ray origins and will present the measurement technique, design, status, and expected performance, including numbers of events and resolution. Details of the hodoscopes, scintillators, and Cherenkov detectors will be given in other presentations at this conference.
AUC-based biomarker ensemble with an application on gene scores predicting low bone mineral density.
Zhao, X G; Dai, W; Li, Y; Tian, L
2011-11-01
The area under the receiver operating characteristic (ROC) curve (AUC), long regarded as a 'golden' measure for the predictiveness of a continuous score, has propelled the need to develop AUC-based predictors. However, AUC-based ensemble methods are rather scant, largely due to the fact that the associated objective function is neither continuous nor concave. Indeed, there is no reliable numerical algorithm for identifying the optimal combination of a set of biomarkers to maximize the AUC, especially when the number of biomarkers is large. We propose a novel AUC-based statistical ensemble method for combining multiple biomarkers to differentiate a binary response of interest. Specifically, we propose to replace the non-continuous and non-convex AUC objective function by a convex surrogate loss function, whose minimizer can be efficiently identified. With the established framework, the lasso and other regularization techniques enable feature selection. Extensive simulations have demonstrated the superiority of the new methods to the existing methods. The proposal has been applied to a gene expression dataset to construct gene expression scores to differentiate elderly women with low bone mineral density (BMD) and those with normal BMD. The AUCs of the resulting scores in the independent test dataset were satisfactory. By aiming directly at maximizing the AUC, the proposed AUC-based ensemble method provides an efficient means of generating a stable combination of multiple biomarkers, which is especially useful in high-dimensional settings. Supplementary data are available at Bioinformatics online.
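The key idea, replacing the non-concave AUC objective with a convex surrogate, can be sketched as follows: a pairwise logistic surrogate over (positive, negative) sample pairs, minimized by plain gradient descent. This is an illustrative simplification (no lasso penalty, synthetic data), not the authors' exact formulation.

```python
import numpy as np

def fit_auc_linear_score(X, y, n_iter=500, lr=0.1):
    """Linear biomarker combination fitted by minimising a convex pairwise
    logistic surrogate of 1 - AUC (sketch; plain gradient descent, no lasso)."""
    Xp, Xn = X[y == 1], X[y == 0]
    # All (positive, negative) pairwise feature differences.
    diffs = (Xp[:, None, :] - Xn[None, :, :]).reshape(-1, X.shape[1])
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        margins = diffs @ w
        # Gradient of mean log(1 + exp(-margin)) with respect to w.
        grad = -(diffs * (1.0 / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

def auc(scores, y):
    pos, neg = scores[y == 1], scores[y == 0]
    return np.mean(pos[:, None] > neg[None, :])

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))                    # toy "biomarkers"
y = (X @ np.array([1.0, -0.5, 0.0, 0.8, 0.0]) + 0.5 * rng.standard_normal(200) > 0).astype(int)
w = fit_auc_linear_score(X, y)
print("training AUC of combined score:", auc(X @ w, y))
```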
The Effect of Conductor Expressivity on Ensemble Performance Evaluation
ERIC Educational Resources Information Center
Morrison, Steven J.; Price, Harry E.; Geiger, Carla G.; Cornacchio, Rachel A.
2009-01-01
In this study, the authors examined whether a conductor's use of high-expressivity or low-expressivity techniques affected evaluations of ensemble performances that were identical across conducting conditions. Two conductors each conducted two 1-minute parallel excerpts from Percy Grainger's "Walking Tune." Each directed one excerpt…
Optimal averaging of soil moisture predictions from ensemble land surface model simulations
USDA-ARS?s Scientific Manuscript database
The correct interpretation of ensemble soil moisture information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance. Here we propose a new technique for obtaining such information using an inst...
Kruskal-Wallis-based computationally efficient feature selection for face recognition.
Ali Khan, Sajid; Hussain, Ayyaz; Basit, Abdul; Akram, Sheeraz
2014-01-01
Face recognition and its applications attain ever greater importance in today's technological world. Most of the existing work used frontal face images to classify faces. However, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Many of the features are redundant and do not contribute to representing the face. In order to eliminate those redundant features, a computationally efficient algorithm is used to select the more discriminative face features. Extracted features are then passed to the classification step. In the classification step, different classifiers are combined in an ensemble to enhance the recognition accuracy rate, as a single classifier is unable to achieve high accuracy. Experiments are performed on standard face database images and results are compared with existing techniques.
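A hedged sketch of the pipeline described, Kruskal-Wallis-based feature ranking followed by an ensemble of classifiers, is given below using SciPy and scikit-learn; the digits data set stands in for a face database and the classifier choices are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.stats import kruskal
from sklearn.datasets import load_digits
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in data set (digit images instead of a face database).
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Kruskal-Wallis H statistic per feature: how well the feature separates the classes.
h_stats = []
for j in range(X_tr.shape[1]):
    groups = [X_tr[y_tr == c, j] for c in np.unique(y_tr)]
    try:
        h, _ = kruskal(*groups)
    except ValueError:              # constant feature: no discriminative power
        h = 0.0
    h_stats.append(h)
selected = np.argsort(h_stats)[::-1][:20]    # keep the 20 most discriminative features

clf = VotingClassifier([("knn", KNeighborsClassifier()),
                        ("svm", SVC(random_state=0)),
                        ("tree", DecisionTreeClassifier(random_state=0))])
clf.fit(X_tr[:, selected], y_tr)
print("accuracy with selected features:", clf.score(X_te[:, selected], y_te))
```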
Multi-criterion model ensemble of CMIP5 surface air temperature over China
NASA Astrophysics Data System (ADS)
Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming
2018-05-01
Global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes, and therefore supporting the preparation of national climate adaptation plans. However, different GCMs are not always in agreement with each other over various regions. The reason is that GCMs' configurations, module characteristics, and dynamic forcings vary from one to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), correlation coefficient (CC, or R) and uncertainty are commonly used statistics for evaluating the performances of GCMs. However, satisfactory values of all these statistics simultaneously cannot be guaranteed by many model ensemble techniques. In this paper, we propose a multi-model ensemble framework, using a state-of-the-art evolutionary multi-objective optimization algorithm (termed MOSPD), to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for different model ensemble solutions. A case study of optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data cover the period 1900 to 2100, and the projections of SAT are analyzed with regard to three different statistical indices (i.e., RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to different statistics. The comparison results over the historical period (1900-2005) show that the optimized solutions are superior to those obtained by a simple model average, as well as to any single GCM output. The improvements in the statistics vary across climatic regions of China. Future projections (2006-2100) with the proposed ensemble method identify that the largest (smallest) temperature changes will happen in the South Central China (the Inner Mongolia), the North Eastern China (the South Central China), and the North Western China (the South Central China), under RCP 2.6, RCP 4.5, and RCP 8.5 scenarios, respectively.
NASA Astrophysics Data System (ADS)
Roberge, S.; Chokmani, K.; De Sève, D.
2012-04-01
The snow cover plays an important role in the hydrological cycle of Quebec (Eastern Canada). Consequently, evaluating its spatial extent interests the authorities responsible for the management of water resources, especially hydropower companies. The main objective of this study is the development of a snow-cover mapping strategy using remote sensing data and ensemble-based system techniques. Planned to be tested in a near real-time operational mode, this snow-cover mapping strategy has the advantage of providing the probability that a pixel is snow covered, together with its uncertainty. Ensemble systems are made of two key components. First, a method is needed to build an ensemble of classifiers that is as diverse as possible. Second, an approach is required to combine the outputs of individual classifiers that make up the ensemble in such a way that correct decisions are amplified, and incorrect ones are cancelled out. In this study, we demonstrate the potential of ensemble systems for snow-cover mapping using remote sensing data. The chosen classifier is a sequential thresholds algorithm using NOAA-AVHRR data adapted to conditions over Eastern Canada. Its special feature is the use of a combination of six sequential thresholds varying according to the day in the winter season. Two versions of the snow-cover mapping algorithm have been developed: one is specific for autumn (from October 1st to December 31st) and the other for spring (from March 16th to May 31st). In order to build the ensemble-based system, different versions of the algorithm are created by randomly varying its parameters. One hundred versions are included in the ensemble. The probability that a pixel is snow, no-snow or cloud covered corresponds to the fraction of classifiers that voted for that class. The overall performance of ensemble-based mapping is compared with the overall performance of the chosen classifier and with ground observations at meteorological stations.
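The ensemble construction described, many randomly perturbed versions of a threshold classifier whose votes become per-pixel class probabilities, can be sketched as follows. The two-threshold toy classifier and all numbers are invented stand-ins for the six sequential NOAA-AVHRR thresholds used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def snow_classifier(ndsi, tb, thresholds):
    """Toy stand-in for the sequential-thresholds algorithm: a pixel is labelled
    snow if its NDSI exceeds one threshold and its brightness temperature lies
    below another (the real algorithm chains six day-dependent thresholds)."""
    return (ndsi > thresholds[0]) & (tb < thresholds[1])

# One hundred ensemble members, each with randomly perturbed thresholds.
members = [(rng.normal(0.4, 0.05), rng.normal(265.0, 2.0)) for _ in range(100)]

# Toy scene: NDSI and brightness temperature (K) for three pixels.
ndsi = np.array([0.55, 0.42, 0.10])
tb = np.array([260.0, 266.0, 272.0])

votes = np.stack([snow_classifier(ndsi, tb, m) for m in members])
snow_probability = votes.mean(axis=0)   # fraction of members voting "snow"
print(snow_probability)
```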
Yanagisawa, T; Ariizumi, M; Shigematsu, Y; Kobayashi, H; Hasegawa, M; Watanabe, K
2010-01-01
This study examined the combined effects of storage temperature and carbon dioxide atmosphere on shell egg quality. The shell eggs were packed into polyethylene terephthalate/polyethylene (PET/PE) pouches and stored at 0 degrees C (super chilling), 10 degrees C, and 20 degrees C, respectively, for 90 d. The atmospheric carbon dioxide concentration was controlled to obtain the 3 concentration levels of high (about 2.0%), medium (about 0.5%), and low (below 0.01%). Changes in Haugh unit (HU) values, weakening of vitelline membranes, and generation of volatiles were analyzed to evaluate the freshness of shell eggs. Results showed that, compared with the other combinations, the technique of super chilling and high carbon dioxide concentration enabled shell eggs to be most effectively stored for 90 d, based on estimations of the statistical significances of differences in HU values, and on maintaining the initial HU values during storage. In addition, the storage of shell eggs using this combination technique was found to significantly prevent the weakening of the vitelline membrane, based on estimates of the number of eggs without vitelline membrane breakage when the eggs were broken, and to significantly lower the incidence of hexanal in the yolk, as shown by gas chromatographic-mass spectrometric analyses of volatiles. Thus, these results confirmed that the combination of super chilling and high carbon dioxide concentration was the most effective technique for preserving shell eggs during a long term of 90 d compared with other combination techniques.
Single-molecule imaging in live bacteria cells.
Ritchie, Ken; Lill, Yoriko; Sood, Chetan; Lee, Hochan; Zhang, Shunyuan
2013-02-05
Bacteria, such as Escherichia coli and Caulobacter crescentus, are the most studied and perhaps best-understood organisms in biology. The advances in understanding of living systems gained from these organisms are immense. Application of single-molecule techniques in bacteria have presented unique difficulties owing to their small size and highly curved form. The aim of this review is to show advances made in single-molecule imaging in bacteria over the past 10 years, and to look to the future where the combination of implementing such high-precision techniques in well-characterized and controllable model systems such as E. coli could lead to a greater understanding of fundamental biological questions inaccessible through classic ensemble methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chrzanowski, H. M.; Bernu, J.; Sparkes, B. M.
2011-11-15
The nonlinearity of a conditional photon-counting measurement can be used to ''de-Gaussify'' a Gaussian state of light. Here we present and experimentally demonstrate a technique for photon-number resolution using only homodyne detection. We then apply this technique to inform a conditional measurement, unambiguously reconstructing the statistics of the non-Gaussian one- and two-photon-subtracted squeezed vacuum states. Although our photon-number measurement relies on ensemble averages and cannot be used to prepare non-Gaussian states of light, its high efficiency, photon-number-resolving capabilities, and compatibility with the telecommunications band make it suitable for quantum-information tasks relying on the outcomes of mean values.
Aberrations and adaptive optics in super-resolution microscopy
Booth, Martin; Andrade, Débora; Burke, Daniel; Patton, Brian; Zurauskas, Mantas
2015-01-01
As one of the most powerful tools in the biological investigation of cellular structures and dynamic processes, fluorescence microscopy has undergone extraordinary developments in the past decades. The advent of super-resolution techniques has enabled fluorescence microscopy – or rather nanoscopy – to achieve nanoscale resolution in living specimens and unravelled the interior of cells with unprecedented detail. The methods employed in this expanding field of microscopy, however, are especially prone to the detrimental effects of optical aberrations. In this review, we discuss how super-resolution microscopy techniques based upon single-molecule switching, stimulated emission depletion and structured illumination each suffer from aberrations in different ways that are dependent upon intrinsic technical aspects. We discuss the use of adaptive optics as an effective means to overcome this problem. PMID:26124194
Tools and Techniques for Basin-Scale Climate Change Assessment
NASA Astrophysics Data System (ADS)
Zagona, E.; Rajagopalan, B.; Oakley, W.; Wilson, N.; Weinstein, P.; Verdin, A.; Jerla, C.; Prairie, J. R.
2012-12-01
The Department of Interior's WaterSMART Program seeks to secure and stretch water supplies to benefit future generations and identify adaptive measures to address climate change. Under WaterSMART, Basin Studies are comprehensive water studies to explore options for meeting projected imbalances in water supply and demand in specific basins. Such studies could be most beneficial with application of recent scientific advances in climate projections, stochastic simulation, operational modeling and robust decision-making, as well as computational techniques to organize and analyze many alternatives. A new integrated set of tools and techniques to facilitate these studies includes the following components: Future supply scenarios are produced by the Hydrology Simulator, which uses non-parametric K-nearest neighbor resampling techniques to generate ensembles of hydrologic traces based on historical data, optionally conditioned on long paleo reconstructed data using various Markov Chain techniques. Resampling can also be conditioned on climate change projections from e.g., downscaled GCM projections to capture increased variability; spatial and temporal disaggregation is also provided. The simulations produced are ensembles of hydrologic inputs to the RiverWare operations/infrastructure decision modeling software. Alternative demand scenarios can be produced with the Demand Input Tool (DIT), an Excel-based tool that allows modifying future demands by groups such as states; sectors, e.g., agriculture, municipal, energy; and hydrologic basins. The demands can be scaled at future dates or changes ramped over specified time periods. Resulting data is imported directly into the decision model. Different model files can represent infrastructure alternatives and different Policy Sets represent alternative operating policies, including options for noticing when conditions point to unacceptable vulnerabilities, which trigger dynamically executing changes in operations or other options. The over-arching Study Manager provides a graphical tool to create combinations of future supply scenarios, demand scenarios, infrastructure and operating policy alternatives; each scenario is executed as an ensemble of RiverWare runs, driven by the hydrologic supply. The Study Manager sets up and manages multiple executions on multi-core hardware. The sizeable outputs are typically direct model results or post-processed indicators of performance based on model outputs. Post-processing statistical analysis of the outputs is possible using the Graphical Policy Analysis Tool or other statistical packages. Several Basin Studies undertaken have used RiverWare to evaluate future scenarios. The Colorado River Basin Study, the most complex and extensive to date, has taken advantage of these tools and techniques to generate supply scenarios, produce alternative demand scenarios and to set up and execute the many combinations of supplies, demands, policies, and infrastructure alternatives. The tools and techniques will be described with example applications.
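As an illustration of the non-parametric K-nearest-neighbour resampling mentioned for the Hydrology Simulator, here is a minimal sketch of a KNN bootstrap that resamples historical successors of the k nearest analogues of the current value. The flow series, rank-based weighting and function names are illustrative assumptions, not the tool's actual implementation.

```python
import numpy as np

def knn_resample(history, n_steps, k=5, seed=0):
    """K-nearest-neighbour bootstrap of a flow series (sketch). The successor of
    one of the k nearest historical analogues of the current value is drawn,
    with weights proportional to 1/rank of the analogue."""
    rng = np.random.default_rng(seed)
    weights = 1.0 / np.arange(1, k + 1)
    weights /= weights.sum()
    trace = [history[rng.integers(len(history) - 1)]]
    for _ in range(n_steps - 1):
        dist = np.abs(history[:-1] - trace[-1])     # candidates that have a successor
        nearest = np.argsort(dist)[:k]              # indices of the k closest analogues
        pick = rng.choice(nearest, p=weights)
        trace.append(history[pick + 1])             # use the historical successor
    return np.array(trace)

history = np.array([12., 15., 9., 20., 14., 11., 18., 16., 10., 13.])
ensemble = np.stack([knn_resample(history, n_steps=8, seed=s) for s in range(5)])
print(ensemble)
```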
Gallium nitride light sources for optical coherence tomography
NASA Astrophysics Data System (ADS)
Goldberg, Graham R.; Ivanov, Pavlo; Ozaki, Nobuhiko; Childs, David T. D.; Groom, Kristian M.; Kennedy, Kenneth L.; Hogg, Richard A.
2017-02-01
The advent of optical coherence tomography (OCT) has permitted high-resolution, non-invasive, in vivo imaging of the eye, skin and other biological tissue. The axial resolution is limited by source bandwidth and central wavelength. With the growing demand for short wavelength imaging, super-continuum sources and non-linear fibre-based light sources have been demonstrated in tissue imaging applications exploiting the near-UV and visible spectrum. Whilst the potential of using gallium nitride devices has been identified, owing to the relative maturity of the laser technology, there have been limited reports of using such low-cost, robust devices in imaging systems. A GaN super-luminescent light emitting diode (SLED) was first reported in 2009, using tilted facets to suppress lasing, with the focus since on high power, low speckle and relatively low bandwidth applications. In this paper we discuss a method of producing a GaN based broadband source, including a passive absorber to suppress lasing. The merits of this passive absorber are then discussed with regard to broad-bandwidth applications, rather than power applications. For the first time in GaN devices, the performance of the developed light sources is assessed through the point spread function (PSF), which describes an imaging system's response to a point source, calculated from the emission spectra. We show that a sub-7 μm resolution is possible without the use of special epitaxial techniques, ultimately outlining the suitability of these short wavelength, broadband, GaN devices for use in OCT applications.
Neural system for heartbeats recognition using genetically integrated ensemble of classifiers.
Osowski, Stanislaw; Siwek, Krzysztof; Siroic, Robert
2011-03-01
This paper presents the application of a genetic algorithm for the integration of neural classifiers combined in an ensemble for the accurate recognition of heartbeat types on the basis of ECG registration. The idea presented in this paper is that using many classifiers arranged in the form of an ensemble leads to increased recognition accuracy. In such an ensemble the important problem is the integration of all classifiers into one effective classification system. This paper proposes the use of a genetic algorithm for this purpose. It was shown that application of the genetic algorithm is very efficient and allows the total error of heartbeat recognition to be reduced significantly. This was confirmed by the numerical experiments performed on the MIT BIH Arrhythmia Database.
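A simplified sketch of the idea, a genetic algorithm searching for classifier weights that maximize ensemble accuracy, is given below. The GA operators, toy class probabilities and beat classes are illustrative assumptions and do not reproduce the paper's configuration or the MIT BIH data.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_ensemble_weights(probas, labels, pop_size=40, n_gen=100, mut=0.1):
    """Genetic-algorithm search for classifier weights that maximise the accuracy
    of the weighted ensemble (sketch).
    probas: shape (n_classifiers, n_samples, n_classes)."""
    n_clf = probas.shape[0]

    def fitness(w):
        combined = np.tensordot(w, probas, axes=1)       # weighted sum of probabilities
        return np.mean(np.argmax(combined, axis=1) == labels)

    pop = rng.random((pop_size, n_clf))
    for _ in range(n_gen):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]        # selection
        cut = rng.integers(1, n_clf, size=pop_size // 2)
        children = np.array([np.concatenate([parents[i % len(parents)][:c],
                                             parents[(i + 1) % len(parents)][c:]])
                             for i, c in enumerate(cut)])              # crossover
        children += mut * rng.standard_normal(children.shape)          # mutation
        pop = np.vstack([parents, np.clip(children, 0, None)])
    best = pop[np.argmax([fitness(w) for w in pop])]
    return best / best.sum()

# Toy problem: three classifiers' class-probability outputs for 200 beats, 4 classes.
labels = rng.integers(0, 4, 200)
probas = np.stack([np.eye(4)[labels] * q + rng.random((200, 4)) * (1 - q)
                   for q in (0.7, 0.5, 0.3)])
print("classifier weights:", ga_ensemble_weights(probas, labels))
```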
Characterization of the dynamics of the atmosphere of Venus with Doppler velocimetry
NASA Astrophysics Data System (ADS)
Machado, Pedro Miguel Borges do Canto Mota
The study of Venus' atmosphere is currently growing as a theme of major interest in the astrophysics community. The most significant aspect of the general circulation of the atmosphere of Venus is its retrograde super-rotation. A complete characterization of this dynamical phenomenon is crucial for understanding its driving mechanisms. This work participates in the international effort to characterize the atmospheric dynamics of this planet in coordination with orbiter missions, in particular with Venus Express. The objectives of this study are to investigate the nature of the processes governing the super-rotation of the atmosphere of Venus using ground-based observations, thereby complementing measurements by orbiter instruments. This thesis analyzes observations of Venus made with two different instruments and Doppler velocimetry techniques. The data analysis technique allowed an unambiguous characterization of the zonal wind latitudinal profile and its temporal variability, as well as an investigation of large-scale planetary wave signatures and their role in the maintenance of the zonal super-rotation; the results suggest that detection and investigation of large-scale planetary waves can be carried out with this technique. These studies complement the independent observations of the European space mission Venus Express, in particular as regards the study of atmospheric super-rotation, meridional flow and its variability.
NASA Astrophysics Data System (ADS)
Guo, Wenkang; Yin, Haibo; Wang, Shuyin; He, Zhifeng
2017-04-01
Through studies of the setting times, cement mortar compressive strength and cement mortar compressive strength ratio, the influence of polycarboxylate-type super-plasticizers on the performance of alkali-free liquid accelerators in cement-based materials was investigated. The results showed that the compatibility of super-plasticizers and alkali-free liquid accelerators was excellent. However, the dosage of super-plasticizers had a certain impact on the performance of alkali-free liquid accelerators, as follows: 1) the setting time with alkali-free liquid accelerators was inversely proportional to the dosage of super-plasticizers; 2) the influence of super-plasticizer dosage on the cement mortar compressive strength with alkali-free liquid accelerators depended on the type of accelerator, with an optimum super-plasticizer dosage for the 28 d cement mortar compressive strength; 3) the later-age cement mortar compressive strengths with alkali-free liquid accelerators decreased with increasing super-plasticizer dosage. In practical applications of alkali-free liquid accelerators and super-plasticizers, the super-plasticizer dosage must be determined by dosage optimization test results.
KML Super Overlay to WMS Translator
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2007-01-01
This translator is a server-based application that automatically generates KML super overlay configuration files required by Google Earth for map data access via the Open Geospatial Consortium WMS (Web Map Service) standard. The translator uses a set of URL parameters that mirror the WMS parameters as much as possible, and it also can generate a super overlay subdivision of any given area that is only loaded when needed, enabling very large areas of coverage at very high resolutions. It can make almost any dataset available as a WMS service visible and usable in any KML application, without the need to reformat the data.
Nanoscale surface characterization using laser interference microscopy
NASA Astrophysics Data System (ADS)
Ignatyev, Pavel S.; Skrynnik, Andrey A.; Melnik, Yury A.
2018-03-01
Nanoscale surface characterization is one of the most significant parts of modern materials development and application. Modern microscopes are expensive and complicated tools, and their use for industrial tasks is limited due to laborious sample preparation, measurement procedures, and low operation speed. The laser modulation interference microscopy method (MIM), for real-time quantitative and qualitative analysis of glass, metals, ceramics, and various coatings, has a vertical spatial resolution of 0.1 nm and a lateral resolution down to 100 nm. It is proposed as an alternative to traditional scanning electron microscopy (SEM) and atomic force microscopy (AFM) methods. It is demonstrated that, for roughness metrology of super-smooth (Ra >1 nm) surfaces, the application of laser interference microscopy techniques is preferable to conventional SEM and AFM. A comparison of lateral dimension measurements of a semiconductor test structure obtained with SEM, AFM and a white-light interferometer also demonstrates the advantages of the MIM technique.
NASA Astrophysics Data System (ADS)
Verkade, J. S.; Brown, J. D.; Reggiani, P.; Weerts, A. H.
2013-09-01
The ECMWF temperature and precipitation ensemble reforecasts are evaluated for biases in the mean, spread and forecast probabilities, and how these biases propagate to streamflow ensemble forecasts. The forcing ensembles are subsequently post-processed to reduce bias and increase skill, and to investigate whether this leads to improved streamflow ensemble forecasts. Multiple post-processing techniques are used: quantile-to-quantile transform, linear regression with an assumption of bivariate normality and logistic regression. Both the raw and post-processed ensembles are run through a hydrologic model of the river Rhine to create streamflow ensembles. The results are compared using multiple verification metrics and skill scores: relative mean error, Brier skill score and its decompositions, mean continuous ranked probability skill score and its decomposition, and the ROC score. Verification of the streamflow ensembles is performed at multiple spatial scales: relatively small headwater basins, large tributaries and the Rhine outlet at Lobith. The streamflow ensembles are verified against simulated streamflow, in order to isolate the effects of biases in the forcing ensembles and any improvements therein. The results indicate that the forcing ensembles contain significant biases, and that these cascade to the streamflow ensembles. Some of the bias in the forcing ensembles is unconditional in nature; this was resolved by a simple quantile-to-quantile transform. Improvements in conditional bias and skill of the forcing ensembles vary with forecast lead time, amount, and spatial scale, but are generally moderate. The translation to streamflow forecast skill is further muted, and several explanations are considered, including limitations in the modelling of the space-time covariability of the forcing ensembles and the presence of storages.
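Of the post-processing techniques listed, the quantile-to-quantile transform is the simplest to illustrate. Below is a minimal sketch that maps each forecast value onto the observed climatology at the same empirical non-exceedance probability, which removes unconditional bias; the gamma-distributed toy climatologies are assumptions for demonstration only.

```python
import numpy as np

def quantile_mapping(forecast, fcst_climatology, obs_climatology):
    """Quantile-to-quantile transform (sketch): map each forecast value to the
    observed value that shares its non-exceedance probability in the training
    climatology."""
    fcst_sorted = np.sort(fcst_climatology)
    obs_sorted = np.sort(obs_climatology)
    # Empirical non-exceedance probability of each forecast value ...
    p = np.searchsorted(fcst_sorted, forecast, side="right") / len(fcst_sorted)
    p = np.clip(p, 1.0 / len(obs_sorted), 1.0)
    # ... evaluated on the observed climatology's quantile function.
    return np.quantile(obs_sorted, p)

rng = np.random.default_rng(3)
obs_clim = rng.gamma(2.0, 2.0, 5000)            # "observed" precipitation climatology
fcst_clim = 1.3 * obs_clim + 0.5                # biased model climatology (toy example)
raw_ensemble = 1.3 * rng.gamma(2.0, 2.0, 51) + 0.5
corrected = quantile_mapping(raw_ensemble, fcst_clim, obs_clim)
print(raw_ensemble.mean(), corrected.mean(), obs_clim.mean())
```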
Cosmic-Ray Extremely Distributed Observatory: a global cosmic ray detection framework
NASA Astrophysics Data System (ADS)
Sushchov, O.; Homola, P.; Dhital, N.; Bratek, Ł.; Poznański, P.; Wibig, T.; Zamora-Saa, J.; Almeida Cheminant, K.; Alvarez Castillo, D.; Góra, D.; Jagoda, P.; Jałocha, J.; Jarvis, J. F.; Kasztelan, M.; Kopański, K.; Krupiński, M.; Michałek, M.; Nazari, V.; Smelcerz, K.; Smolek, K.; Stasielak, J.; Sułek, M.
2017-12-01
The main objective of the Cosmic-Ray Extremely Distributed Observatory (CREDO) is the detection and analysis of extended cosmic ray phenomena, so-called super-preshowers (SPS), using existing as well as new infrastructure (cosmic-ray observatories, educational detectors, single detectors etc.). The search for ensembles of cosmic ray events initiated by SPS is yet an untouched ground, in contrast to the current state-of-the-art analysis, which is focused on the detection of single cosmic ray events. Theoretical explanation of SPS could be given either within classical (e.g., photon-photon interaction) or exotic (e.g., Super Heavy Dark Matter decay or annihilation) scenarios, thus detection of SPS would provide a better understanding of particle physics, high energy astrophysics and cosmology. The ensembles of cosmic rays can be classified based on the spatial and temporal extent of particles constituting the ensemble. Some classes of SPS are predicted to have huge spatial distribution, a unique signature detectable only with a facility of the global size. Since development and commissioning of a completely new facility with such requirements is economically unwarranted and time-consuming, the global analysis goals are achievable when all types of existing detectors are merged into a worldwide network. The idea to use the instruments in operation is based on a novel trigger algorithm: in parallel to looking for neighbour surface detectors receiving the signal simultaneously, one should also look for spatially isolated stations clustered in a small time window. On the other hand, CREDO strategy is also aimed at an active engagement of a large number of participants, who will contribute to the project by using common electronic devices (e.g., smartphones), capable of detecting cosmic rays. It will help not only in expanding the geographical spread of CREDO, but also in managing a large manpower necessary for a more efficient crowd-sourced pattern recognition scheme to identify and classify SPS. A worldwide network of cosmic-ray detectors could not only become a unique tool to study fundamental physics, it will also provide a number of other opportunities, including space-weather or geophysics studies. Among the latter one has to list the potential to predict earthquakes by monitoring the rate of low energy cosmic-ray events. The diversity of goals motivates us to advertise this concept across the astroparticle physics community.
Infrared super-resolution imaging based on compressed sensing
NASA Astrophysics Data System (ADS)
Sui, Xiubao; Chen, Qian; Gu, Guohua; Shen, Xuewei
2014-03-01
The theoretical basis of the traditional infrared super-resolution imaging method is the Nyquist sampling theorem. The reconstruction premise is that the relative positions of the infrared objects in the low-resolution image sequences should remain fixed, and the image restoration amounts to the inverse operation of an ill-posed problem without fixed rules. The super-resolution reconstruction ability, the algorithm's application area and the stability of the reconstruction algorithm are therefore limited. To this end, we propose a super-resolution reconstruction method based on compressed sensing in this paper. In the method, we selected a Toeplitz matrix as the measurement matrix and realized it by a phase mask method. We investigated the complementary matching pursuit algorithm and selected it as the recovery algorithm. In order to adapt to moving targets and decrease imaging time, we make use of an area infrared focal plane array to acquire multiple measurements at one time. Theoretically, the method breaks through the Nyquist sampling limit and can greatly improve the spatial resolution of the infrared image. The resulting image contrast and experimental data indicate that our method is effective in improving the resolution of infrared images and is superior to some traditional super-resolution imaging methods. The compressed sensing super-resolution method is expected to have wide application prospects.
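A small compressed-sensing demonstration in the spirit of the abstract is sketched below: a random Toeplitz measurement matrix and a sparse recovery step. Orthogonal matching pursuit is used here as an accessible stand-in for the complementary matching pursuit algorithm the authors selected, and the 1D sparse signal is a toy example rather than infrared image data.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(7)

n, m, k = 128, 48, 5                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Random Toeplitz measurement matrix, a stand-in for the phase-mask-realised operator.
A = toeplitz(rng.standard_normal(m), rng.standard_normal(n))
A /= np.linalg.norm(A, axis=0)
y = A @ x                                  # compressed measurements

def omp(A, y, k):
    """Orthogonal matching pursuit (used here in place of complementary MP)."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```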
Cancer detection based on Raman spectra super-paramagnetic clustering
NASA Astrophysics Data System (ADS)
González-Solís, José Luis; Guizar-Ruiz, Juan Ignacio; Martínez-Espinosa, Juan Carlos; Martínez-Zerega, Brenda Esmeralda; Juárez-López, Héctor Alfonso; Vargas-Rodríguez, Héctor; Gallegos-Infante, Luis Armando; González-Silva, Ricardo Armando; Espinoza-Padilla, Pedro Basilio; Palomares-Anda, Pascual
2016-08-01
The clustering of Raman spectra of serum samples is analyzed using the super-paramagnetic clustering technique based on the Potts spin model. We investigated the clustering of biochemical networks by using Raman data that define edge lengths in the network, and where the interactions are functions of the Raman spectra's individual band intensities. For this study, we used two groups of 58 and 102 control Raman spectra and the intensities of 160, 150 and 42 Raman spectra of serum samples from breast and cervical cancer and leukemia patients, respectively. The spectra were collected from patients at different hospitals in Mexico. By using the super-paramagnetic clustering technique, we identified the most natural and compact clusters, allowing us to discriminate between control and cancer patients. Of special interest was the leukemia case, where the nearly hierarchical observed structure allowed the identification of the patients' leukemia type. The goal of this study is to apply a model from statistical physics, such as the super-paramagnetic model, to find the natural clusters that allow us to design a cancer detection method. To the best of our knowledge, this is the first report of preliminary results evaluating the usefulness of super-paramagnetic clustering in the discipline of spectroscopy, where it is used for the classification of spectra.
Bahuaud, D; Mørkøre, T; Langsrud, Ø; Sinnes, K; Veiseth, E; Ofstad, R; Thomassen, M S
2008-11-15
The aim of this study was to evaluate the impact of super-chilling on the quality of Atlantic salmon (Salmo salar) pre-rigor fillets. The fillets were kept for 45min in a super-chilling tunnel at -25°C with an air speed in the tunnel at 2.5m/s, to reach a fillet core temperature of -1.5°C, prior to ice storage in a cold room for 4 weeks. Super-chilling seemed to form intra- and extracellular ice crystals in the upper layer of the fillets and prevent myofibre contraction. Lysosome breakages followed by release of cathepsin B and L during storage and myofibre-myofibre detachments were accelerated in the super-chilled fillets. Super-chilling resulted in higher liquid leakage and increased myofibre breakages in the fillets, while texture values of fillets measured instrumentally were not affected by super-chilling one week after treatment. Optimisation of the super-chilling technique is needed to avoid the formation of ice crystals, which may cause irreversible destruction of the myofibres, in order to obtain high quality products. Copyright © 2008 Elsevier Ltd. All rights reserved.
Nanorobotic end-effectors: Design, fabrication, and in situ characterization
NASA Astrophysics Data System (ADS)
Fan, Zheng
Nano-robotic end-effectors have promising applications for nano-fabrication, nano-manufacturing, nano-optics, nano-medicine, and nano-sensing; however, the low performance of conventional end-effectors has prevented their widespread utilization in various fields. There are two major difficulties in developing the end-effectors: their nano-fabrication and their advanced characterization at the nanoscale. Here we introduce six types of end-effectors: the nanotube fountain pen (NFP), the super-fine nanoprobe, the metal-filled carbon nanotube (mCNT)-based sphere-on-pillar (SOP) nanoantennas, the tunneling nanosensor, and the nanowire-based memristor. The investigations on the NFP are focused on nano-fluidics and nano-fabrications. The NFP can direct-write metallic "inks" and fabricate complex metal nanostructures from 0D to 3D with a position servo control, which is critically important to future large-scale, high-throughput nanodevice production. With the help of the NFP, we could fabricate end-effectors such as the super-fine nanoprobe and the mCNT-based SOP nanoantennas. Those end-effectors are able to detect local flaws or characterize the electrical/mechanical properties of the nanostructure. Moreover, using the electron-energy-loss-spectroscopy (EELS) technique during the operation of the SOP optical antenna opens a new basis for the application of nano-robotic end-effectors. The technique allows advanced characterization of the physical changes, such as carrier diffusion, that are directly responsible for the device's properties. As the device was coupled with scanning-transmission-electron-microscopy (STEM) characterization techniques, the development of the tunneling nanosensor advances this field of science into the quantum world. Furthermore, the combined STEM-EELS technique plays an important role in our understanding of the memristive switching performance in the nanowire-based memristor. The development of these nano-robotic end-effectors expands the ability to study in situ nanotechnology, providing efficient ways of performing in situ nanostructure fabrication and advanced characterization of nanomaterials.
xEMD procedures as a data - Assisted filtering method
NASA Astrophysics Data System (ADS)
Machrowska, Anna; Jonak, Józef
2018-01-01
The article presents the possibility of using Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD), Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Improved Complete Ensemble Empirical Mode Decomposition (ICEEMD) algorithms for mechanical system condition monitoring applications. Results are presented for the xEMD procedures applied to vibration signals of a system in different states of wear.
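For readers new to the xEMD family, the sketch below shows a bare-bones sifting step and an ensemble (EEMD-style) loop that averages IMFs over noise-perturbed copies of the signal. Stopping criteria and boundary handling are deliberately simplified relative to the algorithms named above, and all names and parameters are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imf(x, t, n_sift=10):
    """One IMF by sifting (minimal sketch): repeatedly subtract the mean of the
    cubic-spline envelopes through the local maxima and minima."""
    h = x.copy()
    for _ in range(n_sift):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 3 or len(minima) < 3:
            break
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - 0.5 * (upper + lower)
    return h

def eemd(x, t, n_imfs=3, n_trials=50, noise_std=0.2, seed=0):
    """Ensemble EMD (sketch): average IMFs obtained from noise-perturbed copies."""
    rng = np.random.default_rng(seed)
    imfs = np.zeros((n_imfs, len(x)))
    for _ in range(n_trials):
        residue = x + noise_std * x.std() * rng.standard_normal(len(x))
        for i in range(n_imfs):
            imf = sift_imf(residue, t)
            imfs[i] += imf / n_trials
            residue = residue - imf
    return imfs

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
print(eemd(signal, t).shape)
```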
Image reconstructions from super-sampled data sets with resolution modeling in PET imaging.
Li, Yusheng; Matej, Samuel; Metzler, Scott D
2014-12-01
Spatial resolution in positron emission tomography (PET) is still a limiting factor in many imaging applications. To improve the spatial resolution for an existing scanner with fixed crystal sizes, mechanical movements such as scanner wobbling and object shifting have been considered for PET systems. Multiple acquisitions from different positions can provide complementary information and increased spatial sampling. The objective of this paper is to explore an efficient and useful reconstruction framework to reconstruct super-resolution images from super-sampled low-resolution data sets. The authors introduce a super-sampling data acquisition model based on the physical processes with tomographic, downsampling, and shifting matrices as its building blocks. Based on the model, the authors extend the MLEM and Landweber algorithms to reconstruct images from super-sampled data sets. The authors also derive a backprojection-filtration-like (BPF-like) method for the super-sampling reconstruction. Furthermore, they explore variant methods for super-sampling reconstructions: the separate super-sampling resolution-modeling reconstruction and the reconstruction without downsampling to further improve image quality at the cost of more computation. The authors use simulated reconstruction of a resolution phantom to evaluate the three types of algorithms with different super-samplings at different count levels. Contrast recovery coefficient (CRC) versus background variability, as an image-quality metric, is calculated at each iteration for all reconstructions. The authors observe that all three algorithms can significantly and consistently achieve increased CRCs at fixed background variability and reduce background artifacts with super-sampled data sets at the same count levels. For the same super-sampled data sets, the MLEM method achieves better image quality than the Landweber method, which in turn achieves better image quality than the BPF-like method. The authors also demonstrate that the reconstructions from super-sampled data sets using a fine system matrix yield improved image quality compared to the reconstructions using a coarse system matrix. Super-sampling reconstructions at different count levels showed that greater spatial-resolution improvement can be obtained with higher counts at larger iteration numbers. The authors developed a super-sampling reconstruction framework that can reconstruct super-resolution images using the super-sampling data sets simultaneously with known acquisition motion. The super-sampling PET acquisition using the proposed algorithms provides an effective and economic way to improve image quality for PET imaging, which has an important implication in preclinical and clinical region-of-interest PET imaging applications.
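The building blocks named in the acquisition model (shifting and downsampling matrices) and the MLEM update can be illustrated with a 1D toy that omits the tomographic matrix. The sketch below is a heavily simplified, assumption-laden illustration, not the authors' reconstruction code; sizes, shifts and the phantom are invented.

```python
import numpy as np

def shift_matrix(n, s):
    """Circulant shift of a length-n high-resolution signal by s fine pixels."""
    return np.roll(np.eye(n), s, axis=1)

def downsample_matrix(n, factor):
    """Sum groups of `factor` fine pixels into one coarse detector bin."""
    D = np.zeros((n // factor, n))
    for i in range(n // factor):
        D[i, i * factor:(i + 1) * factor] = 1.0
    return D

n, factor = 64, 4
shifts = [0, 1, 2, 3]                      # super-sampling acquisition positions
A = np.vstack([downsample_matrix(n, factor) @ shift_matrix(n, s) for s in shifts])

# Ground-truth activity and (noise-free) super-sampled data.
x_true = np.zeros(n)
x_true[20:24] = 5.0
x_true[40] = 8.0
y = A @ x_true

# MLEM iterations: x <- x / (A^T 1) * A^T (y / (A x)).
x = np.ones(n)
sens = A.T @ np.ones(A.shape[0])
for _ in range(200):
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / np.maximum(sens, 1e-12)

print("estimated peak index on the fine grid:", int(np.argmax(x)))
```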
Using HPC within an operational forecasting configuration
NASA Astrophysics Data System (ADS)
Jagers, H. R. A.; Genseberger, M.; van den Broek, M. A. F. H.
2012-04-01
Various natural disasters are caused by high-intensity events, for example: extreme rainfall can in a short time cause major damage in river catchments, storms can cause havoc in coastal areas. To assist emergency response teams in operational decisions, it's important to have reliable information and predictions as soon as possible. This starts before the event by providing early warnings about imminent risks and estimated probabilities of possible scenarios. In the context of various applications worldwide, Deltares has developed an open and highly configurable forecasting and early warning system: Delft-FEWS. Finding the right balance between simulation time (and hence prediction lead time) and simulation accuracy and detail is challenging. Model resolution may be crucial to capture certain critical physical processes. Uncertainty in forcing conditions may require running large ensembles of models; data assimilation techniques may require additional ensembles and repeated simulations. The computational demand is steadily increasing and data streams become bigger. Using HPC resources is a logical step; in different settings Delft-FEWS has been configured to take advantage of distributed computational resources available to improve and accelerate the forecasting process (e.g. Montanari et al, 2006). We will illustrate the system by means of a couple of practical applications including the real-time dynamic forecasting of wind driven waves, flow of water, and wave overtopping at dikes of Lake IJssel and neighboring lakes in the center of The Netherlands. Montanari et al., 2006. Development of an ensemble flood forecasting system for the Po river basin, First MAP D-PHASE Scientific Meeting, 6-8 November 2006, Vienna, Austria.
NASA Technical Reports Server (NTRS)
Orlin, W James; Lindner, Norman J; Bitterly, Jack G
1947-01-01
The theory of the hydraulic analogy (that is, the analogy between water flow with a free surface and two-dimensional compressible gas flow) and the limitations and conditions of the analogy are discussed. A test run was made using the hydraulic analogy as applied to the flow about circular cylinders of various diameters at subsonic velocities extending into the supercritical range. The apparatus and techniques used in this application are described and criticized. Reasonably satisfactory agreement of pressure distributions and flow fields existed between water and airflow about corresponding bodies. This agreement indicated the possibility of extending experimental compressibility research by new methods.
NASA Astrophysics Data System (ADS)
Cunningham, Paul D.; Bricker, William P.; Díaz, Sebastián A.; Medintz, Igor L.; Bathe, Mark; Melinger, Joseph S.
2017-08-01
Sequence-selective bis-intercalating dyes exhibit large increases in fluorescence in the presence of specific DNA sequences. This property makes this class of fluorophore of particular importance to biosensing and super-resolution imaging. Here we report ultrafast transient anisotropy measurements of resonance energy transfer (RET) between thiazole orange (TO) molecules in a complex formed between the homodimer TOTO and double-stranded (ds) DNA. Biexponential homo-RET dynamics suggest two subpopulations within the ensemble: 80% intercalated and 20% non-intercalated. Based on the application of the transition density cube method to describe the electronic coupling and Monte Carlo simulations of the TOTO/dsDNA geometry, the dihedral angle between intercalated TO molecules is estimated to be 81° ± 5°, corresponding to a coupling strength of 45 ± 22 cm-1. Dye intercalation with this geometry is found to occur independently of the underlying DNA sequence, despite the known preference of TOTO for the nucleobase sequence CTAG. The non-intercalated subpopulation is inferred to have a mean inter-dye separation distance of 19 Å, corresponding to coupling strengths between 0 and 25 cm-1. This information is important to enable the rational design of energy transfer systems that utilize TOTO as a relay dye. The approach used here is generally applicable to determining the electronic coupling strength and intercalation configuration of other dimeric bis-intercalators.
Smart sensors II; Proceedings of the Seminar, San Diego, CA, July 31, August 1, 1980
NASA Astrophysics Data System (ADS)
Barbe, D. F.
1980-01-01
Topics discussed include technology for smart sensors, smart sensors for tracking and surveillance, and techniques and algorithms for smart sensors. Papers are presented on the application of very large scale integrated circuits to smart sensors, imaging charge-coupled devices for deep-space surveillance, ultra-precise star tracking using charge coupled devices, and automatic target identification of blurred images with super-resolution features. Attention is also given to smart sensors for terminal homing, algorithms for estimating image position, and the computational efficiency of multiple image registration algorithms.
Glyph-based analysis of multimodal directional distributions in vector field ensembles
NASA Astrophysics Data System (ADS)
Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger
2015-04-01
Ensemble simulations are increasingly often performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing direction, spread and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique. We demonstrate our method in the context of ensemble weather simulations.
NASA Astrophysics Data System (ADS)
Clark, E.; Wood, A.; Nijssen, B.; Newman, A. J.; Mendoza, P. A.
2016-12-01
The System for Hydrometeorological Applications, Research and Prediction (SHARP), developed at the National Center for Atmospheric Research (NCAR), University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation, is a fully automated ensemble prediction system for short-term to seasonal applications. It incorporates uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 plausible temperature and precipitation time series through the Sacramento/Snow-17 model. The forcing ensemble explicitly accounts for measurement and interpolation uncertainties in the development of gridded meteorological forcing time series. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. To select the IHCs that are most consistent with the observations, we employ a particle filter (PF) that weights IHC ensemble members based on observations of streamflow and SWE. These particles are then used to initialize ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS), generating a streamflow forecast ensemble. We test this method in two basins in the Pacific Northwest that are important for water resources management: 1) the Green River upstream of Howard Hanson Dam, and 2) the South Fork Flathead River upstream of Hungry Horse Dam. The first of these is characterized by mixed snow and rain, while the second is snow-dominated. The PF-based forecasts are compared to forecasts based on 1) a single IHC (corresponding to median streamflow) paired with the full GEFS ensemble, and 2) the full IHC ensemble, without filtering, paired with the full GEFS ensemble. In addition to assessing improvements in the spread of IHCs, we perform a hindcast experiment to evaluate the utility of PF-based data assimilation on streamflow forecasts at 1- to 7-day lead times.
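A generic sketch of the particle-filter weighting and resampling step described above is given below. It is not the SHARP implementation; the Gaussian streamflow likelihood, observation values, and ensemble size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ensemble of 100 initial-condition "particles", each summarised here
# by the streamflow it simulates at the assimilation time (m^3/s).
n_particles = 100
sim_flow = rng.lognormal(mean=3.0, sigma=0.4, size=n_particles)
obs_flow, obs_sigma = 22.0, 2.0          # observation and assumed error std (illustrative)

# Weight each particle by a Gaussian observation likelihood, then normalise.
log_w = -0.5 * ((sim_flow - obs_flow) / obs_sigma) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Systematic resampling: particles consistent with the observation are duplicated,
# inconsistent ones are dropped, keeping the ensemble size fixed.
positions = (rng.random() + np.arange(n_particles)) / n_particles
idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n_particles - 1)
resampled = sim_flow[idx]

print("effective sample size:", 1.0 / np.sum(w ** 2))
print("prior mean %.1f -> posterior mean %.1f" % (sim_flow.mean(), resampled.mean()))
```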
Automatic Estimation of Osteoporotic Fracture Cases by Using Ensemble Learning Approaches.
Kilic, Niyazi; Hosgormez, Erkan
2016-03-01
Ensemble learning methods are among the most powerful tools for pattern classification problems. In this paper, the effects of ensemble learning methods and some physical bone densitometry parameters on osteoporotic fracture detection were investigated. Six feature set models were constructed from different physical parameters and fed into the ensemble classifiers as input features. As ensemble learning techniques, bagging, gradient boosting and the random subspace method (RSM) were used. Instance-based learning (IBk) and random forest (RF) classifiers were applied to the six feature set models. The patients were classified into three groups, osteoporosis, osteopenia and control (healthy), using the ensemble classifiers. Total classification accuracy and F-measure were also used to evaluate the diagnostic performance of the proposed ensemble classification system. The classification accuracy reached 98.85% with the combination of model 6 (five BMD + five T-score values) and the RSM-RF classifier. The findings of this paper suggest that patients could be warned before a bone fracture occurs, by examining only physical parameters that can easily be measured without invasive procedures.
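For readers unfamiliar with these ensemble techniques, the following scikit-learn sketch shows a random-subspace ensemble wrapped around random forests, evaluated with accuracy and macro F-measure. The synthetic three-class data stand in for the BMD and T-score feature models; nothing here reproduces the paper's data or exact settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Synthetic stand-in for the feature-set models: 3 classes (osteoporosis / osteopenia / control).
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random subspace method: each ensemble member sees a random subset of the features.
rsm_rf = BaggingClassifier(RandomForestClassifier(n_estimators=50, random_state=0),
                           n_estimators=10, max_features=0.5,
                           bootstrap=False, bootstrap_features=True, random_state=0)
rsm_rf.fit(X_tr, y_tr)
pred = rsm_rf.predict(X_te)

print("accuracy:", accuracy_score(y_te, pred))
print("macro F-measure:", f1_score(y_te, pred, average="macro"))
```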
A hybrid variational ensemble data assimilation for the HIgh Resolution Limited Area Model (HIRLAM)
NASA Astrophysics Data System (ADS)
Gustafsson, N.; Bojarova, J.; Vignes, O.
2014-02-01
A hybrid variational ensemble data assimilation has been developed on top of the HIRLAM variational data assimilation. It provides the possibility of applying a flow-dependent background error covariance model during the data assimilation at the same time as full rank characteristics of the variational data assimilation are preserved. The hybrid formulation is based on an augmentation of the assimilation control variable with localised weights to be assigned to a set of ensemble member perturbations (deviations from the ensemble mean). The flow-dependency of the hybrid assimilation is demonstrated in single simulated observation impact studies and the improved performance of the hybrid assimilation in comparison with pure 3-dimensional variational as well as pure ensemble assimilation is also proven in real observation assimilation experiments. The performance of the hybrid assimilation is comparable to the performance of the 4-dimensional variational data assimilation. The sensitivity to various parameters of the hybrid assimilation scheme and the sensitivity to the applied ensemble generation techniques are also examined. In particular, the inclusion of ensemble perturbations with a lagged validity time has been examined with encouraging results.
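A toy sketch of the hybrid idea, blending a static background-error covariance with a localized ensemble covariance before a single-observation analysis update, is given below. It is a schematic stand-in, not the HIRLAM formulation; all dimensions, localization lengths, and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_ens = 40, 20

# Static (climatological) background-error covariance with Gaussian correlations.
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B_static = np.exp(-0.5 * (dist / 4.0) ** 2)

# Flow-dependent covariance from ensemble perturbations (deviations from the mean),
# tapered by a localization function to suppress spurious long-range correlations.
ens = rng.multivariate_normal(np.zeros(n), B_static, size=n_ens).T
pert = ens - ens.mean(axis=1, keepdims=True)
P_ens = pert @ pert.T / (n_ens - 1)
loc = np.exp(-0.5 * (dist / 8.0) ** 2)

# Hybrid covariance: beta_s, beta_e play the role of the weights carried by the
# augmented control variable in the hybrid formulation (values are illustrative).
beta_s, beta_e = 0.5, 0.5
B_hybrid = beta_s * B_static + beta_e * (P_ens * loc)

# Single simulated observation at grid point 20, assimilated with the hybrid B.
xb = np.zeros(n)
H = np.zeros((1, n)); H[0, 20] = 1.0
R = np.array([[0.1]])
y = np.array([1.0])
K = B_hybrid @ H.T @ np.linalg.inv(H @ B_hybrid @ H.T + R)
xa = xb + (K @ (y - H @ xb)).ravel()
print("analysis increment around the observed point:", np.round(xa[16:25], 2))
```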
Collell, Guillem; Prelec, Drazen; Patil, Kaustubh R
2018-01-31
Class imbalance presents a major hurdle in the application of classification methods. A commonly taken approach is to learn ensembles of classifiers using rebalanced data. Examples include bootstrap averaging (bagging) combined with either undersampling or oversampling of the minority class examples. However, rebalancing methods entail asymmetric changes to the examples of different classes, which in turn can introduce their own biases. Furthermore, these methods often require specifying the performance measure of interest a priori, i.e., before learning. An alternative is to employ the threshold moving technique, which applies a threshold to the continuous output of a model, offering the possibility to adapt to a performance measure a posteriori, i.e., as a plug-in method. Surprisingly, little attention has been paid to this combination of a bagging ensemble and threshold-moving. In this paper, we study this combination and demonstrate its competitiveness. Contrary to the other resampling methods, we preserve the natural class distribution of the data, resulting in well-calibrated posterior probabilities. Additionally, we extend the proposed method to handle multiclass data. We validated our method on binary and multiclass benchmark data sets using both decision trees and neural networks as base classifiers. We perform analyses that provide insights into the proposed method.
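The following is a minimal sketch, under illustrative assumptions about the data and settings, of the bagging-plus-threshold-moving idea: the ensemble is trained on the natural class distribution and the decision threshold is then chosen a posteriori for the measure of interest (here F1).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Imbalanced two-class data (95% / 5%), kept at its natural class distribution.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)
bag.fit(X_tr, y_tr)                       # no resampling: class priors are preserved
proba = bag.predict_proba(X_val)[:, 1]    # averaged tree votes ~ posterior probability

# Threshold moving: scan thresholds on held-out data and keep the best one (plug-in).
thresholds = np.linspace(0.05, 0.95, 19)
scores = [f1_score(y_val, proba >= t) for t in thresholds]
best_t = thresholds[int(np.argmax(scores))]
print("default 0.5 F1: %.3f, tuned threshold %.2f F1: %.3f"
      % (f1_score(y_val, proba >= 0.5), best_t, max(scores)))
```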
Super-leadership and work enjoyment: direct and moderated influences.
Müller, Günter F; Georgianna, Sibylle; Schermelleh-Engel, Karin; Roth, Anne C; Schreiber, Walter A; Sauerland, Martin; Muessigmann, Michael J; Jilg, Franziska
2013-12-01
Super-leadership is part of an approach called 'empowering leadership.' Within this approach, super-leadership is assumed to enable subordinates to lead themselves. The current study examined correlates of super-leadership. A questionnaire measuring two dimensions of super-leadership was used to analyze relationships between super-leadership and subordinates' work enjoyment, i.e., job satisfaction, subjective well-being, and emotional organizational commitment. In addition, moderating effects of the organizational context, i.e., organizational decentralization, on the relationships between super-leadership and work enjoyment were explored. 198 German employees from different occupations participated in the study. Latent moderator structural equation analysis revealed that the two factors of super-leadership, "coaching and communicative support" and "facilitation of personal autonomy and responsibility," had direct positive effects on subordinates' work enjoyment. Organizational decentralization moderated the effect of "coaching and communicative support" on work enjoyment but not the relations involving "facilitation of personal autonomy and responsibility." Conclusions for further research and practical applications were discussed.
Intracellular applications of fluorescence correlation spectroscopy: prospects for neuroscience.
Kim, Sally A; Schwille, Petra
2003-10-01
Based on time-averaging fluctuation analysis of small fluorescent molecular ensembles in equilibrium, fluorescence correlation spectroscopy has recently been applied to investigate processes in the intracellular milieu. The exquisite sensitivity of fluorescence correlation spectroscopy provides access to a multitude of measurement parameters (rates of diffusion, local concentration, states of aggregation and molecular interactions) in real time with fast temporal and high spatial resolution. The introduction of dual-color cross-correlation, imaging, two-photon excitation, and coincidence analysis coupled with fluorescence correlation spectroscopy has expanded the utility of the technique to encompass a wide range of promising applications in living cells that may provide unprecedented insight into understanding the molecular mechanisms of intracellular neurobiological processes.
Aberrations and adaptive optics in super-resolution microscopy.
Booth, Martin; Andrade, Débora; Burke, Daniel; Patton, Brian; Zurauskas, Mantas
2015-08-01
As one of the most powerful tools in the biological investigation of cellular structures and dynamic processes, fluorescence microscopy has undergone extraordinary developments in the past decades. The advent of super-resolution techniques has enabled fluorescence microscopy - or rather nanoscopy - to achieve nanoscale resolution in living specimens and unravelled the interior of cells with unprecedented detail. The methods employed in this expanding field of microscopy, however, are especially prone to the detrimental effects of optical aberrations. In this review, we discuss how super-resolution microscopy techniques based upon single-molecule switching, stimulated emission depletion and structured illumination each suffer from aberrations in different ways that are dependent upon intrinsic technical aspects. We discuss the use of adaptive optics as an effective means to overcome this problem. © The Author 2015. Published by Oxford University Press on behalf of The Japanese Society of Microscopy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, G; Zakian, K; Deasy, J
Purpose: To develop a novel super-resolution time-resolved 4DMRI technique to evaluate multi-breath, irregular and complex organ motion without respiratory surrogate for radiotherapy planning. Methods: The super-resolution time-resolved (TR) 4DMRI approach combines a series of low-resolution 3D cine MRI images acquired during free breathing (FB) with a high-resolution breath-hold (BH) 3DMRI via deformable image registration (DIR). Five volunteers participated in the study under an IRB-approved protocol. The 3D cine images with voxel size of 5×5×5 mm^3 at two volumes per second (2 Hz) were acquired coronally using a T1 fast field echo sequence, half-scan (0.8) acceleration, and SENSE (3) parallel imaging. Phase-encoding was set in the lateral direction to minimize motion artifacts. The BH image with voxel size of 2×2×2 mm^3 was acquired using the same sequence within 10 seconds. A demons-based DIR program was employed to produce super-resolution 2 Hz 4DMRI. Registration quality was visually assessed using difference images between TR 4DMRI and 3D cine and quantitatively assessed using average voxel correlation. The fidelity of the 3D cine images was assessed using a gel phantom and a 1D motion platform by comparing mobile and static images. Results: Owing to voxel intensity similarity using the same MRI scanning sequence, accurate DIR between FB and BH images is achieved. The voxel correlations between 3D cine and TR 4DMRI are greater than 0.92 in all cases and the difference images illustrate minimal residual error with little systematic patterns. The 3D cine images of the mobile gel phantom preserve object geometry with minimal scanning artifacts. Conclusion: The super-resolution time-resolved 4DMRI technique has been achieved via DIR, providing a potential solution for multi-breath motion assessment. Accurate DIR mapping has been achieved to map high-resolution BH images to low-resolution FB images, producing 2 Hz volumetric high-resolution 4DMRI. Further validation and improvement are still required prior to clinical applications. This study is in part supported by the NIH (U54CA137788/U54CA132378).
NASA Astrophysics Data System (ADS)
Descloux, A.; Grußmayer, K. S.; Bostan, E.; Lukes, T.; Bouwens, A.; Sharipov, A.; Geissbuehler, S.; Mahul-Mellier, A.-L.; Lashuel, H. A.; Leutenegger, M.; Lasser, T.
2018-03-01
Super-resolution fluorescence microscopy provides unprecedented insight into cellular and subcellular structures. However, going `beyond the diffraction barrier' comes at a price, since most far-field super-resolution imaging techniques trade temporal for spatial super-resolution. We propose the combination of a novel label-free white light quantitative phase imaging with fluorescence to provide high-speed imaging and spatial super-resolution. The non-iterative phase retrieval relies on the acquisition of single images at each z-location and thus enables straightforward 3D phase imaging using a classical microscope. We realized multi-plane imaging using a customized prism for the simultaneous acquisition of eight planes. This allowed us to not only image live cells in 3D at up to 200 Hz, but also to integrate fluorescence super-resolution optical fluctuation imaging within the same optical instrument. The 4D microscope platform unifies the sensitivity and high temporal resolution of phase imaging with the specificity and high spatial resolution of fluorescence microscopy.
Seo, Jungmok; Lee, Soonil; Han, Heetak; Jung, Hwae Bong; Hong, Juree; Song, Giyoung; Cho, Suk Man; Park, Cheolmin; Lee, Wooyoung; Lee, Taeyoon
2013-08-14
Gas-driven ultrafast adhesion switching of water droplets on palladium-coated Si nanowire arrays is demonstrated. By switching the gas ambient between the atmosphere and H2, the super-hydrophobic adhesion is repeatedly toggled between water-repellent and water-adhesive states. The capability of modulating the adhesion of a super-hydrophobic surface in a non-contact mode could be applicable to novel functional lab-on-a-chip platforms. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Stable biomimetic super-hydrophobic engineering materials.
Guo, Zhiguang; Zhou, Feng; Hao, Jingcheng; Liu, Weimin
2005-11-16
We describe a simple and inexpensive method to produce super-hydrophobic surfaces on aluminum and its alloy by oxidation and chemical modification. Water or aqueous solutions (pH = 1-14) have contact angles of 168 +/- 2 and 161 +/- 2 degrees on the treated surfaces of Al and Al alloy, respectively. The super-hydrophobic surfaces are produced by the cooperation of binary structures at micro- and nanometer scales, thus reducing the energies of the surfaces. Such super-hydrophobic properties will greatly extend the applications of aluminum and its alloy as lubricating materials.
A Stochastic Super-Exponential Growth Model for Population Dynamics
NASA Astrophysics Data System (ADS)
Avila, P.; Rekker, A.
2010-11-01
A super-exponential growth model with environmental noise has been studied analytically. A super-exponential growth rate is a property of dynamical systems exhibiting endogenous nonlinear positive feedback, i.e., of self-reinforcing systems. Environmental noise acts on the growth rate multiplicatively and is assumed to be Gaussian white noise in the Stratonovich interpretation. An analysis of the stochastic super-exponential growth model, with derivations of exact analytical formulae for the conditional probability density and the mean value of the population abundance, is presented. Interpretations and various applications of the results are discussed.
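The abstract does not state the model equations; a generic form consistent with its description (super-exponential growth with multiplicative Gaussian white noise in the Stratonovich sense) can be sketched as follows, with the deterministic solution included for orientation. The exponent α, rate r, and noise strength σ are generic symbols and not necessarily the authors' notation.

```latex
% Sketch of a super-exponential growth model with multiplicative environmental
% noise, consistent with the description above (not necessarily the authors' exact model):
\begin{align}
  \dot{N}(t) &= \bigl[r + \sigma\,\xi(t)\bigr]\,N(t)^{1+\alpha}, \qquad \alpha > 0,\\
  \langle \xi(t)\rangle &= 0, \qquad \langle \xi(t)\,\xi(t')\rangle = \delta(t-t'),
\end{align}
% with \xi(t) interpreted in the Stratonovich sense. In the noise-free limit
% (\sigma = 0) the solution blows up in finite time,
%   N(t) = N_0\,\bigl(1 - \alpha\, r\, N_0^{\alpha}\, t\bigr)^{-1/\alpha},
% which is the hallmark of super-exponential (self-reinforcing) growth.
```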
Scattering by ensembles of small particles experiment, theory and application
NASA Technical Reports Server (NTRS)
Gustafson, B. A. S.
1980-01-01
A hypothetical self-consistent picture of the evolution of prestellar interstellar dust through a comet phase leads to predictions about the composition of the circum-solar dust cloud. Scattering properties of the resulting conglomerates, which have a bird's-nest type of structure, are investigated using a microwave analogue technique. Approximate theoretical methods of general interest are developed which compare favorably with the experimental results. The principal features of the scattering of visible radiation by zodiacal light particles are reasonably reproduced. A component suggestive of α-meteoroids is also predicted.
Spatial and temporal analysis of DIII-D 3D magnetic diagnostic data
Strait, E. J.; King, J. D.; Hanson, J. M.; ...
2016-08-11
An extensive set of magnetic diagnostics in DIII-D is aimed at measuring non-axisymmetric "3D" features of tokamak plasmas, with typical amplitudes ~10^-3 to 10^-5 of the total magnetic field. We describe hardware and software techniques used at DIII-D to condition the individual signals and analysis to estimate the spatial structure from an ensemble of discrete measurements. Lastly, applications of the analysis include detection of non-rotating MHD instabilities, plasma control, and validation of MHD stability and 3D equilibrium models.
Multivariate postprocessing techniques for probabilistic hydrological forecasting
NASA Astrophysics Data System (ADS)
Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian
2016-04-01
Hydrologic ensemble forecasts driven by atmospheric ensemble prediction systems need statistical postprocessing in order to account for systematic errors in terms of both mean and spread. Runoff is an inherently multivariate process with typical events lasting from hours in case of floods to weeks or even months in case of droughts. This calls for multivariate postprocessing techniques that yield well calibrated forecasts in univariate terms and ensure a realistic temporal dependence structure at the same time. To this end, the univariate ensemble model output statistics (EMOS; Gneiting et al., 2005) postprocessing method is combined with two different copula approaches that ensure multivariate calibration throughout the entire forecast horizon. These approaches comprise ensemble copula coupling (ECC; Schefzik et al., 2013), which preserves the dependence structure of the raw ensemble, and a Gaussian copula approach (GCA; Pinson and Girard, 2012), which estimates the temporal correlations from training observations. Both methods are tested in a case study covering three subcatchments of the river Rhine that represent different sizes and hydrological regimes: the Upper Rhine up to the gauge Maxau, the river Moselle up to the gauge Trier, and the river Lahn up to the gauge Kalkofen. The results indicate that both ECC and GCA are suitable for modelling the temporal dependences of probabilistic hydrologic forecasts (Hemri et al., 2015). References Gneiting, T., A. E. Raftery, A. H. Westveld, and T. Goldman (2005), Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation, Monthly Weather Review, 133(5), 1098-1118, DOI: 10.1175/MWR2904.1. Hemri, S., D. Lisniak, and B. Klein, Multivariate postprocessing techniques for probabilistic hydrological forecasting, Water Resources Research, 51(9), 7436-7451, DOI: 10.1002/2014WR016473. Pinson, P., and R. Girard (2012), Evaluating the quality of scenarios of short-term wind power generation, Applied Energy, 96, 12-20, DOI: 10.1016/j.apenergy.2011.11.004. Schefzik, R., T. L. Thorarinsdottir, and T. Gneiting (2013), Uncertainty quantification in complex simulation models using ensemble copula coupling, Statistical Science, 28, 616-640, DOI: 10.1214/13-STS443.
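To make the ensemble copula coupling step concrete, the sketch below reorders calibrated samples at each lead time according to the rank order of the raw ensemble, so that the calibrated trajectories inherit the raw ensemble's temporal dependence. The toy trajectories and EMOS-like samples are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: raw ensemble of m members over T lead times, plus m calibrated samples
# per lead time drawn from a univariate postprocessed (EMOS-like) distribution.
m, T = 10, 24
raw = np.cumsum(rng.normal(size=(m, T)), axis=1)                 # raw ensemble trajectories
calibrated = np.sort(rng.normal(loc=raw.mean(0), scale=1.5, size=(m, T)), axis=0)

# Ensemble copula coupling: at each lead time, arrange the calibrated samples in the
# same rank order as the raw ensemble members.
ecc = np.empty_like(calibrated)
for t in range(T):
    ranks = np.argsort(np.argsort(raw[:, t]))   # rank of each raw member at lead time t
    ecc[:, t] = calibrated[ranks, t]            # calibrated values in the raw rank order

# Each row of `ecc` is now a calibrated trajectory with a realistic temporal structure.
print(ecc.shape)
```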
Programming Practices of Atlantic Coast Conference Wind Ensembles
ERIC Educational Resources Information Center
Wiltshire, Eric S.; Paul, Timothy A.; Paul, Phyllis M.; Rudnicki, Erika
2010-01-01
This study examined the programming trends of the elite wind bands/ensembles of the Atlantic Coast Conference universities. Using survey techniques previously employed by Powell (2009) and Paul (2010; in press), we contacted the directors of the Atlantic Coast Conference band programs and requested concert programs from their top groups for the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gholipour, Ali, E-mail: ali.gholipour@childrens.harvard.edu; Afacan, Onur; Scherrer, Benoit
Purpose: To compare and evaluate the use of super-resolution reconstruction (SRR), in frequency, image, and wavelet domains, to reduce through-plane partial voluming effects in magnetic resonance imaging. Methods: The reconstruction of an isotropic high-resolution image from multiple thick-slice scans has been investigated through techniques in frequency, image, and wavelet domains. Experiments were carried out with thick-slice T2-weighted fast spin echo sequence on the Academic College of Radiology MRI phantom, where the reconstructed images were compared to a reference high-resolution scan using peak signal-to-noise ratio (PSNR), structural similarity image metric (SSIM), mutual information (MI), and the mean absolute error (MAE) of image intensity profiles. The application of super-resolution reconstruction was then examined in retrospective processing of clinical neuroimages of ten pediatric patients with tuberous sclerosis complex (TSC) to reduce through-plane partial voluming for improved 3D delineation and visualization of thin radial bands of white matter abnormalities. Results: Quantitative evaluation results show improvements in all evaluation metrics through super-resolution reconstruction in the frequency, image, and wavelet domains, with the highest values obtained from SRR in the image domain. The metric values for image-domain SRR versus the original axial, coronal, and sagittal images were PSNR = 32.26 vs 32.22, 32.16, 30.65; SSIM = 0.931 vs 0.922, 0.924, 0.918; MI = 0.871 vs 0.842, 0.844, 0.831; and MAE = 5.38 vs 7.34, 7.06, 6.19. All similarity metrics showed high correlations with expert ranking of image resolution with MI showing the highest correlation at 0.943. Qualitative assessment of the neuroimages of ten TSC patients through in-plane and out-of-plane visualization of structures showed the extent of partial voluming effect in a real clinical scenario and its reduction using SRR. Blinded expert evaluation of image resolution in resampled out-of-plane views consistently showed the superiority of SRR compared to original axial and coronal image acquisitions. Conclusions: Thick-slice 2D T2-weighted MRI scans are part of many routine clinical protocols due to their high signal-to-noise ratio, but are often severely affected by through-plane partial voluming effects. This study shows that while radiologic assessment is performed in 2D on thick-slice scans, super-resolution MRI reconstruction techniques can be used to fuse those scans to generate a high-resolution image with reduced partial voluming for improved postacquisition processing. Qualitative and quantitative evaluation showed the efficacy of all SRR techniques with the best results obtained from SRR in the image domain. The limitations of SRR techniques are uncertainties in modeling the slice profile, density compensation, quantization in resampling, and uncompensated motion between scans.
Gholipour, Ali; Afacan, Onur; Aganj, Iman; Scherrer, Benoit; Prabhu, Sanjay P.; Sahin, Mustafa; Warfield, Simon K.
2015-01-01
Purpose: To compare and evaluate the use of super-resolution reconstruction (SRR), in frequency, image, and wavelet domains, to reduce through-plane partial voluming effects in magnetic resonance imaging. Methods: The reconstruction of an isotropic high-resolution image from multiple thick-slice scans has been investigated through techniques in frequency, image, and wavelet domains. Experiments were carried out with thick-slice T2-weighted fast spin echo sequence on the Academic College of Radiology MRI phantom, where the reconstructed images were compared to a reference high-resolution scan using peak signal-to-noise ratio (PSNR), structural similarity image metric (SSIM), mutual information (MI), and the mean absolute error (MAE) of image intensity profiles. The application of super-resolution reconstruction was then examined in retrospective processing of clinical neuroimages of ten pediatric patients with tuberous sclerosis complex (TSC) to reduce through-plane partial voluming for improved 3D delineation and visualization of thin radial bands of white matter abnormalities. Results: Quantitative evaluation results show improvements in all evaluation metrics through super-resolution reconstruction in the frequency, image, and wavelet domains, with the highest values obtained from SRR in the image domain. The metric values for image-domain SRR versus the original axial, coronal, and sagittal images were PSNR = 32.26 vs 32.22, 32.16, 30.65; SSIM = 0.931 vs 0.922, 0.924, 0.918; MI = 0.871 vs 0.842, 0.844, 0.831; and MAE = 5.38 vs 7.34, 7.06, 6.19. All similarity metrics showed high correlations with expert ranking of image resolution with MI showing the highest correlation at 0.943. Qualitative assessment of the neuroimages of ten TSC patients through in-plane and out-of-plane visualization of structures showed the extent of partial voluming effect in a real clinical scenario and its reduction using SRR. Blinded expert evaluation of image resolution in resampled out-of-plane views consistently showed the superiority of SRR compared to original axial and coronal image acquisitions. Conclusions: Thick-slice 2D T2-weighted MRI scans are part of many routine clinical protocols due to their high signal-to-noise ratio, but are often severely affected by through-plane partial voluming effects. This study shows that while radiologic assessment is performed in 2D on thick-slice scans, super-resolution MRI reconstruction techniques can be used to fuse those scans to generate a high-resolution image with reduced partial voluming for improved postacquisition processing. Qualitative and quantitative evaluation showed the efficacy of all SRR techniques with the best results obtained from SRR in the image domain. The limitations of SRR techniques are uncertainties in modeling the slice profile, density compensation, quantization in resampling, and uncompensated motion between scans. PMID:26632048
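The evaluation metrics mentioned above can be reproduced with standard libraries; the sketch below computes PSNR, SSIM, and a histogram-based mutual information between a reference slice and a reconstructed one. The synthetic arrays are illustrative stand-ins for the MRI data, not the study's images.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
reference = rng.random((128, 128))
reconstructed = np.clip(reference + 0.05 * rng.normal(size=reference.shape), 0, 1)

psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
ssim = structural_similarity(reference, reconstructed, data_range=1.0)

# Mutual information from a joint intensity histogram (64 bins per image).
hist_2d, _, _ = np.histogram2d(reference.ravel(), reconstructed.ravel(), bins=64)
mi = mutual_info_score(None, None, contingency=hist_2d)

print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}, MI = {mi:.3f}")
```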
Regge trajectories and Hagedorn behavior: Hadronic realizations of dynamical dark matter
NASA Astrophysics Data System (ADS)
Dienes, Keith R.; Huang, Fei; Su, Shufang; Thomas, Brooks
2017-11-01
Dynamical Dark Matter (DDM) is an alternative framework for dark-matter physics in which the dark sector comprises a vast ensemble of particle species whose Standard-Model decay widths are balanced against their cosmological abundances. In this talk, we study the properties of a hitherto-unexplored class of DDM ensembles in which the ensemble constituents are the "hadronic" resonances associated with the confining phase of a strongly-coupled dark sector. Such ensembles exhibit masses lying along Regge trajectories and Hagedorn-like densities of states that grow exponentially with mass. We investigate the applicable constraints on such dark-"hadronic" DDM ensembles and find that these constraints permit a broad range of mass and confinement scales for these ensembles. We also find that the distribution of the total present-day abundance across the ensemble is highly correlated with the values of these scales. This talk reports on research originally presented in Ref. [1].
NASA Astrophysics Data System (ADS)
Clark, Elizabeth; Wood, Andy; Nijssen, Bart; Mendoza, Pablo; Newman, Andy; Nowak, Kenneth; Arnold, Jeffrey
2017-04-01
In an automated forecast system, hydrologic data assimilation (DA) performs the valuable function of correcting raw simulated watershed model states to better represent external observations, including measurements of streamflow, snow, soil moisture, and the like. Yet the incorporation of automated DA into operational forecasting systems has been a long-standing challenge due to the complexities of the hydrologic system, which include numerous lags between state and output variations. To help demonstrate that such methods can succeed in operational automated implementations, we present results from the real-time application of an ensemble particle filter (PF) for short-range (7 day lead) ensemble flow forecasts in western US river basins. We use the System for Hydromet Applications, Research and Prediction (SHARP), developed by the National Center for Atmospheric Research (NCAR) in collaboration with the University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. SHARP is a fully automated platform for short-term to seasonal hydrologic forecasting applications, incorporating uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions through ensemble methods. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 temperature and precipitation time series through conceptual and physically-oriented models. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. The PF selects and/or weights and resamples the IHCs that are most consistent with external streamflow observations, and uses the particles to initialize a streamflow forecast ensemble driven by ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS). We apply this method in real time for several basins in the western US that are important for water resources management, and perform a hindcast experiment to evaluate the utility of PF-based data assimilation on streamflow forecast skill. This presentation describes findings, including a comparison of sequential and non-sequential particle weighting methods.
NASA Astrophysics Data System (ADS)
Batté, Lauriane; Déqué, Michel
2016-06-01
Stochastic methods are increasingly used in global coupled model climate forecasting systems to account for model uncertainties. In this paper, we describe in more detail the stochastic dynamics technique introduced by Batté and Déqué (2012) in the ARPEGE-Climate atmospheric model. We present new results with an updated version of CNRM-CM using ARPEGE-Climate v6.1, and show that the technique can be used both as a means of analyzing model error statistics and accounting for model inadequacies in a seasonal forecasting framework. The perturbations are designed as corrections of model drift errors estimated from a preliminary weakly nudged re-forecast run over an extended reference period of 34 boreal winter seasons. A detailed statistical analysis of these corrections is provided, and shows that they are mainly made of intra-month variance, thereby justifying their use as in-run perturbations of the model in seasonal forecasts. However, the interannual and systematic error correction terms cannot be neglected. Time correlation of the errors is limited, but some consistency is found between the errors of up to 3 consecutive days. These findings encourage us to test several settings of the random draws of perturbations in seasonal forecast mode. Perturbations are drawn randomly but consistently for all three prognostic variables perturbed. We explore the impact of using monthly mean perturbations throughout a given forecast month in a first ensemble re-forecast (SMM, for stochastic monthly means), and test the use of 5-day sequences of perturbations in a second ensemble re-forecast (S5D, for stochastic 5-day sequences). Both experiments are compared against a REF reference ensemble with initial perturbations only. Results in terms of forecast quality are contrasted depending on the region and variable of interest, but very few areas exhibit a clear degradation of forecasting skill with the introduction of stochastic dynamics. We highlight some positive impacts of the method, mainly in the Northern Hemisphere extra-tropics. The 500 hPa geopotential height bias is reduced, and improvements project onto the representation of North Atlantic weather regimes. A modest impact on ensemble spread is found over most regions, which suggests that this method could be complemented by other stochastic perturbation techniques in seasonal forecasting mode.
A path integral approach to the full Dicke model with dipole-dipole interaction
NASA Astrophysics Data System (ADS)
Aparicio Alcalde, M.; Stephany, J.; Svaiter, N. F.
2011-12-01
We consider the full Dicke spin-boson model composed of a single bosonic mode and an ensemble of N identical two-level atoms with different couplings for the resonant and anti-resonant interaction terms, and incorporate a dipole-dipole interaction between the atoms. Assuming that the system is in thermal equilibrium with a reservoir at temperature β^-1, we compute the free energy in the thermodynamic limit N → ∞ in the saddle-point approximation to the path integral and determine the critical temperature for the super-radiant phase transition. In the zero-temperature limit, we recover the critical coupling of the quantum phase transition presented in the literature.
Effects of ensemble and summary displays on interpretations of geospatial uncertainty data.
Padilla, Lace M; Ruginski, Ian T; Creem-Regehr, Sarah H
2017-01-01
Ensemble and summary displays are two widely used methods to represent visual-spatial uncertainty; however, there is disagreement about which is the most effective technique to communicate uncertainty to the general public. Visualization scientists create ensemble displays by plotting multiple data points on the same Cartesian coordinate plane. Despite their use in scientific practice, it is more common in public presentations to use visualizations of summary displays, which scientists create by plotting statistical parameters of the ensemble members. While prior work has demonstrated that viewers make different decisions when viewing summary and ensemble displays, it is unclear what components of the displays lead to diverging judgments. This study aims to compare the salience of visual features - or visual elements that attract bottom-up attention - as one possible source of diverging judgments made with ensemble and summary displays in the context of hurricane track forecasts. We report that salient visual features of both ensemble and summary displays influence participant judgment. Specifically, we find that salient features of summary displays of geospatial uncertainty can be misunderstood as displaying size information. Further, salient features of ensemble displays evoke judgments that are indicative of accurate interpretations of the underlying probability distribution of the ensemble data. However, when participants use ensemble displays to make point-based judgments, they may overweight individual ensemble members in their decision-making process. We propose that ensemble displays are a promising alternative to summary displays in a geospatial context but that decisions about visualization methods should be informed by the viewer's task.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nugraha, Andri Dian; Adisatrio, Philipus Ronnie
2013-09-09
The seismic refraction survey is a geophysical method useful for imaging the Earth's interior, particularly the near surface. One of the common problems in seismic refraction surveys is weak amplitudes due to attenuation at far offsets. This makes it difficult to pick the first refraction arrival and hence challenging to produce a near-surface image. Seismic interferometry is a technique for manipulating seismic traces to obtain the Green's function from a pair of receivers. One of its uses is to improve the quality of the first refraction arrival at far offsets. This research shows that physical properties such as seismic velocity and thickness can be estimated from virtual refraction processing. Virtual refraction can also enhance the far-offset signal amplitude, since a stacking procedure is involved. Our results show that super-virtual refraction processing produces a seismic image with a higher signal-to-noise ratio than the raw seismic image. In addition, the number of reliable first-arrival picks is increased.
Adaptive Wiener filter super-resolution of color filter array images.
Karch, Barry K; Hardie, Russell C
2013-08-12
Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
Highly photostable "super"-photoacids for ultrasensitive fluorescence spectroscopy.
Finkler, Björn; Spies, Christian; Vester, Michael; Walte, Frederick; Omlor, Kathrin; Riemann, Iris; Zimmer, Manuel; Stracke, Frank; Gerhards, Markus; Jung, Gregor
2014-03-01
The photoacid 8-hydroxypyren-1,3,6-trisulfonic acid (HPTS, pyranine) is a widely used model compound for the examination of excited state proton transfer (ESPT). We synthesized five "super"-photoacids with varying hydrophilicity and acidity on the basis of HPTS. By chemical modification of the three sulfonic acid substituents, the photoacidity is enhanced by up to more than five logarithmic units from pK*≈ 1.4 to ∼-3.9 for the most acidic compound. As a result, nearly quantitative ESPT in DMSO can be observed. The novel photoacids were characterized by steady-state and time-resolved fluorescence techniques showing distinctively red shifted spectra compared to HPTS while maintaining a high quantum yield near 90%. Photostability of the compounds was checked by fluorescence correlation spectroscopy (FCS) and was found to be adequately high for ultrasensitive fluorescence spectroscopy. The described photoacids present a valuable palette for a wide range of applications, especially when the properties of HPTS, i.e. highly charged, low photostability and only moderate excited state acidity, are limiting.
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-01-01
In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744
Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao; Chang, Chin-Chen
2016-12-01
Iris recognition has gained increasing popularity over the last few decades; however, the stand-off distance in a conventional iris recognition system is too short, which limits its application. In this paper, we propose a novel hardware-software hybrid method to increase the stand-off distance in an iris recognition system. When designing the system hardware, we use an optimized wavefront coding technique to extend the depth of field. To compensate for the blurring of the image caused by wavefront coding, on the software side, the proposed system uses a local patch-based super-resolution method to restore the blurred image to its clear version. The collaborative effect of the new hardware design and software post-processing showed great potential in our experiment. The experimental results showed that such improvement cannot be achieved by using a hardware- or software-only design. The proposed system can increase the capture volume of a conventional iris recognition system by three times and maintain the system's high recognition rate.
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-05-22
In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.
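To illustrate the IHS-style fusion step mentioned above, the sketch below injects high-resolution intensity detail into an upsampled colour image using the simple additive form of IHS fusion. The random arrays and the equal-weight intensity definition are illustrative assumptions and do not reproduce the proposed algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-ins: a low-resolution RGB (multispectral) image upsampled to the target
# grid, and a high-resolution intensity channel from which spatial detail is taken.
rgb_up = rng.random((64, 64, 3))
intensity_hr = rng.random((64, 64))

# Simple additive IHS fusion: replace the intensity component of the upsampled colour
# image with the high-resolution intensity, preserving hue/saturation by adding the
# intensity difference back to every band.
intensity_lr = rgb_up.mean(axis=2)
fused = np.clip(rgb_up + (intensity_hr - intensity_lr)[..., None], 0.0, 1.0)

print(fused.shape)
```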
NASA Astrophysics Data System (ADS)
Sokol, Zbyněk; Mejsnar, Jan; Pop, Lukáš; Bližňák, Vojtěch
2017-09-01
A new method for the probabilistic nowcasting of instantaneous rain rates (ENS) based on the ensemble technique and extrapolation along Lagrangian trajectories of current radar reflectivity is presented. Assuming inaccurate forecasts of the trajectories, an ensemble of precipitation forecasts is calculated and used to estimate the probability that rain rates will exceed a given threshold in a given grid point. Although the extrapolation neglects the growth and decay of precipitation, their impact on the probability forecast is taken into account by the calibration of forecasts using the reliability component of the Brier score (BS). ENS forecasts the probability that the rain rates will exceed thresholds of 0.1, 1.0 and 3.0 mm/h in squares of 3 km by 3 km. The lead times were up to 60 min, and the forecast accuracy was measured by the BS. The ENS forecasts were compared with two other methods: combined method (COM) and neighbourhood method (NEI). NEI considered the extrapolated values in the square neighbourhood of 5 by 5 grid points of the point of interest as ensemble members, and the COM ensemble was comprised of united ensemble members of ENS and NEI. The results showed that the calibration technique significantly improves bias of the probability forecasts by including additional uncertainties that correspond to neglected processes during the extrapolation. In addition, the calibration can also be used for finding the limits of maximum lead times for which the forecasting method is useful. We found that ENS is useful for lead times up to 60 min for thresholds of 0.1 and 1 mm/h and approximately 30 to 40 min for a threshold of 3 mm/h. We also found that a reasonable size of the ensemble is 100 members, which provided better scores than ensembles with 10, 25 and 50 members. In terms of the BS, the best results were obtained by ENS and COM, which are comparable. However, ENS is better calibrated and thus preferable.
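The calibration step described above relies on the reliability component of the Brier score; a minimal sketch of computing the Brier score and its reliability term by binning forecast probabilities is given below, with synthetic forecast-observation pairs standing in for the radar-verified events.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy probabilistic nowcasts: forecast probabilities p that the rain rate exceeds a
# threshold, and binary outcomes o (stand-ins for radar-verified exceedances).
p = np.clip(rng.beta(2, 5, size=5000) + 0.1 * rng.normal(size=5000), 0, 1)
o = (rng.random(5000) < np.clip(p - 0.05, 0, 1)).astype(float)   # slightly biased forecasts

brier = np.mean((p - o) ** 2)

# Reliability component of the Brier score (Murphy decomposition): bin the forecast
# probabilities and compare each bin's mean forecast with its observed frequency.
bins = np.linspace(0, 1, 11)
idx = np.digitize(p, bins[1:-1])
reliability = 0.0
for k in range(10):
    mask = idx == k
    if mask.any():
        reliability += mask.sum() * (p[mask].mean() - o[mask].mean()) ** 2
reliability /= len(p)

print(f"Brier score = {brier:.4f}, reliability component = {reliability:.4f}")
```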
Assessment of Surface Air Temperature over China Using Multi-criterion Model Ensemble Framework
NASA Astrophysics Data System (ADS)
Li, J.; Zhu, Q.; Su, L.; He, X.; Zhang, X.
2017-12-01
General Circulation Models (GCMs) are designed to simulate the present climate and project future trends. It has been noticed that the performances of GCMs are not always in agreement with each other over different regions. Model ensemble techniques have been developed to post-process the GCMs' outputs and improve their prediction reliability. To evaluate the performances of GCMs, root-mean-square error, correlation coefficient, and uncertainty are commonly used statistical measures. However, the simultaneous achievement of satisfactory values for all these statistics cannot be guaranteed by many model ensemble techniques. Meanwhile, uncertainties and future scenarios are critical for Water-Energy management and operation. In this study, a new multi-model ensemble framework was proposed. It uses a state-of-the-art evolutionary multi-objective optimization algorithm, termed Multi-Objective Complex Evolution Global Optimization with Principal Component Analysis and Crowding Distance (MOSPD), to derive optimal GCM ensembles and demonstrate the trade-offs among various solutions. Such trade-off information was further analyzed with a robust Pareto front with respect to different statistical measures. A case study was conducted to optimize the surface air temperature (SAT) ensemble solutions over seven geographical regions of China for the historical period (1900-2005) and future projection (2006-2100). The results showed that the ensemble solutions derived with the MOSPD algorithm are superior to the simple model average and any single model output during the historical simulation period. For the future prediction, the proposed ensemble framework identified that the largest SAT change would occur in South Central China under the RCP 2.6 scenario, North Eastern China under the RCP 4.5 scenario, and North Western China under the RCP 8.5 scenario, while the smallest SAT change would occur in Inner Mongolia under the RCP 2.6 scenario, South Central China under the RCP 4.5 scenario, and South Central China under the RCP 8.5 scenario.
Liu, Shuguang; Tan, Zhengxi; Chen, Mingshi; Liu, Jinxun; Wein, Anne; Li, Zhengpeng; Huang, Shengli; Oeding, Jennifer; Young, Claudia; Verma, Shashi B.; Suyker, Andrew E.; Faulkner, Stephen P.
2012-01-01
The General Ensemble Biogeochemical Modeling System (GEMS) was designed with two features to reduce uncertainty. First, to reduce the errors in individual models, it uses multiple site-scale biogeochemical models to perform model simulations. Second, it adopts Monte Carlo ensemble simulations of each simulation unit (one site/pixel or group of sites/pixels with similar biophysical conditions) to incorporate uncertainties and variability (as measured by variances and covariance) of input variables into model simulations. In this chapter, we illustrate the applications of GEMS at the site and regional scales with an emphasis on incorporating agricultural practices. Challenges in modeling soil carbon dynamics and greenhouse gas emissions are also discussed.
Pauci ex tanto numero: reducing redundancy in multi-model ensembles
NASA Astrophysics Data System (ADS)
Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.
2013-02-01
We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction are documented within the air quality (AQ) community, despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues deriving directly from lack of independence; they undermine the significance of a multi-model ensemble and are the subject of this study. Shared biases among models will produce a biased ensemble, so it is essential that the errors of the ensemble members be independent for the biases to cancel out. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We use several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble, with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs does not always guarantee enhanced scores (this depends on the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; independence and skill need to be considered as separate criteria.
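As a schematic illustration of member selection against redundancy, the sketch below greedily builds a small subset of weakly error-correlated models from a toy multi-model ensemble. It is only one possible selection heuristic under illustrative assumptions, not the methods evaluated in the study.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy multi-model ensemble: errors (model minus observation) of 12 models at 500
# locations; several models deliberately share a common error component (redundancy).
n_models, n_obs = 12, 500
common = rng.normal(size=n_obs)
errors = np.array([0.8 * common + 0.6 * rng.normal(size=n_obs) if m < 8
                   else rng.normal(size=n_obs) for m in range(n_models)])

corr = np.corrcoef(errors)

# Greedy selection: start from the model with the smallest RMSE, then repeatedly add
# the model least correlated (on average) with those already chosen.
rmse = np.sqrt((errors ** 2).mean(axis=1))
selected = [int(np.argmin(rmse))]
while len(selected) < 4:
    remaining = [m for m in range(n_models) if m not in selected]
    avg_corr = [np.abs(corr[m, selected]).mean() for m in remaining]
    selected.append(remaining[int(np.argmin(avg_corr))])

print("selected members:", selected)
print("ensemble-mean RMSE, full vs subset: %.3f vs %.3f"
      % (np.sqrt((errors.mean(0) ** 2).mean()),
         np.sqrt((errors[selected].mean(0) ** 2).mean())))
```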
Cassette Series Designed for Live-Cell Imaging of Proteins and High Resolution Techniques in Yeast
Young, Carissa L.; Raden, David L.; Caplan, Jeffrey; Czymmek, Kirk; Robinson, Anne S.
2012-01-01
During the past decade, it has become clear that protein function and regulation are highly dependent upon intracellular localization. Although fluorescent protein variants are ubiquitously used to monitor protein dynamics, localization, and abundance; fluorescent light microscopy techniques often lack the resolution to explore protein heterogeneity and cellular ultrastructure. Several approaches have been developed to identify, characterize, and monitor the spatial localization of proteins and complexes at the sub-organelle level; yet, many of these techniques have not been applied to yeast. Thus, we have constructed a series of cassettes containing codon-optimized epitope tags, fluorescent protein variants that cover the full spectrum of visible light, a TetCys motif used for FlAsH-based localization, and the first evaluation in yeast of a photoswitchable variant – mEos2 – to monitor discrete subpopulations of proteins via confocal microscopy. This series of modules, complete with six different selection markers, provides the optimal flexibility during live-cell imaging and multicolor labeling in vivo. Furthermore, high-resolution imaging techniques include the yeast-enhanced TetCys motif that is compatible with diaminobenzidine photooxidation used for protein localization by electron microscopy and mEos2 that is ideal for super-resolution microscopy. We have examined the utility of our cassettes by analyzing all probes fused to the C-terminus of Sec61, a polytopic membrane protein of the endoplasmic reticulum of moderate protein concentration, in order to directly compare fluorescent probes, their utility and technical applications. Our series of cassettes expand the repertoire of molecular tools available to advance targeted spatiotemporal investigations using multiple live-cell, super-resolution or electron microscopy imaging techniques. PMID:22473760
Intermediate-filaments: from disordered building blocks to well-ordered cells
NASA Astrophysics Data System (ADS)
Kornreich, Micha; Malka-Gibor, Eti; Laser-Azogui, Adi; Doron, Ofer; Avinery, Ram; Herrmann, Harald; Beck, Roy
In the past decade it was found that ~50% of human proteins contain long disordered regions, which play significant functional roles. As these regions lack a defined 3D folded structure, their ensemble conformations can be studied using polymer physics statistical-mechanics arguments. We measure the structure and mechanical response of hydrogels composed of neuronal intermediate filaments proteins. In the nervous system, these proteins provide cells with their mechanical support and shape, via interactions of their long, highly charged and disordered protein chains. We employ synchrotron small-angle X-ray scattering and various microscopy techniques to investigate such hydrogels from the nano- to the macro-scale. In contrast to previous polymer physics theories and experiments, we find that shorter and less charged chains can promote network expansion. The results are explained by intricate interactions between specific domains on the interacting chains, and also suggest a novel structural justification for the changing protein compositions observed during neuronal development. We address the following questions: Can protein disorder have an important role in cellular architecture? Can structural disorder in the micro-scale induce orientational and translational order on the macro-scale? How do the physical properties of disordered protein regions, such as charge, length, and hydrophobicity, modulate the cellular super-structure?
Generalized Green's function molecular dynamics for canonical ensemble simulations
NASA Astrophysics Data System (ADS)
Coluci, V. R.; Dantas, S. O.; Tewary, V. K.
2018-05-01
The need for small integration time steps (~1 fs) in conventional molecular dynamics simulations is an important issue that inhibits the study of physical, chemical, and biological systems in real timescales. Additionally, to simulate those systems in contact with a thermal bath, thermostating techniques are usually applied. In this work, we generalize the Green's function molecular dynamics technique to allow simulations within the canonical ensemble. By applying this technique to one-dimensional systems, we were able to correctly describe important thermodynamic properties such as the temperature fluctuations, the temperature distribution, and the velocity autocorrelation function. We show that the proposed technique also allows the use of time steps one order of magnitude larger than those typically used in conventional molecular dynamics simulations. We expect that this technique can be used in long-timescale molecular dynamics simulations.
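The canonical-ensemble diagnostics named above (temperature fluctuations, temperature distribution, velocity autocorrelation) are straightforward to compute from any velocity trajectory. The sketch below is only illustrative: it generates a stand-in trajectory with a plain Langevin thermostat, not the Green's function technique of the paper, and then evaluates those diagnostics; units, system size and the kB value are arbitrary assumptions.

```python
import numpy as np

def langevin_trajectory(n_atoms=64, n_steps=20000, dt=1.0, T=300.0,
                        gamma=0.01, mass=1.0, kB=0.0083145):
    """Stand-in canonical-ensemble velocity trajectory: free particles plus a
    Langevin thermostat (Euler-Maruyama update); arbitrary, illustrative units."""
    rng = np.random.default_rng(0)
    v = rng.normal(0.0, np.sqrt(kB * T / mass), size=(n_atoms, 3))
    traj = np.empty((n_steps, n_atoms, 3))
    sigma = np.sqrt(2.0 * gamma * kB * T / mass * dt)
    for s in range(n_steps):
        v = v - gamma * v * dt + sigma * rng.normal(size=v.shape)
        traj[s] = v
    return traj

def temperature_series(v, mass=1.0, kB=0.0083145):
    # instantaneous kinetic temperature for every frame
    ke = 0.5 * mass * np.sum(v ** 2, axis=(1, 2))
    dof = 3 * v.shape[1]
    return 2.0 * ke / (dof * kB)

def velocity_autocorrelation(v, max_lag=500):
    # normalized <v(0).v(t)>, averaged over atoms and time origins
    c = np.array([np.mean(np.sum(v[:len(v) - lag] * v[lag:], axis=2))
                  for lag in range(max_lag)])
    return c / c[0]

traj = langevin_trajectory()
T_t = temperature_series(traj)
print("mean T:", T_t.mean(), "std T:", T_t.std())   # fluctuations about the target T
vacf = velocity_autocorrelation(traj)
```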
Assembly and microscopic characterization of DNA origami structures.
Scheible, Max; Jungmann, Ralf; Simmel, Friedrich C
2012-01-01
DNA origami is a revolutionary method for the assembly of molecular nanostructures from DNA with precisely defined dimensions and with an unprecedented yield. This can be utilized to arrange nanoscale components such as proteins or nanoparticles into pre-defined patterns. For applications it will now be of interest to arrange such components into functional complexes and study their geometry-dependent interactions. While commonly DNA nanostructures are characterized by atomic force microscopy or electron microscopy, these techniques often lack the time-resolution to study dynamic processes. It is therefore of considerable interest to also apply fluorescence microscopic techniques to DNA nanostructures. Of particular importance here is the utilization of novel super-resolved microscopy methods that enable imaging beyond the classical diffraction limit.
Bartke, Rebecca M; Cameron, Elizabeth L; Cristie-David, Ajitha S; Custer, Thomas C; Denies, Maxwell S; Daher, May; Dhakal, Soma; Ghosh, Soumi; Heinicke, Laurie A; Hoff, J Damon; Hou, Qian; Kahlscheuer, Matthew L; Karslake, Joshua; Krieger, Adam G; Li, Jieming; Li, Xiang; Lund, Paul E; Vo, Nguyen N; Park, Jun; Pitchiaya, Sethuramasundaram; Rai, Victoria; Smith, David J; Suddala, Krishna C; Wang, Jiarui; Widom, Julia R; Walter, Nils G
2015-05-01
Four days after the announcement of the 2014 Nobel Prize in Chemistry for "the development of super-resolved fluorescence microscopy" based on single molecule detection, the Single Molecule Analysis in Real-Time (SMART) Center at the University of Michigan hosted a "Principles of Single Molecule Techniques 2014" course. Through a combination of plenary lectures and an Open House at the SMART Center, the course took a snapshot of a technology with an especially broad and rapidly expanding range of applications in the biomedical and materials sciences. Highlighting the continued rapid emergence of technical and scientific advances, the course underscored just how brightly the future of the single molecule field shines. © 2014 Wiley Periodicals, Inc.
Multi-dimensional super-resolution imaging enables surface hydrophobicity mapping
NASA Astrophysics Data System (ADS)
Bongiovanni, Marie N.; Godet, Julien; Horrocks, Mathew H.; Tosatto, Laura; Carr, Alexander R.; Wirthensohn, David C.; Ranasinghe, Rohan T.; Lee, Ji-Eun; Ponjavic, Aleks; Fritz, Joelle V.; Dobson, Christopher M.; Klenerman, David; Lee, Steven F.
2016-12-01
Super-resolution microscopy allows biological systems to be studied at the nanoscale, but has been restricted to providing only positional information. Here, we show that it is possible to perform multi-dimensional super-resolution imaging to determine both the position and the environmental properties of single-molecule fluorescent emitters. The method presented here exploits the solvatochromic and fluorogenic properties of nile red to extract both the emission spectrum and the position of each dye molecule simultaneously enabling mapping of the hydrophobicity of biological structures. We validated this by studying synthetic lipid vesicles of known composition. We then applied both to super-resolve the hydrophobicity of amyloid aggregates implicated in neurodegenerative diseases, and the hydrophobic changes in mammalian cell membranes. Our technique is easily implemented by inserting a transmission diffraction grating into the optical path of a localization-based super-resolution microscope, enabling all the information to be extracted simultaneously from a single image plane.
Bhattacharyya, Moitrayee; Vishveshwara, Saraswathi
2011-07-01
In this article, we present a novel application of a quantum clustering (QC) technique to objectively cluster the conformations, sampled by molecular dynamics simulations performed on different ligand bound structures of the protein. We further portray each conformational population in terms of dynamically stable network parameters which beautifully capture the ligand induced variations in the ensemble in atomistic detail. The conformational populations thus identified by the QC method and verified by network parameters are evaluated for different ligand bound states of the protein pyrrolysyl-tRNA synthetase (DhPylRS) from D. hafniense. The ligand/environment induced re-distribution of protein conformational ensembles forms the basis for understanding several important biological phenomena such as allostery and enzyme catalysis. The atomistic level characterization of each population in the conformational ensemble in terms of the re-orchestrated networks of amino acids is a challenging problem, especially when the changes are minimal at the backbone level. Here we demonstrate that the QC method is sensitive to such subtle changes and is able to cluster MD snapshots which are similar at the side-chain interaction level. Although we have applied these methods on simulation trajectories of a modest time scale (20 ns each), we emphasize that our methodology provides a general approach towards an objective clustering of large-scale MD simulation data and may be applied to probe multistate equilibria at higher time scales, and to problems related to protein folding for any protein or protein-protein/RNA/DNA complex of interest with a known structure.
Magnetic Resonance Super-resolution Imaging Measurement with Dictionary-optimized Sparse Learning
NASA Astrophysics Data System (ADS)
Li, Jun-Bao; Liu, Jing; Pan, Jeng-Shyang; Yao, Hongxun
2017-06-01
Magnetic Resonance Super-resolution Imaging Measurement (MRIM) is an effective way of measuring materials. MRIM has wide applications in physics, chemistry, biology, geology, medical and material science, especially in medical diagnosis. It is feasible to improve the resolution of MR imaging by increasing radiation intensity, but high radiation intensity and long exposure to the magnetic field harm the human body. Thus, in practical applications, hardware-based imaging has reached its resolution limit. Software-based super-resolution technology is an effective way to improve image resolution. This work proposes a dictionary-optimized sparse learning framework for MR super-resolution. The framework addresses the problem of sample selection for dictionary learning in sparse reconstruction. A textural complexity-based image quality representation is proposed to choose the optimal samples for dictionary learning. Comprehensive experiments show that dictionary-optimized sparse learning improves the performance of sparse representation.
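As a rough illustration of dictionary-based sparse-coding super-resolution (the generic recipe, not the authors' textural-complexity sample selection), the following sketch learns an LR patch dictionary with scikit-learn, fits a paired HR dictionary by least squares on the same sparse codes, and reconstructs HR patches from the codes of an input image's LR patches. Patch size, number of atoms and the assumption that image dimensions are divisible by the scale factor are illustrative choices.

```python
import numpy as np
from scipy.ndimage import zoom
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def train_coupled_dictionaries(hr_images, scale=2, patch=8, n_atoms=256):
    """Learn an LR dictionary by sparse coding, then fit the paired HR dictionary
    by least squares on the same codes (a common coupled-dictionary recipe).
    Assumes image dimensions are divisible by `scale`."""
    lr_patches, hr_patches = [], []
    for hr in hr_images:
        lr = zoom(zoom(hr, 1.0 / scale), scale)      # blurred LR rendered on the HR grid
        lr_patches.append(extract_patches_2d(lr, (patch, patch), max_patches=2000, random_state=0))
        hr_patches.append(extract_patches_2d(hr, (patch, patch), max_patches=2000, random_state=0))
    X_lr = np.concatenate(lr_patches).reshape(-1, patch * patch)
    X_hr = np.concatenate(hr_patches).reshape(-1, patch * patch)
    dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0, random_state=0)
    codes = dl.fit_transform(X_lr)                   # sparse codes of the LR patches
    D_lr = dl.components_
    D_hr, *_ = np.linalg.lstsq(codes, X_hr, rcond=None)   # codes @ D_hr ~= X_hr
    return D_lr, D_hr

def super_resolve(lr_image, D_lr, D_hr, scale=2, patch=8):
    up = zoom(lr_image, scale)                       # spline upscale as the working grid
    P = extract_patches_2d(up, (patch, patch)).reshape(-1, patch * patch)
    codes = sparse_encode(P, D_lr, algorithm='omp', n_nonzero_coefs=5)
    hr_patches = (codes @ D_hr).reshape(-1, patch, patch)
    return reconstruct_from_patches_2d(hr_patches, up.shape)   # average overlapping patches
```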
Paparelli, Laura; Corthout, Nikky; Pavie, Benjamin; Annaert, Wim; Munck, Sebastian
2016-01-01
The spatial distribution of proteins within the cell affects their capability to interact with other molecules and directly influences cellular processes and signaling. At the plasma membrane, multiple factors drive protein compartmentalization into specialized functional domains, leading to the formation of clusters in which intermolecule interactions are facilitated. Therefore, quantifying protein distributions is a necessity for understanding their regulation and function. The recent advent of super-resolution microscopy has opened up the possibility of imaging protein distributions at the nanometer scale. In parallel, new spatial analysis methods have been developed to quantify distribution patterns in super-resolution images. In this chapter, we provide an overview of super-resolution microscopy and summarize the factors influencing protein arrangements on the plasma membrane. Finally, we highlight methods for analyzing clusterization of plasma membrane proteins, including examples of their applications.
Bhattacharya, D; Bhattacharya, R; Dhar, T K
1999-11-19
In an earlier communication we described a novel signal amplification technology termed Super-CARD, which is able to significantly improve antigen detection sensitivity in conventional Dot-ELISA by approximately 10^5-fold. The method utilizes hitherto unreported synthesized electron-rich proteins containing multiple phenolic groups which, when immobilized over a solid phase as blocking agent, markedly increase the signal amplification capability of the existing CARD method (Bhattacharya, R., Bhattacharya, D., Dhar, T.K., 1999. A novel signal amplification technology based on catalyzed reporter deposition and its application in a Dot-ELISA with ultra high sensitivity. J. Immunol. Methods 227, 31.). In this paper we describe the utilization of this Super-CARD amplification technique in ELISA and its applicability for the rapid determination of aflatoxin B1 (AFB1) in infected seeds. Using this method under identical conditions, the increase in absorbance over the CARD method was approximately 400%. The limit of detection of AFB1 by this method was 0.1 pg/well, the sensitivity enhancement being 5-fold over the optimized CARD ELISA. Furthermore, the total incubation time was reduced to 16 min compared to 50 min for the CARD method. Assay specificity was not adversely affected and the amount of AFB1 measured in seed extracts correlated well with the values obtained by conventional ELISA.
Z2×Z2 generalizations of 𝒩 = 2 super Schrödinger algebras and their representations
NASA Astrophysics Data System (ADS)
Aizawa, N.; Segar, J.
2017-11-01
We generalize the real and chiral N = 2 super Schrödinger algebras to Z2×Z2-graded Lie superalgebras. This is done by D-module presentation, and as a consequence, the D-module presentations of the Z2×Z2-graded superalgebras are identical to those of the super Schrödinger algebras. We then generalize the calculus over Grassmann numbers to the Z2×Z2 setting. Using it and the standard technique of Lie theory, we obtain a vector field realization of the Z2×Z2-graded superalgebras. A vector field realization of the Z2×Z2 generalization of the N = 1 super Schrödinger algebra is also presented.
Platform for High-Assurance Cloud Computing
2016-06-01
to create today’s standard cloud computing applications and services. Additionally, our SuperCloud (a related but distinct project under the same MRC funding) reduces vendor lock-in and permits applications to migrate, to follow...managing key-value storage with strong assurance properties. This first accomplishment allows us to climb the cloud technical stack, by offering
New textile composite materials development, production, application
NASA Technical Reports Server (NTRS)
Mikhailov, Petr Y.
1993-01-01
New textile composite materials development, production, and application are discussed. Topics covered include: super-high-strength, super-high-modulus fibers, filaments, and materials manufactured on their basis; heat-resistant and nonflammable fibers, filaments, and textile fabrics; fibers and textile fabrics based on fluorocarbon polymers; antifriction textile fabrics based on polyfen filaments; development of new types of textile combines and composite materials; and carbon filament-based fabrics.
Ensemble: an Architecture for Mission-Operations Software
NASA Technical Reports Server (NTRS)
Norris, Jeffrey; Powell, Mark; Fox, Jason; Rabe, Kenneth; Shu, IHsiang; McCurdy, Michael; Vera, Alonso
2008-01-01
Ensemble is the name of an open architecture for, and a methodology for the development of, spacecraft mission operations software. Ensemble is also potentially applicable to the development of non-spacecraft mission-operations-type software. Ensemble capitalizes on the strengths of the open-source Eclipse software and its architecture to address several issues that have arisen repeatedly in the development of mission-operations software: Heretofore, mission-operations application programs have been developed in disparate programming environments and integrated during the final stages of development of missions. The programs have been poorly integrated, and it has been costly to develop, test, and deploy them. Users of each program have been forced to interact with several different graphical user interfaces (GUIs). Also, the strategy typically used in integrating the programs has yielded serial chains of operational software tools of such a nature that during use of a given tool, it has not been possible to gain access to the capabilities afforded by other tools. In contrast, the Ensemble approach offers a low-risk path towards tighter integration of mission-operations software tools.
NASA Astrophysics Data System (ADS)
Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.
2015-11-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.
A target recognition method for maritime surveillance radars based on hybrid ensemble selection
NASA Astrophysics Data System (ADS)
Fan, Xueman; Hu, Shengliang; He, Jingbo
2017-11-01
In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.
Quasi-most unstable modes: a window to 'À la carte' ensemble diversity?
NASA Astrophysics Data System (ADS)
Homar Santaner, Victor; Stensrud, David J.
2010-05-01
The atmospheric scientific community is nowadays facing the ambitious challenge of providing useful forecasts of atmospheric events that produce high societal impact. The low level of social resilience to false alarms creates tremendous pressure on forecasting offices to issue accurate, timely and reliable warnings. Currently, no operational numerical forecasting system is able to respond to the societal demand for high-resolution (in time and space) predictions in the 12-72h time span. The main reasons for such deficiencies are the lack of adequate observations and the high non-linearity of the numerical models that are currently used. The whole weather forecasting problem is intrinsically probabilistic and current methods aim at coping with the various sources of uncertainties and the error propagation throughout the forecasting system. This probabilistic perspective is often created by generating ensembles of deterministic predictions that are aimed at sampling the most important sources of uncertainty in the forecasting system. The ensemble generation/sampling strategy is a crucial aspect of their performance and various methods have been proposed. Although global forecasting offices have been using ensembles of perturbed initial conditions for medium-range operational forecasts since 1994, no consensus exists regarding the optimum sampling strategy for high resolution short-range ensemble forecasts. Bred vectors, however, have been hypothesized to better capture the growing modes in the highly nonlinear mesoscale dynamics of severe episodes than singular vectors or observation perturbations. Yet even this technique is not able to produce enough diversity in the ensembles to accurately and routinely predict extreme phenomena such as severe weather. Thus, we propose a new method to generate ensembles of initial conditions perturbations that is based on the breeding technique. Given a standard bred mode, a set of customized perturbations is derived with specified amplitudes and horizontal scales. This allows the ensemble to excite growing modes across a wider range of scales. Results show that this approach produces significantly more spread in the ensemble prediction than standard bred modes alone. Several examples that illustrate the benefits from this approach for severe weather forecasts will be provided.
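For readers unfamiliar with breeding, the following sketch shows the standard bred-vector cycle on a toy Lorenz-63 model: a control and a perturbed forecast are run side by side, and the grown difference is rescaled to a fixed amplitude after every cycle. The amplitude and the +/- scalings of the resulting bred mode stand in, very loosely, for the customized amplitudes and scales proposed in the abstract; the model, step sizes and cycle length are arbitrary assumptions.

```python
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def integrate(x, dt=0.01, steps=50):
    for _ in range(steps):                              # plain RK4 integration
        k1 = lorenz63(x); k2 = lorenz63(x + 0.5 * dt * k1)
        k3 = lorenz63(x + 0.5 * dt * k2); k4 = lorenz63(x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def bred_vector(x0, amplitude=0.5, cycles=20, rng=np.random.default_rng(1)):
    """Classic breeding: run control and perturbed forecasts side by side and
    rescale the grown difference to a fixed amplitude after every cycle."""
    control = x0.copy()
    perturbed = x0 + amplitude * rng.normal(size=3)
    for _ in range(cycles):
        control = integrate(control)
        perturbed = integrate(perturbed)
        diff = perturbed - control
        diff *= amplitude / np.linalg.norm(diff)        # rescale the bred perturbation
        perturbed = control + diff
    return diff                                         # bred mode used to perturb the ICs

bm = bred_vector(np.array([1.0, 1.0, 20.0]))
ensemble_ics = [np.array([1.0, 1.0, 20.0]) + s * bm for s in (-1.0, -0.5, 0.5, 1.0)]
```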
ERIC Educational Resources Information Center
Silvey, Brian A.; Fisher, Ryan A.
2015-01-01
The purpose of this study was to examine whether one aspect of conducting technique, the conducting plane, would affect band and/or choral musicians' perceptions of conductor and ensemble expressivity. A band and a choral conductor were each videotaped conducting 1-min excerpts from Morten Lauridsen's "O Magnum Mysterium" while using a…
Negative correlation learning for customer churn prediction: a comparison study.
Rodan, Ali; Fayyoumi, Ayham; Faris, Hossam; Alsakran, Jamal; Al-Kadi, Omar
2015-01-01
Recently, telecommunication companies have been paying more attention toward the problem of identification of customer churn behavior. In business, it is well known for service providers that attracting new customers is much more expensive than retaining existing ones. Therefore, adopting accurate models that are able to predict customer churn can effectively help in customer retention campaigns and maximizing the profit. In this paper we utilize an ensemble of multilayer perceptrons (MLP) whose training is obtained using negative correlation learning (NCL) for predicting customer churn in a telecommunication company. Experimental results confirm that the NCL-based MLP ensemble can achieve better generalization performance (high churn rate) compared with an ensemble of MLPs without NCL (flat ensemble) and other common data mining techniques used for churn analysis.
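A minimal sketch of negative correlation learning is given below; to keep it short, simple linear regressors stand in for the MLPs and a squared-error loss replaces the churn-classification setting. The gradient uses the commonly cited simplified NCL form (the member's own error minus lambda times its deviation from the ensemble mean); lambda, the learning rate and the toy data are assumptions.

```python
import numpy as np

def train_ncl_ensemble(X, y, n_members=5, lam=0.5, lr=0.01, epochs=200,
                       rng=np.random.default_rng(0)):
    """Negative correlation learning with linear models standing in for MLPs:
    each member's gradient mixes its own squared error with a penalty that
    pushes it away from the ensemble mean prediction."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])                 # add a bias column
    W = rng.normal(scale=0.1, size=(n_members, d + 1))   # one weight row per member
    for _ in range(epochs):
        F = Xb @ W.T                                     # member predictions, shape (n, M)
        fbar = F.mean(axis=1, keepdims=True)             # ensemble mean prediction
        G = (F - y[:, None]) - lam * (F - fbar)          # simplified NCL gradient per member
        W -= lr * (G.T @ Xb) / n
    return W

def predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ W.T).mean(axis=1)                       # average the members

# toy usage with synthetic regression data
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=500)
W = train_ncl_ensemble(X, y)
print(np.mean((predict(W, X) - y) ** 2))
```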
Designing a deep brain stimulator to suppress pathological neuronal synchrony.
Montaseri, Ghazal; Yazdanpanah, Mohammad Javad; Bahrami, Fariba
2015-03-01
Some neuropathologies are believed to be related to abnormal synchronization of neurons. In the line of therapy, designing effective deep brain stimulators to suppress the pathological synchrony among neuronal ensembles is a challenge of high clinical relevance. The stimulation should be able to disrupt the synchrony in the presence of latencies due to imperfect knowledge about the parameters of a neuronal ensemble and the stimulation impacts on the ensemble. We propose an adaptive desynchronizing deep brain stimulator capable of dealing with these uncertainties. We analyze the collective behavior of the stimulated neuronal ensemble and show that, using the designed stimulator, the resulting asynchronous state is stable. Simulation results reveal the efficiency of the proposed technique. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacGillavry, Harold D., E-mail: h.d.macgillavry@uu.nl; Hoogenraad, Casper C., E-mail: c.hoogenraad@uu.nl
2015-07-15
The molecular architecture of dendritic spines defines the efficiency of signal transmission across excitatory synapses. It is therefore critical to understand the mechanisms that control the dynamic localization of the molecular constituents within spines. However, because of the small scale at which most processes within spines take place, conventional light microscopy techniques are not adequate to provide the necessary level of resolution. Recently, super-resolution imaging techniques have overcome the classical barrier imposed by the diffraction of light, and can now resolve the localization and dynamic behavior of proteins within small compartments with nanometer precision, revolutionizing the study of dendritic spine architecture. Here, we highlight exciting new findings from recent super-resolution studies on neuronal spines, and discuss how these studies revealed important new insights into how protein complexes are assembled and how their dynamic behavior shapes the efficiency of synaptic transmission.
Ashtiani Haghighi, Donya; Mobayen, Saleh
2018-04-01
This paper proposes an adaptive super-twisting decoupled terminal sliding mode control technique for a class of fourth-order systems. The adaptive-tuning law eliminates the requirement of knowledge about the upper bounds of external perturbations. Using the proposed control procedure, the state variables of the cart-pole system converge to the decoupled terminal sliding surfaces and their equilibrium points in finite time. Moreover, via the super-twisting algorithm, the chattering phenomenon is avoided without affecting the control performance. The numerical results demonstrate the high stabilization accuracy and lower performance-index values of the suggested method over the other ones. The simulation results on the cart-pole system as well as experimental validations demonstrate that the proposed control technique exhibits a reasonable performance in comparison with the other methods. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Localization-based super-resolution imaging meets high-content screening.
Beghin, Anne; Kechkar, Adel; Butler, Corey; Levet, Florian; Cabillic, Marine; Rossier, Olivier; Giannone, Gregory; Galland, Rémi; Choquet, Daniel; Sibarita, Jean-Baptiste
2017-12-01
Single-molecule localization microscopy techniques have proven to be essential tools for quantitatively monitoring biological processes at unprecedented spatial resolution. However, these techniques are very low throughput and are not yet compatible with fully automated, multiparametric cellular assays. This shortcoming is primarily due to the huge amount of data generated during imaging and the lack of software for automation and dedicated data mining. We describe an automated quantitative single-molecule-based super-resolution methodology that operates in standard multiwell plates and uses analysis based on high-content screening and data-mining software. The workflow is compatible with fixed- and live-cell imaging and allows extraction of quantitative data like fluorophore photophysics, protein clustering or dynamic behavior of biomolecules. We demonstrate that the method is compatible with high-content screening using 3D dSTORM and DNA-PAINT based super-resolution microscopy as well as single-particle tracking.
NASA Astrophysics Data System (ADS)
Hoteit, I.; Hollt, T.; Hadwiger, M.; Knio, O. M.; Gopalakrishnan, G.; Zhan, P.
2016-02-01
Ocean reanalyses and forecasts are nowadays generated by combining ensemble simulations with data assimilation techniques. Most of these techniques resample the ensemble members after each assimilation cycle. Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially with the number of assimilation cycles. In general a single possible path is not of interest but only the probabilities that any point in space might be reached by a particle at some point in time. We present an approach using probability-weighted piecewise particle trajectories to allow for interactive probability mapping. This is achieved by binning the domain and splitting up the tracing process into the individual assimilation cycles, so that particles that fall into the same bin after a cycle can be treated as a single particle with a larger probability as input for the next cycle. As a result we lose the possibility to track individual particles, but can create probability maps for any desired seed at interactive rates. The technique is integrated in an interactive visualization system that enables the visual analysis of the particle traces side by side with other forecast variables, such as the sea surface height, and their corresponding behavior over time. By harnessing the power of modern graphics processing units (GPUs) for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real-time, view specific parameter settings or simulation models and move between different spatial or temporal regions without delay. In addition our system provides advanced visualizations to highlight the uncertainty, or show the complete distribution of the simulations at user-defined positions over the complete time series of the domain.
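The binning-and-merging idea can be sketched in a few lines: each (bin, probability) particle is advected by every ensemble member's flow for one assimilation cycle, and all particles that land in the same bin are merged by summing their probabilities. In the sketch below the member "flows" are simple bin-to-bin functions and members are weighted equally; both are illustrative assumptions, and the GPU mapping described in the abstract is not attempted.

```python
import numpy as np
from collections import defaultdict

def trace_probability_map(seed_bin, member_flows, n_cycles, grid_shape):
    """member_flows[c][m] maps a bin index (i, j) to the bin reached after
    cycle c under ensemble member m.  Particles falling into the same bin
    after a cycle are merged into one particle with the summed probability."""
    particles = {seed_bin: 1.0}                      # bin -> probability
    history = [particles]
    for c in range(n_cycles):
        merged = defaultdict(float)
        members = member_flows[c]
        w = 1.0 / len(members)                       # equal member weights here
        for b, p in particles.items():
            for flow in members:
                merged[flow(b)] += p * w             # re-bin and accumulate probability
        particles = dict(merged)
        history.append(particles)
    prob_map = np.zeros(grid_shape)
    for (i, j), p in particles.items():
        prob_map[i, j] = p
    return prob_map, history

# toy usage: 3 members that drift the seed right/up by different amounts on a 20x20 grid
def make_flow(di, dj, shape=(20, 20)):
    return lambda b: (min(b[0] + di, shape[0] - 1), min(b[1] + dj, shape[1] - 1))

member_flows = [[make_flow(1, 0), make_flow(1, 1), make_flow(0, 1)] for _ in range(5)]
prob_map, _ = trace_probability_map((0, 0), member_flows, n_cycles=5, grid_shape=(20, 20))
print(prob_map.sum())   # probabilities still sum to 1 after merging
```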
Locally Weighted Ensemble Clustering.
Huang, Dong; Wang, Chang-Dong; Lai, Jian-Huang
2018-05-01
Due to its ability to combine multiple base clusterings into a probably better and more robust clustering, the ensemble clustering technique has been attracting increasing attention in recent years. Despite the significant success, one limitation of most existing ensemble clustering methods is that they generally treat all base clusterings equally regardless of their reliability, which makes them vulnerable to low-quality base clusterings. Although some efforts have been made to (globally) evaluate and weight the base clusterings, these methods tend to view each base clustering as an individual and neglect the local diversity of clusters inside the same base clustering. It remains an open problem how to evaluate the reliability of clusters and exploit the local diversity in the ensemble to enhance the consensus performance, especially in the case when there is no access to data features or specific assumptions on data distribution. To address this, in this paper, we propose a novel ensemble clustering approach based on ensemble-driven cluster uncertainty estimation and a local weighting strategy. In particular, the uncertainty of each cluster is estimated by considering the cluster labels in the entire ensemble via an entropic criterion. A novel ensemble-driven cluster validity measure is introduced, and a locally weighted co-association matrix is presented to serve as a summary for the ensemble of diverse clusters. With the local diversity in ensembles exploited, two novel consensus functions are further proposed. Extensive experiments on a variety of real-world datasets demonstrate the superiority of the proposed approach over the state-of-the-art.
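A hedged sketch of the two ingredients named above follows: an entropy-based reliability weight per cluster, and a locally weighted co-association matrix that feeds a simple average-linkage consensus step. The exact weighting function (exponential of the negative normalized entropy) and the hierarchical consensus are illustrative stand-ins, not the paper's consensus functions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_weights(base_labels):
    """Entropy-based reliability for every cluster in every base clustering:
    the lower the entropy of its overlap with the other base clusterings,
    the closer its weight is to 1."""
    M, n = base_labels.shape
    weights = []
    for i in range(M):
        w_i = {}
        for lab in np.unique(base_labels[i]):
            members = base_labels[i] == lab
            ent = 0.0
            for j in range(M):
                if j == i:
                    continue
                counts = np.bincount(base_labels[j][members])
                p = counts[counts > 0] / members.sum()
                ent += -(p * np.log(p)).sum()
            w_i[lab] = np.exp(-ent / max(M - 1, 1))   # uncertain clusters get small weight
        weights.append(w_i)
    return weights

def locally_weighted_coassociation(base_labels):
    M, n = base_labels.shape
    weights = cluster_weights(base_labels)
    A = np.zeros((n, n))
    for i in range(M):
        for lab, w in weights[i].items():
            idx = np.where(base_labels[i] == lab)[0]
            A[np.ix_(idx, idx)] += w                   # co-association weighted by reliability
    return A / M

def consensus(base_labels, k):
    A = locally_weighted_coassociation(base_labels)
    D = squareform(1.0 - A, checks=False)              # similarity -> condensed distance
    return fcluster(linkage(D, method='average'), t=k, criterion='maxclust')

# toy usage: 4 noisy base clusterings of 12 points, consensus into 2 clusters
rng = np.random.default_rng(3)
truth = np.array([0] * 6 + [1] * 6)
base = np.array([np.where(rng.random(12) < 0.15, 1 - truth, truth) for _ in range(4)])
print(consensus(base, k=2))
```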
Dynamics and Statistical Mechanics of Rotating and non-Rotating Vortical Flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Chjan
Three projects were analyzed with the overall aim of developing a computational/analytical model for estimating values of the energy, angular momentum, enstrophy and total variation of fluid height at phase transitions between disordered and self-organized flow states in planetary atmospheres. It is believed that these transitions in equilibrium statistical mechanics models play a role in the construction of large-scale, stable structures including super-rotation in the Venusian atmosphere and the formation of the Great Red Spot on Jupiter. Exact solutions of the spherical energy-enstrophy models for rotating planetary atmospheres by Kac's method of steepest descent predicted phase transitions to super-rotating solid-body flows at high energy to enstrophy ratio for all planetary spins and to sub-rotating modes if the planetary spin is large enough. These canonical statistical ensembles are well-defined for the long-range energy interactions that arise from 2D fluid flows on compact oriented manifolds such as the surface of the sphere and torus. This is because in Fourier space available through Hodge theory, the energy terms are exactly diagonalizable and hence have zero range, leading to well-defined heat baths.
Mental Health Risk Adjustment with Clinical Categories and Machine Learning.
Shrestha, Akritee; Bergquist, Savannah; Montz, Ellen; Rose, Sherri
2017-12-15
To propose nonparametric ensemble machine learning for mental health and substance use disorders (MHSUD) spending risk adjustment formulas, including considering Clinical Classification Software (CCS) categories as diagnostic covariates over the commonly used Hierarchical Condition Category (HCC) system. 2012-2013 Truven MarketScan database. We implement 21 algorithms to predict MHSUD spending, as well as a weighted combination of these algorithms called super learning. The algorithm collection included seven unique algorithms that were supplied with three differing sets of MHSUD-related predictors alongside demographic covariates: HCC, CCS, and HCC + CCS diagnostic variables. Performance was evaluated based on cross-validated R² and predictive ratios. Results show that super learning had the best performance based on both metrics. The top single algorithm was random forests, which improved on ordinary least squares regression by 10 percent with respect to relative efficiency. CCS category-based formulas were generally more predictive of MHSUD spending compared to HCC-based formulas. Literature supports the potential benefit of implementing a separate MHSUD spending risk adjustment formula. Our results suggest there is an incentive to explore machine learning for MHSUD-specific risk adjustment, as well as considering CCS categories over HCCs. © Health Research and Educational Trust.
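For readers unfamiliar with super learning, the sketch below shows the core mechanics with scikit-learn: out-of-fold predictions from a few base learners are combined with non-negative, normalized weights fitted on the held-out predictions. The three base learners, the NNLS meta-learner and the synthetic data are assumptions; the study's 21-algorithm library and the spending data are not reproduced.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.ensemble import RandomForestRegressor
from scipy.optimize import nnls

def super_learner(X, y, learners, n_splits=5):
    """Fit a weighted combination of base learners, with the weights chosen by
    non-negative least squares on cross-validated (out-of-fold) predictions."""
    Z = np.zeros((len(y), len(learners)))          # out-of-fold prediction matrix
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        for j, make in enumerate(learners):
            model = make().fit(X[train], y[train])
            Z[test, j] = model.predict(X[test])
    w, _ = nnls(Z, y)                              # non-negative meta-weights
    w = w / w.sum() if w.sum() > 0 else np.full(len(learners), 1.0 / len(learners))
    fitted = [make().fit(X, y) for make in learners]   # refit each learner on all data
    return fitted, w

def predict(fitted, w, X):
    return sum(wi * m.predict(X) for m, wi in zip(fitted, w))

learners = [LinearRegression, lambda: Ridge(alpha=1.0),
            lambda: RandomForestRegressor(n_estimators=200, random_state=0)]
# toy usage; in the study X, y would be the demographic + CCS/HCC covariates and observed spending
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 6))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=300)
fitted, w = super_learner(X, y, learners)
print("meta-weights:", w)
```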
Comparison of Confocal and Super-Resolution Reflectance Imaging of Metal Oxide Nanoparticles
Guggenheim, Emily J.; Khan, Abdullah; Pike, Jeremy; Chang, Lynne; Lynch, Iseult; Rappoport, Joshua Z.
2016-01-01
The potential for human exposure to manufactured nanoparticles (NPs) has increased in recent years, in part through the incorporation of engineered particles into a wide range of commercial goods and medical applications. NPs are ideal candidates for use as therapeutic and diagnostic tools within biomedicine; however, concern exists regarding their efficacy and safety. Thus, developing techniques for the investigation of NP uptake into cells is critically important. Current intracellular NP investigations rely on the use of either Transmission Electron Microscopy (TEM), which provides ultrahigh resolution, but involves cumbersome sample preparation rendering the technique incompatible with live cell imaging, or fluorescent labelling, which suffers from photobleaching, poor bioconjugation and, often, alteration of NP surface properties. Reflected light imaging provides an alternative non-destructive label free technique well suited, but not limited to, the visualisation of NP uptake within model systems, such as cells. Confocal reflectance microscopy provides optical sectioning and live imaging capabilities, with little sample preparation. However, confocal microscopy is diffraction limited; thus, the X-Y resolution is restricted to ~250 nm, substantially larger than the <100 nm size of NPs. Techniques such as super-resolution light microscopy overcome this fundamental limitation, providing increased X-Y resolution. The use of Reflectance SIM (R-SIM) for NP imaging has previously only been demonstrated on custom-built microscopes, restricting the widespread use and limiting NP investigations. This paper demonstrates the use of a commercial SIM microscope for the acquisition of super-resolution reflectance data with X-Y resolution of 115 nm, a greater than two-fold increase compared to that attainable with RCM. This increase in resolution is advantageous for visualising small, closely spaced structures, such as NP clusters, previously unresolvable by RCM, and when investigating the subcellular trafficking of NPs within fluorescently labelled cellular compartments. NP signal can be observed using RCM, R-SIM and TEM and a direct comparison is presented. Each of these techniques has its own benefits and limitations; RCM and R-SIM provide novel complementary information while the combination of modalities provides a unique opportunity to gain additional information regarding NP uptake. The use of multiple imaging methods therefore greatly enhances the range of NPs that can be studied under label-free conditions. PMID:27695038
Nonlinear problems in data-assimilation : Can synchronization help?
NASA Astrophysics Data System (ADS)
Tribbia, J. J.; Duane, G. S.
2009-12-01
Over the past several years, operational weather centers have initiated ensemble prediction and assimilation techniques to estimate the error covariance of forecasts in the short and the medium range. The ensemble techniques used are based on linear methods. This technique has been shown to be a useful indicator of skill in the linear range where forecast errors are small relative to climatological variance. While this advance has been impressive, there are still ad hoc aspects of its use in practice, like the need for covariance inflation, which are troubling. Furthermore, to be of utility in the nonlinear range an ensemble assimilation and prediction method must be capable of giving probabilistic information for the situation where a probability density forecast becomes multi-modal. A prototypical, simple example of such a situation is the planetary-wave regime transition where the pdf is bimodal. Our recent research shows how the inconsistencies and extensions of linear methodology can be consistently treated using the paradigm of synchronization, which views the problems of assimilation and forecasting as those of optimizing the forecast model state with respect to the future evolution of the atmosphere.
Impact of Damping Uncertainty on SEA Model Response Variance
NASA Technical Reports Server (NTRS)
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
Multiphysics superensemble forecast applied to Mediterranean heavy precipitation situations
NASA Astrophysics Data System (ADS)
Vich, M.; Romero, R.
2010-11-01
The high-impact precipitation events that regularly affect the western Mediterranean coastal regions are still difficult to predict with the current prediction systems. Bearing this in mind, this paper focuses on the superensemble technique applied to the precipitation field. Encouraged by the skill shown by a previous multiphysics ensemble prediction system applied to western Mediterranean precipitation events, the superensemble is fed with this ensemble. The training phase of the superensemble contributes to the actual forecast with weights obtained by comparing the past performance of the ensemble members and the corresponding observed states. The non-hydrostatic MM5 mesoscale model is used to run the multiphysics ensemble. Simulations are performed with a 22.5 km resolution domain (Domain 1 in http://mm5forecasts.uib.es) nested in the ECMWF forecast fields. The period between September and December 2001 is used to train the superensemble and a collection of 19 MEDEX cyclones is used to test it. The verification procedure involves testing the superensemble performance and comparing it with that of the poor-man's and bias-corrected ensemble means and the multiphysics EPS control member. The results emphasize the need for a well-behaved training phase to obtain good results with the superensemble technique. A strategy to obtain this improved training phase is already outlined.
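The superensemble's training and forecast phases amount to a per-grid-point multiple linear regression of observations on the member forecasts over the training window, followed by applying the learned weights to the forecast-day members. The sketch below illustrates exactly that on synthetic data; the member count, training length and toy error structure are assumptions, not the MM5/ECMWF configuration.

```python
import numpy as np

def train_superensemble_weights(train_fcsts, train_obs):
    """train_fcsts: (n_days, n_members, n_points) member forecasts over the
    training period; train_obs: (n_days, n_points) verifying observations.
    Returns per-grid-point regression weights (last row is the intercept)."""
    n_days, n_members, n_points = train_fcsts.shape
    weights = np.zeros((n_members + 1, n_points))
    for p in range(n_points):
        A = np.column_stack([train_fcsts[:, :, p], np.ones(n_days)])
        weights[:, p], *_ = np.linalg.lstsq(A, train_obs[:, p], rcond=None)
    return weights

def superensemble_forecast(fcsts, weights):
    """fcsts: (n_members, n_points) forecast-day member fields."""
    A = np.vstack([fcsts, np.ones((1, fcsts.shape[1]))])
    return np.einsum('mp,mp->p', weights, A)

# toy usage: 3 members with different error levels, 100 grid points, 30 training days
rng = np.random.default_rng(5)
truth = rng.normal(size=(31, 100))
members = truth[:, None, :] + np.array([0.5, 1.0, 2.0])[None, :, None] * rng.normal(size=(31, 3, 100))
W = train_superensemble_weights(members[:30], truth[:30])
sup = superensemble_forecast(members[30], W)
print("superensemble RMSE:", np.sqrt(np.mean((sup - truth[30]) ** 2)),
      "ensemble-mean RMSE:", np.sqrt(np.mean((members[30].mean(0) - truth[30]) ** 2)))
```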
A new family Jacobian solver for global three-dimensional modeling of atmospheric chemistry
NASA Astrophysics Data System (ADS)
Zhao, Xuepeng; Turco, Richard P.; Shen, Mei
1999-01-01
We present a new technique to solve complex sets of photochemical rate equations that is applicable to global modeling of the troposphere and stratosphere. The approach is based on the concept of "families" of species, whose chemical rate equations are tightly coupled. Variations of species concentrations within a family can be determined by inverting a linearized Jacobian matrix representing the family group. Since this group consists of a relatively small number of species the corresponding Jacobian has a low order (a minimatrix) compared to the Jacobian of the entire system. However, we go further and define a super-family that is the set of all families. The super-family is also solved by linearization and matrix inversion. The resulting Super-Family Matrix Inversion (SFMI) scheme is more stable and accurate than common family approaches. We discuss the numerical structure of the SFMI scheme and apply our algorithms to a comprehensive set of photochemical reactions. To evaluate performance, the SFMI scheme is compared with an optimized Gear solver. We find that the SFMI technique can be at least an order of magnitude more efficient than existing chemical solvers while maintaining relative errors in the calculations of 15% or less over a diurnal cycle. The largest SFMI errors arise at sunrise and sunset and during the evening when species concentrations may be very low. We show that sunrise/sunset errors can be minimized through a careful treatment of photodissociation during these periods; the nighttime deviations are negligible from the point of view of acceptable computational accuracy. The stability and flexibility of the SFMI algorithm should be sufficient for most modeling applications until major improvements in other modeling factors are achieved. In addition, because of its balanced computational design, SFMI can easily be adapted to parallel computing architectures. SFMI thus should allow practical long-term integrations of global chemistry coupled to general circulation and climate models, studies of interannual and interdecadal variability in atmospheric composition, simulations of past multidecadal trends owing to anthropogenic emissions, long-term forecasting associated with projected emissions, and sensitivity analyses for a wide range of physical and chemical parameters.
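The essential linear-algebra step behind family solvers can be illustrated with a single family: linearize a backward-Euler update around the current concentrations and invert the family's low-order Jacobian "minimatrix". The sketch below does this for a toy three-species chain; it is not the SFMI super-family scheme, and the rate constants are arbitrary assumptions.

```python
import numpy as np

def family_implicit_step(c, rate, jacobian, dt):
    """One linearized backward-Euler step for a single family:
    solve (I/dt - J) * dc = f(c) and update c <- c + dc."""
    A = np.eye(len(c)) / dt - jacobian(c)   # low-order family minimatrix
    dc = np.linalg.solve(A, rate(c))
    return c + dc

# toy 3-species family: A -> B -> C with rate constants k1, k2 (illustrative values)
k1, k2 = 0.5, 0.2

def rate(c):
    a, b, _ = c
    return np.array([-k1 * a, k1 * a - k2 * b, k2 * b])

def jacobian(c):
    return np.array([[-k1, 0.0, 0.0],
                     [k1, -k2, 0.0],
                     [0.0, k2, 0.0]])

c = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    c = family_implicit_step(c, rate, jacobian, dt=1.0)   # large, stable implicit steps
print(c, c.sum())   # total mass stays 1 for this linear toy family
```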
Pauci ex tanto numero: reduce redundancy in multi-model ensembles
NASA Astrophysics Data System (ADS)
Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.
2013-08-01
We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Shared, dependent biases among models do not cancel out but will instead determine a biased ensemble. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (but this depends upon the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skills need to be considered disjointly.
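One simple way to build a skilful, weakly correlated subset (illustrative only, not one of the paper's selection methods) is a greedy selection that trades member skill against correlation with the members already chosen, as sketched below; the skill normalization and the alpha trade-off parameter are assumptions.

```python
import numpy as np

def select_members(forecasts, obs, n_select, alpha=0.5):
    """forecasts: (n_members, n_samples); obs: (n_samples,).
    Greedy selection trading off member skill (RMSE vs. observations) against
    redundancy (mean absolute correlation with already selected members)."""
    rmse = np.sqrt(np.mean((forecasts - obs) ** 2, axis=1))
    skill = (rmse.max() - rmse) / (rmse.max() - rmse.min() + 1e-12)   # 1 = best member
    corr = np.abs(np.corrcoef(forecasts))
    selected = [int(np.argmin(rmse))]                  # start from the most skilful member
    while len(selected) < n_select:
        best, best_score = None, -np.inf
        for m in range(forecasts.shape[0]):
            if m in selected:
                continue
            redundancy = corr[m, selected].mean()
            score = alpha * skill[m] - (1 - alpha) * redundancy
            if score > best_score:
                best, best_score = m, score
        selected.append(best)
    return selected

# toy usage: 6 members, two of them near-duplicates of member 0
rng = np.random.default_rng(6)
obs = rng.normal(size=200)
base = obs + rng.normal(scale=0.3, size=(6, 200))
base[1] = base[0] + rng.normal(scale=0.05, size=200)   # redundant copies of member 0
base[2] = base[0] + rng.normal(scale=0.05, size=200)
print(select_members(base, obs, n_select=3))
```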
Using ensembles in water management: forecasting dry and wet episodes
NASA Astrophysics Data System (ADS)
van het Schip-Haverkamp, Tessa; van den Berg, Wim; van de Beek, Remco
2015-04-01
Extreme weather situations such as droughts and extensive precipitation are becoming more frequent, which makes it more important to obtain accurate weather forecasts for the short and long term. Ensembles can provide a solution in terms of scenario forecasts. MeteoGroup uses ensembles in a new forecasting technique which presents a number of weather scenarios for a dynamical water management project, called Water-Rijk, in which water storage and water retention plays a large role. The Water-Rijk is part of Park Lingezegen, which is located between Arnhem and Nijmegen in the Netherlands. In collaboration with the University of Wageningen, Alterra and Eijkelkamp a forecasting system is developed for this area which can provide water boards with a number of weather and hydrology scenarios in order to assist in the decision whether or not water retention or water storage is necessary in the near future. In order to make a forecast for drought and extensive precipitation, the difference 'precipitation - evaporation' is used as a measurement of drought in the weather forecasts. In case of an upcoming drought this difference will take larger negative values. In case of a wet episode, this difference will be positive. The Makkink potential evaporation is used, which gives the most accurate potential evaporation values during the summer, when evaporation plays an important role in the availability of surface water. Scenarios are determined by reducing the large number of forecasts in the ensemble to a number of averaged members, each with its own likelihood of occurrence. For the Water-Rijk project 5 scenario forecasts are calculated: extreme dry, dry, normal, wet and extreme wet. These scenarios are constructed for two forecasting periods, each using its own ensemble technique: up to 48 hours ahead and up to 15 days ahead. The 48-hour forecast uses an ensemble constructed from forecasts of multiple high-resolution regional models: UKMO's Euro4 model, the ECMWF model, WRF and Hirlam. Using multiple model runs and additional post processing, an ensemble can be created from non-ensemble models. The 15-day forecast uses the ECMWF Ensemble Prediction System forecast from which scenarios can be deduced directly. A combination of the ensembles from the two forecasting periods is used in order to have the highest possible resolution of the forecast for the first 48 hours followed by the lower resolution long term forecast.
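The scenario reduction itself can be sketched compactly: rank the members by accumulated precipitation minus evaporation, group them into five bands from extreme dry to extreme wet, and report each band's mean trace with the member fraction as its likelihood. The band edges below are illustrative quantiles, not the project's operational thresholds.

```python
import numpy as np

SCENARIOS = ["extreme dry", "dry", "normal", "wet", "extreme wet"]

def scenario_forecast(p_minus_e, bands=(0.1, 0.3, 0.7, 0.9)):
    """p_minus_e: (n_members, n_times) precipitation minus Makkink evaporation
    per ensemble member.  Members are ranked by their total P - E and grouped
    into five bands; each scenario is the mean of its band, and the band's
    member fraction serves as its likelihood."""
    totals = p_minus_e.sum(axis=1)
    order = np.argsort(totals)                       # driest member first
    n = len(order)
    edges = [0] + [int(round(b * n)) for b in bands] + [n]
    out = {}
    for name, lo, hi in zip(SCENARIOS, edges[:-1], edges[1:]):
        group = order[lo:hi]
        out[name] = {
            "likelihood": len(group) / n,
            "p_minus_e": p_minus_e[group].mean(axis=0),
        }
    return out

# toy usage: 50 members, 15 daily values of P - E (mm/day)
rng = np.random.default_rng(7)
ens = rng.normal(loc=-0.5, scale=2.0, size=(50, 15))
for name, s in scenario_forecast(ens).items():
    print(name, round(s["likelihood"], 2), round(s["p_minus_e"].sum(), 1))
```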
Targeted estimation of nuisance parameters to obtain valid statistical inference.
van der Laan, Mark J
2014-01-01
In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special case, we also demonstrate the required targeting of the propensity score for the inverse probability of treatment weighted estimator using super-learning to fit the propensity score.
Project Integration Architecture: A Practical Demonstration of Information Propagation
NASA Technical Reports Server (NTRS)
Jones, William Henry
2005-01-01
One of the goals of the Project Integration Architecture (PIA) effort is to provide the ability to propagate information between disparate applications. With this ability, applications may then be formed into an application graph constituting a super-application. Such a super-application would then provide all of the analysis appropriate to a given technical system. This paper reports on a small demonstration of this concept in which a Computer Aided Design (CAD) application was connected to an inlet analysis code and geometry information automatically propagated from one to the other. The majority of the work reported involved not the technology of information propagation, but rather the conversion of propagated information into a form usable by the receiving application.
A study of degradation resistance and cytocompatibility of super-hydrophobic coating on magnesium.
Zhang, Yufen; Feyerabend, Frank; Tang, Shawei; Hu, Jin; Lu, Xiaopeng; Blawert, Carsten; Lin, Tiegui
2017-09-01
Calcium stearate based super-hydrophobic coating was deposited on a plasma electrolytic oxidation (PEO) pre-treated magnesium substrate. The pre-treated magnesium and the super-hydrophobic coating covered sample were characterized by scanning electron microscopy, X-ray diffraction and electrochemical corrosion measurements. The cytocompatibility and degradation resistance of magnesium, pre-treated magnesium and the super-hydrophobic coating were analysed in terms of cell adhesion and osteoblast differentiation. The results indicate that the calcium stearate top coating shows super-hydrophobicity and that the surface is composed of a micro/nanostructure. The super-hydrophobic coating covered sample shows higher barrier properties compared with the PEO pre-treated magnesium and bare magnesium. Human osteoblast proliferation, but not differentiation, is enhanced by the PEO coating. In contrast, the super-hydrophobic coating reduces proliferation but enhances differentiation of osteoblasts, observable by the formation of hydroxyapatite. The combination of corrosion protection and cell reaction indicates that this system could be interesting for biomedical applications. Copyright © 2017 Elsevier B.V. All rights reserved.
Fabrication of TiO2/EP super-hydrophobic thin film on filter paper surface.
Gao, Zhengxin; Zhai, Xianglin; Liu, Feng; Zhang, Ming; Zang, Deli; Wang, Chengyu
2015-09-05
A composite filter paper with super-hydrophobicity was obtained by adhering a micro/nano structure of amorphous titanium dioxide to the filter paper surface and modifying it with a low-surface-energy material. By virtue of the coupling agent, which plays an important part in bonding the amorphous titanium dioxide and the epoxy resin, the structure of the super-hydrophobic thin film on the filter paper surface is extremely stable. The microstructure of the super-hydrophobic filter paper was characterized by scanning electron microscopy (SEM); the images showed that the as-prepared filter paper was covered with uniform amorphous titanium dioxide particles, generating a roughness structure on the filter paper surface. The super-hydrophobic performance of the filter paper was characterized by water contact angle measurements. The observations showed that the wettability of the filter paper samples transformed from super-hydrophilicity to super-hydrophobicity, with a water contact angle of 153 ± 1°. Experiments were also designed to test the water-oil separation and UV-resistance performance of the super-hydrophobic filter paper. The prepared super-hydrophobic filter paper performed efficiently and simply in water-oil separation and retained its anti-UV properties after the experiments. This method offers an opportunity for practical applications of the super-hydrophobic filter paper. Copyright © 2015 Elsevier Ltd. All rights reserved.
Shape memory effect and super elasticity. Its dental applications.
Kotian, R
2001-01-01
The shape memory alloys are quite fascinating materials characterized by a shape memory effect and super elasticity which ordinary metals do not have. This unique behaviour was first found in an Au-47.5 at% Cd alloy in 1951 and became widely known in 1963 with the discovery of the Ti-Ni alloy. Shape memory alloys are now being used in practice as new functional alloys for various dental and medical applications.
NASA Astrophysics Data System (ADS)
Dong, Huan He; Guo, Bao Yong; Yin, Bao Shu
2016-06-01
In this paper, based on the modified Riemann-Liouville fractional derivative and the Tu scheme, the fractional super NLS-MKdV hierarchy is derived; in particular, a self-consistent sources term is considered. Meanwhile, the generalized fractional supertrace identity is proposed, which is a beneficial supplement to the existing literature on integrable systems. As an application, the super Hamiltonian structure of the fractional super NLS-MKdV hierarchy is obtained.
Recent advances in the field of super resolved imaging and sensing
NASA Astrophysics Data System (ADS)
Zalevsky, Zeev; Borkowski, Amikam; Marom, Emanuel; Javidi, Bahram; Beiderman, Yevgeny; Micó, Vicente; García, Javier
2011-05-01
In this paper we start by presenting one recent development in the field of geometric super resolution. The new approach overcomes the reduction of resolution caused by the non-ideal sampling of the image, i.e. the spatial averaging performed by each pixel of the sampling array. We then demonstrate a remote super sensing technique that allows monitoring, from a distance, the heartbeat, blood pulse pressure and glucose level in the bloodstream of a patient by tracking the trajectory of secondary speckle patterns reflected from the skin of the wrist or from the sclera.
DURIP: Super-Resolution Module for Confocal Microscopy of Reconfigurable Matter
2014-09-28
superresolution microscopy, colloidal particles, self-assembly ...previously have been resolved by optical microscopy. Results of Super Resolution Technique Evaluation: commercially available superresolution imaging... Weaknesses of the method are that it is fundamentally a measurement that can only be deployed for fixed samples. Because superresolution is obtained by
ERIC Educational Resources Information Center
Basch, Reva
This book presents the collected wisdom of 35 leading Internet hunters and gatherers. Through interviews, these experts offer insights, anecdotes, tips, techniques, and case histories which will raise the "searching IQ" of any serious Internet user. The Super Net Searchers explain how they find valuable information on the Internet,…
Reducible dictionaries for single image super-resolution based on patch matching and mean shifting
NASA Astrophysics Data System (ADS)
Rasti, Pejman; Nasrollahi, Kamal; Orlova, Olga; Tamberg, Gert; Moeslund, Thomas B.; Anbarjafari, Gholamreza
2017-03-01
A single-image super-resolution (SR) method is proposed. The proposed method uses a dictionary generated from pairs of high resolution (HR) images and their corresponding low resolution (LR) representations. First, HR images and the corresponding LR ones are divided into patches of HR and LR, respectively, and then they are collected into separate dictionaries. Afterward, when performing SR, the distance between every patch of the input LR image and the available LR patches in the LR dictionary is calculated. The LR dictionary patch at minimum distance from the input LR patch is taken, and its counterpart from the HR dictionary is passed through an illumination enhancement process. By this technique, the noticeable change of illumination between neighboring patches in the super-resolved image is significantly reduced. The enhanced HR patch represents the HR patch of the super-resolved image. Finally, to remove the blocking effect caused by merging the patches, an average of the obtained HR image and the interpolated image obtained using bicubic interpolation is calculated. The quantitative and qualitative analyses show the superiority of the proposed technique over the conventional and state-of-the-art methods.
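A stripped-down sketch of the patch-matching core follows: paired LR/HR patch dictionaries are built from training images, each input LR patch is replaced by the HR counterpart of its nearest LR neighbour after a crude mean-based illumination adjustment, and the result is averaged with the bicubic-style upscale. Patch size, stride and the brute-force nearest-neighbour search are simplifications of the published method, not its exact settings.

```python
import numpy as np
from scipy.ndimage import zoom

def build_patch_dictionaries(hr_images, scale=2, patch=6, stride=3):
    """Collect paired LR/HR patches; the LR view is rendered on the HR grid
    so that patch coordinates line up (assumes dims divisible by scale)."""
    lr_dict, hr_dict = [], []
    for hr in hr_images:
        lr_up = zoom(zoom(hr, 1.0 / scale), scale)
        for i in range(0, hr.shape[0] - patch, stride):
            for j in range(0, hr.shape[1] - patch, stride):
                lr_dict.append(lr_up[i:i + patch, j:j + patch].ravel())
                hr_dict.append(hr[i:i + patch, j:j + patch].ravel())
    return np.array(lr_dict), np.array(hr_dict)

def super_resolve(lr_image, lr_dict, hr_dict, scale=2, patch=6):
    up = zoom(lr_image, scale)                           # spline upscale of the input
    out = np.zeros_like(up)
    counts = np.zeros_like(up)
    for i in range(0, up.shape[0] - patch + 1, patch):   # non-overlapping tiles
        for j in range(0, up.shape[1] - patch + 1, patch):
            q = up[i:i + patch, j:j + patch].ravel()
            k = np.argmin(np.sum((lr_dict - q) ** 2, axis=1))   # nearest LR patch
            hr_patch = hr_dict[k].reshape(patch, patch)
            hr_patch = hr_patch - hr_patch.mean() + q.mean()    # crude illumination match
            out[i:i + patch, j:j + patch] += hr_patch
            counts[i:i + patch, j:j + patch] += 1.0
    out /= np.maximum(counts, 1.0)
    return 0.5 * (out + up)                              # blend with the interpolated image
```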
EnsembleGraph: Interactive Visual Analysis of Spatial-Temporal Behavior for Ensemble Simulation Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shu, Qingya; Guo, Hanqi; Che, Limei
We present a novel visualization framework—EnsembleGraph—for analyzing ensemble simulation data, in order to help scientists understand behavior similarities between ensemble members over space and time. A graph-based representation is used to visualize individual spatiotemporal regions with similar behaviors, which are extracted by hierarchical clustering algorithms. A user interface with multiple linked views is provided, which enables users to explore, locate, and compare regions that have similar behaviors; users can then investigate and analyze the selected regions in detail. The driving application of this paper is the study of regional emission influences over tropospheric ozone, which is based on ensemble simulations conducted with different anthropogenic emission absences using the MOZART-4 (model of ozone and related tracers, version 4) model. We demonstrate the effectiveness of our method by visualizing the MOZART-4 ensemble simulation data and evaluating the relative regional emission influences on tropospheric ozone concentrations. Positive feedback from domain experts and two case studies prove the efficiency of our method.
Li Manni, Giovanni; Smart, Simon D; Alavi, Ali
2016-03-08
A novel stochastic Complete Active Space Self-Consistent Field (CASSCF) method has been developed and implemented in the Molcas software package. A two-step procedure is used, in which the CAS configuration interaction secular equations are solved stochastically with the Full Configuration Interaction Quantum Monte Carlo (FCIQMC) approach, while orbital rotations are performed using an approximated form of the Super-CI method. This new method does not suffer from the strong combinatorial limitations of standard MCSCF implementations using direct schemes and can handle active spaces well in excess of those accessible to traditional CASSCF approaches. The density matrix formulation of the Super-CI method makes this step independent of the size of the CI expansion, depending exclusively on one- and two-body density matrices with indices restricted to the relatively small number of active orbitals. No sigma vectors need to be stored in memory for the FCIQMC eigensolver--a substantial gain in comparison to implementations using the Davidson method, which require three or more vectors of the size of the CI expansion. Further, no orbital Hessian is computed, circumventing limitations on basis set expansions. Like the parent FCIQMC method, the present technique is scalable on massively parallel architectures. We present in this report the method and its application to the free-base porphyrin, Mg(II) porphyrin, and Fe(II) porphyrin. In the present study, active spaces up to 32 electrons and 29 orbitals in orbital expansions containing up to 916 contracted functions are treated with modest computational resources. Results are quite promising even without accounting for the correlation outside the active space. The systems here presented clearly demonstrate that large CASSCF calculations are possible via FCIQMC-CASSCF without limitations on basis set size.