Sample records for gaussian network model

  1. An adaptive H∞ controller design for bank-to-turn missiles using ridge Gaussian neural networks.

    PubMed

    Lin, Chuan-Kai; Wang, Sheng-De

    2004-11-01

    A new autopilot design for bank-to-turn (BTT) missiles is presented. In the design of the autopilot, a ridge Gaussian neural network with local learning capability and fewer tuning parameters than Gaussian neural networks is proposed to model the controlled nonlinear systems. We prove that the proposed ridge Gaussian neural network, which can be a universal approximator, equals the expansions of rotated and scaled Gaussian functions. Although ridge Gaussian neural networks can approximate nonlinear and complex systems accurately, the small approximation errors may affect the tracking performance significantly. Therefore, by employing H∞ control theory, it is easy to attenuate the effects of the approximation errors of the ridge Gaussian neural networks to a prescribed level. Computer simulation results confirm the effectiveness of the proposed ridge Gaussian neural network-based autopilot with H∞ stabilization.

  2. MSEE: Stochastic Cognitive Linguistic Behavior Models for Semantic Sensing

    DTIC Science & Technology

    2013-09-01

    recognition, a Gaussian Process Dynamic Model with Social Network Analysis (GPDM-SNA) for small human group action recognition, an extended GPDM-SNA... 3.2. Small Human Group Activity Modeling Based on Gaussian Process Dynamic Model and Social Network Analysis (SN-GPDM)... 3.2.3. Gaussian Process Dynamical Model and

  3. Comparing fixed and variable-width Gaussian networks.

    PubMed

    Kůrková, Věra; Kainen, Paul C

    2014-09-01

    The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
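    To make the two model classes above concrete, here is a minimal sketch (not from the paper) of their input-output functions: a variable-width Gaussian RBF network versus a fixed-width Gaussian kernel network. The centers, widths and output weights are illustrative values, not taken from the source.

    ```python
    import numpy as np

    def gaussian_rbf_net(x, centers, widths, weights):
        """Variable-width Gaussian RBF network: f(x) = sum_i w_i exp(-||x - c_i||^2 / (2 s_i^2))."""
        d2 = np.sum((x[None, :] - centers) ** 2, axis=1)
        return np.dot(weights, np.exp(-d2 / (2.0 * widths ** 2)))

    def gaussian_kernel_net(x, centers, weights, width=1.0):
        """Fixed-width Gaussian kernel network: every unit shares the same width."""
        d2 = np.sum((x[None, :] - centers) ** 2, axis=1)
        return np.dot(weights, np.exp(-d2 / (2.0 * width ** 2)))

    centers = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 0.5]])   # illustrative unit centers
    widths = np.array([0.5, 1.0, 2.0])                          # widths vary per unit in the RBF case
    weights = np.array([1.0, -0.5, 0.3])
    x = np.array([0.2, 0.4])
    print(gaussian_rbf_net(x, centers, widths, weights))
    print(gaussian_kernel_net(x, centers, weights, width=1.0))
    ```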

  4. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

    GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML)...accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a...to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non

  5. Elastic Network Models For Biomolecular Dynamics: Theory and Application to Membrane Proteins and Viruses

    NASA Astrophysics Data System (ADS)

    Lezon, Timothy R.; Shrivastava, Indira H.; Yang, Zheng; Bahar, Ivet

    The following sections are included: * Introduction * Theory and Assumptions * Statistical mechanical foundations * Anisotropic network models * Gaussian network model * Rigid block models * Treatment of perturbations * Langevin dynamics * Applications * Membrane proteins * Viruses * Conclusion * References
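    Since the Gaussian network model is the central topic of this collection, a minimal sketch of its standard computation may be useful: build the Kirchhoff (connectivity) matrix from C-alpha coordinates and read residue fluctuations off its pseudo-inverse. The coordinates, cutoff and spring constant below are illustrative placeholders, not values from the chapter.

    ```python
    import numpy as np

    def gnm_fluctuations(coords, cutoff=7.0, gamma=1.0):
        """Return the Kirchhoff matrix and relative mean-square fluctuations (GNM)."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        kirchhoff = -(d <= cutoff).astype(float)      # -1 for residue pairs within the cutoff
        np.fill_diagonal(kirchhoff, 0.0)
        np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))   # diagonal = contact degree
        ginv = np.linalg.pinv(kirchhoff)              # pseudo-inverse discards the zero mode
        return kirchhoff, np.diag(ginv) / gamma       # diagonal ~ <dR_i . dR_i> up to 3kT/gamma

    coords = np.random.default_rng(0).uniform(0, 30, size=(50, 3))  # stand-in for C-alpha positions
    kirchhoff, msf = gnm_fluctuations(coords)
    print(msf[:5])
    ```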

  6. Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Chang, K. C.

    2005-05-01

    Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution within any given time constraint. Several simulation methods are currently available: logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods; we then propose an improved importance sampling algorithm, called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. A performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
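    As a point of reference for the sampling methods reviewed above, the sketch below applies plain likelihood weighting (not the paper's adaptive LGIS algorithm) to a two-node linear-Gaussian network; the network parameters and evidence value are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import norm

    # toy network: X ~ N(0, 1), Y | X ~ N(2X + 1, 0.5^2); evidence Y = 3
    rng = np.random.default_rng(1)
    y_obs = 3.0
    x = rng.normal(0.0, 1.0, 100_000)                    # sample the root from its prior
    w = norm.pdf(y_obs, loc=2.0 * x + 1.0, scale=0.5)    # weight each sample by the evidence likelihood
    posterior_mean_x = np.sum(w * x) / np.sum(w)
    print(posterior_mean_x)                              # close to the exact conditional mean of X given Y = 3
    ```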

  7. A clustering-based fuzzy wavelet neural network model for short-term load forecasting.

    PubMed

    Kodogiannis, Vassilis S; Amina, Mahdi; Petrounias, Ilias

    2013-10-01

    Load forecasting is a critical element of power system operation, involving prediction of the future level of demand to serve as the basis for supply and demand planning. This paper presents the development of a novel clustering-based fuzzy wavelet neural network (CB-FWNN) model and validates its prediction on the short-term electric load forecasting of the Power System of the Greek Island of Crete. The proposed model is obtained from the traditional Takagi-Sugeno-Kang fuzzy system by replacing the THEN part of fuzzy rules with a "multiplication" wavelet neural network (MWNN). Multidimensional Gaussian-type activation functions have been used in the IF part of the fuzzy rules. A Fuzzy Subtractive Clustering scheme is employed as a pre-processing technique to find the initial set and adequate number of clusters, and ultimately the number of multiplication nodes in the MWNN, while Gaussian Mixture Models with the Expectation Maximization algorithm are utilized for the definition of the multidimensional Gaussians. The results corresponding to the minimum and maximum power load indicate that the proposed load forecasting model provides significantly accurate forecasts compared to conventional neural network models.

  8. Revisiting Gaussian Process Regression Modeling for Localization in Wireless Sensor Networks

    PubMed Central

    Richter, Philipp; Toledano-Ayala, Manuel

    2015-01-01

    Signal strength-based positioning in wireless sensor networks is a key technology for seamless, ubiquitous localization, especially in areas where Global Navigation Satellite System (GNSS) signals propagate poorly. To enable wireless local area network (WLAN) location fingerprinting in larger areas while maintaining accuracy, methods to reduce the effort of radio map creation must be consolidated and automated. Gaussian process regression has been applied to overcome this issue, with promising results, but the fit of the model was never thoroughly assessed. Instead, most studies trained a readily available model, relying on the zero mean and squared exponential covariance function, without further scrutiny. This paper studies Gaussian process regression model selection for WLAN fingerprinting in indoor and outdoor environments. We train several models for indoor, outdoor, and combined areas; we evaluate them quantitatively and compare them by means of adequate model measures, hence assessing the fit of these models directly. To illuminate the quality of the model fit, the residuals of the proposed model are investigated as well. Comparative experiments on the positioning performance verify and conclude the model selection. In this way, we show that the standard model is not the most appropriate, discuss alternatives and present our best candidate. PMID:26370996
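    A minimal sketch of the "standard" model the paper questions, a zero-mean Gaussian process with squared exponential covariance fitted to received-signal-strength observations, is given below. The survey points, readings, noise level and hyperparameters are invented placeholders.

    ```python
    import numpy as np

    def sq_exp_kernel(a, b, length=5.0, sigma_f=4.0):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return sigma_f ** 2 * np.exp(-0.5 * d2 / length ** 2)

    X = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # survey positions (m)
    y = np.array([-40.0, -55.0, -52.0, -63.0])                          # RSS readings (dBm)
    noise = 2.0

    K = sq_exp_kernel(X, X) + noise ** 2 * np.eye(len(X))
    Xq = np.array([[5.0, 5.0]])                                         # query position
    k_star = sq_exp_kernel(X, Xq)
    alpha = np.linalg.solve(K, y)
    mean = k_star.T @ alpha                                             # predictive mean (zero prior mean)
    var = sq_exp_kernel(Xq, Xq) - k_star.T @ np.linalg.solve(K, k_star)
    print(mean.item(), var.item())
    ```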

  9. Comparative evaluation of reverse engineering gene regulatory networks with relevance networks, graphical Gaussian models and Bayesian networks.

    PubMed

    Werhli, Adriano V; Grzegorczyk, Marco; Husmeier, Dirk

    2006-10-15

    An important problem in systems biology is the inference of biochemical pathways and regulatory networks from postgenomic data. Various reverse engineering methods have been proposed in the literature, and it is important to understand their relative merits and shortcomings. In the present paper, we compare the accuracy of reconstructing gene regulatory networks with three different modelling and inference paradigms: (1) Relevance networks (RNs): pairwise association scores independent of the remaining network; (2) graphical Gaussian models (GGMs): undirected graphical models with constraint-based inference, and (3) Bayesian networks (BNs): directed graphical models with score-based inference. The evaluation is carried out on the Raf pathway, a cellular signalling network describing the interaction of 11 phosphorylated proteins and phospholipids in human immune system cells. We use both laboratory data from cytometry experiments as well as data simulated from the gold-standard network. We also compare passive observations with active interventions. On Gaussian observational data, BNs and GGMs were found to outperform RNs. The difference in performance was not significant for the non-linear simulated data and the cytoflow data, though. Also, we did not observe a significant difference between BNs and GGMs on observational data in general. However, for interventional data, BNs outperform GGMs and RNs, especially when taking the edge directions rather than just the skeletons of the graphs into account. This suggests that the higher computational costs of inference with BNs over GGMs and RNs are not justified when using only passive observations, but that active interventions in the form of gene knockouts and over-expressions are required to exploit the full potential of BNs. Data, software and supplementary material are available from http://www.bioss.sari.ac.uk/staff/adriano/research.html

  10. Statistical modelling of networked human-automation performance using working memory capacity.

    PubMed

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.

  11. Rapid Simulation of Blast Wave Propagation in Built Environments Using Coarse-Grain Based Intelligent Modeling Methods

    DTIC Science & Technology

    2011-04-01

    experiments was performed using an artificial neural network to try to capture the nonlinearities. The radial Gaussian artificial neural network system..."Modeling Blast-Wave Propagation using Artificial Neural Network Methods", in International Journal of Advanced Engineering Informatics, Elsevier

  12. Static and transient performance prediction for CFB boilers using a Bayesian-Gaussian Neural Network

    NASA Astrophysics Data System (ADS)

    Ye, Haiwen; Ni, Weidou

    1997-06-01

    A Bayesian-Gaussian Neural Network (BGNN) is put forward in this paper to predict the static and transient performance of Circulating Fluidized Bed (CFB) boilers. The advantages of this network over Back-Propagation Neural Networks (BPNNs), namely easier determination of the topology, a simpler and less time-consuming training process, and a self-organizing ability, make this network more practical for on-line performance prediction of complicated processes. Simulation shows that this network is comparable to BPNNs in predicting the performance of CFB boilers. Good and practical on-line performance predictions are essential for operation guidance and model predictive control of CFB boilers, which are under research by the authors.

  13. Dirichlet Process Gaussian-mixture model: An application to localizing coalescing binary neutron stars with gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.

    2018-06-01

    We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet Process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ~10^4-10^5 Mpc^3, corresponding to ~10^2-10^3 potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ∝ ϱ_net^-6. Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet Process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
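    The density-estimation step described above can be illustrated with scikit-learn's Dirichlet-process Gaussian mixture (a hedged stand-in, not the authors' pipeline); the posterior samples of (RA, Dec, distance) below are synthetic.

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(42)
    # fake posterior samples: two clusters mimicking a bimodal sky localization
    samples = np.vstack([
        rng.normal([1.0, 0.3, 120.0], [0.05, 0.05, 15.0], size=(2000, 3)),
        rng.normal([4.0, -0.2, 180.0], [0.08, 0.04, 25.0], size=(1000, 3)),
    ])

    dpgmm = BayesianGaussianMixture(
        n_components=20,                                   # truncation level of the DP
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        max_iter=500,
    ).fit(samples)

    # log posterior density at a candidate host-galaxy position (RA, Dec, distance)
    print(dpgmm.score_samples(np.array([[1.0, 0.3, 120.0]])))
    ```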

  14. Parallel logic gates in synthetic gene networks induced by non-Gaussian noise.

    PubMed

    Xu, Yong; Jin, Xiaoqin; Zhang, Huiqing

    2013-11-01

    The recent idea of logical stochastic resonance is verified in synthetic gene networks induced by non-Gaussian noise. We realize the switching between two kinds of logic gates under optimal moderate noise intensity by varying two different tunable parameters in a single gene network. Furthermore, in order to obtain more logic operations, and thus provide additional information processing capacity, we obtain two complementary logic gates in a two-dimensional toggle switch model and realize the transformation between the two logic gates by changing different parameters. These simulated results contribute to improving the computational power and functionality of the networks.

  15. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    PubMed

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable at the noise-constrained estimate. Because the noise-constrained estimate has a robust performance against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, having a low-dimensional model feature, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm can produce good performance with fast computation and noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
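    For context, the sketch below runs a minimal classical Kalman filter on a signal modeled as an AR(2) process, which is the filtering stage the abstract refers to. The AR coefficients and noise variances are placeholders, and the paper's recurrent-network estimation of those parameters (the noise-constrained least squares step) is not reproduced.

    ```python
    import numpy as np

    def kalman_ar2(y, a1, a2, q, r):
        """Filter noisy samples y given AR(2) coefficients and process/measurement noise variances."""
        A = np.array([[a1, a2], [1.0, 0.0]])   # state transition for [s_t, s_{t-1}]
        H = np.array([[1.0, 0.0]])             # we observe s_t plus noise
        Q = np.array([[q, 0.0], [0.0, 0.0]])
        x, P, est = np.zeros(2), np.eye(2), []
        for yt in y:
            x = A @ x                          # predict
            P = A @ P @ A.T + Q
            S = H @ P @ H.T + r                # update
            K = P @ H.T / S
            x = x + (K * (yt - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
            est.append(x[0])
        return np.array(est)

    rng = np.random.default_rng(0)
    clean = np.zeros(200)
    for t in range(2, 200):                    # synthesize an AR(2) "speech-like" signal
        clean[t] = 1.5 * clean[t-1] - 0.7 * clean[t-2] + rng.normal(0, 0.1)
    noisy = clean + rng.normal(0, 0.5, size=200)
    print(np.mean((kalman_ar2(noisy, 1.5, -0.7, 0.01, 0.25) - clean) ** 2))
    ```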

  16. Neural network-based nonlinear model predictive control vs. linear quadratic Gaussian control

    USGS Publications Warehouse

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  17. The Gaussian Graphical Model in Cross-Sectional and Time-Series Data.

    PubMed

    Epskamp, Sacha; Waldorp, Lourens J; Mõttus, René; Borsboom, Denny

    2018-04-16

    We discuss the Gaussian graphical model (GGM; an undirected network of partial correlation coefficients) and detail its utility as an exploratory data analysis tool. The GGM shows which variables predict one another, allows for sparse modeling of covariance structures, and may highlight potential causal relationships between observed variables. We describe its utility in three kinds of psychological data sets: data sets in which consecutive cases are assumed independent (e.g., cross-sectional data), temporally ordered data sets (e.g., n = 1 time series), and a mixture of the 2 (e.g., n > 1 time series). In time-series analysis, the GGM can be used to model the residual structure of a vector-autoregression analysis (VAR), also termed graphical VAR. Two network models can then be obtained: a temporal network and a contemporaneous network. When analyzing data from multiple subjects, a GGM can also be formed on the covariance structure of stationary means: the between-subjects network. We discuss the interpretation of these models and propose estimation methods to obtain these networks, which we implement in the R packages graphicalVAR and mlVAR. The methods are showcased in two empirical examples, and simulation studies on these methods are included in the supplementary materials.
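    A minimal sketch of the GGM idea above, partial correlations obtained from the inverse covariance (precision) matrix of simulated cross-sectional data, follows; regularized estimation as implemented in the graphicalVAR and mlVAR packages is not reproduced, and the code below is Python rather than R.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n, p = 500, 5
    X = rng.normal(size=(n, p))
    X[:, 1] += 0.8 * X[:, 0]          # induce a conditional dependency between variables 0 and 1
    X[:, 3] += 0.5 * X[:, 2]

    precision = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)   # rho_ij = -p_ij / sqrt(p_ii * p_jj)
    np.fill_diagonal(partial_corr, 1.0)
    print(np.round(partial_corr, 2))             # edge weights of the (unregularized) GGM
    ```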

  18. Gaussian processes: a method for automatic QSAR modeling of ADME properties.

    PubMed

    Obrezanova, Olga; Csanyi, Gabor; Gola, Joelle M R; Segall, Matthew D

    2007-01-01

    In this article, we discuss the application of the Gaussian Process method for the prediction of absorption, distribution, metabolism, and excretion (ADME) properties. On the basis of a Bayesian probabilistic approach, the method is widely used in the field of machine learning but has rarely been applied in quantitative structure-activity relationship and ADME modeling. The method is suitable for modeling nonlinear relationships, does not require subjective determination of the model parameters, works for a large number of descriptors, and is inherently resistant to overtraining. The performance of Gaussian Processes compares well with and often exceeds that of artificial neural networks. Due to these features, the Gaussian Processes technique is eminently suitable for automatic model generation, one of the demands of modern drug discovery. Here, we describe the basic concept of the method in the context of regression problems and illustrate its application to the modeling of several ADME properties: blood-brain barrier, hERG inhibition, and aqueous solubility at pH 7.4. We also compare Gaussian Processes with other modeling techniques.

  19. Gaussian Graphical Models Identify Networks of Dietary Intake in a German Adult Population.

    PubMed

    Iqbal, Khalid; Buijsse, Brian; Wirth, Janine; Schulze, Matthias B; Floegel, Anna; Boeing, Heiner

    2016-03-01

    Data-reduction methods such as principal component analysis are often used to derive dietary patterns. However, such methods do not assess how foods are consumed in relation to each other. Gaussian graphical models (GGMs) are a set of novel methods that can address this issue. We sought to apply GGMs to derive sex-specific dietary intake networks representing consumption patterns in a German adult population. Dietary intake data from 10,780 men and 16,340 women of the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam cohort were cross-sectionally analyzed to construct dietary intake networks. Food intake for each participant was estimated using a 148-item food-frequency questionnaire that captured the intake of 49 food groups. GGMs were applied to log-transformed intakes (grams per day) of 49 food groups to construct sex-specific food networks. Semiparametric Gaussian copula graphical models (SGCGMs) were used to confirm GGM results. In men, GGMs identified 1 major dietary network that consisted of intakes of red meat, processed meat, cooked vegetables, sauces, potatoes, cabbage, poultry, legumes, mushrooms, soup, and whole-grain and refined breads. For women, a similar network was identified with the addition of fried potatoes. Other identified networks consisted of dairy products and sweet food groups. SGCGMs yielded results comparable to those of GGMs. GGMs are a powerful exploratory method that can be used to construct dietary networks representing dietary intake patterns that reveal how foods are consumed in relation to each other. GGMs indicated an apparent major role of red meat intake in a consumption pattern in the studied population. In the future, identified networks might be transformed into pattern scores for investigating their associations with health outcomes. © 2016 American Society for Nutrition.

  20. A Gaussian Mixture Model-based continuous Boundary Detection for 3D sensor networks.

    PubMed

    Chen, Jiehui; Salim, Mariam B; Matsumoto, Mitsuji

    2010-01-01

    This paper proposes a high precision Gaussian Mixture Model-based novel Boundary Detection 3D (BD3D) scheme with reasonable implementation cost for 3D cases by selecting a minimum number of Boundary sensor Nodes (BNs) in continuous moving objects. It shows apparent advantages in that two classes of boundary and non-boundary sensor nodes can be efficiently classified using the model selection techniques for finite mixture models; furthermore, the set of sensor readings within each sensor node's spatial neighbors is formulated using a Gaussian Mixture Model; different from DECOMO [1] and COBOM [2], we also formatted a BN Array with an additional own sensor reading to benefit selecting Event BNs (EBNs) and non-EBNs from the observations of BNs. In particular, we propose a Thick Section Model (TSM) to solve the problem of transition between 2D and 3D. It is verified by simulations that the BD3D 2D model outperforms DECOMO and COBOM in terms of average residual energy and the number of BNs selected, while the BD3D 3D model demonstrates sound performance even for sensor networks with low densities especially when the value of the sensor transmission range (r) is larger than the value of Section Thickness (d) in TSM. We have also rigorously proved its correctness for continuous geometric domains and full robustness for sensor networks over 3D terrains.
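    A hedged sketch of the mixture-model-selection idea: a node whose neighborhood readings are better described (by BIC) by a two-component Gaussian mixture than by a single Gaussian is flagged as a boundary node. The readings are synthetic, and the full BD3D scheme (BN arrays, TSM, the 3D treatment) is not reproduced.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def looks_like_boundary(neighbor_readings):
        X = np.asarray(neighbor_readings, dtype=float).reshape(-1, 1)
        bic1 = GaussianMixture(n_components=1).fit(X).bic(X)
        bic2 = GaussianMixture(n_components=2).fit(X).bic(X)
        return bic2 < bic1        # bimodal readings suggest the node straddles the event boundary

    rng = np.random.default_rng(3)
    inside = rng.normal(30.0, 1.0, size=20)                  # all neighbors sense the event
    straddling = np.concatenate([rng.normal(30.0, 1.0, 10),  # half sense it, half do not
                                 rng.normal(20.0, 1.0, 10)])
    print(looks_like_boundary(inside), looks_like_boundary(straddling))
    ```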

  1. A range-based predictive localization algorithm for WSID networks

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Chen, Junjie; Li, Gang

    2017-11-01

    Most studies on localization algorithms are conducted on the sensor networks with densely distributed nodes. However, the non-localizable problems are prone to occur in the network with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for the wireless sensor networks syncretizing the RFID (WSID) networks. The Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location based on the approximate point-in-triangulation test algorithm. In addition, collaborative localization schemes are introduced to locate the target in the non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for the network with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.

  2. The Prediction of Length-of-day Variations Based on Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Lei, Y.; Zhao, D. N.; Gao, Y. P.; Cai, H. B.

    2015-01-01

    Due to the complicated time-varying characteristics of the length-of-day (LOD) variations, the accuracies of traditional strategies for the prediction of the LOD variations, such as the least squares extrapolation model, the time-series analysis model, and so on, have not met the requirements for real-time and high-precision applications. In this paper, a new machine learning algorithm, the Gaussian process (GP) model, is employed to forecast the LOD variations. Its prediction precision is analyzed and compared with those of the back propagation neural network (BPNN) and general regression neural network (GRNN) models, and with the Earth Orientation Parameters Prediction Comparison Campaign (EOP PCC). The results demonstrate that the application of the GP model to the prediction of the LOD variations is efficient and feasible.

  3. Application of Gaussian beam ray-equivalent model and back-propagation artificial neural network in laser diode fast axis collimator assembly.

    PubMed

    Yu, Hao; Rossi, Giammarco; Braglia, Andrea; Perrone, Guido

    2016-08-10

    The paper presents the development of a tool based on a back-propagation artificial neural network to assist in the accurate positioning of the lenses used to collimate the beam from semiconductor laser diodes along the so-called fast axis. After training using a Gaussian beam ray-equivalent model, the network is capable of indicating the tilt, decenter, and defocus of such lenses from the measured field distribution, so the operator can determine the errors with respect to the actual lens position and optimize the diode assembly procedure. An experimental validation using a typical configuration exploited in multi-emitter diode module assembly and fast axis collimating lenses with different focal lengths and numerical apertures is reported.

  4. Reverse engineering gene regulatory networks from measurement with missing values.

    PubMed

    Ogundijo, Oyetunji E; Elmas, Abdulkadir; Wang, Xiaodong

    2016-12-01

    Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may sometimes contain missing values: either the expression values of some genes at some time points, or the entire expression values of a single time point or of some sets of consecutive time points. This significantly affects the performance of many algorithms for gene expression analysis that take as input the complete matrix of gene expression measurements. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurements. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values. We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule and the central difference rule, for computing the related posteriors. The proposed algorithm is evaluated with satisfying results for synthetic networks, in silico networks released as a part of the DREAM project, and a real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of the yeast Saccharomyces cerevisiae. PBGA filters are proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we proposed a measurement model that incorporates the effect of the missing data points into the sequential algorithm. This approach produces a better inference of the model parameters and, hence, more accurate prediction of the underlying GRN compared to the conventional Gaussian approximation (GA) filters that ignore the missing data points.
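    One ingredient of the PBGA filters named above is the unscented transform; the sketch below propagates sigma points through a toy nonlinearity to approximate a transformed mean and covariance. The nonlinear map and the input Gaussian are illustrative, and the full filter with missing-data handling is not reproduced.

    ```python
    import numpy as np

    def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
        n = len(mean)
        lam = alpha ** 2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * cov)
        sigma = np.vstack([mean, mean + S.T, mean - S.T])            # 2n+1 sigma points
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
        Y = np.array([f(s) for s in sigma])                          # push sigma points through f
        y_mean = wm @ Y
        diff = Y - y_mean
        y_cov = (wc[:, None] * diff).T @ diff
        return y_mean, y_cov

    f = lambda x: np.array([np.tanh(x[0]) + 0.1 * x[1] ** 2])        # toy nonlinear "expression" map
    m, C = np.array([0.5, -0.2]), np.array([[0.04, 0.01], [0.01, 0.09]])
    print(unscented_transform(m, C, f))
    ```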

  5. Min-max hyperellipsoidal clustering for anomaly detection in network security.

    PubMed

    Sarasamma, Suseela T; Zhu, Qiuming A

    2006-08-01

    A novel hyperellipsoidal clustering technique is presented for an intrusion-detection system in network security. Hyperellipsoidal clusters toward maximum intracluster similarity and minimum intercluster similarity are generated from training data sets. The novelty of the technique lies in the fact that the parameters needed to construct higher order data models in general multivariate Gaussian functions are incrementally derived from the data sets using accretive processes. The technique is implemented in a feedforward neural network that uses a Gaussian radial basis function as the model generator. An evaluation based on the inclusiveness and exclusiveness of samples with respect to specific criteria is applied to accretively learn the output clusters of the neural network. One significant advantage of this is its ability to detect individual anomaly types that are hard to detect with other anomaly-detection schemes. Applying this technique, several feature subsets of the tcptrace network-connection records that give above 95% detection at false-positive rates below 5% were identified.

  6. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising.

    PubMed

    Zhang, Kai; Zuo, Wangmeng; Chen, Yunjin; Meng, Deyu; Zhang, Lei

    2017-07-01

    The discriminative model learning for image denoising has been recently attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.
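    A minimal PyTorch sketch of the residual-learning idea follows: a small DnCNN-style stack of Conv+BN+ReLU layers predicts the noise, which is subtracted from the input. Depth, width and the toy training target are illustrative; this is not the authors' released model.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyDnCNN(nn.Module):
        def __init__(self, channels=1, features=64, depth=7):
            super().__init__()
            layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                           nn.BatchNorm2d(features),
                           nn.ReLU(inplace=True)]
            layers.append(nn.Conv2d(features, channels, 3, padding=1))
            self.body = nn.Sequential(*layers)

        def forward(self, noisy):
            return noisy - self.body(noisy)   # residual learning: the network outputs the noise estimate

    model = TinyDnCNN()
    clean = torch.rand(4, 1, 40, 40)                 # a batch of clean patches
    noisy = clean + 0.1 * torch.randn_like(clean)    # additive white Gaussian noise
    denoised = model(noisy)
    loss = F.mse_loss(denoised, clean)               # train by backpropagating this loss
    print(denoised.shape, float(loss))
    ```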

  7. Characterizing structural association alterations within brain networks in normal aging using Gaussian Bayesian networks.

    PubMed

    Guo, Xiaojuan; Wang, Yan; Chen, Kewei; Wu, Xia; Zhang, Jiacai; Li, Ke; Jin, Zhen; Yao, Li

    2014-01-01

    Recent multivariate neuroimaging studies have revealed aging-related alterations in brain structural networks. However, the sensory/motor networks, such as the auditory, visual and motor networks, have received much less attention in normal aging research. In this study, we used Gaussian Bayesian networks (BN), an approach investigating possible inter-regional directed relationships, to characterize aging effects on structural associations between core brain regions within each of these structural sensory/motor networks using volumetric MRI data. We then further examined the discriminability of BN models for the young (N = 109; mean age = 22.73 years, range 20-28) and old (N = 82; mean age = 74.37 years, range 60-90) groups. The results of the BN modeling demonstrated that structural associations exist between two homotopic brain regions from the left and right hemispheres in each of the three networks. In particular, compared with the young group, the old group had significant connection reductions in each of the three networks and fewer connections in the visual network. Moreover, it was found that the aging-related BN models could distinguish the young and old individuals with 90.05, 73.82, and 88.48% accuracy for the auditory, visual, and motor networks, respectively. Our findings suggest that BN models can be used to investigate the normal aging process with reliable statistical power. Moreover, these differences in structural inter-regional interactions may help elucidate the neuronal mechanisms of anatomical changes in normal aging.

  8. Noise effects in nonlinear biochemical signaling

    NASA Astrophysics Data System (ADS)

    Bostani, Neda; Kessler, David A.; Shnerb, Nadav M.; Rappel, Wouter-Jan; Levine, Herbert

    2012-01-01

    It has been generally recognized that stochasticity can play an important role in the information processing accomplished by reaction networks in biological cells. Most treatments of that stochasticity employ Gaussian noise even though it is a priori obvious that this approximation can violate physical constraints, such as the positivity of chemical concentrations. Here, we show that even when such nonphysical fluctuations are rare, an exact solution of the Gaussian model shows that the model can yield unphysical results. This is done in the context of a simple incoherent-feedforward model which exhibits perfect adaptation in the deterministic limit. We show how one can use the natural separation of time scales in this model to yield an approximate model, that is analytically solvable, including its dynamical response to an environmental change. Alternatively, one can employ a cutoff procedure to regularize the Gaussian result.

  9. Gene network inference by fusing data from diverse distributions

    PubMed Central

    Žitnik, Marinka; Zupan, Blaž

    2015-01-01

    Motivation: Markov networks are undirected graphical models that are widely used to infer relations between genes from experimental data. Their state-of-the-art inference procedures assume the data arise from a Gaussian distribution. High-throughput omics data, such as that from next generation sequencing, often violates this assumption. Furthermore, when collected data arise from multiple related but otherwise nonidentical distributions, their underlying networks are likely to have common features. New principled statistical approaches are needed that can deal with different data distributions and jointly consider collections of datasets. Results: We present FuseNet, a Markov network formulation that infers networks from a collection of nonidentically distributed datasets. Our approach is computationally efficient and general: given any number of distributions from an exponential family, FuseNet represents model parameters through shared latent factors that define neighborhoods of network nodes. In a simulation study, we demonstrate good predictive performance of FuseNet in comparison to several popular graphical models. We show its effectiveness in an application to breast cancer RNA-sequencing and somatic mutation data, a novel application of graphical models. Fusion of datasets offers substantial gains relative to inference of separate networks for each dataset. Our results demonstrate that network inference methods for non-Gaussian data can help in accurate modeling of the data generated by emergent high-throughput technologies. Availability and implementation: Source code is at https://github.com/marinkaz/fusenet. Contact: blaz.zupan@fri.uni-lj.si Supplementary information: Supplementary information is available at Bioinformatics online. PMID:26072487

  10. Spin-glass phase in a neural network with asymmetric couplings

    NASA Astrophysics Data System (ADS)

    Kree, R.; Widmaier, D.; Zippelius, A.

    1988-12-01

    The author studies the phase diagram of a neural network model which has learnt with the ADALINE algorithm, starting from tabula non rasa conditions. The resulting synaptic efficacies are not symmetric under an exchange of the pre- and post-synaptic neuron. In contrast to several other models which have been discussed in the literature, he finds a spin-glass phase in the asymmetrically coupled network. The main difference compared with the other models consists of long-ranged Gaussian correlations in the ensemble of couplings.

  11. Reconstructing gene regulatory networks from knock-out data using Gaussian Noise Model and Pearson Correlation Coefficient.

    PubMed

    Mohamed Salleh, Faridah Hani; Arif, Shereena Mohd; Zainudin, Suhaila; Firdaus-Raih, Mohd

    2015-12-01

    A gene regulatory network (GRN) is a large and complex network consisting of interacting elements that, over time, affect each other's state. The dynamics of complex gene regulatory processes are difficult to understand using intuitive approaches alone. To overcome this problem, we propose an algorithm for inferring the regulatory interactions from knock-out data using a Gaussian noise model combined with the Pearson Correlation Coefficient (PCC). Several problems relating to GRN construction are outlined in this paper. We demonstrated the ability of our proposed method to predict (1) the presence of regulatory interactions between genes, (2) their directionality and (3) their states (activation or suppression). The algorithm was applied to networks of 10 and 50 genes from the DREAM3 datasets and networks of 10 genes from the DREAM4 datasets. The predicted networks were evaluated based on AUROC and AUPR. We discovered that high false positive values were generated by our GRN prediction methods because indirect regulations were wrongly predicted as true relationships. We achieved satisfactory results, as the majority of sub-networks achieved AUROC values above 0.5. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.

    PubMed

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-09-01

    We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernels regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.

  13. Incorporating prior information into differential network analysis using non-paranormal graphical models.

    PubMed

    Zhang, Xiao-Fei; Ou-Yang, Le; Yan, Hong

    2017-08-15

    Understanding how gene regulatory networks change under different cellular states is important for revealing insights into network dynamics. Gaussian graphical models, which assume that the data follow a joint normal distribution, have been used recently to infer differential networks. However, the distributions of the omics data are non-normal in general. Furthermore, although much biological knowledge (or prior information) has been accumulated, most existing methods ignore the valuable prior information. Therefore, new statistical methods are needed to relax the normality assumption and make full use of prior information. We propose a new differential network analysis method to address the above challenges. Instead of using Gaussian graphical models, we employ a non-paranormal graphical model that can relax the normality assumption. We develop a principled model to take into account the following prior information: (i) a differential edge less likely exists between two genes that do not participate together in the same pathway; (ii) changes in the networks are driven by certain regulator genes that are perturbed across different cellular states and (iii) the differential networks estimated from multi-view gene expression data likely share common structures. Simulation studies demonstrate that our method outperforms other graphical model-based algorithms. We apply our method to identify the differential networks between platinum-sensitive and platinum-resistant ovarian tumors, and the differential networks between the proneural and mesenchymal subtypes of glioblastoma. Hub nodes in the estimated differential networks rediscover known cancer-related regulator genes and contain interesting predictions. The source code is at https://github.com/Zhangxf-ccnu/pDNA. szuouyl@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  14. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use l1-norm regularized logistic regression to improve the sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.

  15. Theory and Experimental and Chemical Instabilities

    DTIC Science & Technology

    1989-01-31

    Thresholds, Hysteresis, and Neuromodulation of Signal-to-Noise; and Statistical-Mechanical Theory of Many-body Effects in Reaction Rates. ...submitted to the Journal of Physical Chemistry. 6. Noise in Neural Networks: Thresholds, Hysteresis, and Neuromodulation of Signal-to-Noise. We study a...neural-network model including Gaussian noise, higher-order neuronal interactions, and neuromodulation. For a first-order network, there is a

  16. Detection of Foreign Matter in Transfusion Solution Based on Gaussian Background Modeling and an Optimized BP Neural Network

    PubMed Central

    Zhou, Fuqiang; Su, Zhen; Chai, Xinghua; Chen, Lipeng

    2014-01-01

    This paper proposes a new method to detect and identify foreign matter mixed in a plastic bottle filled with transfusion solution. A spin-stop mechanism and mixed illumination style are applied to obtain high contrast images between moving foreign matter and a static transfusion background. The Gaussian mixture model is used to model the complex background of the transfusion image and to extract moving objects. A set of features of moving objects are extracted and selected by the ReliefF algorithm, and optimal feature vectors are fed into the back propagation (BP) neural network to distinguish between foreign matter and bubbles. The mind evolutionary algorithm (MEA) is applied to optimize the connection weights and thresholds of the BP neural network to obtain a higher classification accuracy and faster convergence rate. Experimental results show that the proposed method can effectively detect visible foreign matter in 250-mL transfusion bottles. The misdetection rate and false alarm rate are low, and the detection accuracy and detection speed are satisfactory. PMID:25347581
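    The background-modeling stage described above can be illustrated with OpenCV's MOG2 Gaussian-mixture background subtractor (a hedged stand-in for the paper's own implementation); the frames below are synthetic arrays, and the ReliefF feature selection and MEA-optimized BP network stages are not reproduced.

    ```python
    import numpy as np
    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=16, detectShadows=False)

    rng = np.random.default_rng(0)
    background = rng.integers(100, 120, size=(240, 320), dtype=np.uint8)
    for t in range(50):                                    # static bottle/solution background
        frame = background + rng.integers(0, 3, size=background.shape, dtype=np.uint8)
        mask = subtractor.apply(frame)

    moving = frame.copy()
    moving[100:105, 150:155] = 250                         # a small bright "foreign particle" moves in
    mask = subtractor.apply(moving)
    print(int(np.count_nonzero(mask)))                     # nonzero pixels mark the moving object
    ```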

  17. Experimental Observation of Two Features Unexpected from the Classical Theories of Rubber Elasticity

    NASA Astrophysics Data System (ADS)

    Nishi, Kengo; Fujii, Kenta; Chung, Ung-il; Shibayama, Mitsuhiro; Sakai, Takamasa

    2017-12-01

    Although the elastic modulus of a Gaussian chain network is thought to be successfully described by classical theories of rubber elasticity, such as the affine and phantom models, verification experiments are largely lacking owing to difficulties in precisely controlling the network structure. We prepared well-defined model polymer networks experimentally, and measured the elastic modulus G for a broad range of polymer concentrations and connectivity probabilities, p. In our experiment, we observed two features that were distinct from those predicted by classical theories. First, we observed the critical behavior G ~ |p - pc|^1.95 near the sol-gel transition. This scaling law is different from the prediction of classical theories, but can be explained by analogy between the electric conductivity of resistor networks and the elasticity of polymer networks. Here, pc is the sol-gel transition point. Furthermore, we found that the experimental G-p relations in the region above C* did not follow the affine or phantom theories. Instead, all the G/G0-p curves fell onto a single master curve when G was normalized by the elastic modulus at p = 1, G0. We show that the effective medium approximation for Gaussian chain networks explains this master curve.

  18. A new method for constructing networks from binary data

    NASA Astrophysics Data System (ADS)

    van Borkulo, Claudia D.; Borsboom, Denny; Epskamp, Sacha; Blanken, Tessa F.; Boschloo, Lynn; Schoevers, Robert A.; Waldorp, Lourens J.

    2014-08-01

    Network analysis is entering fields where network structures are unknown, such as psychology and the educational sciences. A crucial step in the application of network models lies in the assessment of network structure. Current methods either have serious drawbacks or are only suitable for Gaussian data. In the present paper, we present a method for assessing network structures from binary data. Although models for binary data are infamous for their computational intractability, we present a computationally efficient model for estimating network structures. The approach, which is based on Ising models as used in physics, combines logistic regression with model selection based on a Goodness-of-Fit measure to identify relevant relationships between variables that define connections in a network. A validation study shows that this method succeeds in revealing the most relevant features of a network for realistic sample sizes. We apply our proposed method to estimate the network of depression and anxiety symptoms from symptom scores of 1108 subjects. Possible extensions of the model are discussed.
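    A hedged sketch of the estimation idea: each binary variable is regressed on all others with l1-penalized logistic regression, and coefficients supported in both directions (an AND rule) define the edges. The paper couples this with goodness-of-fit-based model selection, which is simplified here to a fixed penalty; the data are simulated.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, p = 400, 6
    X = rng.integers(0, 2, size=(n, p))
    X[:, 1] = (X[:, 0] + (rng.random(n) < 0.15)) % 2       # make variable 1 track variable 0

    B = np.zeros((p, p))
    for j in range(p):
        others = np.delete(np.arange(p), j)
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.3)
        clf.fit(X[:, others], X[:, j])                     # nodewise logistic regression
        B[j, others] = clf.coef_[0]

    adjacency = (np.abs(B) > 1e-6) & (np.abs(B.T) > 1e-6)  # AND rule: keep edges supported both ways
    print(adjacency.astype(int))
    ```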

  19. Node-Based Learning of Multiple Gaussian Graphical Models

    PubMed Central

    Mohan, Karthik; London, Palma; Fazel, Maryam; Witten, Daniela; Lee, Su-In

    2014-01-01

    We consider the problem of estimating high-dimensional Gaussian graphical models corresponding to a single set of variables under several distinct conditions. This problem is motivated by the task of recovering transcriptional regulatory networks on the basis of gene expression data containing heterogeneous samples, such as different disease states, multiple species, or different developmental stages. We assume that most aspects of the conditional dependence networks are shared, but that there are some structured differences between them. Rather than assuming that similarities and differences between networks are driven by individual edges, we take a node-based approach, which in many cases provides a more intuitive interpretation of the network differences. We consider estimation under two distinct assumptions: (1) differences between the K networks are due to individual nodes that are perturbed across conditions, or (2) similarities among the K networks are due to the presence of common hub nodes that are shared across all K networks. Using a row-column overlap norm penalty function, we formulate two convex optimization problems that correspond to these two assumptions. We solve these problems using an alternating direction method of multipliers algorithm, and we derive a set of necessary and sufficient conditions that allows us to decompose the problem into independent subproblems so that our algorithm can be scaled to high-dimensional settings. Our proposal is illustrated on synthetic data, a webpage data set, and a brain cancer gene expression data set. PMID:25309137

  20. Pattern learning with deep neural networks in EMG-based speech recognition.

    PubMed

    Wand, Michael; Schultz, Tanja

    2014-01-01

    We report on classification of phones and phonetic features from facial electromyographic (EMG) data, within the context of our EMG-based Silent Speech interface. In this paper we show that a Deep Neural Network can be used to perform this classification task, yielding a significant improvement over conventional Gaussian Mixture models. Our central contribution is the visualization of patterns which are learned by the neural network. With increasing network depth, these patterns represent more and more intricate electromyographic activity.

  1. Robust Gaussian Graphical Modeling via l1 Penalization

    PubMed Central

    Sun, Hokeun; Li, Hongzhe

    2012-01-01

    Gaussian graphical models have been widely used as an effective method for studying the conditional independency structure among genes and for constructing genetic networks. However, gene expression data typically have heavier tails or more outlying observations than the standard Gaussian distribution. Such outliers in gene expression data can lead to wrong inference on the dependency structure among the genes. We propose an l1 penalized estimation procedure for sparse Gaussian graphical models that is robustified against possible outliers. The likelihood function is weighted according to how much the observation deviates, where the deviation of the observation is measured based on its own likelihood. An efficient computational algorithm based on the coordinate gradient descent method is developed to obtain the minimizer of the negative penalized robustified-likelihood, where nonzero elements of the concentration matrix represent the graphical links among the genes. After the graphical structure is obtained, we re-estimate the positive definite concentration matrix using an iterative proportional fitting algorithm. Through simulations, we demonstrate that the proposed robust method performs much better than the graphical Lasso for Gaussian graphical models in terms of both graph structure selection and estimation when outliers are present. We apply the robust estimation procedure to an analysis of yeast gene expression data and show that the resulting graph has better biological interpretation than that obtained from the graphical Lasso. PMID:23020775
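    For reference, the non-robust baseline the paper improves on, the graphical Lasso, can be sketched as below: an l1 penalty on the precision matrix whose nonzero pattern defines the graphical links. The data and penalty are illustrative, and the paper's likelihood reweighting against outliers is not reproduced.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(5)
    n, p = 200, 8
    X = rng.normal(size=(n, p))
    X[:, 2] += 0.9 * X[:, 1]              # one true conditional dependency

    model = GraphicalLasso(alpha=0.2).fit(X)
    edges = np.abs(model.precision_) > 1e-4
    np.fill_diagonal(edges, False)
    print(edges.astype(int))              # nonzero off-diagonal entries = graphical links
    ```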

  2. Synaptic convergence regulates synchronization-dependent spike transfer in feedforward neural networks.

    PubMed

    Sailamul, Pachaya; Jang, Jaeson; Paik, Se-Bum

    2017-12-01

    Correlated neural activities such as synchronizations can significantly alter the characteristics of spike transfer between neural layers. However, it is not clear how this synchronization-dependent spike transfer can be affected by the structure of convergent feedforward wiring. To address this question, we implemented computer simulations of model neural networks: a source and a target layer connected with different types of convergent wiring rules. In the Gaussian-Gaussian (GG) model, both the connection probability and the strength are given as Gaussian distributions as a function of spatial distance. In the Uniform-Constant (UC) and Uniform-Exponential (UE) models, the connection probability density is a uniform constant within a certain range, but the connection strength is set as a constant value or an exponentially decaying function, respectively. Then we examined how the spike transfer function is modulated under these conditions, while static or synchronized input patterns were introduced to simulate different levels of feedforward spike synchronization. We observed that the synchronization-dependent modulation of the transfer function appeared noticeably different for each convergence condition. The modulation of the spike transfer function was largest in the UC model, and smallest in the UE model. Our analysis showed that this difference was induced by the different spike weight distributions that were generated from convergent synapses in each model. Our results suggest that the structure of feedforward convergence is a crucial factor for correlation-dependent spike control and thus must be considered in order to understand the mechanism of information transfer in the brain.

  3. CGBayesNets: Conditional Gaussian Bayesian Network Learning and Inference with Mixed Discrete and Continuous Data

    PubMed Central

    Weiss, Scott T.

    2014-01-01

    Bayesian Networks (BN) have been a popular predictive modeling formalism in bioinformatics, but their application in modern genomics has been slowed by an inability to cleanly handle domains with mixed discrete and continuous variables. Existing free BN software packages either discretize continuous variables, which can lead to information loss, or do not include inference routines, which makes prediction with the BN impossible. We present CGBayesNets, a BN package focused around prediction of a clinical phenotype from mixed discrete and continuous variables, which fills these gaps. CGBayesNets implements Bayesian likelihood and inference algorithms for the conditional Gaussian Bayesian network (CGBNs) formalism, one appropriate for predicting an outcome of interest from, e.g., multimodal genomic data. We provide four different network learning algorithms, each making a different tradeoff between computational cost and network likelihood. CGBayesNets provides a full suite of functions for model exploration and verification, including cross validation, bootstrapping, and AUC manipulation. We highlight several results obtained previously with CGBayesNets, including predictive models of wood properties from tree genomics, leukemia subtype classification from mixed genomic data, and robust prediction of intensive care unit mortality outcomes from metabolomic profiles. We also provide detailed example analysis on public metabolomic and gene expression datasets. CGBayesNets is implemented in MATLAB and available as MATLAB source code, under an Open Source license and anonymous download at http://www.cgbayesnets.com. PMID:24922310

  4. CGBayesNets: conditional Gaussian Bayesian network learning and inference with mixed discrete and continuous data.

    PubMed

    McGeachie, Michael J; Chang, Hsun-Hsien; Weiss, Scott T

    2014-06-01

    Bayesian Networks (BN) have been a popular predictive modeling formalism in bioinformatics, but their application in modern genomics has been slowed by an inability to cleanly handle domains with mixed discrete and continuous variables. Existing free BN software packages either discretize continuous variables, which can lead to information loss, or do not include inference routines, which makes prediction with the BN impossible. We present CGBayesNets, a BN package focused on prediction of a clinical phenotype from mixed discrete and continuous variables, which fills these gaps. CGBayesNets implements Bayesian likelihood and inference algorithms for the conditional Gaussian Bayesian network (CGBN) formalism, one appropriate for predicting an outcome of interest from, e.g., multimodal genomic data. We provide four different network learning algorithms, each making a different tradeoff between computational cost and network likelihood. CGBayesNets provides a full suite of functions for model exploration and verification, including cross validation, bootstrapping, and AUC manipulation. We highlight several results obtained previously with CGBayesNets, including predictive models of wood properties from tree genomics, leukemia subtype classification from mixed genomic data, and robust prediction of intensive care unit mortality outcomes from metabolomic profiles. We also provide detailed example analysis on public metabolomic and gene expression datasets. CGBayesNets is implemented in MATLAB and available as MATLAB source code, under an Open Source license, with anonymous download at http://www.cgbayesnets.com.
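
    CGBayesNets itself is a MATLAB package; as a purely illustrative sketch of the conditional Gaussian idea behind it (not the package's API), the snippet below fits a continuous child node whose linear-Gaussian parameters switch with the state of a discrete parent, and evaluates its conditional log-likelihood. Variable names and data are hypothetical.

```python
# Minimal sketch of a conditional Gaussian node: a continuous child whose
# linear-Gaussian parameters switch with the state of a discrete parent.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
subtype = rng.integers(0, 2, size=n)            # discrete parent (e.g. a subtype label)
expr    = rng.normal(size=n)                    # continuous parent (e.g. gene expression)
# synthetic generative model: child depends on expr with subtype-specific slope and noise
metab = np.where(subtype == 0, 1.0 + 0.5 * expr, -0.5 + 1.5 * expr) + \
        rng.normal(scale=np.where(subtype == 0, 0.3, 0.6))

def fit_cg_node(y, x_cont, x_disc):
    """Fit one linear-Gaussian regression of y on x_cont per discrete-parent state."""
    params = {}
    for s in np.unique(x_disc):
        m = x_disc == s
        X = np.column_stack([np.ones(m.sum()), x_cont[m]])
        beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
        sigma = np.std(y[m] - X @ beta)
        params[s] = (beta, sigma)
    return params

def loglik(y, x_cont, x_disc, params):
    """Conditional Gaussian log-likelihood of the child given both parents."""
    total = 0.0
    for s, (beta, sigma) in params.items():
        m = x_disc == s
        mu = beta[0] + beta[1] * x_cont[m]
        total += norm.logpdf(y[m], loc=mu, scale=sigma).sum()
    return total

params = fit_cg_node(metab, expr, subtype)
print(loglik(metab, expr, subtype, params))
```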

  5. Phase-space networks of geometrically frustrated systems.

    PubMed

    Han, Yilong

    2009-11-01

    We illustrate a network approach to the phase-space study by using two geometrical frustration models: the antiferromagnet on a triangular lattice and square ice. Their highly degenerate ground states are mapped as discrete networks such that quantitative network analysis can be applied to phase-space studies. The resulting phase spaces share some common features and establish a class of complex networks with unique Gaussian spectral densities. Although phase-space networks are heterogeneously connected, the systems are still ergodic due to the random Poisson processes. This network approach can be generalized to the phase spaces of some other complex systems.

  6. An improved advertising CTR prediction approach based on the fuzzy deep neural network

    PubMed Central

    Gao, Shu; Li, Mingjiang

    2018-01-01

    Combining a deep neural network with fuzzy theory, this paper proposes an advertising click-through rate (CTR) prediction approach based on a fuzzy deep neural network (FDNN). In this approach, a fuzzy Gaussian-Bernoulli restricted Boltzmann machine (FGBRBM) is first applied to the raw input data from advertising datasets. Next, the fuzzy restricted Boltzmann machine (FRBM) is used to construct the fuzzy deep belief network (FDBN) in an unsupervised, layer-by-layer fashion. Finally, fuzzy logistic regression (FLR) is utilized for modeling the CTR. The experimental results show that the proposed FDNN model outperforms several baseline models in terms of both data representation capability and robustness on advertising click log datasets with noise. PMID:29727443

  7. An improved advertising CTR prediction approach based on the fuzzy deep neural network.

    PubMed

    Jiang, Zilong; Gao, Shu; Li, Mingjiang

    2018-01-01

    Combining a deep neural network with fuzzy theory, this paper proposes an advertising click-through rate (CTR) prediction approach based on a fuzzy deep neural network (FDNN). In this approach, a fuzzy Gaussian-Bernoulli restricted Boltzmann machine (FGBRBM) is first applied to the raw input data from advertising datasets. Next, the fuzzy restricted Boltzmann machine (FRBM) is used to construct the fuzzy deep belief network (FDBN) in an unsupervised, layer-by-layer fashion. Finally, fuzzy logistic regression (FLR) is utilized for modeling the CTR. The experimental results show that the proposed FDNN model outperforms several baseline models in terms of both data representation capability and robustness on advertising click log datasets with noise.

  8. Bayesian Recurrent Neural Network for Language Modeling.

    PubMed

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) assigns a probability to a word sequence and provides the basis for word prediction in a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize an overly complex RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
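
    Hedged aside: the recipe described above (cross-entropy plus a Gaussian prior, with the hyperparameter set by maximizing the marginal likelihood under a Laplace-style approximation) corresponds to the generic textbook objective below; the paper's exact expressions and notation may differ. Here θ denotes the RNN weights, α the Gaussian prior precision, H the Hessian of the unregularized cross-entropy at the MAP estimate, and N_θ the number of parameters.

```latex
% Generic MAP / evidence-approximation form assumed here, not necessarily the paper's exact notation
E(\theta) = -\sum_{t} \log p\!\left(w_t \mid w_{1:t-1}, \theta\right) + \frac{\alpha}{2}\,\lVert\theta\rVert^{2},
\qquad
\log p(\mathcal{D}\mid\alpha) \approx \log p(\mathcal{D}\mid\hat{\theta})
 - \frac{\alpha}{2}\lVert\hat{\theta}\rVert^{2}
 + \frac{N_{\theta}}{2}\log\alpha
 - \frac{1}{2}\log\det\!\left(H + \alpha I\right).
```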

  9. Quantifying networks complexity from information geometry viewpoint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Domenico, E-mail: domenico.felice@unicam.it; Mancini, Stefano; INFN-Sezione di Perugia, Via A. Pascoli, I-06123 Perugia

    We consider a Gaussian statistical model whose parameter space is given by the variances of random variables. Underlying this model we identify networks by interpreting random variables as sitting on vertices and their correlations as weighted edges among vertices. We then associate to the parameter space a statistical manifold endowed with a Riemannian metric structure (that of Fisher-Rao). Going on, in analogy with the microcanonical definition of entropy in Statistical Mechanics, we introduce an entropic measure of network complexity. We prove that it is invariant under network isomorphism. Above all, considering networks as simplicial complexes, we evaluate this entropy on simplexes and find that it monotonically increases with their dimension.

  10. Steady-state distributions of probability fluxes on complex networks

    NASA Astrophysics Data System (ADS)

    Chełminiak, Przemysław; Kurzyński, Michał

    2017-02-01

    We consider a simple model of Markovian stochastic dynamics on complex networks to examine the statistical properties of the probability fluxes. An additional transition, hereafter called a gate, powered by an external constant force, breaks detailed balance in the network. We argue, using a theoretical approach and numerical simulations, that the stationary distributions of the probability fluxes emergent under such conditions converge to the Gaussian distribution. By virtue of the stationary fluctuation theorem, its standard deviation depends directly on the square root of the mean flux. In turn, the nonlinear relation between the mean flux and the external force, which provides the key result of the present study, allows us to calculate the two parameters that entirely characterize the Gaussian distribution of the probability fluxes both close to and far from the equilibrium state. Other effects that modify these parameters, such as the addition of shortcuts to the tree-like network, the extension and configuration of the gate, and a change in the network size, are studied by means of computer simulations and discussed in terms of the rigorous theoretical predictions.

  11. A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.

    PubMed

    Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua

    2017-07-01

    Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating direction method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundance genes and edges around regulatory hubs.
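
    As a point of reference for the comparison described above, the hedged sketch below fits the standard GGM alternative (a graphical lasso on log-transformed counts) rather than the paper's Poisson log-normal method; the synthetic data, penalty value and edge threshold are purely illustrative.

```python
# Baseline Gaussian graphical model: graphical lasso on log-transformed counts.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
counts = rng.poisson(lam=np.exp(rng.normal(1.0, 0.5, size=(200, 30))))  # synthetic RNA-seq-like counts

X = np.log1p(counts)                      # crude Gaussianizing transform
X = (X - X.mean(0)) / X.std(0)            # standardize each gene

model = GraphicalLasso(alpha=0.1).fit(X)  # alpha = L1 penalty on the precision matrix
precision = model.precision_

# An edge between genes i and j is declared when the estimated partial dependence is nonzero.
edges = np.argwhere(np.triu(np.abs(precision) > 1e-6, k=1))
print(f"{len(edges)} edges among {X.shape[1]} genes")
```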

  12. Complete stability of delayed recurrent neural networks with Gaussian activation functions.

    PubMed

    Liu, Peng; Zeng, Zhigang; Wang, Jun

    2017-01-01

    This paper addresses the complete stability of delayed recurrent neural networks with Gaussian activation functions. By means of the geometrical properties of the Gaussian function and algebraic properties of the nonsingular M-matrix, some sufficient conditions are obtained to ensure that an n-neuron neural network has exactly 3^k equilibrium points, with 0 ≤ k ≤ n, among which 2^k equilibrium points are locally exponentially stable and 3^k - 2^k are unstable. Moreover, it is concluded that all the states converge to one of the equilibrium points; i.e., the neural networks are completely stable. The derived conditions herein can be easily tested. Finally, a numerical example is given to illustrate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
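
    For concreteness, a tiny illustration of the equilibrium counts implied by the statement above, for hypothetical values of n and k (the sufficient conditions themselves are not checked here):

```python
# Equilibrium counts 3^k (total), 2^k (stable), 3^k - 2^k (unstable) for illustrative n, k.
for n, k in [(3, 2), (4, 4)]:
    total, stable = 3 ** k, 2 ** k
    print(f"n = {n}, k = {k}: {total} equilibria, {stable} stable, {total - stable} unstable")
```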

  13. Neural Nets for Generalization and Classification: Comment on Staddon and Reid (1990).

    ERIC Educational Resources Information Center

    Shepard, Roger N.

    1990-01-01

    The neural net model of J. E. R. Staddon and A. K. Reid (1990) explains exponential and Gaussian generalization gradients in the same way as the diffusion model of R. N. Shepard (1958). The cognitive generalization theory of Shepard (1987), also implemented as a connectionist network, goes beyond both models in accounting for classification…

  14. Unifying Pore Network Modeling, Continuous Time Random Walk Theory and Experiment - Accomplishments and Future Directions

    NASA Astrophysics Data System (ADS)

    Bijeljic, B.

    2008-05-01

    This talk will describe and highlight the advantages offered by a methodology that unifies pore network modeling, CTRW theory and experiment in description of solute dispersion in porous media. Solute transport in a porous medium is characterized by the interplay of advection and diffusion (described by Peclet number, Pe) that cause spreading of solute particles. This spreading is traditionally described by dispersion coefficients, D, defined by σ² = 2Dt, where σ² is the variance of the solute position and t is the time. Using a pore-scale network model based on particle tracking, the rich Peclet-number dependence of dispersion coefficient is predicted from first principles and is shown to compare well with experimental data for restricted diffusion, transition, power-law and mechanical dispersion regimes in the asymptotic limit. In the asymptotic limit D is constant and can be used in an averaged advection-dispersion equation. However, it is highly important to recognize that, until the velocity field is fully sampled, the particle transport is non-Gaussian and D possesses temporal or spatial variation. Furthermore, temporal probability density functions (PDF) of tracer particles are studied in pore networks and an excellent agreement for the spectrum of transition times for particles from pore to pore is obtained between network model results and CTRW theory. Based on the truncated power-law interpretation of PDFs, the physical origin of the power-law scaling of dispersion coefficient vs. Peclet number has been explained for unconsolidated porous media, sands and a number of sandstones, arriving at the same conclusion from numerical network modelling, analytic CTRW theory and experiment. Future directions for further applications of the methodology presented are discussed in relation to the scale-dependent solute dispersion and reactive transport. Significance of pre-asymptotic dispersion in porous media is addressed from pore-scale upwards and the impact of heterogeneity is discussed. The length traveled by solute plumes before Gaussian behaviour is reached increases with an increase in heterogeneity and/or Pe. This opens up the question of the nature of dispersion in natural systems where the heterogeneities at the larger scales will profoundly increase the range of velocities in the aquifer, thus considerably delaying the asymptotic approach to Gaussian behaviour. As a consequence, the asymptotic behaviour might not be reached at the field scale.
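
    A toy illustration (not the pore-network code itself) of the definition quoted above, σ² = 2Dt: estimate a longitudinal dispersion coefficient from the growth of the particle-position variance in a 1-D advection-diffusion random walk. All parameter values are made up.

```python
# Estimate D from the slope of the variance growth, sigma^2 = 2 D t.
import numpy as np

rng = np.random.default_rng(2)
n_particles, n_steps, dt = 10_000, 500, 1.0
v, D_true = 1.0e-3, 5.0e-4                     # mean velocity and true dispersion coefficient

x = np.zeros(n_particles)
times, variances = [], []
for step in range(1, n_steps + 1):
    # advective displacement plus a Gaussian diffusive jump of std sqrt(2 D dt)
    x += v * dt + rng.normal(scale=np.sqrt(2 * D_true * dt), size=n_particles)
    times.append(step * dt)
    variances.append(x.var())

slope = np.polyfit(times, variances, 1)[0]     # fit sigma^2 = 2 D t
print(f"estimated D = {slope / 2:.2e}  (true {D_true:.2e})")
```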

  15. A Novel Higher Order Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuxiang

    2010-05-01

    In this paper a new Higher Order Neural Network (HONN) model is introduced and applied in several data mining tasks. Data mining extracts hidden patterns and valuable information from large databases. A hyperbolic tangent function is used as the neuron activation function for the new HONN model. Experiments are conducted to demonstrate the advantages and disadvantages of the new HONN model when compared with several conventional Artificial Neural Network (ANN) models: a feedforward ANN with the sigmoid activation function, a feedforward ANN with the hyperbolic tangent activation function, and a Radial Basis Function (RBF) ANN with the Gaussian activation function. The experimental results suggest that the new HONN offers higher generalization capability as well as better handling of missing data.

  16. Parallel and Distributed Systems for Probabilistic Reasoning

    DTIC Science & Technology

    2012-12-01

    work at CMU I had the opportunity to work with Andreas Krause on Gaussian process models for signal quality estimation in wireless sensor networks ...we reviewed the natural parallelization of the belief propagation algorithm using the synchronous schedule and demonstrated both theoretically and...problem is that the power-law sparsity structure, commonly found in graphs derived from natural phenomena (e.g., social networks and the web

  17. Semi-supervised anomaly detection - towards model-independent searches of new physics

    NASA Astrophysics Data System (ADS)

    Kuusela, Mikael; Vatanen, Tommi; Malmi, Eric; Raiko, Tapani; Aaltonen, Timo; Nagai, Yoshikazu

    2012-06-01

    Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed MC model. To complement such model-dependent searches, we propose an algorithm based on semi-supervised anomaly detection techniques, which does not require an MC training sample for the signal data. We first model the background using a multivariate Gaussian mixture model. We then search for deviations from this model by fitting to the observations a mixture of the background model and a number of additional Gaussians. This allows us to perform pattern recognition of any anomalous excess over the background. We show by a comparison to neural network classifiers that such an approach is considerably more robust against misspecification of the signal MC than supervised classification. In cases where there is an unexpected signal, a neural network might fail to correctly identify it, while anomaly detection does not suffer from such a limitation. On the other hand, when there are no systematic errors in the training data, both methods perform comparably.
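
    A minimal sketch of the background-modelling idea under stated assumptions: fit a Gaussian mixture to a background-only sample and flag observations with low background likelihood as anomaly candidates. The paper's actual procedure fits a mixture of the background model plus extra Gaussians; the dimensions, component count and threshold below are illustrative only.

```python
# Background GMM plus a simple likelihood threshold for anomaly candidates.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
background = rng.normal(0.0, 1.0, size=(5000, 4))             # signal-free background sample
signal     = rng.normal(3.0, 0.5, size=(50, 4))               # unexpected excess
data       = np.vstack([rng.normal(0.0, 1.0, size=(1000, 4)), signal])

bkg_model = GaussianMixture(n_components=5, covariance_type="full",
                            random_state=0).fit(background)

log_like = bkg_model.score_samples(data)                      # per-event background log-likelihood
threshold = np.quantile(bkg_model.score_samples(background), 0.01)
anomalies = data[log_like < threshold]
print(f"{len(anomalies)} events flagged as anomalous out of {len(data)}")
```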

  18. A dynamic feedforward neural network based on gaussian particle swarm optimization and its application for predictive control.

    PubMed

    Han, Min; Fan, Jianchao; Wang, Jun

    2011-09-01

    A dynamic feedforward neural network (DFNN) is proposed for predictive control, whose adaptive parameters are adjusted by using Gaussian particle swarm optimization (GPSO) in the training process. Adaptive time-delay operators are added in the DFNN to improve its generalization for poorly known nonlinear dynamic systems with long time delays. Furthermore, GPSO adopts a chaotic map with Gaussian function to balance the exploration and exploitation capabilities of particles, which improves the computational efficiency without compromising the performance of the DFNN. The stability of the particle dynamics is analyzed, based on the robust stability theory, without any restrictive assumption. A stability condition for the GPSO+DFNN model is derived, which ensures a satisfactory global search and quick convergence, without the need for gradients. The particle velocity ranges could change adaptively during the optimization process. The results of a comparative study show that the performance of the proposed algorithm can compete with selected algorithms on benchmark problems. Additional simulation results demonstrate the effectiveness and accuracy of the proposed combination algorithm in identifying and controlling nonlinear systems with long time delays.

  19. Maximally informative pairwise interactions in networks

    PubMed Central

    Fitzgerald, Jeffrey D.; Sharpee, Tatyana O.

    2010-01-01

    Several types of biological networks have recently been shown to be accurately described by a maximum entropy model with pairwise interactions, also known as the Ising model. Here we present an approach for finding the optimal mappings between input signals and network states that allow the network to convey the maximal information about input signals drawn from a given distribution. This mapping also produces a set of linear equations for calculating the optimal Ising-model coupling constants, as well as geometric properties that indicate the applicability of the pairwise Ising model. We show that the optimal pairwise interactions are on average zero for Gaussian and uniformly distributed inputs, whereas they are nonzero for inputs approximating those in natural environments. These nonzero network interactions are predicted to increase in strength as the noise in the response functions of each network node increases. This approach also suggests how interactions with unmeasured parts of the network can be inferred from the parameters of the response functions for the measured network nodes. PMID:19905153

  20. On the Statistical Errors of RADAR Location Sensor Networks with Built-In Wi-Fi Gaussian Linear Fingerprints

    PubMed Central

    Zhou, Mu; Xu, Yu Bin; Ma, Lin; Tian, Shuo

    2012-01-01

    The expected errors of RADAR sensor networks with linear probabilistic location fingerprints inside buildings with varying Wi-Fi Gaussian signal strength are discussed. As far as we know, the statistical errors of equal- and unequal-weighted RADAR networks have been suggested as a better way to evaluate the behavior of different system parameters and the deployment of reference points (RPs). However, up to now there is still not enough related work on the relations between the statistical errors, the system parameters, and the number and interval of the RPs, let alone on the corresponding analytical expressions. Therefore, in response to this problem, and under a simple linear distribution model, much attention is paid to the mathematical relations among the linear expected errors, the number of neighbors, the number and interval of RPs, the parameters of the logarithmic attenuation model, and the variations of radio signal strength (RSS) at the test point (TP), with the purpose of constructing more practical and reliable RADAR location sensor networks (RLSNs) and guaranteeing the accuracy requirements of location-based services in future ubiquitous context-awareness environments. Moreover, numerical results and some real experimental evaluations of the error theories addressed in this paper are also presented for our future extended analysis. PMID:22737027

  1. On the statistical errors of RADAR location sensor networks with built-in Wi-Fi Gaussian linear fingerprints.

    PubMed

    Zhou, Mu; Xu, Yu Bin; Ma, Lin; Tian, Shuo

    2012-01-01

    The expected errors of RADAR sensor networks with linear probabilistic location fingerprints inside buildings with varying Wi-Fi Gaussian signal strength are discussed. As far as we know, the statistical errors of equal- and unequal-weighted RADAR networks have been suggested as a better way to evaluate the behavior of different system parameters and the deployment of reference points (RPs). However, up to now there is still not enough related work on the relations between the statistical errors, the system parameters, and the number and interval of the RPs, let alone on the corresponding analytical expressions. Therefore, in response to this problem, and under a simple linear distribution model, much attention is paid to the mathematical relations among the linear expected errors, the number of neighbors, the number and interval of RPs, the parameters of the logarithmic attenuation model, and the variations of radio signal strength (RSS) at the test point (TP), with the purpose of constructing more practical and reliable RADAR location sensor networks (RLSNs) and guaranteeing the accuracy requirements of location-based services in future ubiquitous context-awareness environments. Moreover, numerical results and some real experimental evaluations of the error theories addressed in this paper are also presented for our future extended analysis.
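
    A hedged sketch of two ingredients the analysis above builds on: a logarithmic attenuation (log-distance path-loss) model with Gaussian RSS variation, and a nearest-neighbour fingerprint match against reference points. All parameter values and the 1-D corridor geometry are illustrative, not taken from the paper.

```python
# Log-distance path loss with Gaussian shadowing and a nearest-RP fingerprint match.
import numpy as np

rng = np.random.default_rng(4)
P0, n_exp, sigma, d0 = -40.0, 3.0, 4.0, 1.0      # dBm at d0, path-loss exponent, shadowing std (dB)

def rss(d):
    """Received signal strength with Gaussian (in dB) variation."""
    return P0 - 10.0 * n_exp * np.log10(d / d0) + rng.normal(0.0, sigma, size=np.shape(d))

rp_positions = np.linspace(1.0, 20.0, 20)        # 1-D corridor of reference points
fingerprints = np.array([rss(np.full(50, d)).mean() for d in rp_positions])

tp_true = 7.3                                    # test point location (m)
tp_rss = rss(np.full(10, tp_true)).mean()

# Single-nearest-neighbour location estimate and its error
estimate = rp_positions[np.argmin(np.abs(fingerprints - tp_rss))]
print(f"estimated {estimate:.1f} m, true {tp_true:.1f} m, error {abs(estimate - tp_true):.2f} m")
```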

  2. A Gaussian Weave for Kinematical Loop Quantum Gravity

    NASA Astrophysics Data System (ADS)

    Corichi, A.; Reyes, J. M.; Ashtekar, A.

    Remarkable efforts in the study of the semiclassical regime of kinematical loop quantum gravity are currently underway. In this note, we construct a "quasicoherent" weave state using Gaussian factors. In a similar fashion to some other proposals, this state is peaked in both the connection and the spin network basis. However, the state constructed here has the novel feature that, in the spin network basis, the main contribution for this state is given by the fundamental representation, independently of the value of the parameter that regulates the Gaussian width.

  3. Unifying Pore Network Modeling, Continuous Time Random Walk (CTRW) Theory and Experiment to Describe Impact of Spatial Heterogeneities on Solute Dispersion at Multiple Length-scales

    NASA Astrophysics Data System (ADS)

    Bijeljic, B.; Blunt, M. J.; Rhodes, M. E.

    2009-04-01

    This talk will describe and highlight the advantages offered by a novel methodology that unifies pore network modeling, CTRW theory and experiment in description of solute dispersion in porous media. Solute transport in a porous medium is characterized by the interplay of advection and diffusion (described by Peclet number, Pe) that cause dispersion of solute particles. Dispersion is traditionally described by dispersion coefficients, D, that are commonly calculated from the spatial moments of the plume. Using a pore-scale network model based on particle tracking, the rich Peclet-number dependence of dispersion coefficient is predicted from first principles and is shown to compare well with experimental data for restricted diffusion, transition, power-law and mechanical dispersion regimes in the asymptotic limit. In the asymptotic limit D is constant and can be used in an averaged advection-dispersion equation. However, it is highly important to recognize that, until the velocity field is fully sampled, the particle transport is non-Gaussian and D possesses temporal or spatial variation. Furthermore, temporal probability density functions (PDF) of tracer particles are studied in pore networks and an excellent agreement for the spectrum of transition times for particles from pore to pore is obtained between network model results and CTRW theory. Based on the truncated power-law interpretation of PDFs, the physical origin of the power-law scaling of dispersion coefficient vs. Peclet number has been explained for unconsolidated porous media, sands and a number of sandstones, arriving at the same conclusion from numerical network modelling, analytic CTRW theory and experiment. The length traveled by solute plumes before Gaussian behaviour is reached increases with an increase in heterogeneity and/or Pe. This opens up the question of the nature of dispersion in natural systems where the heterogeneities at the larger scales will significantly increase the range of velocities in the reservoir, thus significantly delaying the asymptotic approach to Gaussian behaviour. As a consequence, the asymptotic behaviour might not be reached at the field scale. This is illustrated by the multi-scale approach in which transport at core, gridblock and field scale is viewed as a series of particle transitions between discrete nodes governed by probability distributions. At each scale of interest a distribution that represents transport physics (and the heterogeneity) is used as an input to model a subsequent reservoir scale. The extensions to reactive transport are discussed.

  4. Chemical Source Inversion using Assimilated Constituent Observations in an Idealized Two-dimensional System

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Cooper, Robert; Pawson, Steven; Sun, Zhibin

    2009-01-01

    We present a source inversion technique for chemical constituents that uses assimilated constituent observations rather than directly using the observations. The method is tested with a simple model problem, which is a two-dimensional Fourier-Galerkin transport model combined with a Kalman filter for data assimilation. Inversion is carried out using a Green's function method, and observations are simulated from a true state with added Gaussian noise. The forecast state uses the same spectral model, but differs by an unbiased Gaussian model error and by emissions models with constant errors. The numerical experiments employ both simulated in situ and satellite observation networks. Source inversion was carried out either by direct use of synthetically generated observations with added noise, or by first assimilating the observations and using the analyses to extract observations. We have conducted 20 identical twin experiments for each set of source and observation configurations, and find that in the limiting cases of very few localized observations, or of an extremely large observation network, there is little advantage to carrying out assimilation first. However, at intermediate observation densities, the source inversion error standard deviation decreases by 50% to 95% when the Kalman filter algorithm is followed by Green's function inversion.
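
    A minimal sketch of the Green's function step under stated assumptions: the observed (or assimilated) concentrations y are modelled as a linear response G @ s to the unknown sources s, and the inversion reduces to a least-squares solve. The transport operator below is a made-up smoothing kernel, not the Fourier-Galerkin model.

```python
# Green's function source inversion as ordinary least squares.
import numpy as np

rng = np.random.default_rng(5)
n_src, n_obs = 10, 40

# Columns of G are the responses at the observation sites to a unit emission
# from each candidate source (the "Green's functions").
site = np.linspace(0, 1, n_obs)[:, None]
src  = np.linspace(0, 1, n_src)[None, :]
G = np.exp(-((site - src) ** 2) / 0.02)

s_true = np.zeros(n_src)
s_true[[2, 7]] = [3.0, 1.5]
y = G @ s_true + rng.normal(0.0, 0.05, size=n_obs)    # observations with Gaussian noise

s_hat, *_ = np.linalg.lstsq(G, y, rcond=None)
print(np.round(s_hat, 2))
```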

  5. Synchronisation of networked Kuramoto oscillators under stable Lévy noise

    NASA Astrophysics Data System (ADS)

    Kalloniatis, Alexander C.; Roberts, Dale O.

    2017-01-01

    We study the Kuramoto model on several classes of network topologies, examining the dynamics under the influence of Lévy noise. Such noise exhibits heavier tails than Gaussian noise and allows us to understand how 'shocks' influence the individual oscillator and collective system behaviour. Skewed α-stable Lévy noise, equivalent to fractional diffusion perturbations, is considered. We perform numerical simulations for Erdős-Rényi (ER) and Barabási-Albert (BA) scale-free networks of size N = 1000 while varying the Lévy index α of the noise. We find that synchrony now assumes a surprising variety of forms, not seen for Gaussian-type noise, and changing with α: a noise-generated drift, a smooth α dependence of the point of cross-over of ER and BA networks in the degree of synchronisation, and a severe loss of synchronisation at low values of α. We also show that the robustness of the BA network across most values of α can be understood as a consequence of the Laplacian of the graph working within the fractional Fokker-Planck equation of the linearised system, close to synchrony, with both eigenvalues and eigenvectors alternately contributing in different regimes of α.
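
    A hedged sketch of the setup described above (network size, coupling, noise strength and the particular Euler-type discretisation are all illustrative assumptions, not the paper's): networked Kuramoto oscillators driven by skewed α-stable noise, with the stable increments scaled by dt**(1/α), the heavy-tailed generalisation of the Gaussian sqrt(dt) scaling.

```python
# Networked Kuramoto oscillators under alpha-stable (Levy) noise.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(6)
N, K, dt, steps = 200, 2.0, 0.01, 500
alpha, beta, eps = 1.5, 0.5, 0.1                 # stability index, skewness, noise strength

# Erdos-Renyi adjacency matrix with connection probability 0.05
A = np.triu(rng.random((N, N)) < 0.05, k=1)
A = (A | A.T).astype(float)

omega = rng.normal(0.0, 0.5, size=N)             # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, size=N)

for _ in range(steps):
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    jumps = levy_stable.rvs(alpha, beta, size=N, random_state=rng)
    theta += (omega + (K / N) * coupling) * dt + eps * jumps * dt ** (1.0 / alpha)

r = np.abs(np.exp(1j * theta).mean())            # Kuramoto order parameter in [0, 1]
print(f"order parameter r = {r:.3f}")
```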

  6. Annealed Importance Sampling for Neural Mass Models

    PubMed Central

    Penny, Will; Sengupta, Biswa

    2016-01-01

    Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606

  7. A neural-network based estimator to search for primordial non-Gaussianity in Planck CMB maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novaes, C.P.; Bernui, A.; Ferreira, I.S.

    2015-09-01

    We present an upgraded combined estimator, based on Minkowski Functionals and Neural Networks, with excellent performance in detecting primordial non-Gaussianity in simulated maps that also contain a weighted mixture of Galactic contaminations, besides real pixel noise from Planck cosmic microwave background radiation data. We rigorously test the efficiency of our estimator considering several plausible scenarios for residual non-Gaussianities in the foreground-cleaned Planck maps, with the aim of optimizing the training procedure of the Neural Network to discriminate between contaminations with primordial and secondary non-Gaussian signatures. We look for constraints of primordial local non-Gaussianity at large angular scales in the foreground-cleaned Planck maps. For the SMICA map we found f_NL = 33 ± 23, at the 1σ confidence level, in excellent agreement with the WMAP-9yr and Planck results. In addition, for the other three Planck maps we obtain similar constraints, with values in the interval f_NL ∈ [33, 41], concomitant with the fact that these maps manifest distinct features in reported analyses, such as having different pixel noise intensities.

  8. Deep Learning Method for Denial of Service Attack Detection Based on Restricted Boltzmann Machine.

    PubMed

    Imamverdiyev, Yadigar; Abdullayeva, Fargana

    2018-06-01

    In this article, the application of a deep learning method based on the Gaussian-Bernoulli type restricted Boltzmann machine (RBM) to the detection of denial of service (DoS) attacks is considered. To increase the DoS attack detection accuracy, seven additional layers are added between the visible and the hidden layers of the RBM. Accurate results in DoS attack detection are obtained by optimization of the hyperparameters of the proposed deep RBM model. A form of the RBM that allows the use of continuous data is employed; in this type of RBM, the probability distribution of the visible layer is replaced by a Gaussian distribution. A comparative analysis of the accuracy of the proposed method against Bernoulli-Bernoulli RBM, Gaussian-Bernoulli RBM, and deep belief network type deep learning methods for DoS attack detection is provided. Detection accuracy of the methods is verified on the NSL-KDD data set. Higher accuracy is obtained with the proposed multilayer deep Gaussian-Bernoulli type RBM.

  9. Evaluation of regional and local atmospheric dispersion models for the analysis of traffic-related air pollution in urban areas

    NASA Astrophysics Data System (ADS)

    Fallah-Shorshani, Masoud; Shekarrizfard, Maryam; Hatzopoulou, Marianne

    2017-10-01

    Dispersion of road transport emissions in urban metropolitan areas is typically simulated using Gaussian models that ignore the turbulence and drag induced by buildings, which are especially relevant for areas with dense downtown cores. To consider the effect of buildings, street canyon models are used, but often at the level of single urban corridors and small road networks. In this paper, we compare and validate two dispersion models with widely varying algorithms across a modelling domain consisting of the City of Montreal, Canada, accounting for emissions of more than 40,000 roads. The first dispersion model is based on flow decomposition into the urban canopy sub-flow as well as the overlying airflow. It takes into account the specific height and geometry of buildings along each road. The second model is a Gaussian puff dispersion model, which handles complex terrain and incorporates three-dimensional meteorology, but accounts for buildings only through variations in the initial vertical mixing coefficient. Validation against surface observations indicated that both models under-predicted measured concentrations. Average weekly exposure surfaces derived from both models were found to be reasonably correlated (r = 0.8), although the Gaussian dispersion model tended to underestimate concentrations around the roadways compared to the street canyon model. In addition, both models were used to estimate exposures of a representative sample of the Montreal population composed of 1319 individuals. Large differences were noted, whereby exposures derived from the Gaussian puff model were significantly lower than exposures derived from the street canyon model, an expected result considering the concentration of population around roadways. These differences have large implications for analyses of health effects associated with NO2 exposure.

  10. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    PubMed Central

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755

  11. Spatially Controlled Relay Beamforming

    NASA Astrophysics Data System (ADS)

    Kalogerias, Dionysios

    This thesis is about the fusion of optimal stochastic motion control and physical layer communications. Distributed, networked communication systems, such as relay beamforming networks (e.g., Amplify & Forward (AF)), are typically designed without explicitly considering how the positions of the respective nodes might affect the quality of the communication. Optimum placement of network nodes, which could potentially improve the quality of the communication, is not typically considered. However, in most practical settings in physical layer communications, such as relay beamforming, the Channel State Information (CSI) observed by each node, per channel use, although it might be (modeled as) random, is both spatially and temporally correlated. It is, therefore, reasonable to ask if and how the performance of the system could be improved by (predictively) controlling the positions of the network nodes (e.g., the relays), based on causal side (CSI) information, and exploiting the spatiotemporal dependencies of the wireless medium. In this work, we address this problem in the context of AF relay beamforming networks. This novel, cyber-physical system approach to relay beamforming is termed "Spatially Controlled Relay Beamforming". First, we discuss wireless channel modeling, however, in a rigorous, Bayesian framework. Experimentally accurate and, at the same time, technically precise channel modeling is absolutely essential for designing and analyzing spatially controlled communication systems. In this work, we are interested in two distinct spatiotemporal statistical models for describing the behavior of the log-scale magnitude of the wireless channel: 1. Stationary Gaussian Fields: In this case, the channel is assumed to evolve as a stationary, Gaussian stochastic field in continuous space and discrete time (say, for instance, time slots). Under such assumptions, spatial and temporal statistical interactions are determined by a set of time and space invariant parameters, which completely determine the mean and covariance of the underlying Gaussian measure. This model is relatively simple to describe, and can be sufficiently characterized, at least for our purposes, both statistically and topologically. Additionally, the model is rather versatile and there is existing experimental evidence supporting its practical applicability. Our contributions are summarized in properly formulating the whole spatiotemporal model in a completely rigorous mathematical setting, under a convenient measure theoretic framework. Such a framework greatly facilitates formulation of meaningful stochastic control problems, where the wireless channel field (or a function of it) can be regarded as a stochastic optimization surface. 2. Conditionally Gaussian Fields, when conditioned on a Markovian channel state: This is a completely novel approach to wireless channel modeling. In this approach, the communication medium is assumed to behave as a partially observable (or hidden) system, where a hidden, global, temporally varying underlying stochastic process, called the channel state, affects the spatial interactions of the actual channel magnitude, evaluated at any set of locations in the plane. More specifically, we assume that, conditioned on the channel state, the wireless channel constitutes an observable, conditionally Gaussian stochastic process. The channel state evolves in time according to a known, possibly nonstationary, non-Gaussian, low-dimensional Markov kernel.
Recognizing the intractability of general nonlinear state estimation, we advocate the use of grid-based approximate nonlinear filters as an effective and robust means for recursive tracking of the channel state. We also propose a sequential spatiotemporal predictor for tracking the channel gains at any point in time and space, providing real-time sequential estimates for the respective channel gain map. In this context, our contributions are multifold. Apart from the introduction of the layered channel model previously described, this line of research has resulted in a number of general, asymptotic convergence results, advancing the theory of grid-based approximate nonlinear stochastic filtering. In particular, sufficient conditions ensuring asymptotic optimality are relaxed, and, at the same time, the mode of convergence is strengthened. Although the need for such results originated as an attempt to theoretically characterize the performance of the proposed approximate methods for statistical inference, in regard to the proposed channel modeling approach, they turn out to be of fundamental importance in the areas of nonlinear estimation and stochastic control. The experimental validation of the proposed channel model, as well as the related parameter estimation problem, termed "Markovian Channel Profiling (MCP)" and fundamentally important for any practical deployment, are the subject of current, ongoing research. Second, adopting the first of the two aforementioned channel modeling approaches, we consider the spatially controlled relay beamforming problem for an AF network with a single source, a single destination, and multiple, controlled at will, relay nodes. (Abstract shortened by ProQuest.).

  12. Equivalence between entanglement and the optimal fidelity of continuous variable teleportation.

    PubMed

    Adesso, Gerardo; Illuminati, Fabrizio

    2005-10-07

    We devise the optimal form of Gaussian resource states enabling continuous-variable teleportation with maximal fidelity. We show that a nonclassical optimal fidelity of N-user teleportation networks is necessary and sufficient for N-party entangled Gaussian resources, yielding an estimator of multipartite entanglement. The entanglement of teleportation is equivalent to the entanglement of formation in a two-user protocol, and to the localizable entanglement in a multiuser one. Finally, we show that the continuous-variable tangle, quantifying entanglement sharing in three-mode Gaussian states, is defined operationally in terms of the optimal fidelity of a tripartite teleportation network.

  13. A prediction model for cognitive performance in health ageing using diffusion tensor imaging with graph theory.

    PubMed

    Yun, Ruijuan; Lin, Chung-Chih; Wu, Shuicai; Huang, Chu-Chung; Lin, Ching-Po; Chao, Yi-Ping

    2013-01-01

    In this study, we employed diffusion tensor imaging (DTI) to construct brain structural networks and derive the connection matrices from 96 healthy elderly subjects. Correlation analysis between the graph-theoretical topological properties of the networks and the Cognitive Abilities Screening Instrument (CASI) index was performed to extract significant network characteristics. These characteristics were then integrated to estimate models, using various machine-learning algorithms, for predicting a subject's cognitive performance. From the results, the linear regression model and the Gaussian process model presented better predictive ability, with lower mean absolute errors of 5.8120 and 6.25, respectively. Moreover, these extracted topological properties of the brain structural network derived from DTI could also be regarded as bio-signatures for further evaluation of brain degeneration in healthy ageing and early diagnosis of mild cognitive impairment (MCI).
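
    A hedged sketch of the prediction step only (not the study's DTI pipeline): regress a cognitive score on a few graph-theoretical network metrics with Gaussian process regression and report the mean absolute error. The features and CASI-like scores below are synthetic stand-ins.

```python
# Gaussian process regression of a cognitive score on network metrics.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
n_subjects = 96
X = rng.normal(size=(n_subjects, 4))          # e.g. clustering coeff., path length, efficiency, degree
casi = 85 + 5 * X[:, 0] - 3 * X[:, 2] + rng.normal(0, 4, n_subjects)   # synthetic CASI-like score

X_tr, X_te, y_tr, y_te = train_test_split(X, casi, test_size=0.25, random_state=0)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_tr, y_tr)
print(f"MAE = {mean_absolute_error(y_te, gpr.predict(X_te)):.2f}")
```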

  14. Speech reconstruction using a deep partially supervised neural network.

    PubMed

    McLoughlin, Ian; Li, Jingjie; Song, Yan; Sharifzadeh, Hamid R

    2017-08-01

    Statistical speech reconstruction for larynx-related dysphonia has achieved good performance using Gaussian mixture models and, more recently, restricted Boltzmann machine arrays; however, deep neural network (DNN)-based systems have been hampered by the limited amount of training data available from individual voice-loss patients. The authors propose a novel DNN structure that allows a partially supervised training approach on spectral features from smaller data sets, yielding very good results compared with the current state-of-the-art.

  15. Effects of biases in domain wall network evolution. II. Quantitative analysis

    NASA Astrophysics Data System (ADS)

    Correia, J. R. C. C. C.; Leite, I. S. C. R.; Martins, C. J. A. P.

    2018-04-01

    Domain walls form at phase transitions which break discrete symmetries. In a cosmological context, they often overclose the Universe (contrary to observational evidence), although one may prevent this by introducing biases or forcing anisotropic evolution of the walls. In a previous work [Correia et al., Phys. Rev. D 90, 023521 (2014), 10.1103/PhysRevD.90.023521], we numerically studied the evolution of various types of biased domain wall networks in the early Universe, confirming that anisotropic networks ultimately reach scaling while those with a biased potential or biased initial conditions decay. We also found that the analytic decay law obtained by Hindmarsh was in good agreement with simulations of biased potentials, but not of biased initial conditions, and suggested that the difference was related to the Gaussian approximation underlying the analytic law. Here, we extend our previous work in several ways. For the cases of biased potential and biased initial conditions, we study in detail the field distributions in the simulations, confirming that the validity (or not) of the Gaussian approximation is the key difference between the two cases. For anisotropic walls, we carry out a more extensive set of numerical simulations and compare them to the canonical velocity-dependent one-scale model for domain walls, finding that the model accurately predicts the linear scaling regime after isotropization. Overall, our analysis provides a quantitative description of the cosmological evolution of these networks.

  16. Probabilistic neural networks modeling of the 48-h LC50 acute toxicity endpoint to Daphnia magna.

    PubMed

    Niculescu, S P; Lewis, M A; Tigner, J

    2008-01-01

    Two modeling experiments based on the maximum likelihood estimation paradigm and targeting prediction of the Daphnia magna 48-h LC50 acute toxicity endpoint for both organic and inorganic compounds are reported. The resulting models' computational algorithms are implemented as basic probabilistic neural networks with a Gaussian kernel (statistical corrections included). The first experiment uses strictly D. magna information for 971 structures as training/learning data, and the resulting model targets practical applications. The second experiment uses the same training/learning information plus additional data on another 29 compounds whose endpoint information originates from D. pulex and Ceriodaphnia dubia. It only targets investigation of the effect of mixing strictly D. magna 48-h LC50 modeling information with small amounts of similar information estimated from related species, and this is done as part of the validation process. A complementary 81-compound dataset (involving only strictly D. magna information) is used to perform external testing. On this external test set, the Gaussian character of the distribution of the residuals is confirmed for both models. This allows the use of traditional statistical methodology to compute confidence intervals for the unknown measured values based on the models' predictions. Examples are provided for the model targeting practical applications. For the same model, a comparison with other existing models targeting the same endpoint is performed.
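
    A generic Gaussian-kernel regression (GRNN / Nadaraya-Watson) sketch in the spirit of the probabilistic neural networks described above; the descriptors, endpoint values and kernel width are purely illustrative, and the paper's statistical corrections are omitted.

```python
# Gaussian-kernel weighted average of training endpoints (GRNN-style prediction).
import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma=1.0):
    """Predict a continuous endpoint as the Gaussian-kernel weighted average of training values."""
    preds = []
    for x in X_test:
        w = np.exp(-((X_train - x) ** 2).sum(axis=1) / (2.0 * sigma ** 2))
        preds.append(float(w @ y_train) / float(w.sum()))
    return np.array(preds)

rng = np.random.default_rng(8)
X = rng.normal(size=(300, 5))                                   # hypothetical molecular descriptors
log_lc50 = X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.3, 300)    # synthetic 48-h LC50 (log scale)

X_train, y_train, X_test, y_test = X[:250], log_lc50[:250], X[250:], log_lc50[250:]
pred = grnn_predict(X_train, y_train, X_test, sigma=1.0)
print(f"RMSE = {np.sqrt(np.mean((pred - y_test) ** 2)):.3f}")
```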

  17. Non-Gaussian information from weak lensing data via deep learning

    NASA Astrophysics Data System (ADS)

    Gupta, Arushi; Matilla, José Manuel Zorrilla; Hsu, Daniel; Haiman, Zoltán

    2018-05-01

    Weak lensing maps contain information beyond two-point statistics on small scales. Much recent work has tried to extract this information through a range of different observables or via nonlinear transformations of the lensing field. Here we train and apply a two-dimensional convolutional neural network to simulated noiseless lensing maps covering 96 different cosmological models over a range of {Ω_m, σ_8}. Using the area of the confidence contour in the {Ω_m, σ_8} plane as a figure of merit, derived from simulated convergence maps smoothed on a scale of 1.0 arcmin, we show that the neural network yields ≈5× tighter constraints than the power spectrum, and ≈4× tighter than the lensing peaks. Such gains illustrate the extent to which weak lensing data encode cosmological information not accessible to the power spectrum or even other, non-Gaussian statistics such as lensing peaks.

  18. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    PubMed Central

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011

  19. Sign: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .

  20. Rubber elasticity for percolation network consisting of Gaussian chains.

    PubMed

    Nishi, Kengo; Noguchi, Hiroshi; Sakai, Takamasa; Shibayama, Mitsuhiro

    2015-11-14

    A theory describing the elastic modulus for percolation networks of Gaussian chains on general lattices such as square and cubic lattices is proposed, and its validity is examined with simulations and mechanical experiments on well-defined polymer networks. The theory was developed by generalizing the effective medium approximation (EMA) for Hookean spring networks to Gaussian chain networks. From EMA theory, we found that the ratio of the elastic modulus at p, G, to that at p = 1, G0, must be equal to G/G0 = (p - 2/f)/(1 - 2/f) if the position of sites can be determined so as to meet the force balance, where p is the degree of cross-linking reaction. However, the EMA prediction is not applicable near its percolation threshold because EMA is a mean field theory. Thus, we combine real-space renormalization and EMA and propose a theory called real-space renormalized EMA, i.e., REMA. The elastic modulus predicted by REMA is in excellent agreement with the results of simulations and experiments on near-ideal diamond lattice gels.
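
    A direct transcription of the EMA relation quoted above, G/G0 = (p - 2/f)/(1 - 2/f), evaluated for a tetra-functional (diamond-like, f = 4) network; as the abstract notes, this mean-field form vanishes at p = 2/f and is not expected to hold near that threshold.

```python
# Evaluate the quoted EMA modulus ratio for a few cross-linking conversions p.
def ema_modulus_ratio(p, f=4):
    return (p - 2.0 / f) / (1.0 - 2.0 / f)

for p in (0.6, 0.8, 0.95, 1.0):
    print(f"p = {p:.2f}  ->  G/G0 = {ema_modulus_ratio(p):.2f}")
```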

  1. Short-term prediction of chaotic time series by using RBF network with regression weights.

    PubMed

    Rojas, I; Gonzalez, J; Cañas, A; Diaz, A F; Rojas, F J; Rodriguez, M

    2000-10-01

    We propose a framework for constructing and training a radial basis function (RBF) neural network. The structure of the Gaussian functions is modified using a pseudo-Gaussian function (PG) in which two scaling parameters sigma are introduced, which eliminates the symmetry restriction and provides the neurons in the hidden layer with greater flexibility with respect to function approximation. We propose a modified PG-BF (pseudo-Gaussian basis function) network in which regression weights are used to replace the constant weights in the output layer. For this purpose, a sequential learning algorithm is presented to adapt the structure of the network, in which it is possible to create a new hidden unit and also to detect and remove inactive units. A salient feature of the network is that the method used for calculating the overall output is the weighted average of the output associated with each receptive field. The superior performance of the proposed PG-BF system over the standard RBF is illustrated using the problem of short-term prediction of chaotic time series.
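
    A hedged illustration of the pseudo-Gaussian idea described above: two width parameters, one on each side of the centre, remove the symmetry restriction of the standard Gaussian basis function. The exact parameterization used in the paper may differ.

```python
# Asymmetric (pseudo-Gaussian) basis function with separate left/right widths.
import numpy as np

def pseudo_gaussian(x, c, sigma_left, sigma_right):
    """Gaussian-like bump with different widths left and right of the centre c."""
    sigma = np.where(x < c, sigma_left, sigma_right)
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

x = np.linspace(-3, 5, 9)
print(np.round(pseudo_gaussian(x, c=1.0, sigma_left=0.5, sigma_right=2.0), 3))
```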

  2. Rubber elasticity for percolation network consisting of Gaussian chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishi, Kengo, E-mail: kengo.nishi@phys.uni-goettingen.de, E-mail: sakai@tetrapod.t.u-tokyo.ac.jp, E-mail: sibayama@issp.u-tokyo.ac.jp; Noguchi, Hiroshi; Shibayama, Mitsuhiro, E-mail: kengo.nishi@phys.uni-goettingen.de, E-mail: sakai@tetrapod.t.u-tokyo.ac.jp, E-mail: sibayama@issp.u-tokyo.ac.jp

    2015-11-14

    A theory describing the elastic modulus for percolation networks of Gaussian chains on general lattices such as square and cubic lattices is proposed and its validity is examined with simulation and mechanical experiments on well-defined polymer networks. The theory was developed by generalizing the effective medium approximation (EMA) for Hookean spring networks to Gaussian chain networks. From EMA theory, we found that the ratio of the elastic modulus at p, G, to that at p = 1, G0, must be equal to G/G0 = (p − 2/f)/(1 − 2/f) if the position of sites can be determined so as to meet the force balance, where p is the degree of cross-linking reaction. However, the EMA prediction is not applicable near its percolation threshold because EMA is a mean field theory. Thus, we combine real-space renormalization and EMA and propose a theory called real-space renormalized EMA, i.e., REMA. The elastic modulus predicted by REMA is in excellent agreement with the results of simulations and experiments on near-ideal diamond lattice gels.

  3. Lower white matter microstructure in the superior longitudinal fasciculus is associated with increased response time variability in adults with attention-deficit/hyperactivity disorder.

    PubMed

    Wolfers, Thomas; Onnink, A Marten H; Zwiers, Marcel P; Arias-Vasquez, Alejandro; Hoogman, Martine; Mostert, Jeanette C; Kan, Cornelis C; Slaats-Willemse, Dorine; Buitelaar, Jan K; Franke, Barbara

    2015-09-01

    Response time variability (RTV) is consistently increased in patients with attention-deficit/hyperactivity disorder (ADHD). A right-hemispheric frontoparietal attention network model has been implicated in these patients. The 3 main connecting fibre tracts in this network, the superior longitudinal fasciculus (SLF), inferior longitudinal fasciculus (ILF) and the cingulum bundle (CB), show microstructural abnormalities in patients with ADHD. We hypothesized that the microstructural integrity of the 3 white matter tracts of this network is associated with ADHD and RTV. We examined RTV in adults with ADHD by modelling the reaction time distribution as an exponentially modified Gaussian (ex-Gaussian) function with the parameters μ, σ and τ, the latter of which has been attributed to lapses of attention. We assessed adults with ADHD and healthy controls using a sustained attention task. Diffusion tensor imaging-derived fractional anisotropy (FA) values were determined to quantify bilateral microstructural integrity of the tracts of interest. We included 100 adults with ADHD and 96 controls in our study. Increased τ was associated with ADHD diagnosis and was linked to symptoms of inattention. An inverse correlation of τ with mean FA was seen in the right SLF of patients with ADHD, but no direct association between the mean FA of the 6 regions of interest and ADHD could be observed. Regions of interest were defined a priori based on the attentional network model for ADHD and thus we might have missed effects in other networks. This study suggests that reduced microstructural integrity of the right SLF is associated with elevated τ in patients with ADHD.

  4. Lower white matter microstructure in the superior longitudinal fasciculus is associated with increased response time variability in adults with attention-deficit/hyperactivity disorder

    PubMed Central

    Wolfers, Thomas; Onnink, A. Marten H.; Zwiers, Marcel P.; Arias-Vasquez, Alejandro; Hoogman, Martine; Mostert, Jeanette C.; Kan, Cornelis C.; Slaats-Willemse, Dorine; Buitelaar, Jan K.; Franke, Barbara

    2015-01-01

    Background Response time variability (RTV) is consistently increased in patients with attention-deficit/hyperactivity disorder (ADHD). A right-hemispheric frontoparietal attention network model has been implicated in these patients. The 3 main connecting fibre tracts in this network, the superior longitudinal fasciculus (SLF), inferior longitudinal fasciculus (ILF) and the cingulum bundle (CB), show microstructural abnormalities in patients with ADHD. We hypothesized that the microstructural integrity of the 3 white matter tracts of this network is associated with ADHD and RTV. Methods We examined RTV in adults with ADHD by modelling the reaction time distribution as an exponentially modified Gaussian (ex-Gaussian) function with the parameters μ, σ and τ, the latter of which has been attributed to lapses of attention. We assessed adults with ADHD and healthy controls using a sustained attention task. Diffusion tensor imaging–derived fractional anisotropy (FA) values were determined to quantify bilateral microstructural integrity of the tracts of interest. Results We included 100 adults with ADHD and 96 controls in our study. Increased τ was associated with ADHD diagnosis and was linked to symptoms of inattention. An inverse correlation of τ with mean FA was seen in the right SLF of patients with ADHD, but no direct association between the mean FA of the 6 regions of interest and ADHD could be observed. Limitations Regions of interest were defined a priori based on the attentional network model for ADHD and thus we might have missed effects in other networks. Conclusion This study suggests that reduced microstructural integrity of the right SLF is associated with elevated τ in patients with ADHD. PMID:26079698
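
    For readers who want to reproduce the ex-Gaussian decomposition, a brief sketch using SciPy's exponnorm distribution on synthetic reaction times is shown below; SciPy parameterizes the ex-Gaussian by (K, loc, scale), from which μ = loc, σ = scale and τ = K·scale are recovered. The data are simulated, not the study's.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Synthetic reaction times (seconds): a Gaussian component (mu, sigma) plus
    # an exponential tail tau, the parameter linked above to attentional lapses.
    mu, sigma, tau = 0.45, 0.05, 0.15
    rt = rng.normal(mu, sigma, size=2000) + rng.exponential(tau, size=2000)

    # scipy's exponnorm uses (K, loc, scale) with mu = loc, sigma = scale and
    # tau = K * scale, so the ex-Gaussian parameters are recovered as follows.
    K, loc, scale = stats.exponnorm.fit(rt)
    print(f"mu ~ {loc:.3f}, sigma ~ {scale:.3f}, tau ~ {K * scale:.3f}")
    ```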

  5. Approximating high angular resolution apparent diffusion coefficient profiles using spherical harmonics under BiGaussian assumption

    NASA Astrophysics Data System (ADS)

    Cao, Ning; Liang, Xuwei; Zhuang, Qi; Zhang, Jun

    2009-02-01

    Magnetic Resonance Imaging (MRI) techniques have achieved much importance in providing visual and quantitative information of human body. Diffusion MRI is the only non-invasive tool to obtain information of the neural fiber networks of the human brain. The traditional Diffusion Tensor Imaging (DTI) is only capable of characterizing Gaussian diffusion. High Angular Resolution Diffusion Imaging (HARDI) extends its ability to model more complex diffusion processes. Spherical harmonic series truncated to a certain degree is used in recent studies to describe the measured non-Gaussian Apparent Diffusion Coefficient (ADC) profile. In this study, we use the sampling theorem on band-limited spherical harmonics to choose a suitable degree to truncate the spherical harmonic series in the sense of Signal-to-Noise Ratio (SNR), and use Monte Carlo integration to compute the spherical harmonic transform of human brain data obtained from icosahedral schema.

  6. Application of Modified Particle Swarm Optimization Method for Parameter Extraction of 2-D TEC Mapping

    NASA Astrophysics Data System (ADS)

    Toker, C.; Gokdag, Y. E.; Arikan, F.; Arikan, O.

    2012-04-01

    Ionosphere is a very important part of Space Weather. Modeling and monitoring of ionospheric variability is a major part of satellite communication, navigation and positioning systems. Total Electron Content (TEC), which is defined as the line integral of the electron density along a ray path, is one of the parameters to investigate the ionospheric variability. Dual-frequency GPS receivers, with their worldwide availability and efficiency in TEC estimation, have become a major source of global and regional TEC modeling. When Global Ionospheric Maps (GIM) of International GPS Service (IGS) centers (http://iono.jpl.nasa.gov/gim.html) are investigated, it can be observed that regional ionosphere along the midlatitude regions can be modeled as a constant, linear or a quadratic surface. Globally, especially around the magnetic equator, the TEC surfaces resemble twisted and dispersed single centered or double centered Gaussian functions. Particle Swarm Optimization (PSO) proved itself as a fast converging and an effective optimization tool in various diverse fields. Yet, in order to apply this optimization technique to TEC modeling, the method has to be modified for higher efficiency and accuracy in extraction of geophysical parameters such as model parameters of TEC surfaces. In this study, a modified PSO (mPSO) method is applied to regional and global synthetic TEC surfaces. The synthetic surfaces that represent the trend and small scale variability of various ionospheric states are necessary to compare the performance of mPSO over the number of iterations, accuracy in parameter estimation and overall surface reconstruction. The Cramer-Rao bounds for each surface type and model are also investigated and the performance of mPSO is tested with respect to these bounds. For global models, the sample points that are used in optimization are obtained using the IGS receiver network. For regional TEC models, regional networks such as Turkish National Permanent GPS Network (TNPGN-Active) receiver sites are used. The regional TEC models are grouped into constant (one parameter), linear (two parameters), and quadratic (six parameters) surfaces which are functions of latitude and longitude. Global models require seven parameters for single centered Gaussian and 13 parameters for double centered Gaussian function. The error criterion is the normalized percentage error for both the surface and the parameters. It is observed that mPSO is very successful in parameter extraction of various regional and global models. The normalized reconstruction error varies from 10^-4 for constant surfaces to 10^-3 for quadratic surfaces in regional models, sampled with regional networks. Even for the cases of a severe geomagnetic storm that affects measurements globally, with the IGS network, the reconstruction error is on the order of 10^-1 even though individual parameters have higher normalized errors. The modified PSO technique proved itself to be a useful tool for parameter extraction of more complicated TEC models. This study is supported by TUBITAK EEEAG under Grant No: 109E055.
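
    A generic global-best PSO applied to a single-centered Gaussian "TEC-like" surface gives a feel for the parameter-extraction step; the modifications that define the authors' mPSO are not reproduced here, and the grid, bounds and parameter values are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def gaussian_surface(lat, lon, params):
        a, lat0, lon0, s_lat, s_lon = params
        return a * np.exp(-((lat - lat0) ** 2 / (2 * s_lat ** 2)
                            + (lon - lon0) ** 2 / (2 * s_lon ** 2)))

    # Synthetic "TEC" surface sampled on a coarse grid (stand-in for GPS sites).
    lat, lon = np.meshgrid(np.linspace(-30, 30, 25), np.linspace(0, 60, 25))
    true = np.array([40.0, 0.0, 30.0, 12.0, 18.0])
    tec = gaussian_surface(lat, lon, true) + rng.normal(0, 0.5, lat.shape)

    def cost(params):
        return np.mean((gaussian_surface(lat, lon, params) - tec) ** 2)

    # Plain global-best PSO with inertia plus cognitive/social terms.
    n_particles, n_iter, dim = 40, 200, 5
    lo = np.array([1.0, -30.0, 0.0, 1.0, 1.0])
    hi = np.array([80.0, 30.0, 60.0, 40.0, 40.0])
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()

    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()

    print("recovered parameters:", np.round(gbest, 2))
    ```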

  7. Noise in Neural Networks: Thresholds, Hysteresis, and Neuromodulation of Signal-To-Noise

    NASA Astrophysics Data System (ADS)

    Keeler, James D.; Pichler, Elgar E.; Ross, John

    1989-03-01

    We study a neural-network model including Gaussian noise, higher-order neuronal interactions, and neuromodulation. For a first-order network, there is a threshold in the noise level (phase transition) above which the network displays only disorganized behavior and critical slowing down near the noise threshold. The network can tolerate more noise if it has higher-order feedback interactions, which also lead to hysteresis and multistability in the network dynamics. The signal-to-noise ratio can be adjusted in a biological neural network by neuromodulators such as norepinephrine. Comparisons are made to experimental results and further investigations are suggested to test the effects of hysteresis and neuromodulation in pattern recognition and learning. We propose that norepinephrine may ``quench'' the neural patterns of activity to enhance the ability to learn details.

  8. Response of Electrical Activity in an Improved Neuron Model under Electromagnetic Radiation and Noise

    PubMed Central

    Zhan, Feibiao; Liu, Shenquan

    2017-01-01

    Electrical activities are ubiquitous neuronal bioelectric phenomena, which have many different modes to encode the expression of biological information, and constitute the whole process of signal propagation between neurons. Therefore, we focus on the electrical activities of neurons, which are also attracting widespread attention among neuroscientists. In this paper, we mainly investigate the electrical activities of the Morris-Lecar (M-L) model with electromagnetic radiation or Gaussian white noise, which can restore the authenticity of neurons in a realistic neural network. First, we explore the dynamical response of the whole system with electromagnetic induction (EMI) and Gaussian white noise. We find that there are slight differences in the discharge behaviors when comparing the response of the original system with that of the improved system, and that electromagnetic induction can transform a bursting or spiking state into a quiescent state and vice versa. Furthermore, we study the bursting transition mode and the corresponding periodic-solution mechanism for the isolated neuron model with electromagnetic induction by using one-parameter and two-parameter bifurcation analysis. Finally, we analyze the effects of Gaussian white noise on the original and coupled systems, which is conducive to understanding the actual discharge properties of realistic neurons. PMID:29209192

  9. Response of Electrical Activity in an Improved Neuron Model under Electromagnetic Radiation and Noise.

    PubMed

    Zhan, Feibiao; Liu, Shenquan

    2017-01-01

    Electrical activities are ubiquitous neuronal bioelectric phenomena, which have many different modes to encode the expression of biological information, and constitute the whole process of signal propagation between neurons. Therefore, we focus on the electrical activities of neurons, which are also attracting widespread attention among neuroscientists. In this paper, we mainly investigate the electrical activities of the Morris-Lecar (M-L) model with electromagnetic radiation or Gaussian white noise, which can restore the authenticity of neurons in a realistic neural network. First, we explore the dynamical response of the whole system with electromagnetic induction (EMI) and Gaussian white noise. We find that there are slight differences in the discharge behaviors when comparing the response of the original system with that of the improved system, and that electromagnetic induction can transform a bursting or spiking state into a quiescent state and vice versa. Furthermore, we study the bursting transition mode and the corresponding periodic-solution mechanism for the isolated neuron model with electromagnetic induction by using one-parameter and two-parameter bifurcation analysis. Finally, we analyze the effects of Gaussian white noise on the original and coupled systems, which is conducive to understanding the actual discharge properties of realistic neurons.
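
    The sketch below integrates a standard Morris-Lecar neuron with additive Gaussian white current noise by the Euler-Maruyama method. The parameter set is a commonly used one and is not claimed to match the paper, and the electromagnetic-induction term is omitted.

    ```python
    import numpy as np

    # Morris-Lecar with additive Gaussian white current noise (Euler-Maruyama).
    C, g_L, g_Ca, g_K = 20.0, 2.0, 4.4, 8.0
    V_L, V_Ca, V_K = -60.0, 120.0, -84.0
    V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
    I_ext, sigma = 100.0, 2.0            # applied current and noise intensity

    def m_inf(V): return 0.5 * (1 + np.tanh((V - V1) / V2))
    def w_inf(V): return 0.5 * (1 + np.tanh((V - V3) / V4))
    def tau_w(V): return 1.0 / np.cosh((V - V3) / (2 * V4))

    dt, T = 0.05, 2000.0                 # ms
    n = int(T / dt)
    rng = np.random.default_rng(2)
    V, w = -60.0, 0.0
    trace = np.empty(n)
    for i in range(n):
        I_ion = (g_L * (V - V_L) + g_Ca * m_inf(V) * (V - V_Ca)
                 + g_K * w * (V - V_K))
        dV = (I_ext - I_ion) / C
        dw = phi * (w_inf(V) - w) / tau_w(V)
        V += dV * dt + sigma * np.sqrt(dt) * rng.standard_normal() / C
        w += dw * dt
        trace[i] = V

    print("spike count (upward crossings of 0 mV):",
          int(np.sum((trace[1:] > 0) & (trace[:-1] <= 0))))
    ```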

  10. Optimization of the kernel functions in a probabilistic neural network analyzing the local pattern distribution.

    PubMed

    Galleske, I; Castellanos, J

    2002-05-01

    This article proposes a procedure for the automatic determination of the elements of the covariance matrix of the Gaussian kernel function of probabilistic neural networks. Two matrices, a rotation matrix and a matrix of variances, can be calculated by analyzing the local environment of each training pattern. Their combination forms the covariance matrix of each training pattern. This automation has two advantages: First, it frees the neural network designer from specifying the complete covariance matrix, and second, it results in a network with better generalization ability than the original model. Experiments on a variation of the well-known two-spiral problem and on real-world examples from the UCI Machine Learning Repository show not only a classification rate better than that of the original probabilistic neural network but also that this model can outperform other well-known classification techniques.
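
    A compact Parzen-style probabilistic neural network in which every training pattern carries its own Gaussian kernel covariance is sketched below; the covariances are supplied by hand here rather than derived from the local pattern distribution as the article proposes.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def pnn_predict(X_train, y_train, covariances, X_test):
        """PNN classification: the class score is the average kernel density
        contributed by that class, with one Gaussian per training pattern."""
        classes = np.unique(y_train)
        scores = np.zeros((len(X_test), len(classes)))
        for j, c in enumerate(classes):
            idx = np.where(y_train == c)[0]
            dens = [multivariate_normal.pdf(X_test, mean=X_train[i],
                                            cov=covariances[i]) for i in idx]
            scores[:, j] = np.mean(dens, axis=0)
        return classes[np.argmax(scores, axis=1)]

    rng = np.random.default_rng(3)
    X0 = rng.normal([-1, 0], 0.4, (50, 2))
    X1 = rng.normal([+1, 0], 0.4, (50, 2))
    X = np.vstack([X0, X1]); y = np.array([0] * 50 + [1] * 50)
    covs = [0.2 * np.eye(2) for _ in range(len(X))]   # hand-picked covariances
    print(pnn_predict(X, y, covs, np.array([[-0.8, 0.1], [0.9, -0.2]])))
    ```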

  11. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.

    PubMed

    Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros

    2018-05-01

    We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.

  12. Deep neural network and noise classification-based speech enhancement

    NASA Astrophysics Data System (ADS)

    Shi, Wenhua; Zhang, Xiongwei; Zou, Xia; Han, Wei

    2017-07-01

    In this paper, a speech enhancement method using noise classification and Deep Neural Network (DNN) was proposed. Gaussian mixture model (GMM) was employed to determine the noise type in speech-absent frames. DNN was used to model the relationship between noisy observation and clean speech. Once the noise type was determined, the corresponding DNN model was applied to enhance the noisy speech. GMM was trained with mel-frequency cepstrum coefficients (MFCC) and the parameters were estimated with an iterative expectation-maximization (EM) algorithm. Noise type was updated by spectrum entropy-based voice activity detection (VAD). Experimental results demonstrate that the proposed method could achieve better objective speech quality and smaller distortion under stationary and non-stationary conditions.
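
    The noise-classification stage can be sketched as one GMM per noise type trained on MFCC vectors, with a speech-absent frame assigned to the type of highest likelihood; the MFCC features below are random stand-ins, and the enhancement DNN that follows this decision is not included.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(4)
    mfcc_train = {                      # stand-in MFCC features per noise type
        "babble": rng.normal(0.0, 1.0, (500, 13)),
        "factory": rng.normal(1.5, 1.2, (500, 13)),
    }

    gmms = {}
    for noise_type, feats in mfcc_train.items():
        gmm = GaussianMixture(n_components=8, covariance_type="diag",
                              max_iter=200, random_state=0)
        gmm.fit(feats)                  # EM estimation of the mixture parameters
        gmms[noise_type] = gmm

    def classify_noise(frame_mfcc):
        """Assign a speech-absent frame to the noise type whose GMM gives the
        highest log-likelihood."""
        frame_mfcc = np.atleast_2d(frame_mfcc)
        scores = {k: g.score(frame_mfcc) for k, g in gmms.items()}
        return max(scores, key=scores.get)

    print(classify_noise(rng.normal(1.4, 1.2, 13)))   # expected: "factory"
    ```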

  13. Wavelet decomposition and radial basis function networks for system monitoring

    NASA Astrophysics Data System (ADS)

    Ikonomopoulos, A.; Endou, A.

    1998-10-01

    Two approaches are coupled to develop a novel collection of black box models for monitoring operational parameters in a complex system. The idea springs from the intention of obtaining multiple predictions for each system variable and fusing them before they are used to validate the actual measurement. The proposed architecture pairs the analytical abilities of the discrete wavelet decomposition with the computational power of radial basis function networks. Members of a wavelet family are constructed in a systematic way and chosen through a statistical selection criterion that optimizes the structure of the network. Network parameters are further optimized through a quasi-Newton algorithm. The methodology is demonstrated utilizing data obtained during two transients of the Monju fast breeder reactor. The models developed are benchmarked with respect to similar regressors based on Gaussian basis functions.

  14. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    PubMed

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance detection (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.
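
    The feature-selection idea maps naturally onto an RBF kernel with one length scale per descriptor (automatic relevance determination); a hedged scikit-learn sketch on synthetic descriptors is given below, where descriptors that end up with large learned length scales are effectively irrelevant. The feature names in the comment are only examples.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

    rng = np.random.default_rng(5)
    n, d = 120, 4                       # e.g. log P, MW, melting point, H-bond donors
    X = rng.normal(size=(n, d))
    y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + 0.1 * rng.normal(size=n)   # only 2 relevant

    # ARD-style GP: one RBF length scale per descriptor, optimized during fit.
    kernel = (ConstantKernel(1.0) * RBF(length_scale=np.ones(d))
              + WhiteKernel(noise_level=0.1))
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    rbf = gpr.kernel_.k1.k2             # fitted RBF part of the composite kernel
    print("learned length scales:", np.round(rbf.length_scale, 2))
    print("relevance (1/length_scale):", np.round(1.0 / rbf.length_scale, 2))
    ```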

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franović, Igor, E-mail: franovic@ipb.ac.rs; Todorović, Kristina; Burić, Nikola

    We use the mean-field approach to analyze the collective dynamics in macroscopic networks of stochastic FitzHugh-Nagumo units with delayed couplings. The conditions for validity of the two main approximations behind the model, called the Gaussian approximation and the Quasi-independence approximation, are examined. It is shown that the dynamics of the mean-field model may indicate in a self-consistent fashion the parameter domains where the Quasi-independence approximation fails. Apart from a network of globally coupled units, we also consider the paradigmatic setup of two interacting assemblies to demonstrate how our framework may be extended to hierarchical and modular networks. In both cases, the mean-field model can be used to qualitatively analyze the stability of the system, as well as the scenarios for the onset and the suppression of the collective mode. In quantitative terms, the mean-field model is capable of predicting the average oscillation frequency corresponding to the global variables of the exact system.

  16. JRmGRN: Joint reconstruction of multiple gene regulatory networks with common hub genes using data from multiple tissues or conditions.

    PubMed

    Deng, Wenping; Zhang, Kui; Liu, Sanzhen; Zhao, Patrick; Xu, Shizhong; Wei, Hairong

    2018-04-30

    Joint reconstruction of multiple gene regulatory networks (GRNs) using gene expression data from multiple tissues/conditions is very important for understanding common and tissue/condition-specific regulation. However, there are currently no computational models and methods available for directly constructing such multiple GRNs that not only share some common hub genes but also possess tissue/condition-specific regulatory edges. In this paper, we propose a new Gaussian graphical model for joint reconstruction of multiple gene regulatory networks (JRmGRN), which highlights hub genes, using gene expression data from several tissues/conditions. Under the framework of the Gaussian graphical model, the JRmGRN method constructs the GRNs by maximizing a penalized log-likelihood function. We formulated it as a convex optimization problem, and then solved it with an alternating direction method of multipliers (ADMM) algorithm. The performance of JRmGRN was first evaluated with synthetic data and the results showed that JRmGRN outperformed several other methods for reconstruction of GRNs. We also applied our method to real Arabidopsis thaliana RNA-seq data from two light regime conditions in comparison with other methods, and both common hub genes and some condition-specific hub genes were identified with higher accuracy and precision. JRmGRN is available as an R program from: https://github.com/wenpingd. hairong@mtu.edu. Proof of theorem, derivation of algorithm and supplementary data are available at Bioinformatics online.
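
    As a much-simplified stand-in (not the JRmGRN joint penalty or its ADMM solver), the sketch below fits a separate graphical lasso per condition and then looks for genes that are highly connected in both estimated networks; the expression data are synthetic.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.default_rng(6)
    n_samples, n_genes = 200, 30
    expr_cond1 = rng.normal(size=(n_samples, n_genes))
    expr_cond1[:, 1:6] += 0.8 * expr_cond1[:, [0]]       # gene 0 acts as a hub
    expr_cond2 = expr_cond1 + 0.5 * rng.normal(size=(n_samples, n_genes))

    networks = []
    for expr in (expr_cond1, expr_cond2):
        # Sparse Gaussian graphical model per condition via the graphical lasso.
        model = GraphicalLassoCV().fit(expr)
        prec = model.precision_
        adj = (np.abs(prec) > 1e-3) & ~np.eye(n_genes, dtype=bool)
        networks.append(adj)

    degree = [adj.sum(axis=0) for adj in networks]
    shared_hubs = np.where((degree[0] >= 4) & (degree[1] >= 4))[0]
    print("candidate common hub genes (indices):", shared_hubs)
    ```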

  17. Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud.

    PubMed

    Zia Ullah, Qazi; Hassan, Shahzad; Khan, Gul Muhammad

    2017-01-01

    An Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by knowing future usage demand from the current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve efficiency in terms of cost and energy consumption while keeping quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds utilization values into several buffers based on the type of resources and time span size. The buffers are read by an R-language-based statistical system. The buffered data are checked to determine whether they follow a Gaussian distribution. If the data follow a Gaussian distribution, an Autoregressive Integrated Moving Average (ARIMA) model is applied; otherwise an Autoregressive Neural Network (AR-NN) is applied. In the ARIMA process, a model is selected based on minimum Akaike Information Criterion (AIC) values. Similarly, in the AR-NN process, a network with the lowest Network Information Criterion (NIC) value is selected. We have evaluated our system with real traces of CPU utilization of an IaaS cloud of one hundred and twenty servers.

  18. Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud

    PubMed Central

    Hassan, Shahzad; Khan, Gul Muhammad

    2017-01-01

    An Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by knowing future usage demand from the current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve efficiency in terms of cost and energy consumption while keeping quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds utilization values into several buffers based on the type of resources and time span size. The buffers are read by an R-language-based statistical system. The buffered data are checked to determine whether they follow a Gaussian distribution. If the data follow a Gaussian distribution, an Autoregressive Integrated Moving Average (ARIMA) model is applied; otherwise an Autoregressive Neural Network (AR-NN) is applied. In the ARIMA process, a model is selected based on minimum Akaike Information Criterion (AIC) values. Similarly, in the AR-NN process, a network with the lowest Network Information Criterion (NIC) value is selected. We have evaluated our system with real traces of CPU utilization of an IaaS cloud of one hundred and twenty servers. PMID:28811819
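
    The branching logic described above can be sketched as: test the buffered series for normality, then either select an ARIMA order by minimum AIC or fall back to an autoregressive neural network. In the sketch a small MLP on lagged values stands in for the R AR-NN/NIC step, and the utilization trace is synthetic.

    ```python
    import numpy as np
    from scipy import stats
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(7)
    cpu = 50 + np.cumsum(rng.normal(0, 1, 300))        # synthetic CPU utilization

    _, p_value = stats.shapiro(np.diff(cpu))
    if p_value > 0.05:                                  # Gaussian enough -> ARIMA
        fits = [ARIMA(cpu, order=(p, 1, q)).fit()
                for p in range(3) for q in range(3)]
        best = min(fits, key=lambda m: m.aic)           # minimum-AIC model
        forecast = best.forecast(steps=5)
    else:                                               # non-Gaussian -> AR-NN stand-in
        lags = 5
        X = np.column_stack([cpu[i:len(cpu) - lags + i] for i in range(lags)])
        y = cpu[lags:]
        nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(X, y)
        forecast = nn.predict(cpu[-lags:].reshape(1, -1))

    print("next-step forecast:", np.round(np.atleast_1d(forecast)[:1], 2))
    ```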

  19. Computational Stimulation of the Basal Ganglia Neurons with Cost Effective Delayed Gaussian Waveforms

    PubMed Central

    Daneshzand, Mohammad; Faezipour, Miad; Barkana, Buket D.

    2017-01-01

    Deep brain stimulation (DBS) has shown compelling results in the desynchronization of basal ganglia neuronal activities and is thus used in treating the motor symptoms of Parkinson's disease (PD). Accurate definition of DBS waveform parameters could avert tissue or electrode damage, increase the neuronal activity and reduce energy cost, which prolongs the battery life and hence avoids device replacement surgeries. This study considers the use of a charge-balanced Gaussian waveform pattern as a method to disrupt the firing patterns of neuronal cell activity. A computational model was created to simulate ganglia cells and their interactions with thalamic neurons. From the model, we investigated the effects of modified DBS pulse shapes and proposed a delay period between the cathodic and anodic parts of the charge-balanced Gaussian waveform to desynchronize the firing patterns of the GPe and GPi cells. The proposed Gaussian waveform with delay outperformed the rectangular DBS waveforms used in in vivo experiments. The Gaussian Delay Gaussian (GDG) waveforms achieved a lower number of misses in eliciting action potentials while having a lower amplitude and a shorter delay compared with numerous different pulse shapes. The amount of energy consumed in the basal ganglia network due to GDG waveforms dropped by 22% in comparison with charge-balanced Gaussian waveforms without any delay between the cathodic and anodic parts and was also 60% lower than that of a rectangular charge-balanced pulse with a delay between the cathodic and anodic parts of the waveform. Furthermore, by defining a Synchronization Level metric, we observed that the GDG waveform was able to reduce the synchronization of GPi neurons more effectively than any other waveform. The promising results of GDG waveforms in terms of eliciting action potentials, desynchronizing the basal ganglia neurons and reducing energy consumption can potentially enhance the performance of DBS devices. PMID:28848417

  20. Computational Stimulation of the Basal Ganglia Neurons with Cost Effective Delayed Gaussian Waveforms.

    PubMed

    Daneshzand, Mohammad; Faezipour, Miad; Barkana, Buket D

    2017-01-01

    Deep brain stimulation (DBS) has shown compelling results in the desynchronization of basal ganglia neuronal activities and is thus used in treating the motor symptoms of Parkinson's disease (PD). Accurate definition of DBS waveform parameters could avert tissue or electrode damage, increase the neuronal activity and reduce energy cost, which prolongs the battery life and hence avoids device replacement surgeries. This study considers the use of a charge-balanced Gaussian waveform pattern as a method to disrupt the firing patterns of neuronal cell activity. A computational model was created to simulate ganglia cells and their interactions with thalamic neurons. From the model, we investigated the effects of modified DBS pulse shapes and proposed a delay period between the cathodic and anodic parts of the charge-balanced Gaussian waveform to desynchronize the firing patterns of the GPe and GPi cells. The proposed Gaussian waveform with delay outperformed the rectangular DBS waveforms used in in vivo experiments. The Gaussian Delay Gaussian (GDG) waveforms achieved a lower number of misses in eliciting action potentials while having a lower amplitude and a shorter delay compared with numerous different pulse shapes. The amount of energy consumed in the basal ganglia network due to GDG waveforms dropped by 22% in comparison with charge-balanced Gaussian waveforms without any delay between the cathodic and anodic parts and was also 60% lower than that of a rectangular charge-balanced pulse with a delay between the cathodic and anodic parts of the waveform. Furthermore, by defining a Synchronization Level metric, we observed that the GDG waveform was able to reduce the synchronization of GPi neurons more effectively than any other waveform. The promising results of GDG waveforms in terms of eliciting action potentials, desynchronizing the basal ganglia neurons and reducing energy consumption can potentially enhance the performance of DBS devices.
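
    A charge-balanced Gaussian pulse pair separated by an inter-phase delay, the basic GDG shape, can be generated in a few lines; the amplitudes and timings below are illustrative rather than the stimulation parameters used in the study.

    ```python
    import numpy as np

    def gdg_waveform(amp_uA=100.0, width_ms=0.1, delay_ms=0.2,
                     dt_ms=0.001, pad_ms=0.3):
        """Cathodic Gaussian lobe, inter-phase delay, then an anodic lobe of
        equal area, so the net injected charge is approximately zero."""
        t_lobe = np.arange(-3 * width_ms, 3 * width_ms, dt_ms)
        lobe = np.exp(-0.5 * (t_lobe / width_ms) ** 2)
        gap = np.zeros(int(delay_ms / dt_ms))
        pad = np.zeros(int(pad_ms / dt_ms))
        wave = np.concatenate([pad, -amp_uA * lobe, gap, amp_uA * lobe, pad])
        t = np.arange(len(wave)) * dt_ms
        return t, wave

    t, w = gdg_waveform()
    dt = t[1] - t[0]
    charge = w.sum() * dt               # ~0: cathodic and anodic phases balance
    energy = (w ** 2).sum() * dt        # proportional to energy into a resistive load
    print(f"net charge ~ {charge:.2e}, relative energy ~ {energy:.1f}")
    ```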

  1. Dynamic Socialized Gaussian Process Models for Human Behavior Prediction in a Health Social Network

    PubMed Central

    Shen, Yelong; Phan, NhatHai; Xiao, Xiao; Jin, Ruoming; Sun, Junfeng; Piniewski, Brigitte; Kil, David; Dou, Dejing

    2016-01-01

    Modeling and predicting human behaviors, such as the level and intensity of physical activity, is a key to preventing the cascade of obesity and helping spread healthy behaviors in a social network. In our conference paper, we have developed a social influence model, named Socialized Gaussian Process (SGP), for socialized human behavior modeling. Instead of explicitly modeling social influence as individuals' behaviors influenced by their friends' previous behaviors, SGP models the dynamic social correlation as the result of social influence. The SGP model naturally incorporates personal behavior factor and social correlation factor (i.e., the homophily principle: Friends tend to perform similar behaviors) into a unified model. And it models the social influence factor (i.e., an individual's behavior can be affected by his/her friends) implicitly in dynamic social correlation schemes. The detailed experimental evaluation has shown the SGP model achieves better prediction accuracy compared with most of baseline methods. However, a Socialized Random Forest model may perform better at the beginning compared with the SGP model. One of the main reasons is the dynamic social correlation function is purely based on the users' sequential behaviors without considering other physical activity-related features. To address this issue, we further propose a novel “multi-feature SGP model” (mfSGP) which improves the SGP model by using multiple physical activity-related features in the dynamic social correlation learning. Extensive experimental results illustrate that the mfSGP model clearly outperforms all other models in terms of prediction accuracy and running time. PMID:27746515

  2. Phase diagram of restricted Boltzmann machines and generalized Hopfield networks with arbitrary priors.

    PubMed

    Barra, Adriano; Genovese, Giuseppe; Sollich, Peter; Tantari, Daniele

    2018-02-01

    Restricted Boltzmann machines are described by the Gibbs measure of a bipartite spin glass, which in turn can be seen as a generalized Hopfield network. This equivalence allows us to characterize the state of these systems in terms of their retrieval capabilities, both at low and high load, of pure states. We study the paramagnetic-spin glass and the spin glass-retrieval phase transitions, as the pattern (i.e., weight) distribution and spin (i.e., unit) priors vary smoothly from Gaussian real variables to Boolean discrete variables. Our analysis shows that the presence of a retrieval phase is robust and not peculiar to the standard Hopfield model with Boolean patterns. The retrieval region becomes larger when the pattern entries and retrieval units get more peaked and, conversely, when the hidden units acquire a broader prior and therefore have a stronger response to high fields. Moreover, at low load retrieval always exists below some critical temperature, for every pattern distribution ranging from the Boolean to the Gaussian case.

  3. Characterizing Mafic, Clay, and Carbonate Components found in MRO/CRISM Images in Libya Montes, Mars, using Advances in Automated Gaussian Modeling of Spectral Features

    NASA Astrophysics Data System (ADS)

    Makarewicz, H. D.; Parente, M.; Perry, K. A.; McKeown, N. K.; Bishop, J. L.

    2009-12-01

    Aqueous processes have been inferred at the Libya Montes rim/terrace complex of the southern Isidis Basin due to the dense concentration of valley networks [1]. Coordinated CRISM-HiRISE investigations of this region characterized discrete units of ancient phyllosilicate deposits covered by an olivine-rich material and a pyroxene caprock [2]. CRISM mapping data show minor phyllosilicate abundances widespread throughout the Southern Highlands [3], which are dominated by low-Ca pyroxene bearing material [4,5]. The carbonate magnesite has also been located throughout this area [6] and at Libya Montes [7]. Our current study involves detailed characterization of the minerals present at Libya Montes through implementation of improved automated Gaussian modeling methods. We have developed an automated procedure for modeling spectral features using Gaussians that has been successfully applied to laboratory studies and hyperspectral analyses of Mars [8,9,10,11]. Several studies are being conducted to improve and validate these models. These include a comparison of initialization methods, continuum methods, optimization algorithms, and modeled functions. The modeled functions compared include Gaussians, saturated Gaussians, and Lorentzians. This algorithm and the modeling studies are currently being applied towards analyses of CRISM hyperspectral images of Libya Montes and laboratory spectra of mineral mixtures. Specifically, olivine, pyroxene, phyllosilicate, and carbonate deposits are being modeled and classified by composition in CRISM images. References [1]Crumpler, L. S., and K. L. Tanaka (2003) J. Geophys. Res., 108, DOI: 8010.1029/2002JE002040. [2]Bishop, J. L., et al. (2007) 7th Int'l Mars Conf. [3]Mustard, J. F., et al. (2008) Nature, 454, 07305. [4]Bibring, J.-P., et al. (2005) Science, 307,1576. [5]Mustard, J. F., et al.(2005) Science, 307, 1594. [6]Ehlmann, B. L., et al. (2008) Science, 322, 1828. [7]Perry, K., et al. (2009) AGU Fall Mtng. [8]Makarewicz, H. D., et al. (2009) IEEE Whispers Wkshp. [9]Makarewicz, H. D., et al. (2008) AGU Fall Mtng. [10]Makarewicz, H. D., et al. (2009) LPSC. [11]Makarewicz, H. D., et al. (2009) Lunar Sci Forum.
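
    The core operation, fitting a small sum of Gaussian absorptions to a continuum-removed spectrum, is sketched below with SciPy's curve_fit; the band depths, centers and widths are invented for illustration, and the saturated-Gaussian and Lorentzian variants mentioned above are not included.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian_bands(wl, *params):
        """Continuum (flat, = 1) minus a sum of Gaussian absorption bands,
        each described by (depth, center, width)."""
        model = np.ones_like(wl)
        for i in range(0, len(params), 3):
            depth, center, width = params[i:i + 3]
            model -= depth * np.exp(-0.5 * ((wl - center) / width) ** 2)
        return model

    rng = np.random.default_rng(8)
    wl = np.linspace(1.0, 2.6, 400)                    # wavelength, micrometers
    truth = (0.15, 1.9, 0.08, 0.10, 2.3, 0.05)         # two absorption bands
    spectrum = gaussian_bands(wl, *truth) + rng.normal(0, 0.003, wl.size)

    p0 = (0.1, 1.85, 0.1, 0.1, 2.35, 0.1)              # rough initial guesses
    popt, _ = curve_fit(gaussian_bands, wl, spectrum, p0=p0)
    print("fitted (depth, center, width) x 2:", np.round(popt, 3))
    ```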

  4. Exploring the roles of cannot-link constraint in community detection via Multi-variance Mixed Gaussian Generative Model.

    PubMed

    Yang, Liang; Ge, Meng; Jin, Di; He, Dongxiao; Fu, Huazhu; Wang, Jing; Cao, Xiaochun

    2017-01-01

    Due to the demand for performance improvement and the existence of prior information, semi-supervised community detection with pairwise constraints becomes a hot topic. Most existing methods have been successfully encoding the must-link constraints, but neglect the opposite ones, i.e., the cannot-link constraints, which can force the exclusion between nodes. In this paper, we are interested in understanding the role of cannot-link constraints and effectively encoding pairwise constraints. Towards these goals, we define an integral generative process jointly considering the network topology, must-link and cannot-link constraints. We propose to characterize this process as a Multi-variance Mixed Gaussian Generative (MMGG) Model to address diverse degrees of confidences that exist in network topology and pairwise constraints and formulate it as a weighted nonnegative matrix factorization problem. The experiments on artificial and real-world networks not only illustrate the superiority of our proposed MMGG, but also, most importantly, reveal the roles of pairwise constraints. That is, though the must-link is more important than cannot-link when either of them is available, both must-link and cannot-link are equally important when both of them are available. To the best of our knowledge, this is the first work on discovering and exploring the importance of cannot-link constraints in semi-supervised community detection.

  5. Exploring the roles of cannot-link constraint in community detection via Multi-variance Mixed Gaussian Generative Model

    PubMed Central

    Ge, Meng; Jin, Di; He, Dongxiao; Fu, Huazhu; Wang, Jing; Cao, Xiaochun

    2017-01-01

    Due to the demand for performance improvement and the existence of prior information, semi-supervised community detection with pairwise constraints becomes a hot topic. Most existing methods have been successfully encoding the must-link constraints, but neglect the opposite ones, i.e., the cannot-link constraints, which can force the exclusion between nodes. In this paper, we are interested in understanding the role of cannot-link constraints and effectively encoding pairwise constraints. Towards these goals, we define an integral generative process jointly considering the network topology, must-link and cannot-link constraints. We propose to characterize this process as a Multi-variance Mixed Gaussian Generative (MMGG) Model to address diverse degrees of confidences that exist in network topology and pairwise constraints and formulate it as a weighted nonnegative matrix factorization problem. The experiments on artificial and real-world networks not only illustrate the superiority of our proposed MMGG, but also, most importantly, reveal the roles of pairwise constraints. That is, though the must-link is more important than cannot-link when either of them is available, both must-link and cannot-link are equally important when both of them are available. To the best of our knowledge, this is the first work on discovering and exploring the importance of cannot-link constraints in semi-supervised community detection. PMID:28678864

  6. A path integral approach to the Hodgkin-Huxley model

    NASA Astrophysics Data System (ADS)

    Baravalle, Roman; Rosso, Osvaldo A.; Montani, Fernando

    2017-11-01

    To understand how single neurons process sensory information, it is necessary to develop suitable stochastic models to describe the response variability of the recorded spike trains. Spikes in a given neuron are produced by the synergistic action of sodium and potassium of the voltage-dependent channels that open or close the gates. Hodgkin and Huxley (HH) equations describe the ionic mechanisms underlying the initiation and propagation of action potentials, through a set of nonlinear ordinary differential equations that approximate the electrical characteristics of the excitable cell. Path integral provides an adequate approach to compute quantities such as transition probabilities, and any stochastic system can be expressed in terms of this methodology. We use the technique of path integrals to determine the analytical solution driven by a non-Gaussian colored noise when considering the HH equations as a stochastic system. The different neuronal dynamics are investigated by estimating the path integral solutions driven by a non-Gaussian colored noise q. More specifically we take into account the correlational structures of the complex neuronal signals not just by estimating the transition probability associated to the Gaussian approach of the stochastic HH equations, but instead considering much more subtle processes accounting for the non-Gaussian noise that could be induced by the surrounding neural network and by feedforward correlations. This allows us to investigate the underlying dynamics of the neural system when different scenarios of noise correlations are considered.

  7. Why noise is useful in functional and neural mechanisms of interval timing?

    PubMed Central

    2013-01-01

    Background The ability to estimate durations in the seconds-to-minutes range - interval timing - is essential for survival and adaptation, and its impairment leads to severe cognitive and/or motor dysfunctions. The response rate near a memorized duration has a Gaussian shape centered on the to-be-timed interval (criterion time). The width of the Gaussian-like distribution of responses increases linearly with the criterion time, i.e., interval timing obeys the scalar property. Results We presented analytical and numerical results based on the striatal beat frequency (SBF) model showing that parameter variability (noise) mimics behavioral data. A key functional block of the SBF model is the set of oscillators that provide the time base for the entire timing network. The implementation of the oscillators block as simplified phase (cosine) oscillators has the additional advantage that it is analytically tractable. We also checked numerically that the scalar property emerges in the presence of memory variability by using biophysically realistic Morris-Lecar oscillators. First, we predicted analytically and tested numerically that in a noise-free SBF model the output function could be approximated by a Gaussian. However, in a noise-free SBF model the width of the Gaussian envelope is independent of the criterion time, which violates the scalar property. We showed analytically and verified numerically that small fluctuations of the memorized criterion time lead to the scalar property of interval timing. Conclusions Noise is ubiquitous in the form of small fluctuations of intrinsic frequencies of the neural oscillators, the errors in recording/retrieving stored information related to the criterion time, fluctuation in neurotransmitters’ concentration, etc. Our model suggests that the biological noise plays an essential functional role in the SBF interval timing. PMID:23924391
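
    A toy version of the SBF mechanism with cosine phase oscillators is sketched below: each trial's response is taken as the time of maximal coincidence between the current oscillator phases and the phases stored at a memorized criterion that fluctuates by a fixed fraction of its value, so the spread of response times grows in proportion to the criterion (roughly constant coefficient of variation), in line with the scalar-property argument above. The frequency bank and jitter level are illustrative, not the paper's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    freqs = rng.uniform(0.5, 1.5, 100)          # Hz, oscillator frequency bank
    t = np.linspace(0, 60, 3000)                # seconds

    def sbf_response(criterion):
        """Coincidence between current oscillator phases and those stored at
        the memorized criterion time; it peaks when t matches the criterion."""
        return np.cos(2 * np.pi * np.outer(freqs, t - criterion)).mean(axis=0)

    for T0 in (10.0, 20.0, 40.0):
        # memory noise: the stored criterion fluctuates by a fixed fraction of T0
        peaks = np.array([t[np.argmax(sbf_response(rng.normal(T0, 0.1 * T0)))]
                          for _ in range(150)])
        print(f"criterion {T0:4.1f} s: response spread (SD) = "
              f"{peaks.std():.2f} s, CV = {peaks.std() / T0:.2f}")
    ```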

  8. Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    1999-01-01

    A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, detected failures gracefully degrade system performance, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Disadvantages of the approach are the need for a fully connected communications network, the total operations performed over all the nodes are greater than for a centralized controller, and the approach is formulated for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit), and by appropriate choice of coordinates and measurements is formulated as a linear time-invariant system.
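
    The centralized LQG building blocks, an LQR state-feedback gain and a steady-state Kalman filter gain obtained from two algebraic Riccati equations, can be computed as below for a toy double-integrator stand-in of the relative orbit dynamics; the decentralized information-sharing layer described above is not sketched, and all matrices are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy position/velocity dynamics
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])               # position measurement only

    Q, R = np.diag([10.0, 1.0]), np.array([[1.0]])       # control weights
    W, V = np.diag([1e-4, 1e-3]), np.array([[1e-2]])     # process/measurement noise

    P = solve_continuous_are(A, B, Q, R)
    K_lqr = np.linalg.solve(R, B.T @ P)                   # u = -K_lqr @ x_hat
    S = solve_continuous_are(A.T, C.T, W, V)              # dual Riccati equation
    L_kf = S @ C.T @ np.linalg.inv(V)                     # Kalman filter gain

    print("LQR gain:", np.round(K_lqr, 3))
    print("Kalman gain:", np.round(L_kf.ravel(), 3))
    ```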

  9. Structural features that predict real-value fluctuations of globular proteins

    PubMed Central

    Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke

    2012-01-01

    It is crucial to consider dynamics for understanding the biological function of proteins. We used a large number of molecular dynamics trajectories of non-homologous proteins as references and examined static structural features of proteins that are most relevant to fluctuations. We examined correlation of individual structural features with fluctuations and further investigated effective combinations of features for predicting the real value of residue fluctuations using support vector regression. It was found that some structural features have higher correlation than crystallographic B-factors with fluctuations observed in molecular dynamics trajectories. Moreover, support vector regression that uses combinations of static structural features showed accurate prediction of fluctuations with an average Pearson’s correlation coefficient of 0.669 and a root mean square error of 1.04 Å. This correlation coefficient is higher than the one observed for the prediction by the Gaussian network model. An advantage of the developed method over the Gaussian network models is that the former predicts the real value of the fluctuation. The results help improve our understanding of relationships between protein structure and fluctuation. Furthermore, the developed method provides a convenient, practical way to predict fluctuations of proteins using easily computed static structural features of proteins. PMID:22328193
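
    A bare-bones version of the regression step, support vector regression from static structural features to per-residue fluctuations evaluated with Pearson's r, is sketched below on random stand-in features rather than real protein descriptors.

    ```python
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(10)
    n_residues, n_features = 1000, 8          # e.g. contact number, depth, SASA, ...
    X = rng.normal(size=(n_residues, n_features))
    fluct = np.abs(0.8 * X[:, 0] - 0.5 * X[:, 3] + 0.3 * rng.normal(size=n_residues))

    X_tr, X_te, y_tr, y_te = train_test_split(X, fluct, test_size=0.25,
                                              random_state=0)
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)

    r, _ = pearsonr(y_te, pred)
    rmse = float(np.sqrt(np.mean((y_te - pred) ** 2)))
    print(f"Pearson r = {r:.2f}, RMSE = {rmse:.2f} (arbitrary units)")
    ```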

  10. Surveying traffic congestion based on the concept of community structure of complex networks

    NASA Astrophysics Data System (ADS)

    Ma, Lili; Zhang, Zhanli; Li, Meng

    2016-07-01

    In this paper, taking the traffic of Beijing as an example, we study city traffic states, especially traffic congestion, based on the concept of network community structure. Concretely, using the floating car data (FCD) information of vehicles gained from the intelligent transport system (ITS) of the city, we construct a new, time-varying traffic network model with floating cars as network nodes. We find that this traffic network has Gaussian degree distributions at different time points. Furthermore, compared with free traffic situations, our simulations show that the traffic network generally has more obvious community structures with larger values of network fitness for congested traffic situations, and through the GPSspg web page, we show that all of our results are consistent with reality. This indicates that network community structure is a viable way of investigating city traffic congestion problems.

  11. Probing the cross-effect of strains in non-linear elasticity of nearly regular polymer networks by pure shear deformation.

    PubMed

    Katashima, Takuya; Urayama, Kenji; Chung, Ung-il; Sakai, Takamasa

    2015-05-07

    The pure shear deformation of Tetra-polyethylene glycol gels reveals the presence of an explicit cross-effect of strains in the strain energy density function even for polymer networks with a nearly regular structure containing no appreciable amount of structural defects such as trapped entanglements. This result is in contrast to the expectation of the classical Gaussian network model (neo-Hookean model), i.e., the vanishing of the cross effect in regular networks with no trapped entanglements. The results show that (1) the cross effect of strains does not depend on the network-strand length; (2) the cross effect is not affected by the presence of non-network strands; (3) the cross effect is proportional to the network polymer concentration, including both elastically effective and ineffective strands; and (4) in real polymer networks, the cross effect is expected to vanish only in the limit of zero network concentration. These features indicate that real polymer networks with regular network structures have an explicit cross-effect of strains, which originates from some interaction between network strands (other than the entanglement effect) such as nematic, topological, or excluded-volume interactions.

  12. Application of multivariate Gaussian detection theory to known non-Gaussian probability density functions

    NASA Astrophysics Data System (ADS)

    Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.

    1995-06-01

    A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
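
    The pipeline of a per-band Box-Cox transform followed by a Gaussian (Mahalanobis-distance) detection statistic computed from the transformed background can be sketched as below; the clutter statistics are synthetic, and the joint multivariate estimation described in the paper is not reproduced.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    background = rng.gamma(shape=2.0, scale=1.0, size=(5000, 3))   # skewed bands

    # Per-band Box-Cox transform of the (positive-valued) background clutter.
    transformed, lambdas = [], []
    for band in background.T:
        z, lam = stats.boxcox(band)
        transformed.append(z)
        lambdas.append(lam)
    Z = np.column_stack(transformed)

    mu, cov = Z.mean(axis=0), np.cov(Z, rowvar=False)
    cov_inv = np.linalg.inv(cov)

    def detection_statistic(pixel):
        """Mahalanobis distance of a pixel after applying the per-band Box-Cox
        parameters learned from the background."""
        zp = np.array([stats.boxcox(np.array([p]), lmbda=lam)[0]
                       for p, lam in zip(pixel, lambdas)])
        d = zp - mu
        return float(d @ cov_inv @ d)

    print("background-like pixel:", round(detection_statistic([2.0, 2.1, 1.9]), 2))
    print("anomalous pixel:      ", round(detection_statistic([9.0, 8.5, 9.2]), 2))
    ```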

  13. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas

    Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process adsorption energies based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.

  14. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    DOE PAGES

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; ...

    2017-03-06

    Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process adsorption energies based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.

  15. Instantaneous global spatial interaction? Exploring the Gaussian inequality, distance and Internet pings in a global network

    NASA Astrophysics Data System (ADS)

    Baker, R. G. V.

    2005-12-01

    The Internet has been publicly portrayed as a new technological horizon yielding instantaneous interaction to a point where geography no longer matters. This research aims to dispel this impression by applying a dynamic form of trip modelling to investigate pings in a global computer network compiled by the Stanford Linear Accelerator Centre (SLAC) from 1998 to 2004. Internet flows have been predicted to have the same mathematical operators as trips to a supermarket, since they are both periodic and constrained by a distance metric. Both actual and virtual trips are part of a spectrum of origin-destination pairs in the time-space convergence of trip time-lines. Internet interaction is very near to the convergence of these time-lines (at a very small time scale in milliseconds, but with interactions over thousands of kilometres). There is a lag effect and this is formalised by the derivation of Gaussian and gravity inequalities between the time taken (Δt) and the partitioning of distance (Δx). This inequality seems to be robust for a regression of Δt to Δx in the SLAC data set for each year (1998 to 2004). There is a constant ‘forbidden zone’ in the interaction, underpinned by the fact that pings do not travel faster than the speed of light. Superimposed upon this zone is the network capacity where a linear regression of Δt to Δx is a proxy summarising global Internet connectivity for that year. The results suggest that there has been a substantial improvement in connectivity over the period with R^2 increasing steadily from 0.39 to 0.65 from less Gaussian spreading of the ping latencies. Further, the regression line shifts towards the inequality boundary from 1998 to 2004, where the increased slope shows a greater proportional rise in local connectivity over global connectivity. A conclusion is that national geography still does matter in spatial interaction modelling of the Internet.

  16. Effect of threshold disorder on the quorum percolation model

    NASA Astrophysics Data System (ADS)

    Monceau, Pascal; Renault, Renaud; Métens, Stéphane; Bottani, Samuel

    2016-07-01

    We study the modifications induced in the behavior of the quorum percolation model on neural networks with Gaussian in-degree by taking into account an uncorrelated Gaussian threshold variability. We derive a mean-field approach and show its relevance by carrying out explicit Monte Carlo simulations. It turns out that such a disorder shifts the position of the percolation transition, impacts the size of the giant cluster, and can even destroy the transition. Moreover, we highlight the occurrence of disorder-independent fixed points above the quorum critical value. The mean-field approach enables us to interpret these effects in terms of activation probability. A finite-size analysis enables us to show that the order parameter is weakly self-averaging with an exponent independent of the threshold disorder. Last, we show that the effects of the threshold and connectivity disorders cannot be easily discriminated from the measured averaged physical quantities.

  17. F-MAP: A Bayesian approach to infer the gene regulatory network using external hints

    PubMed Central

    Shahdoust, Maryam; Mahjub, Hossein; Sadeghi, Mehdi

    2017-01-01

    The common topological features of gene regulatory networks in related species suggest that the network of one species can be reconstructed by using additional information from the gene expression profiles of related species. We present an algorithm to reconstruct the gene regulatory network, named F-MAP, which applies knowledge about gene interactions from related species. Our algorithm sets up a Bayesian framework to estimate the precision matrix of one species' microarray gene expression dataset and to infer the Gaussian graphical model of the network. The conjugate Wishart prior is used, and the information from related species is applied to estimate the hyperparameters of the prior distribution by using factor analysis. Applying the proposed algorithm to six related species of Drosophila shows that the precision of the reconstructed networks is improved considerably compared with that of networks constructed by other Bayesian approaches. PMID:28938012
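
    The conjugate-Wishart update at the heart of such a Bayesian precision-matrix estimate is short enough to sketch; here the prior hyperparameters are simply a scaled identity rather than the factor-analysis-informed values F-MAP derives from related species, and the expression data are synthetic.

    ```python
    import numpy as np

    # Conjugate Wishart update for the precision matrix of a zero-mean Gaussian
    # graphical model: prior W(V0, nu0), scatter S = X^T X from n samples,
    # posterior W((V0^-1 + S)^-1, nu0 + n).
    rng = np.random.default_rng(12)
    n, p = 150, 6
    true_prec = np.eye(p) + 0.4 * np.diag(np.ones(p - 1), 1) \
                          + 0.4 * np.diag(np.ones(p - 1), -1)
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(true_prec), size=n)
    X -= X.mean(axis=0)                      # treat the mean as known (zero)

    nu0, V0 = p + 2, np.eye(p) / (p + 2)     # weak, identity-based prior
    S = X.T @ X                              # scatter matrix
    V_post = np.linalg.inv(np.linalg.inv(V0) + S)
    nu_post = nu0 + n

    precision_hat = nu_post * V_post         # posterior mean of the precision
    partial_corr = -precision_hat / np.sqrt(np.outer(np.diag(precision_hat),
                                                     np.diag(precision_hat)))
    np.fill_diagonal(partial_corr, 1.0)
    print(np.round(partial_corr, 2))         # edges ~ non-negligible entries
    ```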

  18. An Equivalent Fracture Modeling Method

    NASA Astrophysics Data System (ADS)

    Li, Shaohua; Zhang, Shujuan; Yu, Gaoming; Xu, Aiyun

    2017-12-01

    A 3D fracture network model is built from discrete fracture surfaces, which are simulated based on fracture length, dip, aperture, height and so on. The area of interest in the Wumishan Formation of the Renqiu buried-hill reservoir is about 57 square kilometers, and the target strata are more than 2000 meters thick. Combined with the high fracture density, simulating and upscaling the discrete fracture network model of the Wumishan Formation is computationally intensive. In order to solve this problem, an equivalent fracture modeling method is proposed. First, taking the fracture interpretation data obtained from imaging logging and conventional logging as the basic data, a reservoir-level model is established; then, under the constraint of the reservoir-level model and with the fault distance analysis model as the secondary variable, a fracture density model is built by the Sequential Gaussian Simulation method. Fracture width, height and length are increased while fracture density is decreased, in order to keep similar porosity and permeability after upscaling the discrete fracture network model. In this way, the fracture model of the whole area of interest can be built within an acceptable time.

  19. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    PubMed

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed the state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide the predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data from the US stock market from Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving prediction performance.

  20. Predicting Market Impact Costs Using Nonparametric Machine Learning Models

    PubMed Central

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed the state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide the predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data from the US stock market from Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving prediction performance. PMID:26926235
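
    One of the nonparametric models named in the study above is Gaussian process regression. The snippet below is a minimal, hypothetical sketch using scikit-learn on synthetic stand-ins for the three input variables; the actual study used proprietary Bloomberg transaction data and also compared neural-network and support-vector models.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(1)
        X = rng.uniform(size=(200, 3))                    # e.g. normalized size, volatility, spread
        y = 0.5 * X[:, 0] ** 0.6 + 0.1 * rng.standard_normal(200)   # synthetic impact cost

        # RBF kernel for the smooth trend plus a white-noise term for observation noise
        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gp.fit(X[:150], y[:150])
        pred, std = gp.predict(X[150:], return_std=True)  # predictive mean and uncertainty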

  1. Non-Gaussian analysis of diffusion weighted imaging in head and neck at 3T: a pilot study in patients with nasopharyngeal carcinoma.

    PubMed

    Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S P; Bhatia, Kunwar S; Wang, Yi-Xiang J; Ahuja, Anil T; King, Ann D

    2014-01-01

    To technically investigate the non-Gaussian diffusion of head and neck diffusion weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and the statistical model, in patients with nasopharyngeal carcinoma (NPC). After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm². DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models on primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Diffusion in NPC exhibited non-Gaussian behavior at the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC both in magnitude and histogram distribution. Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed by using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential to be used as a complementary tool for NPC characterization.

  2. Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy

    NASA Astrophysics Data System (ADS)

    Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris

    2018-04-01

    We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver operating characteristics.

  3. Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy.

    PubMed

    Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris

    2018-04-06

    We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver operating characteristics.

  4. Airborne Detection and Tracking of Geologic Leakage Sites

    NASA Astrophysics Data System (ADS)

    Jacob, Jamey; Allamraju, Rakshit; Axelrod, Allan; Brown, Calvin; Chowdhary, Girish; Mitchell, Taylor

    2014-11-01

    Safe storage of CO2 to reduce greenhouse gas emissions without adversely affecting energy use or hindering economic growth requires development of monitoring technology that is capable of validating storage permanence while ensuring the integrity of sequestration operations. Soil gas monitoring has difficulty accurately distinguishing gas flux signals related to leakage from those associated with meteorologically driven changes of soil moisture and temperature. Integrated ground and airborne monitoring systems are being deployed that are capable of directly detecting CO2 concentrations in storage sites. Two complementary approaches to detecting leaks in carbon sequestration fields are presented. The first approach focuses on reducing the requisite network communication for fusing individual Gaussian Process (GP) CO2 sensing models into a global GP CO2 model. The GP fusion approach learns how to optimally allocate the static and mobile sensors. The second approach leverages a hierarchical GP-Sigmoidal Gaussian Cox Process for airborne predictive mission planning to optimally reduce the entropy of the global CO2 model. Results from the approaches will be presented.

  5. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks

    PubMed Central

    Vakanski, A; Ferguson, JM; Lee, S

    2016-01-01

    Objective The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient’s exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient’s physician with recommendations for improvement. Methods The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. Results The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject’s performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs the recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, by exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons. PMID:28111643

  6. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks.

    PubMed

    Vakanski, A; Ferguson, J M; Lee, S

    2016-12-01

    The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient's physician with recommendations for improvement. The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs the recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, by exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons.
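
    The study above trains a mixture density subnet by maximizing a Gaussian mixture likelihood over motion frames. The function below is a minimal sketch of that likelihood (not the authors' network); pi, mu and sigma stand in for the per-frame outputs of the mixture density subnet, and y for an observed motion vector.

        import numpy as np

        def mixture_log_likelihood(y, pi, mu, sigma):
            # log p(y) under sum_k pi_k * N(y | mu_k, sigma_k^2 I)
            # y: (d,) observation, pi: (K,) weights, mu: (K, d) means, sigma: (K,) std devs
            d = y.shape[0]
            sq = np.sum((y - mu) ** 2, axis=1)                   # squared distances to each mean
            log_comp = (-0.5 * sq / sigma ** 2
                        - d * np.log(sigma)
                        - 0.5 * d * np.log(2 * np.pi))           # log N(y | mu_k, sigma_k^2 I)
            return np.logaddexp.reduce(np.log(pi) + log_comp)    # log-sum-exp over components

        # toy usage: two isotropic components in 3-D
        y = np.array([0.1, -0.2, 0.05])
        pi = np.array([0.7, 0.3])
        mu = np.zeros((2, 3)); mu[1] += 1.0
        print(mixture_log_likelihood(y, pi, mu, sigma=np.array([0.5, 1.0])))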

  7. Particle Filtering for Model-Based Anomaly Detection in Sensor Networks

    NASA Technical Reports Server (NTRS)

    Solano, Wanda; Banerjee, Bikramjit; Kraemer, Landon

    2012-01-01

    A novel technique has been developed for anomaly detection of rocket engine test stand (RETS) data. The objective was to develop a system that postprocesses a csv file containing the sensor readings and activities (time-series) from a rocket engine test, and detects any anomalies that might have occurred during the test. The output consists of the names of the sensors that show anomalous behavior, and the start and end time of each anomaly. In order to reduce the involvement of domain experts significantly, several data-driven approaches have been proposed where models are automatically acquired from the data, thus bypassing the cost and effort of building system models. Many supervised learning methods can efficiently learn operational and fault models, given large amounts of both nominal and fault data. However, for domains such as RETS data, the amount of anomalous data that is actually available is relatively small, making most supervised learning methods rather ineffective; in general they have met with limited success in anomaly detection. The fundamental problem with existing approaches is that they assume that the data are iid, i.e., independent and identically distributed, which is violated in typical RETS data. None of these techniques naturally exploit the temporal information inherent in time series data from the sensor networks. There are correlations among the sensor readings, not only at the same time, but also across time. However, these approaches have not explicitly identified and exploited such correlations. Given these limitations of model-free methods, there has been renewed interest in model-based methods, specifically graphical methods that explicitly reason temporally. The Gaussian Mixture Model (GMM) in a Linear Dynamic System approach assumes that the multi-dimensional test data are a mixture of multi-variate Gaussians, and fits a given number of Gaussian clusters with the help of the well-known Expectation Maximization (EM) algorithm. The parameters thus learned are used for calculating the joint distribution of the observations. However, this GMM assumption is essentially an approximation and signals the potential viability of non-parametric density estimators. This is the key idea underlying the new approach.
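
    As a rough sketch of the GMM-plus-EM step described above (using scikit-learn and synthetic data rather than the RETS pipeline), a mixture is fitted to nominal multi-sensor vectors and time steps with unusually low joint likelihood are flagged:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(2)
        nominal = rng.standard_normal((1000, 4))             # stand-in for nominal sensor vectors
        test = np.vstack([rng.standard_normal((200, 4)),
                          rng.standard_normal((5, 4)) + 6])  # a few injected anomalies

        # fit the mixture with EM, then score the joint log-likelihood of new readings
        gmm = GaussianMixture(n_components=3, covariance_type='full').fit(nominal)
        log_lik = gmm.score_samples(test)                    # per-sample log p(x)
        threshold = np.quantile(gmm.score_samples(nominal), 0.001)
        anomalous = np.where(log_lik < threshold)[0]         # indices of flagged time steps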

  8. Numerical modeling of macrodispersion in heterogeneous media: a comparison of multi-Gaussian and non-multi-Gaussian models

    NASA Astrophysics Data System (ADS)

    Wen, Xian-Huan; Gómez-Hernández, J. Jaime

    1998-03-01

    The macrodispersion of an inert solute in a 2-D heterogeneous porous media is estimated numerically in a series of fields of varying heterogeneity. Four different random function (RF) models are used to model log-transmissivity (ln T) spatial variability, and for each of these models, ln T variance is varied from 0.1 to 2.0. The four RF models share the same univariate Gaussian histogram and the same isotropic covariance, but differ from one another in terms of the spatial connectivity patterns at extreme transmissivity values. More specifically, model A is a multivariate Gaussian model for which, by definition, extreme values (both high and low) are spatially uncorrelated. The other three models are non-multi-Gaussian: model B with high connectivity of high extreme values, model C with high connectivity of low extreme values, and model D with high connectivities of both high and low extreme values. Residence time distributions (RTDs) and macrodispersivities (longitudinal and transverse) are computed on ln T fields corresponding to the different RF models, for two different flow directions and at several scales. They are compared with each other, as well as with predicted values based on first-order analytical results. Numerically derived RTDs and macrodispersivities for the multi-Gaussian model are in good agreement with analytically derived values using first-order theories for log-transmissivity variance up to 2.0. The results from the non-multi-Gaussian models differ from each other and deviate largely from the multi-Gaussian results even when ln T variance is small. RTDs in non-multi-Gaussian realizations with high connectivity at high extreme values display earlier breakthrough than in multi-Gaussian realizations, whereas later breakthrough and longer tails are observed for RTDs from non-multi-Gaussian realizations with high connectivity at low extreme values. Longitudinal macrodispersivities in the non-multi-Gaussian realizations are, in general, larger than in the multi-Gaussian ones, while transverse macrodispersivities in the non-multi-Gaussian realizations can be larger or smaller than in the multi-Gaussian ones depending on the type of connectivity at extreme values. Comparing the numerical results for different flow directions, it is confirmed that macrodispersivities in multi-Gaussian realizations with isotropic spatial correlation are not flow direction-dependent. Macrodispersivities in the non-multi-Gaussian realizations, however, are flow direction-dependent although the covariance of ln T is isotropic (the same for all four models). It is important to account for high connectivities at extreme transmissivity values, a likely situation in some geological formations. Some of the discrepancies between first-order-based analytical results and field-scale tracer test data may be due to the existence of highly connected paths of extreme conductivity values.

  9. Non-Gaussian Analysis of Diffusion Weighted Imaging in Head and Neck at 3T: A Pilot Study in Patients with Nasopharyngeal Carcinoma

    PubMed Central

    Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S. P.; Bhatia, Kunwar S.; Wang, Yi-Xiang J.; Ahuja, Anil T.; King, Ann D.

    2014-01-01

    Purpose To technically investigate the non-Gaussian diffusion of head and neck diffusion weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and the statistical model, in patients with nasopharyngeal carcinoma (NPC). Materials and Methods After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm². DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models on primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Results Diffusion in NPC exhibited non-Gaussian behavior at the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC both in magnitude and histogram distribution. Conclusion Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed by using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential to be used as a complementary tool for NPC characterization. PMID:24466318

  10. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.

  11. Stress-induced electric current fluctuations in rocks: a superstatistical model

    NASA Astrophysics Data System (ADS)

    Cartwright-Taylor, Alexis; Vallianatos, Filippos; Sammonds, Peter

    2017-04-01

    We recorded spontaneous electric current flow in non-piezoelectric Carrara marble samples during triaxial deformation. Mechanical data, ultrasonic velocities and acoustic emissions were acquired simultaneously with electric current to constrain the relationship between electric current flow, differential stress and damage. Under strain-controlled loading, spontaneous electric current signals (nA) were generated and sustained under all conditions tested. In dry samples, a detectable electric current arises only during dilatancy and the overall signal is correlated with the damage induced by microcracking. Our results show that fracture plays a key role in the generation of electric currents in deforming rocks (Cartwright-Taylor et al., in prep). We also analysed the high-frequency fluctuations of these electric current signals and found that they are not normally distributed - they exhibit power-law tails (Cartwright-Taylor et al., 2014). We modelled these distributions with q-Gaussian statistics, derived by maximising the Tsallis entropy. This definition of entropy is particularly applicable to systems which are strongly correlated and far from equilibrium. Good agreement, at all experimental conditions, between the distributions of electric current fluctuations and the q-Gaussian function with q-values far from one, illustrates the highly correlated, fractal nature of the electric source network within the samples and provides further evidence that the source of the electric signals is the developing fractal network of cracks. It has been shown (Beck, 2001) that q-Gaussian distributions can arise from the superposition of local relaxations in the presence of a slowly varying driving force, thus providing a dynamic reason for the appearance of Tsallis statistics in systems with a fluctuating energy dissipation rate. So, the probability distribution for a dynamic variable, u under some external slow forcing, β, can be obtained as a superposition of temporary local equilibrium processes whose variance fluctuates over time. The appearance of q-Gaussian statistics is caused by the fluctuating β parameter, which effectively models the fluctuating energy dissipation rate in the system. This concept is known as superstatistics and is physically relevant for modelling driven non-equilibrium systems where the environmental conditions fluctuate on a large scale. The idea is that the environmental variable, such as temperature or pressure, changes so slowly that a rapidly fluctuating variable within that environment has time to relax back to equilibrium between each change in the environment. The application of superstatistical techniques to our experimental electric current fluctuations shows that they can indeed be described, to good approximation, by the superposition of local Gaussian processes with fluctuating variance. We conclude, then, that the measured electric current fluctuates in response to intermittent energy dissipation and is driven to varying temporary local equilibria during deformation by the variations in stress intensity. The advantage of this technique is that, once the model has been established to be a good description of the system in question, the average β parameter (a measure of the average energy dissipation rate) for the system can be obtained simply from the macroscopic q-Gaussian distribution parameters.
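
    For reference, the q-Gaussian (Tsallis) form referred to above is p(u) ∝ [1 - (1 - q) β u²]₊^(1/(1-q)), which reduces to an ordinary Gaussian as q → 1 and develops power-law tails for q > 1. The short sketch below evaluates this form with a purely numerical normalization; the parameter values are illustrative and not fitted to the experimental data.

        import numpy as np

        def q_gaussian(u, q, beta):
            # Unnormalized Tsallis q-Gaussian: [1 - (1 - q) * beta * u**2]_+ ** (1 / (1 - q))
            base = np.maximum(1.0 - (1.0 - q) * beta * u ** 2, 0.0)
            return base ** (1.0 / (1.0 - q))

        u = np.linspace(-10, 10, 4001)
        pdf = q_gaussian(u, q=1.6, beta=0.5)       # q > 1 gives the heavy power-law tails
        pdf /= pdf.sum() * (u[1] - u[0])           # normalize numerically over the sampled range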

  12. Innovation flow through social networks: productivity distribution in France and Italy

    NASA Astrophysics Data System (ADS)

    di Matteo, T.; Aste, T.; Gallegati, M.

    2005-10-01

    From a detailed empirical analysis of the productivity of non-financial firms across several countries and years, we show that productivity follows a non-Gaussian distribution with 'fat tails' in the large-productivity region which are well mimicked by power-law behaviors. We discuss how these empirical findings can be linked to a mechanism of exchanges in a social network where firms improve their productivity by direct innovation and/or by imitation of other firms' technological and organizational solutions. The type of network connectivity determines how fast and how efficiently information can diffuse and how quickly innovation will permeate or behaviors will be imitated. From a model for innovation flow through a complex network, we show that the expectation values of the productivity of each firm are proportional to its connectivity in the network of links between firms. The comparison with the empirical distributions in France and Italy reveals that, in this model, such a network must be of a scale-free type with a power-law degree distribution in the large-connectivity range.

  13. A Hierarchical Convolutional Neural Network for vesicle fusion event classification.

    PubMed

    Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke

    2017-09-01

    Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from fluorescence microscopy are of primary importance for biomedical research. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. Firstly, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied on each image patch of the patch sequence, with outliers rejected for robust Gaussian fitting. By utilizing the high-level time-series intensity change features introduced by the GMM and the visual appearance features embedded in some key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets that have been annotated by cell biologists; our method achieves better performance than three previous methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  15. Discovering Condition-Specific Gene Co-Expression Patterns Using Gaussian Mixture Models: A Cancer Case Study.

    PubMed

    Ficklin, Stephen P; Dunwoodie, Leland J; Poehlman, William L; Watson, Christopher; Roche, Kimberly E; Feltus, F Alex

    2017-08-17

    A gene co-expression network (GCN) describes associations between genes and points to genetic coordination of biochemical pathways. However, genetic correlations in a GCN are only detectable if they are present in the sampled conditions. With the increasing quantity of gene expression samples available in public repositories, there is greater potential for discovery of genetic correlations from a variety of biologically interesting conditions. However, even if gene correlations are present, their discovery can be masked by noise. Noise is introduced from natural variation (intrinsic and extrinsic), systematic variation (caused by sample measurement protocols and instruments), and algorithmic and statistical variation created by selection of data processing tools. A variety of published studies, approaches and methods attempt to address each of these contributions of variation to reduce noise. Here we describe an approach using Gaussian Mixture Models (GMMs) to address natural extrinsic (condition-specific) variation during network construction from mixed input conditions. To demonstrate utility, we build and analyze a condition-annotated GCN from a compendium of 2,016 mixed gene expression data sets from five tumor subtypes obtained from The Cancer Genome Atlas. Our results show that GMMs help discover tumor subtype specific gene co-expression patterns (modules) that are significantly enriched for clinical attributes.

  16. oGNM: online computation of structural dynamics using the Gaussian Network Model

    PubMed Central

    Yang, Lee-Wei; Rader, A. J.; Liu, Xiong; Jursa, Cristopher Jon; Chen, Shann Ching; Karimi, Hassan A.; Bahar, Ivet

    2006-01-01

    An assessment of the equilibrium dynamics of biomolecular systems, and in particular their most cooperative fluctuations accessible under native state conditions, is a first step towards understanding molecular mechanisms relevant to biological function. We present a web-based system, oGNM, that enables users to calculate online the shape and dispersion of normal modes of motion for proteins, oligonucleotides and their complexes, or associated biological units, using the Gaussian Network Model (GNM). Computations with the new engine are 5–6 orders of magnitude faster than those using conventional normal mode analyses. Two case studies illustrate the utility of oGNM. The first shows that the thermal fluctuations predicted for 1250 non-homologous proteins correlate well with X-ray crystallographic data over a broad range [7.3–15 Å] of inter-residue interaction cutoff distances and the correlations improve with increasing observation temperatures. The second study, focused on 64 oligonucleotides and oligonucleotide–protein complexes, shows that good agreement with experiments is achieved by representing each nucleotide by three GNM nodes (as opposed to one-node-per-residue in proteins) along with uniform interaction ranges for all components of the complexes. These results open the way to a rapid assessment of the dynamics of DNA/RNA-containing complexes. The server can be accessed at . PMID:16845002
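
    For readers unfamiliar with the underlying calculation, the following is a minimal sketch (in Python, not the oGNM engine itself) of a one-node-per-residue GNM: a Kirchhoff matrix is built from node coordinates with a distance cutoff, and relative mean-square fluctuations are read off the diagonal of its pseudo-inverse. The random coordinates and the 10 Å cutoff are illustrative only.

        import numpy as np

        def gnm_fluctuations(coords, cutoff=10.0):
            # coords : (N, 3) node positions (e.g. C-alpha atoms, in angstroms)
            # cutoff : interaction range in the same units (oGNM explores roughly 7-15 A)
            diff = coords[:, None, :] - coords[None, :, :]
            dist = np.sqrt((diff ** 2).sum(axis=-1))
            gamma = -(dist <= cutoff).astype(float)        # contact map as off-diagonal terms
            np.fill_diagonal(gamma, 0.0)
            np.fill_diagonal(gamma, -gamma.sum(axis=1))    # Kirchhoff (graph Laplacian) matrix
            ginv = np.linalg.pinv(gamma)                   # pseudo-inverse drops the zero mode
            return np.diag(ginv)                           # proportional to predicted fluctuations

        # toy usage on random coordinates standing in for a small protein
        coords = np.random.default_rng(3).uniform(0, 30, size=(60, 3))
        msf = gnm_fluctuations(coords, cutoff=10.0)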

  17. NGMIX: Gaussian mixture models for 2D images

    NASA Astrophysics Data System (ADS)

    Sheldon, Erin

    2015-08-01

    NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.

  18. Investigation of non-Gaussian effects in the Brazilian option market

    NASA Astrophysics Data System (ADS)

    Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.

    2018-04-01

    An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power law distribution, the so-called q-Gaussian distribution or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. But among these cases, the exponential model performs better than the q-Gaussian model in 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above ninety percent.

  19. An equal force theory for network models of soft materials with arbitrary molecular weight distribution

    NASA Astrophysics Data System (ADS)

    Verron, E.; Gros, A.

    2017-09-01

    Most network models for soft materials, e.g. elastomers and gels, are dedicated to idealized materials: all chains admit the same number of Kuhn segments. Nevertheless, such standard models are not appropriate for materials involving multiple networks, and some specific constitutive equations devoted to these materials have been derived in the last few years. In nearly all cases, idealized networks of different chain lengths are assembled following an equal strain assumption; only a few papers adopt an equal stress assumption, although some authors argue that such a hypothesis would reflect the equilibrium of the different networks in contact. In this work, a full-network model with an arbitrary chain length distribution is derived by considering that chains of different lengths satisfy the equal force assumption in each direction of the unit sphere. The derivation is restricted to non-Gaussian freely jointed chains and to affine deformation of the sphere. Firstly, after a proper definition of the undeformed configuration of the network, we demonstrate that the equal force assumption leads to the equality of a normalized stretch in chains of different lengths. Secondly, we establish that the network with a chain length distribution behaves as an idealized full-network of which both the chain length and the chain density are provided by the chain length distribution. This approach is finally illustrated with two examples: the derivation of a new expression for the Young modulus of bimodal interpenetrated polymer networks, and the prediction of the change in fluorescence during deformation of mechanochemically responsive elastomers.

  20. Consistency of biological networks inferred from microarray and sequencing data.

    PubMed

    Vinciotti, Veronica; Wit, Ernst C; Jansen, Rick; de Geus, Eco J C N; Penninx, Brenda W J H; Boomsma, Dorret I; 't Hoen, Peter A C

    2016-06-24

    Sparse Gaussian graphical models are popular for inferring biological networks, such as gene regulatory networks. In this paper, we investigate the consistency of these models across different data platforms, such as microarray and next generation sequencing, on the basis of a rich dataset containing samples that are profiled under both techniques as well as a large set of independent samples. Our analysis shows that individual node variances can have a remarkable effect on the connectivity of the resulting network. Their inconsistency across platforms, and the fact that the variability level of a node may not be linked to its regulatory role, mean that failing to scale the data prior to the network analysis leads to networks that are not reproducible across different platforms and that may be misleading. Moreover, we show how the reproducibility of networks across different platforms is significantly higher if networks are summarised in terms of enrichment amongst functional groups of interest, such as pathways, rather than at the level of individual edges. Careful pre-processing of transcriptional data and summaries of networks beyond individual edges can improve the consistency of network inference across platforms. However, caution is needed at this stage in the (over)interpretation of gene regulatory networks inferred from biological data.
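
    The point about node variances can be illustrated with a small, hypothetical sketch: a sparse Gaussian graphical model estimated with the graphical lasso (scikit-learn's GraphicalLasso, regularization strength chosen arbitrarily) generally recovers a different edge set from raw data than from data standardized to unit variance.

        import numpy as np
        from sklearn.covariance import GraphicalLasso
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(4)
        # synthetic data with very unequal node variances and a couple of true associations
        cov = np.array([[1.0, 0.6, 0.0, 0.0, 0.0],
                        [0.6, 1.0, 0.0, 0.0, 0.0],
                        [0.0, 0.0, 25.0, 2.0, 0.0],
                        [0.0, 0.0, 2.0, 1.0, 0.0],
                        [0.0, 0.0, 0.0, 0.0, 0.04]])
        X = rng.multivariate_normal(np.zeros(5), cov, size=300)

        raw = GraphicalLasso(alpha=0.1).fit(X)
        scaled = GraphicalLasso(alpha=0.1).fit(StandardScaler().fit_transform(X))

        # count off-diagonal non-zeros of the estimated precision matrices (i.e. inferred edges)
        edges = lambda m: (np.abs(m.precision_) > 1e-6) & ~np.eye(5, dtype=bool)
        print(edges(raw).sum() // 2, edges(scaled).sum() // 2)   # edge counts differ in general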

  1. USING THE HERMITE POLYNOMIALS IN RADIOLOGICAL MONITORING NETWORKS.

    PubMed

    Benito, G; Sáez, J C; Blázquez, J B; Quiñones, J

    2018-03-15

    The most interesting events in a Radiological Monitoring Network correspond to higher values of H*(10). The higher doses cause skewness in the probability density function (PDF) of the records, which is then no longer Gaussian. In this work, the probability of a dose exceeding 2 standard deviations is proposed as a surveillance quantity for higher doses. This probability is estimated by using Hermite polynomials to reconstruct the PDF. The result is that the probability is ~6 ± 1%, much greater than the 2.5% corresponding to a Gaussian PDF, which may be of interest in the design of alarm levels for higher doses.
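
    A minimal sketch of this kind of reconstruction (assuming a Gram-Charlier A expansion with probabilists' Hermite polynomials, not necessarily the authors' exact procedure) estimates the probability of exceeding the mean by two standard deviations from the sample skewness and excess kurtosis:

        import numpy as np
        from scipy.stats import norm

        def tail_prob_gram_charlier(sample, k=2.0):
            # Standardize the records, then reconstruct the PDF with a Gram-Charlier A series
            z = (sample - sample.mean()) / sample.std()
            skew = np.mean(z ** 3)                    # sample skewness
            exkurt = np.mean(z ** 4) - 3.0            # sample excess kurtosis
            x = np.linspace(k, 12.0, 4000)            # integrate the upper tail numerically
            he3 = x ** 3 - 3 * x                      # probabilists' Hermite polynomial He3
            he4 = x ** 4 - 6 * x ** 2 + 3             # probabilists' Hermite polynomial He4
            pdf = norm.pdf(x) * (1 + skew / 6 * he3 + exkurt / 24 * he4)
            return np.clip(pdf, 0.0, None).sum() * (x[1] - x[0])

        # toy usage on a right-skewed synthetic record; a purely Gaussian record would stay
        # near the Gaussian tail value quoted above
        sample = np.random.default_rng(5).gamma(shape=4.0, scale=1.0, size=5000)
        print(tail_prob_gram_charlier(sample))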

  2. Prediction of temperature and HAZ in thermal-based processes with Gaussian heat source by a hybrid GA-ANN model

    NASA Astrophysics Data System (ADS)

    Fazli Shahri, Hamid Reza; Mahdavinejad, Ramezanali

    2018-02-01

    Thermal-based processes with a Gaussian heat source often produce excessive temperatures which can impose thermally affected layers in specimens. Therefore, the temperature distribution and Heat Affected Zone (HAZ) of materials are two critical factors influenced by the process parameters. Measuring the HAZ thickness and temperature distribution within the processes is not only difficult but also expensive. This research aims to provide valuable knowledge of these factors by predicting the process with a novel combined model. In this study, an integrated Artificial Neural Network (ANN) and genetic algorithm (GA) was used to predict the HAZ and temperature distribution of the specimens. To this end, a series of full factorial design of experiments was first conducted by applying a Gaussian heat flux to Ti-6Al-4V, and the temperature of the specimen was measured by infrared thermography. The HAZ width of each sample was investigated by measuring the microhardness. Secondly, the experimental data were used to create a GA-ANN model. The efficiency of the GA in designing and optimizing the architecture of the ANN was investigated. The GA was used to determine the optimal number of neurons in the hidden layer, as well as the learning rate and momentum coefficient of both the output and hidden layers of the ANN. Finally, the reliability of the models was assessed against the experimental results and statistical indicators. The results demonstrated that the combined model predicted the HAZ and temperature more effectively than a trial-and-error ANN model.

  3. A non-gaussian model of continuous atmospheric turbulence for use in aircraft design

    NASA Technical Reports Server (NTRS)

    Reeves, P. M.; Joppa, R. G.; Ganzer, V. M.

    1976-01-01

    A non-Gaussian model of atmospheric turbulence is presented and analyzed. The model is restricted to the regions of the atmosphere where the turbulence is steady or continuous, and the assumptions of homogeneity and stationarity are justified. Also spatial distribution of turbulence is neglected, so the model consists of three independent, stationary stochastic processes which represent the vertical, lateral, and longitudinal gust components. The non-Gaussian and Gaussian models are compared with experimental data, and it is shown that the Gaussian model underestimates the number of high velocity gusts which occur in the atmosphere, while the non-Gaussian model can be adjusted to match the observed high velocity gusts more satisfactorily. Application of the proposed model to aircraft response is investigated, with particular attention to the response power spectral density, the probability distribution, and the level crossing frequency. A numerical example is presented which illustrates the application of the non-Gaussian model to the study of an aircraft autopilot system. Listings and sample results of a number of computer programs used in working with the model are included.

  4. Limit Theorems and Their Relation to Solute Transport in Simulated Fractured Media

    NASA Astrophysics Data System (ADS)

    Reeves, D. M.; Benson, D. A.; Meerschaert, M. M.

    2003-12-01

    Solute particles that travel through fracture networks are subject to wide velocity variations along a restricted set of directions. This may result in super-Fickian dispersion along a few primary scaling directions. The fractional advection-dispersion equation (FADE), a modification of the original advection-dispersion equation in which a fractional derivative replaces the integer-order dispersion term, has the ability to model rapid, non-Gaussian solute transport. The FADE assumes that solute particle motions converge to either α-stable or operator-stable densities, which are modeled by spatial fractional derivatives. In multiple dimensions, the multi-fractional dispersion derivative dictates the order and weight of differentiation in all directions, which correspond to the statistics of large particle motions in all directions. This study numerically investigates the presence of super-Fickian solute transport through simulated two-dimensional fracture networks. An ensemble of networks is generated.

  5. Dissecting psychiatric spectrum disorders by generative embedding

    PubMed Central

    Brodersen, Kay H.; Deserno, Lorenz; Schlagenhauf, Florian; Lin, Zhihao; Penny, Will D.; Buhmann, Joachim M.; Stephan, Klaas E.

    2013-01-01

    This proof-of-concept study examines the feasibility of defining subgroups in psychiatric spectrum disorders by generative embedding, using dynamical system models which infer neuronal circuit mechanisms from neuroimaging data. To this end, we re-analysed an fMRI dataset of 41 patients diagnosed with schizophrenia and 42 healthy controls performing a numerical n-back working-memory task. In our generative-embedding approach, we used parameter estimates from a dynamic causal model (DCM) of a visual–parietal–prefrontal network to define a model-based feature space for the subsequent application of supervised and unsupervised learning techniques. First, using a linear support vector machine for classification, we were able to predict individual diagnostic labels significantly more accurately (78%) from DCM-based effective connectivity estimates than from functional connectivity between (62%) or local activity within the same regions (55%). Second, an unsupervised approach based on variational Bayesian Gaussian mixture modelling provided evidence for two clusters which mapped onto patients and controls with nearly the same accuracy (71%) as the supervised approach. Finally, when restricting the analysis only to the patients, Gaussian mixture modelling suggested the existence of three patient subgroups, each of which was characterised by a different architecture of the visual–parietal–prefrontal working-memory network. Critically, even though this analysis did not have access to information about the patients' clinical symptoms, the three neurophysiologically defined subgroups mapped onto three clinically distinct subgroups, distinguished by significant differences in negative symptom severity, as assessed on the Positive and Negative Syndrome Scale (PANSS). In summary, this study provides a concrete example of how psychiatric spectrum diseases may be split into subgroups that are defined in terms of neurophysiological mechanisms specified by a generative model of network dynamics such as DCM. The results corroborate our previous findings in stroke patients that generative embedding, compared to analyses of more conventional measures such as functional connectivity or regional activity, can significantly enhance both the interpretability and performance of computational approaches to clinical classification. PMID:24363992

  6. Dynamical Casimir Effect for Gaussian Boson Sampling.

    PubMed

    Peropadre, Borja; Huh, Joonsuk; Sabín, Carlos

    2018-02-28

    We show that the Dynamical Casimir Effect (DCE), realized on two multimode coplanar waveguide resonators, implements a Gaussian boson sampler (GBS). The appropriate choice of the mirror acceleration that couples both resonators translates into the desired initial Gaussian state and many-boson interference in a boson sampling network. In particular, we show that the proposed quantum simulator naturally performs a classically hard task, known as scattershot boson sampling. Our result unveils an unprecedented computational power of the DCE, and paves the way for using the DCE as a resource for quantum simulation.

  7. Non-Gaussianity from self-ordering scalar fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Daniel G.; Caldwell, Robert R.

    The Universe may harbor relics of the post-inflationary epoch in the form of a network of self-ordered scalar fields. Such fossils, while consistent with current cosmological data at trace levels, may leave too weak an imprint on the cosmic microwave background and the large-scale distribution of matter to allow for direct detection. The non-Gaussian statistics of the density perturbations induced by these fields, however, permit a direct means to probe for these relics. Here we calculate the bispectrum that arises in models of self-ordered scalar fields. We find a compact analytic expression for the bispectrum, evaluate it numerically, and provide a simple approximation that may be useful for data analysis. The bispectrum is largest for triangles that are aligned (have edges k₁ ≈ 2k₂ ≈ 2k₃), as opposed to the local-model bispectrum, which peaks for squeezed triangles (k₁ ≈ k₂ ≫ k₃), and the equilateral bispectrum, which peaks at k₁ ≈ k₂ ≈ k₃. We estimate that this non-Gaussianity should be detectable by the Planck satellite if the contribution from self-ordering scalar fields to primordial perturbations is near the current upper limit.

  8. Truncated Gaussians as tolerance sets

    NASA Technical Reports Server (NTRS)

    Cozman, Fabio; Krotkov, Eric

    1994-01-01

    This work focuses on the use of truncated Gaussian distributions as models for bounded data measurements that are constrained to appear between fixed limits. The authors prove that the truncated Gaussian can be viewed as a maximum entropy distribution for truncated bounded data, when mean and covariance are given. The characteristic function for the truncated Gaussian is presented; from this, algorithms are derived for calculation of mean, variance, summation, application of Bayes rule and filtering with truncated Gaussians. As an example of the power of their methods, a derivation of the disparity constraint (used in computer vision) from their models is described. The authors' approach complements results in Statistics, but their proposal is not only to use the truncated Gaussian as a model for selected data; they propose to model measurements fundamentally in terms of truncated Gaussians.
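
    As a usage note, SciPy's truncnorm implements exactly this family of distributions; the sketch below (with arbitrary sensor limits, not taken from the paper) shows how the moments shift once the tails are cut off. SciPy expresses the truncation bounds in standard-deviation units relative to the untruncated mean.

        import numpy as np
        from scipy.stats import truncnorm

        mu, sigma = 1.0, 0.5          # parameters of the underlying (untruncated) Gaussian
        lo, hi = 0.0, 2.0             # physical limits the measurement can take
        a, b = (lo - mu) / sigma, (hi - mu) / sigma   # standardized bounds for scipy

        dist = truncnorm(a, b, loc=mu, scale=sigma)
        print(dist.mean(), dist.var())                # moments of the truncated distribution
        samples = dist.rvs(size=1000, random_state=0) # bounded synthetic measurements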

  9. A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.

    PubMed

    Carreau, Julie; Bengio, Yoshua

    2009-07-01

    In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y , with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.

  10. Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia

    NASA Astrophysics Data System (ADS)

    Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg

    2013-03-01

    Accurate predictions of sea level with different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data demanding, requiring the processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques, the adaptive neuro-fuzzy inference system (ANFIS) and the artificial neural network (ANN). A multiple linear regression (MLR) technique was used for selecting the optimal input combinations (lag times) of hourly sea level. The optimal input combination was found to comprise the current sea level as well as five previous level values. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian and two-sided Gaussian, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h and 72 h. The ANN models were trained using three different algorithms, namely Levenberg-Marquardt, conjugate gradient and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of the optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error and variance account statistics were used as comparison criteria. The results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while the adaptive learning rate and Levenberg-Marquardt were most suitable for training the ANN models. Consequently, the ANFIS and ANN models gave similar forecasts and performed better than the ARMA models developed for the same purpose for all prediction intervals.

  11. Large-scale modeling of rain fields from a rain cell deterministic model

    NASA Astrophysics Data System (ADS)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km², the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km²), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km²), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km²) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
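
    A minimal sketch of the last step described above (hypothetical grid size, correlation lengths and occupation rate, not the values used in the study): smooth white noise with an anisotropic Gaussian kernel to obtain a correlated Gaussian field, then threshold it so that a prescribed fraction of the large-scale area is raining.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(6)
        # anisotropic smoothing of white noise yields a correlated Gaussian field
        field = gaussian_filter(rng.standard_normal((1000, 1000)), sigma=(60, 20))

        occupation_rate = 0.15                               # fraction of the area with rain
        threshold = np.quantile(field, 1 - occupation_rate)  # quantile matching that fraction
        raining = field > threshold                          # binary large-scale rain mask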

  12. Adiabatic superconducting cells for ultra-low-power artificial neural networks.

    PubMed

    Schegolev, Andrey E; Klenov, Nikolay V; Soloviev, Igor I; Tereshonok, Maxim V

    2016-01-01

    We propose the concept of using superconducting quantum interferometers for the implementation of neural network algorithms with extremely low power dissipation. These adiabatic elements are Josephson cells with sigmoid- and Gaussian-like activation functions. We optimize their parameters for application in three-layer perceptron and radial basis function networks.

  13. Bayesian estimation inherent in a Mexican-hat-type neural network

    NASA Astrophysics Data System (ADS)

    Takiyama, Ken

    2016-05-01

    Brain functions, such as perception, motor control and learning, and decision making, have been explained based on a Bayesian framework, i.e., to decrease the effects of noise inherent in the human nervous system or external environment, our brain integrates sensory and a priori information in a Bayesian optimal manner. However, it remains unclear how Bayesian computations are implemented in the brain. Herein, I address this issue by analyzing a Mexican-hat-type neural network, which was used as a model of the visual cortex, motor cortex, and prefrontal cortex. I analytically demonstrate that the dynamics of an order parameter in the model corresponds exactly to a variational inference of a linear Gaussian state-space model, a Bayesian estimation, when the strength of recurrent synaptic connectivity is appropriately stronger than that of an external stimulus, a plausible condition in the brain. This exact correspondence can reveal the relationship between the parameters in the Bayesian estimation and those in the neural network, providing insight for understanding brain functions.

  14. The area of isodensity contours in cosmological models and galaxy surveys

    NASA Technical Reports Server (NTRS)

    Ryden, Barbara S.; Melott, Adrian L.; Craig, David A.; Gott, J. Richard, III; Weinberg, David H.

    1989-01-01

    The contour crossing statistic, defined as the mean number of times per unit length that a straight line drawn through the field crosses a given contour, is applied to model density fields and to smoothed samples of galaxies. Models in which the matter is in a bubble structure, in a filamentary net, or in clusters can be distinguished from Gaussian density distributions. The shape of the contour crossing curve in the initially Gaussian fields considered remains Gaussian after gravitational evolution and biasing, as long as the smoothing length is longer than the mass correlation length. With a smoothing length of 5/h Mpc, models containing cosmic strings are indistinguishable from Gaussian distributions. Cosmic explosion models are significantly non-Gaussian, having a bubbly structure. Samples from the CfA survey and the Haynes and Giovanelli (1986) survey are more strongly non-Gaussian at a smoothing length of 6/h Mpc than any of the models examined. At a smoothing length of 12/h Mpc, the Haynes and Giovanelli sample appears Gaussian.

  15. Connections between Graphical Gaussian Models and Factor Analysis

    ERIC Educational Resources Information Center

    Salgueiro, M. Fatima; Smith, Peter W. F.; McDonald, John W.

    2010-01-01

    Connections between graphical Gaussian models and classical single-factor models are obtained by parameterizing the single-factor model as a graphical Gaussian model. Models are represented by independence graphs, and associations between each manifest variable and the latent factor are measured by factor partial correlations. Power calculations…

  16. Recurrent Neural Network Applications for Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Protopapas, Pavlos

    2017-06-01

    The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize to irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations using the error estimates from astronomical light curves. In addition, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to correctly set hyperparameters for a stable and performant solution. Hand-tuning these hyperparameters is difficult; we circumvent this obstacle by optimizing ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning procedure.
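
    The abstract gives no implementation details; the following is a minimal, generic sketch of Gaussian-Process-based Bayesian optimization of a single hypothetical ESN hyperparameter (the spectral radius). The objective `validation_error` is a placeholder standing in for training an ESN and scoring it on held-out light curves; names and parameter ranges are illustrative assumptions, not taken from the talk.

```python
# Minimal sketch: Bayesian optimization of one ESN hyperparameter (spectral radius)
# using a Gaussian Process surrogate and an expected-improvement acquisition.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def validation_error(spectral_radius):
    # Placeholder objective; in practice: build an ESN, fit it, return validation MSE.
    return (spectral_radius - 0.87) ** 2 + 0.01 * np.random.randn()

candidates = np.linspace(0.1, 1.4, 200).reshape(-1, 1)   # search grid
X = np.array([[0.2], [0.7], [1.2]])                       # initial evaluations
y = np.array([validation_error(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    # Expected improvement (minimization form)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (best - mu) / sigma
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    ei = np.nan_to_num(ei)
    ei[sigma == 0] = 0.0
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, validation_error(x_next[0]))

print("best spectral radius found:", X[np.argmin(y)][0])
```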

  17. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    PubMed

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  18. Comparisons of non-Gaussian statistical models in DNA methylation analysis.

    PubMed

    Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun

    2014-06-16

    As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance.

  19. Comparisons of Non-Gaussian Statistical Models in DNA Methylation Analysis

    PubMed Central

    Ma, Zhanyu; Teschendorff, Andrew E.; Yu, Hong; Taghia, Jalil; Guo, Jun

    2014-01-01

    As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance. PMID:24937687
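
    The specific bounded-support models used in the two records above are not spelled out in the abstracts; the sketch below only illustrates the underlying point, assuming a beta distribution as an example of a bounded-support model, fitted to simulated [0,1] methylation-like values and compared against a Gaussian fit by maximized log-likelihood.

```python
# Minimal sketch: compare a bounded-support (beta) model against a Gaussian model
# on [0,1]-valued, methylation-like data via maximized log-likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
beta_values = rng.beta(a=0.8, b=4.0, size=2000)       # simulated beta-values in (0, 1)

# Fit a beta distribution with its support fixed to [0, 1]
a, b, loc, scale = stats.beta.fit(beta_values, floc=0, fscale=1)
loglik_beta = np.sum(stats.beta.logpdf(beta_values, a, b, loc=loc, scale=scale))

# Fit a Gaussian, which ignores the bounded support
mu, sigma = stats.norm.fit(beta_values)
loglik_norm = np.sum(stats.norm.logpdf(beta_values, mu, sigma))

print(f"beta  log-likelihood: {loglik_beta:.1f}")
print(f"gauss log-likelihood: {loglik_norm:.1f}")     # typically much lower
```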

  20. Modeling Array Stations in SIG-VISA

    NASA Astrophysics Data System (ADS)

    Ding, N.; Moore, D.; Russell, S.

    2013-12-01

    We add support for array stations to SIG-VISA, a system for nuclear monitoring using probabilistic inference on seismic signals. Array stations comprise a large portion of the IMS network; they can provide increased sensitivity and more accurate directional information compared to single-component stations. Our existing model assumed that signals were independent at each station, which is false when many stations are close together, as in an array. The new model removes that assumption by jointly modeling signals across array elements. This is done by extending our existing Gaussian process (GP) regression models, also known as kriging, from a 3-dimensional single-component space of events to a 6-dimensional space of station-event pairs. For each array and each event attribute (including coda decay, coda height, amplitude transfer and travel time), we model the joint distribution across array elements using a Gaussian process that learns the correlation lengthscale across the array, thereby incorporating information from array stations into the probabilistic inference framework. To evaluate the effectiveness of our model, we perform 'probabilistic beamforming' on new events using our GP model, i.e., we compute the event azimuth having the highest posterior probability under the model, conditioned on the signals at array elements. We compare the results from our probabilistic inference model to the beamforming currently performed by IMS station processing.

  1. Neural network modeling and an uncertainty analysis in Bayesian framework: A case study from the KTB borehole site

    NASA Astrophysics Data System (ADS)

    Maiti, Saumen; Tiwari, Ram Krishna

    2010-10-01

    A new probabilistic approach based on the concept of Bayesian neural network (BNN) learning theory is proposed for decoding litho-facies boundaries from well-log data. We show how a multi-layer-perceptron neural network model can be employed in a Bayesian framework to classify changes in litho-log successions. The method is then applied to the German Continental Deep Drilling Program (KTB) well-log data for classification and uncertainty estimation in the litho-facies boundaries. In this framework, the a posteriori distribution of network parameters is estimated via the principle of Bayesian probabilistic theory, and an objective function is minimized following the scaled conjugate gradient optimization scheme. For the model development, we impose a suitable criterion, which provides probabilistic information by emulating different combinations of synthetic data. Uncertainty in the relationship between the data and the model space is appropriately taken care of by assuming a Gaussian a priori distribution of network parameters (e.g., synaptic weights and biases). Prior to applying the new method to the real KTB data, we tested the proposed method on synthetic examples to examine the sensitivity of neural network hyperparameters in prediction. Within this framework, we examine the stability and efficiency of this new probabilistic approach using different kinds of synthetic data with different levels of correlated noise. Our data analysis suggests that the designed network topology based on the Bayesian paradigm is stable up to nearly 40% correlated noise; however, adding more noise (~50% or more) degrades the results. We perform uncertainty analyses on training, validation, and test data sets with and without intrinsic noise by making the Gaussian approximation of the a posteriori distribution about the peak model. We present a standard deviation error-map at the network output corresponding to the three types of litho-facies present over the entire litho-section of the KTB. The comparisons of maximum a posteriori geological sections constructed here, based on the maximum a posteriori probability distribution, with the available geological information and the existing geophysical findings suggest that the BNN results reveal some additional finer details in the KTB borehole data at certain depths, which appear to be of some geological significance. We also demonstrate that the proposed BNN approach is superior to the conventional artificial neural network in terms of both avoiding "over-fitting" and aiding uncertainty estimation, which are vital for meaningful interpretation of geophysical records. Our analyses demonstrate that the BNN-based approach renders a robust means for the classification of complex changes in the litho-facies successions and thus could provide a useful guide for understanding the crustal inhomogeneity and the structural discontinuity in many other tectonically complex regions.

  2. Estimation of continuous multi-DOF finger joint kinematics from surface EMG using a multi-output Gaussian Process.

    PubMed

    Ngeo, Jimson; Tamei, Tomoya; Shibata, Tomohiro

    2014-01-01

    Surface electromyographic (EMG) signals have often been used in estimating upper and lower limb dynamics and kinematics for the purpose of controlling robotic devices such as robot prostheses and finger exoskeletons. However, when estimating kinematics with multiple or a high number of degrees of freedom (DOF) from EMG, the output DOFs are usually estimated independently. In this study, we estimate finger joint kinematics from EMG signals using a multi-output convolved Gaussian Process (Multi-output Full GP) that considers dependencies between outputs. We show that estimation of finger joints from muscle activation inputs can be improved by using a regression model that considers the inherent coupling or correlation within the hand and finger joints. We also provide a comparison of estimation performance between different regression methods, such as Artificial Neural Networks (ANN), which are used by many of the related studies. We show that using a multi-output GP gives improved estimation compared to a multi-output ANN and even dedicated or independent regression models.

  3. A comparative assessment of preclinical chemotherapeutic response of tumors using quantitative non-Gaussian diffusion MRI

    PubMed Central

    Xu, Junzhong; Li, Ke; Smith, R. Adam; Waterton, John C.; Zhao, Ping; Ding, Zhaohua; Does, Mark D.; Manning, H. Charles; Gore, John C.

    2016-01-01

    Background: Diffusion-weighted MRI (DWI) signal attenuation is often not mono-exponential (i.e., non-Gaussian diffusion) at stronger diffusion weighting. Several non-Gaussian diffusion models have been developed and may provide new information or higher sensitivity compared with the conventional apparent diffusion coefficient (ADC) method. However, the relative merits of these models for detecting tumor therapeutic response are not fully clear. Methods: Conventional ADC and three widely used non-Gaussian models (bi-exponential, stretched exponential, and statistical model) were implemented and compared for assessing SW620 human colon cancer xenografts responding to barasertib, an agent known to induce apoptosis via polyploidy. The Bayesian Information Criterion (BIC) was used for model selection among the three non-Gaussian models. Results: Tumor volume, histology, conventional ADC, and all three non-Gaussian DWI models showed significant differences between control and treatment groups after four days of treatment. However, only the non-Gaussian models detected significant changes after two days of treatment. For every treatment or control group, over 65.7% of tumor voxels indicated that the bi-exponential model is strongly or very strongly preferred. Conclusion: Non-Gaussian DWI model-derived biomarkers are capable of detecting chemotherapeutic response of tumors earlier than conventional ADC and tumor volume. The bi-exponential model provides better fitting than the statistical and stretched exponential models for the tumor and treatment models used in the current work. PMID:27919785
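
    As a rough illustration of the model comparison described above (not the study's actual pipeline), the sketch below fits the conventional mono-exponential (ADC) model and a bi-exponential model to a synthetic signal-decay curve and ranks them by BIC; all parameter values and the noise level are illustrative assumptions.

```python
# Minimal sketch: fit mono-exponential (ADC) and bi-exponential models to a
# synthetic diffusion-weighted signal decay and compare them with BIC.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(b, s0, adc):
    return s0 * np.exp(-b * adc)

def bi_exp(b, s0, f, d_fast, d_slow):
    return s0 * (f * np.exp(-b * d_fast) + (1 - f) * np.exp(-b * d_slow))

def bic(y, y_fit, n_params):
    # Gaussian-residual BIC: n*log(RSS/n) + k*log(n); lower is preferred
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

b_values = np.linspace(0, 3000, 16)                      # s/mm^2
rng = np.random.default_rng(1)
signal = bi_exp(b_values, 1.0, 0.7, 2.0e-3, 0.3e-3)
signal += 0.01 * rng.standard_normal(b_values.size)      # measurement noise

p_mono, _ = curve_fit(mono_exp, b_values, signal, p0=[1.0, 1.0e-3])
p_bi, _ = curve_fit(bi_exp, b_values, signal,
                    p0=[1.0, 0.5, 2.0e-3, 0.3e-3],
                    bounds=([0, 0, 0, 0], [2, 1, 0.1, 0.1]))

print("BIC mono:", bic(signal, mono_exp(b_values, *p_mono), 2))
print("BIC bi  :", bic(signal, bi_exp(b_values, *p_bi), 4))
```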

  4. Mixed Effects Models for Resampled Network Statistics Improves Statistical Power to Find Differences in Multi-Subject Functional Connectivity

    PubMed Central

    Narayan, Manjari; Allen, Genevera I.

    2016-01-01

    Many complex brain disorders, such as autism spectrum disorders, exhibit a wide range of symptoms and disability. To understand how brain communication is impaired in such conditions, functional connectivity studies seek to understand individual differences in brain network structure in terms of covariates that measure symptom severity. In practice, however, functional connectivity is not observed but estimated from complex and noisy neural activity measurements. Imperfect subject network estimates can compromise subsequent efforts to detect covariate effects on network structure. We address this problem in the case of Gaussian graphical models of functional connectivity, by proposing novel two-level models that treat both subject level networks and population level covariate effects as unknown parameters. To account for imperfectly estimated subject level networks when fitting these models, we propose two related approaches—R2 based on resampling and random effects test statistics, and R3 that additionally employs random adaptive penalization. Simulation studies using realistic graph structures reveal that R2 and R3 have superior statistical power to detect covariate effects compared to existing approaches, particularly when the number of within subject observations is comparable to the size of subject networks. Using our novel models and methods to study parts of the ABIDE dataset, we find evidence of hypoconnectivity associated with symptom severity in autism spectrum disorders, in frontoparietal and limbic systems as well as in anterior and posterior cingulate cortices. PMID:27147940

  5. A novel optimization algorithm for MIMO Hammerstein model identification under heavy-tailed noise.

    PubMed

    Jin, Qibing; Wang, Hehe; Su, Qixin; Jiang, Beiyan; Liu, Qie

    2018-01-01

    In this paper, we study the system identification of multi-input multi-output (MIMO) Hammerstein processes under typical heavy-tailed noise. To the best of our knowledge, there is no general analytical method to solve this identification problem. Motivated by this, we propose a general identification method based on a Gaussian-Mixture Distribution intelligent optimization algorithm (GMDA). The nonlinear part of the Hammerstein process is modeled by a Radial Basis Function (RBF) neural network, and the identification problem is converted to an optimization problem. To overcome the drawbacks of analytical identification methods in the presence of heavy-tailed noise, a meta-heuristic optimization algorithm, the Cuckoo Search (CS) algorithm, is used. To improve its performance for this identification problem, the Gaussian-Mixture Distribution (GMD) and GMD sequences are introduced into the standard CS algorithm. Numerical simulations for different MIMO Hammerstein models are carried out, and the simulation results verify the effectiveness of the proposed GMDA. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Demonstration of Monogamy Relations for Einstein-Podolsky-Rosen Steering in Gaussian Cluster States.

    PubMed

    Deng, Xiaowei; Xiang, Yu; Tian, Caixing; Adesso, Gerardo; He, Qiongyi; Gong, Qihuang; Su, Xiaolong; Xie, Changde; Peng, Kunchi

    2017-06-09

    Understanding how quantum resources can be quantified and distributed over many parties has profound applications in quantum communication. As one of the most intriguing features of quantum mechanics, Einstein-Podolsky-Rosen (EPR) steering is a useful resource for secure quantum networks. By reconstructing the covariance matrix of a continuous variable four-mode square Gaussian cluster state subject to asymmetric loss, we quantify the amount of bipartite steering with a variable number of modes per party, and verify recently introduced monogamy relations for Gaussian steerability, which establish quantitative constraints on the security of information shared among different parties. We observe a very rich structure for the steering distribution, and demonstrate one-way EPR steering of the cluster state under Gaussian measurements, as well as one-to-multimode steering. Our experiment paves the way for exploiting EPR steering in Gaussian cluster states as a valuable resource for multiparty quantum information tasks.

  7. Demonstration of Monogamy Relations for Einstein-Podolsky-Rosen Steering in Gaussian Cluster States

    NASA Astrophysics Data System (ADS)

    Deng, Xiaowei; Xiang, Yu; Tian, Caixing; Adesso, Gerardo; He, Qiongyi; Gong, Qihuang; Su, Xiaolong; Xie, Changde; Peng, Kunchi

    2017-06-01

    Understanding how quantum resources can be quantified and distributed over many parties has profound applications in quantum communication. As one of the most intriguing features of quantum mechanics, Einstein-Podolsky-Rosen (EPR) steering is a useful resource for secure quantum networks. By reconstructing the covariance matrix of a continuous variable four-mode square Gaussian cluster state subject to asymmetric loss, we quantify the amount of bipartite steering with a variable number of modes per party, and verify recently introduced monogamy relations for Gaussian steerability, which establish quantitative constraints on the security of information shared among different parties. We observe a very rich structure for the steering distribution, and demonstrate one-way EPR steering of the cluster state under Gaussian measurements, as well as one-to-multimode steering. Our experiment paves the way for exploiting EPR steering in Gaussian cluster states as a valuable resource for multiparty quantum information tasks.

  8. Inferring transcriptional gene regulation network of starch metabolism in Arabidopsis thaliana leaves using graphical Gaussian model

    PubMed Central

    2012-01-01

    Background: Starch serves as a temporal storage of carbohydrates in plant leaves during day/night cycles. To study transcriptional regulatory modules of this dynamic metabolic process, we conducted gene regulation network analysis based on small-sample inference of a graphical Gaussian model (GGM). Results: Time-series significance analysis was applied to Arabidopsis leaf transcriptome data to obtain a set of genes that are highly regulated under a diurnal cycle. A total of 1,480 diurnally regulated genes included 21 starch metabolic enzymes, 6 clock-associated genes, and 106 transcription factors (TF). A starch-clock-TF gene regulation network comprising 117 nodes and 266 edges was constructed by GGM from these 133 significant genes that are potentially related to the diurnal control of starch metabolism. From this network, we found that β-amylase 3 (b-amy3: At4g17090), which participates in starch degradation in the chloroplast, is the most frequently connected gene (a hub gene). The robustness of the gene-to-gene regulatory network was further analyzed by TF binding site prediction and by evaluating global co-expression of TFs and target starch metabolic enzymes. As a result, two TFs, indeterminate domain 5 (AtIDD5: At2g02070) and constans-like (COL: At2g21320), were identified as positive regulators of starch synthase 4 (SS4: At4g18240). The inference model of AtIDD5-dependent positive regulation of SS4 gene expression was experimentally supported by decreased SS4 mRNA accumulation in Atidd5 mutant plants during the light period of both short and long day conditions. COL was also shown to positively control SS4 mRNA accumulation. Furthermore, the knockout of AtIDD5 and COL led to deformation of the chloroplast and its contained starch granules. This deformity also affected the number of starch granules per chloroplast, which increased significantly in both knockout mutant lines. Conclusions: In this study, we utilized a systematic approach of microarray analysis to discover the transcriptional regulatory network of starch metabolism in Arabidopsis leaves. With this inference method, the starch regulatory network of Arabidopsis was found to be strongly associated with clock genes and TFs, of which AtIDD5 and COL were evidenced to control SS4 gene expression and starch granule formation in chloroplasts. PMID:22898356
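
    The paper's small-sample GGM inference procedure is not reproduced here; the sketch below shows a generic way to estimate a sparse Gaussian graphical model from an expression matrix with the graphical lasso and to read candidate edges off the partial-correlation matrix. The expression data are random placeholders.

```python
# Minimal sketch (not the paper's exact small-sample GGM method): estimate a
# sparse Gaussian graphical model with the graphical lasso and list edges from
# the partial-correlation matrix.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(2)
n_samples, n_genes = 24, 10                          # e.g., diurnal time points x genes
expr = rng.standard_normal((n_samples, n_genes))     # placeholder expression matrix

model = GraphicalLassoCV().fit(expr)
precision = model.precision_

# Partial correlation: rho_ij = -P_ij / sqrt(P_ii * P_jj)
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

edges = [(i, j, partial_corr[i, j])
         for i in range(n_genes) for j in range(i + 1, n_genes)
         if abs(partial_corr[i, j]) > 1e-6]
print(f"{len(edges)} edges in the inferred network")
```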

  9. Gaussian-modulated coherent-state measurement-device-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Ma, Xiang-Chun; Sun, Shi-Hai; Jiang, Mu-Sheng; Gui, Ming; Liang, Lin-Mei

    2014-04-01

    Measurement-device-independent quantum key distribution (MDI-QKD), leaving the detection procedure to the third partner and thus being immune to all detector side-channel attacks, is very promising for the construction of high-security quantum information networks. We propose a scheme to implement MDI-QKD, but with continuous variables instead of discrete ones, i.e., with the source of Gaussian-modulated coherent states, based on the principle of continuous-variable entanglement swapping. This protocol not only can be implemented with current telecom components but also has high key rates compared to its discrete counterpart; thus it will be highly compatible with quantum networks.

  10. Comparison of different modelling approaches of drive train temperature for the purposes of wind turbine failure detection

    NASA Astrophysics Data System (ADS)

    Tautz-Weinert, J.; Watson, S. J.

    2016-09-01

    Effective condition monitoring techniques for wind turbines are needed to improve maintenance processes and reduce operational costs. Normal-behaviour modelling of temperatures with information from other sensors can help to detect wear processes in drive trains. In a case study, modelling of bearing and generator temperatures is investigated with operational data from the SCADA systems of more than 100 turbines. The focus here is on automated training and testing at the farm level to enable an on-line system which will detect failures without human interpretation. Modelling based on linear combinations, artificial neural networks, adaptive neuro-fuzzy inference systems, support vector machines and Gaussian process regression is compared. The selection of suitable modelling inputs is discussed with cross-correlation analyses and a sensitivity study, which reveals that the investigated modelling techniques react in different ways to an increased number of inputs. The case study highlights advantages of modelling with linear combinations and artificial neural networks in a feedforward configuration.

  11. Spectral statistics of random geometric graphs

    NASA Astrophysics Data System (ADS)

    Dettmann, C. P.; Georgiou, O.; Knight, G.

    2017-04-01

    We use random matrix theory to study the spectrum of random geometric graphs, a fundamental model of spatial networks. Considering ensembles of random geometric graphs, we look at short-range correlations in the level spacings of the spectrum via the nearest-neighbour and next-nearest-neighbour spacing distributions, and long-range correlations via the spectral rigidity Δ3 statistic. These correlations in the level spacings give information about the localisation of eigenvectors, the level of community structure and the level of randomness within the networks. We find a parameter-dependent transition between Poisson and Gaussian orthogonal ensemble statistics. That is, the spectral statistics of spatial random geometric graphs fit the universality of random matrix theory found in other models such as Erdős-Rényi, Barabási-Albert and Watts-Strogatz random graphs.
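
    A minimal sketch of the kind of spacing analysis described above, computed for the adjacency spectrum of a single random geometric graph; a full analysis would unfold the spectrum properly and average over an ensemble, which is omitted here.

```python
# Minimal sketch: nearest-neighbour level-spacing statistics for the adjacency
# spectrum of one random geometric graph (no careful spectral unfolding is done).
import numpy as np
import networkx as nx

G = nx.random_geometric_graph(n=500, radius=0.1, seed=3)
A = nx.to_numpy_array(G)
eigvals = np.sort(np.linalg.eigvalsh(A))

spacings = np.diff(eigvals)
spacings /= spacings.mean()                 # crude normalization to unit mean spacing
hist, bin_edges = np.histogram(spacings, bins=40, range=(0, 4), density=True)

# Reference surmises to compare the histogram against:
s = 0.5 * (bin_edges[:-1] + bin_edges[1:])
poisson = np.exp(-s)                                        # uncorrelated levels
wigner = (np.pi / 2) * s * np.exp(-np.pi * s ** 2 / 4)      # GOE (Wigner surmise)

# Small spacings are common for Poisson statistics and suppressed for GOE
print("fraction of spacings below 0.25:", np.mean(spacings < 0.25))
```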

  12. Einstein-Podolsky-Rosen-steering swapping between two Gaussian multipartite entangled states

    NASA Astrophysics Data System (ADS)

    Wang, Meihong; Qin, Zhongzhong; Wang, Yu; Su, Xiaolong

    2017-08-01

    Multipartite Einstein-Podolsky-Rosen (EPR) steering is a useful quantum resource for quantum communication in quantum networks. It has potential applications in secure quantum communication, such as one-sided device-independent quantum key distribution and quantum secret sharing. By distributing optical modes of a multipartite entangled state to space-separated quantum nodes, a local quantum network can be established. Based on the existing multipartite EPR steering in a local quantum network, secure quantum communication protocol can be accomplished. In this manuscript, we present swapping schemes for EPR steering between two space-separated Gaussian multipartite entangled states, which can be used to connect two space-separated quantum networks. Two swapping schemes, including the swapping between a tripartite Greenberger-Horne-Zeilinger (GHZ) entangled state and an EPR entangled state and that between two tripartite GHZ entangled states, are analyzed. Various types of EPR steering are presented after the swapping of two space-separated independent multipartite entanglement states without direct interaction, which can be used to implement quantum communication between two quantum networks. The presented schemes provide technical reference for more complicated quantum networks with EPR steering.

  13. The role of retinal bipolar cell in early vision: an implication with analogue networks and regularization theory.

    PubMed

    Yagi, T; Ohshima, S; Funahashi, Y

    1997-09-01

    A linear analogue network model is proposed to describe the neuronal circuit of the outer retina consisting of cones, horizontal cells, and bipolar cells. The model reflects previous physiological findings on the spatial response properties of these neurons to dim illumination and is expressed by physiological mechanisms, i.e., membrane conductances, gap-junctional conductances, and strengths of chemical synaptic interactions. Using the model, we characterized the spatial filtering properties of the bipolar cell receptive field with the standard regularization theory, in which the early vision problems are attributed to minimization of a cost function. The cost function accompanying the present characterization is derived from the linear analogue network model, and one can gain intuitive insights on how physiological mechanisms contribute to the spatial filtering properties of the bipolar cell receptive field. We also elucidated a quantitative relation between the Laplacian of Gaussian operator and the bipolar cell receptive field. From the computational point of view, the dopaminergic modulation of the gap-junctional conductance between horizontal cells is inferred to be a suitable neural adaptation mechanism for transition between photopic and mesopic vision.

  14. Multi-subject hierarchical inverse covariance modelling improves estimation of functional brain networks.

    PubMed

    Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M

    2018-05-07

    A Bayesian model for sparse, hierarchical inverse-covariance estimation is presented and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both the subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and of thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which is also based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.

  15. Applications of spatial statistical network models to stream data

    USGS Publications Warehouse

    Isaak, Daniel J.; Peterson, Erin E.; Ver Hoef, Jay M.; Wenger, Seth J.; Falke, Jeffrey A.; Torgersen, Christian E.; Sowder, Colin; Steel, E. Ashley; Fortin, Marie-Josée; Jordan, Chris E.; Ruesch, Aaron S.; Som, Nicholas; Monestiez, Pascal

    2014-01-01

    Streams and rivers host a significant portion of Earth's biodiversity and provide important ecosystem services for human populations. Accurate information regarding the status and trends of stream resources is vital for their effective conservation and management. Most statistical techniques applied to data measured on stream networks were developed for terrestrial applications and are not optimized for streams. A new class of spatial statistical model, based on valid covariance structures for stream networks, can be used with many common types of stream data (e.g., water quality attributes, habitat conditions, biological surveys) through application of appropriate distributions (e.g., Gaussian, binomial, Poisson). The spatial statistical network models account for spatial autocorrelation (i.e., nonindependence) among measurements, which allows their application to databases with clustered measurement locations. Large amounts of stream data exist in many areas where spatial statistical analyses could be used to develop novel insights, improve predictions at unsampled sites, and aid in the design of efficient monitoring strategies at relatively low cost. We review the topic of spatial autocorrelation and its effects on statistical inference, demonstrate the use of spatial statistics with stream datasets relevant to common research and management questions, and discuss additional applications and development potential for spatial statistics on stream networks. Free software for implementing the spatial statistical network models has been developed that enables custom applications with many stream databases.

  16. Non-Gaussian Analysis of Turbulent Boundary Layer Fluctuating Pressure on Aircraft Skin Panels

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Steinwolf, Alexander

    2005-01-01

    The purpose of the study is to investigate the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the outer sidewall of a supersonic transport aircraft and to approximate these PDFs by analytical models. Experimental flight results show that the fluctuating pressure PDFs differ from the Gaussian distribution even for standard smooth surface conditions. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations in front of forward-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. There is a certain spatial pattern of the skewness and kurtosis behavior depending on the distance upstream from the step. All characteristics related to non-Gaussian behavior are highly dependent upon the distance from the step and the step height, less dependent on aircraft speed, and not dependent on the fuselage location. A Hermite polynomial transform model and a piecewise-Gaussian model fit the flight data well for both the smooth and stepped conditions. The piecewise-Gaussian approximation is additionally convenient to use once the model has been constructed.

  17. The probabilistic neural network architecture for high speed classification of remotely sensed imagery

    NASA Technical Reports Server (NTRS)

    Chettri, Samir R.; Cromp, Robert F.

    1993-01-01

    In this paper we discuss a neural network architecture (the Probabilistic Neural Net, or PNN) that, to the best of our knowledge, has not previously been applied to remotely sensed data. The PNN is a supervised non-parametric classification algorithm, as opposed to the Gaussian maximum likelihood classifier (GMLC). The PNN works by fitting a Gaussian kernel to each training point. The width of the Gaussian is controlled by a tuning parameter called the window width. If very small widths are used, the method is equivalent to the nearest neighbor method. For large windows, the PNN behaves like the GMLC. The basic implementation of the PNN requires no training time at all. In this respect it is far better than the commonly used backpropagation neural network, which can be shown to take O(N^6) time for training, where N is the dimensionality of the input vector. In addition, the PNN can be implemented in a feed-forward mode in hardware. The disadvantage of the PNN is that it requires all the training data to be stored. Some solutions to this problem are discussed in the paper. Finally, we discuss the accuracy of the PNN with respect to the GMLC and the backpropagation neural network (BPNN). The PNN is shown to be better than the GMLC and not as good as the BPNN with regard to classification accuracy.
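
    A minimal sketch of the PNN idea described above (one Gaussian kernel per training sample, with class scores formed as per-class kernel-density estimates); toy data stand in for remotely sensed pixels, and `sigma` plays the role of the window width.

```python
# Minimal sketch of a Probabilistic Neural Network classifier with Gaussian kernels.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    classes = np.unique(y_train)
    scores = np.zeros((len(X_test), len(classes)))
    for k, c in enumerate(classes):
        Xc = X_train[y_train == c]
        # Squared distances between each test point and each training point of class c
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores[:, k] = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
    return classes[np.argmax(scores, axis=1)]

# Tiny illustration with two Gaussian blobs standing in for two land-cover classes
rng = np.random.default_rng(4)
X_train = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(3, 1, (10, 4))])
print(pnn_predict(X_train, y_train, X_test, sigma=0.8))
```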

  18. Three dimensional indoor positioning based on visible light with Gaussian mixture sigma-point particle filter technique

    NASA Astrophysics Data System (ADS)

    Gu, Wenjun; Zhang, Weizhi; Wang, Jin; Amini Kashani, M. R.; Kavehrad, Mohsen

    2015-01-01

    Over the past decade, location based services (LBS) have found wide application in indoor environments, such as large shopping malls, hospitals, warehouses, airports, etc. Current technologies provide a wide choice of available solutions, which include Radio-frequency identification (RFID), Ultra wideband (UWB), wireless local area network (WLAN) and Bluetooth. With the rapid development of light-emitting-diode (LED) technology, visible light communications (VLC) also bring a practical approach to LBS. As visible light has better immunity against multipath effects than radio waves, higher positioning accuracy is achieved. LEDs are utilized both for illumination and for positioning purposes to realize relatively low infrastructure cost. In this paper, an indoor positioning system using VLC is proposed, with LEDs as transmitters and photodiodes as receivers. The algorithm for estimation is based on received-signal-strength (RSS) information collected from the photodiodes and a trilateration technique. By appropriately making use of the characteristics of receiver movements and the properties of trilateration, estimation of three-dimensional (3-D) coordinates is attained. A filtering technique is applied to enable the tracking capability of the algorithm, and a higher accuracy is reached compared to the raw estimates. A Gaussian mixture Sigma-point particle filter (GM-SPPF) is proposed for this 3-D system, which introduces the notion of a Gaussian Mixture Model (GMM). The number of particles in the filter is reduced by approximating the probability distribution with Gaussian components.
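
    A minimal sketch of one ingredient of the GM-SPPF mentioned above: approximating a hypothetical particle cloud of 3-D position estimates by a Gaussian Mixture Model. The particle positions and component count are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: compress a weighted particle cloud into a Gaussian mixture,
# so the mixture (weights, means, covariances) is carried to the next filter step.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Hypothetical particles around two candidate receiver positions (x, y, z) in metres
particles = np.vstack([rng.normal([1.0, 2.0, 1.5], 0.2, (300, 3)),
                       rng.normal([1.3, 2.4, 1.5], 0.2, (200, 3))])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(particles)

print("component weights:", np.round(gmm.weights_, 3))
print("component means:\n", np.round(gmm.means_, 3))
```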

  19. Gaussian Process Regression for Uncertainty Estimation on Ecosystem Data

    NASA Astrophysics Data System (ADS)

    Menzer, O.; Moffat, A.; Lasslop, G.; Reichstein, M.

    2011-12-01

    The flow of carbon between terrestrial ecosystems and the atmosphere is mainly driven by nonlinear, complex and time-lagged processes. Understanding the associated ecosystem responses and climatic feedbacks is a key challenge regarding climate change questions such as increasing atmospheric CO2 levels. Usually, the underlying relationships are implemented in models as prescribed functions which interlink numerous meteorological, radiative and gas exchange variables. In contrast, supervised Machine Learning algorithms, such as Artificial Neural Networks or Gaussian Processes, allow for an insight into the relationships directly from a data perspective. Micrometeorological, high-resolution measurements at flux towers of the FLUXNET observational network are an essential tool for obtaining quantifications of the ecosystem variables, as they continuously record e.g. CO2 exchange, solar radiation and air temperature. In order to facilitate the investigation of the interactions and feedbacks between these variables, several challenging data properties need to be taken into account: the data are noisy, multidimensional and incomplete (Moffat, Accepted). The task of estimating uncertainties in such micrometeorological measurements can be addressed by Gaussian Processes (GPs), a modern nonparametric method for nonlinear regression. The GP approach has recently been shown to be a powerful modeling tool, regardless of the input dimensionality, the degree of nonlinearity and the noise level (Rasmussen and Williams, 2006). Heteroscedastic Gaussian Processes (HGPs) are a specialized GP method for data with a varying, inhomogeneous noise variance (Goldberg et al., 1998; Kersting et al., 2007), as usually observed in CO2 flux measurements (Richardson et al., 2006). Here, by evaluating HGP performance in several artificial experiments and comparing it to existing nonlinear regression methods, we showed that their outstanding ability is to capture measurement noise levels while concurrently providing reasonable data fits under relatively few assumptions. On the basis of incomplete, half-hourly measured ecosystem data, an HGP was trained to model NEP (Net Ecosystem Production) with only the drivers PPFD (Photosynthetic Photon Flux Density) and Air Temperature. Time information was added to account for the autocorrelation in the flux measurements. Provided with a gap-filled meteorological time series, NEP and the corresponding random error estimates can then be predicted empirically at high temporal resolution. We report uncertainties in annual sums of CO2 exchange at two flux tower sites in Hainich, Germany and Hesse, France. Similar noise patterns, but different magnitudes between sites, were detected, with annual random error estimates of +/- 14.1 g C m^-2 yr^-1 and +/- 23.5 g C m^-2 yr^-1, respectively, for the year 2001. Existing models calculate uncertainties by evaluating the standard deviation of the model residuals. A comparison to the methods of Reichstein et al. (2005) and Lasslop et al. (2008) showed confidence both in the predictive uncertainties and in the annual sums modeled with the HGP approach.

  20. A nonuniform popularity-similarity optimization (nPSO) model to efficiently generate realistic complex networks with communities

    NASA Astrophysics Data System (ADS)

    Muscoloni, Alessandro; Vittorio Cannistraci, Carlo

    2018-05-01

    The investigation of the hidden metric space behind complex network topologies is a fervid topic in current network science, and the hyperbolic space is one of the most studied because it seems associated with the structural organization of many real complex systems. The popularity-similarity-optimization (PSO) model simulates how random geometric graphs grow in the hyperbolic space, generating realistic networks with clustering, small-worldness, scale-freeness and rich-clubness. However, it fails to reproduce an important feature of real complex networks, which is the community organization. The geometrical-preferential-attachment (GPA) model was recently developed in order to confer on the PSO also a soft community structure, which is obtained by forcing different angular regions of the hyperbolic disk to have a variable level of attractiveness. However, the number and size of the communities cannot be explicitly controlled in the GPA, which is a clear limitation for real applications. Here, we introduce the nonuniform PSO (nPSO) model. Differently from GPA, the nPSO generates synthetic networks in the hyperbolic space where heterogeneous angular node attractiveness is forced by sampling the angular coordinates from a tailored nonuniform probability distribution (for instance a mixture of Gaussians). The nPSO differs from GPA in three other aspects: it allows one to explicitly fix the number and size of communities; it allows one to tune their mixing property by means of the network temperature; and it is efficient at generating networks with high clustering. Several tests on the detectability of the community structure in nPSO synthetic networks and wide investigations of their structural properties confirm that the nPSO is a valid and efficient model to generate realistic complex networks with communities.
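
    A minimal sketch of the angular-coordinate sampling step described above, assuming equally spaced Gaussian components on the circle; the community count and spreads are illustrative choices, not the paper's parameterization.

```python
# Minimal sketch: sample angular coordinates for nPSO-style community structure
# from a mixture of Gaussians on the circle.
import numpy as np

rng = np.random.default_rng(6)
n_nodes, n_communities = 1000, 4
centers = np.linspace(0, 2 * np.pi, n_communities, endpoint=False)
spread = 2 * np.pi / (6 * n_communities)     # keeps communities mostly separated

community = rng.integers(0, n_communities, size=n_nodes)   # equal-sized on average
theta = rng.normal(loc=centers[community], scale=spread) % (2 * np.pi)

# theta would then be paired with radial coordinates from the PSO growth rule
# to place nodes in the hyperbolic disk.
print(np.round(theta[:10], 3))
```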

  1. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    PubMed

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  2. Temporal self-splitting of optical pulses

    NASA Astrophysics Data System (ADS)

    Ding, Chaoliang; Koivurova, Matias; Turunen, Jari; Pan, Liuzhan

    2018-05-01

    We present mathematical models for temporally and spectrally partially coherent pulse trains with Laguerre-Gaussian and Hermite-Gaussian Schell-model statistics as extensions of the standard Gaussian Schell model for pulse trains. We derive propagation formulas of both classes of pulsed fields in linearly dispersive media and in temporal optical systems. It is found that, in general, both types of fields exhibit time-domain self-splitting upon propagation. The Laguerre-Gaussian model leads to multiply peaked pulses, while the Hermite-Gaussian model leads to doubly peaked pulses, in the temporal far field (in dispersive media) or at the Fourier plane of a temporal system. In both model fields the character of the self-splitting phenomenon depends both on the degree of temporal and spectral coherence and on the power spectrum of the field.
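
    For orientation, the standard Gaussian Schell-model (GSM) pulse-train statistics that the Laguerre-Gaussian and Hermite-Gaussian models above extend are commonly written through the two-time mutual coherence function below; the symbols follow the usual convention (T_0 pulse duration, T_c coherence time) and are assumptions here, not notation taken from the paper.

```latex
% Two-time mutual coherence function of a Gaussian Schell-model pulse train
% (A_0: amplitude, T_0: pulse duration, T_c: coherence time; fully coherent
% pulses are recovered in the limit T_c -> infinity).
\Gamma(t_1, t_2) \;=\; A_0 \,
\exp\!\left[-\,\frac{t_1^{2} + t_2^{2}}{4\,T_0^{2}}\right]
\exp\!\left[-\,\frac{(t_2 - t_1)^{2}}{2\,T_c^{2}}\right]
```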

  3. Linear velocity fields in non-Gaussian models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  4. Stochastic resonance in a piecewise nonlinear model driven by multiplicative non-Gaussian noise and additive white noise

    NASA Astrophysics Data System (ADS)

    Guo, Yongfeng; Shen, Yajun; Tan, Jianguo

    2016-09-01

    The phenomenon of stochastic resonance (SR) in a piecewise nonlinear model driven by a periodic signal and correlated noises for the cases of a multiplicative non-Gaussian noise and an additive Gaussian white noise is investigated. Applying the path integral approach, the unified colored noise approximation and the two-state model theory, the analytical expression of the signal-to-noise ratio (SNR) is derived. It is found that conventional stochastic resonance exists in this system. From numerical computations we obtain that: (i) As a function of the non-Gaussian noise intensity, the SNR is increased when the non-Gaussian noise deviation parameter q is increased. (ii) As a function of the Gaussian noise intensity, the SNR is decreased when q is increased. This demonstrates that the effect of the non-Gaussian noise on SNR is different from that of the Gaussian noise in this system. Moreover, we further discuss the effect of the correlation time of the non-Gaussian noise, cross-correlation strength, the amplitude and frequency of the periodic signal on SR.

  5. Skewness in large-scale structure and non-Gaussian initial conditions

    NASA Technical Reports Server (NTRS)

    Fry, J. N.; Scherrer, Robert J.

    1994-01-01

    We compute the skewness of the galaxy distribution arising from the nonlinear evolution of arbitrary non-Gaussian initial conditions to second order in perturbation theory, including the effects of nonlinear biasing. The result contains a term identical to that for a Gaussian initial distribution plus terms which depend on the skewness and kurtosis of the initial conditions. The results are model dependent; we present calculations for several toy models. At late times, the leading contribution from the initial skewness decays away relative to the other terms and becomes increasingly unimportant, but the contribution from initial kurtosis, previously overlooked, has the same time dependence as the Gaussian terms. Observations of a linear dependence of the normalized skewness on the rms density fluctuation therefore do not necessarily rule out initially non-Gaussian models. We also show that with non-Gaussian initial conditions the first correction to linear theory for the mean square density fluctuation is larger than for Gaussian models.

  6. Arbitrage with fractional Gaussian processes

    NASA Astrophysics Data System (ADS)

    Zhang, Xili; Xiao, Weilin

    2017-04-01

    While the arbitrage opportunity in the Black-Scholes model driven by fractional Brownian motion has a long history, the arbitrage strategy in the Black-Scholes model driven by general fractional Gaussian processes is in its infancy. The development of stochastic calculus with respect to fractional Gaussian processes allowed us to study such models. In this paper, following the idea of Shiryaev (1998), an arbitrage strategy is constructed for the Black-Scholes model driven by fractional Gaussian processes, when the stochastic integral is interpreted in the Riemann-Stieltjes sense. Arbitrage opportunities in some fractional Gaussian processes, including fractional Brownian motion, sub-fractional Brownian motion, bi-fractional Brownian motion, weighted-fractional Brownian motion and tempered fractional Brownian motion, are also investigated.

  7. Optimizing placements of ground-based snow sensors for areal snow cover estimation using a machine-learning algorithm and melt-season snow-LiDAR data

    NASA Astrophysics Data System (ADS)

    Oroza, C.; Zheng, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2016-12-01

    We present a structured, analytical approach to optimize ground-sensor placements based on time-series remotely sensed (LiDAR) data and machine-learning algorithms. We focused on catchments within the Merced and Tuolumne river basins, covered by the JPL Airborne Snow Observatory LiDAR program. First, we used a Gaussian mixture model to identify representative sensor locations in the space of independent variables for each catchment. Multiple independent variables that govern the distribution of snow depth were used, including elevation, slope, and aspect. Second, we used a Gaussian process to estimate the areal distribution of snow depth from the initial set of measurements. This is a covariance-based model that also estimates the areal distribution of model uncertainty based on the independent variable weights and autocorrelation. The uncertainty raster was used to strategically add sensors to minimize model uncertainty. We assessed the temporal accuracy of the method using LiDAR-derived snow-depth rasters collected in water-year 2014. In each area, optimal sensor placements were determined using the first available snow raster for the year. The accuracy in the remaining LiDAR surveys was compared to 100 configurations of sensors selected at random. We found the accuracy of the model from the proposed placements to be higher and more consistent in each remaining survey than the average random configuration. We found that a relatively small number of sensors can be used to accurately reproduce the spatial patterns of snow depth across the basins, when placed using spatial snow data. Our approach also simplifies sensor placement. At present, field surveys are required to identify representative locations for such networks, a process that is labor intensive and provides limited guarantees on the networks' representation of catchment independent variables.
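
    A minimal sketch of the first step described above, assuming a synthetic covariate grid: a Gaussian mixture model is fitted in the space of elevation, slope and aspect, and the grid cells nearest the component means are taken as candidate sensor sites. The covariate values and sensor count are illustrative only.

```python
# Minimal sketch: GMM-based selection of representative sensor locations
# in the space of terrain covariates (elevation, slope, aspect).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
n_cells = 5000
covariates = np.column_stack([
    rng.uniform(1500, 3500, n_cells),      # elevation (m)
    rng.uniform(0, 45, n_cells),           # slope (degrees)
    rng.uniform(0, 360, n_cells),          # aspect (degrees)
])

# Standardize so no single covariate dominates the mixture fit
z = (covariates - covariates.mean(0)) / covariates.std(0)

n_sensors = 10
gmm = GaussianMixture(n_components=n_sensors, random_state=0).fit(z)

# For each mixture component, pick the real grid cell nearest to its mean
sensor_cells = [int(np.argmin(((z - m) ** 2).sum(1))) for m in gmm.means_]
print("candidate sensor cell indices:", sensor_cells)
```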

  8. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  9. Gaussian fitting for carotid and radial artery pressure waveforms: comparison between normal subjects and heart failure patients.

    PubMed

    Liu, Chengyu; Zheng, Dingchang; Zhao, Lina; Liu, Changchun

    2014-01-01

    It has been reported that Gaussian functions could accurately and reliably model both carotid and radial artery pressure waveforms (CAPW and RAPW). However, the physiological relevance of the characteristic features from the modeled Gaussian functions has been little investigated. This study thus aimed to determine characteristic features from the Gaussian functions and to make comparisons of them between normal subjects and heart failure patients. Fifty-six normal subjects and 51 patients with heart failure were studied with the CAPW and RAPW signals recorded simultaneously. The two signals were normalized first and then modeled by three positive Gaussian functions, with their peak amplitude, peak time, and half-width determined. Comparisons of these features were finally made between the two groups. Results indicated that the peak amplitude of the first Gaussian curve was significantly decreased in heart failure patients compared with normal subjects (P<0.001). Significantly increased peak amplitude of the second Gaussian curves (P<0.001) and significantly shortened peak times of the second and third Gaussian curves (both P<0.001) were also presented in heart failure patients. These results were true for both CAPW and RAPW signals, indicating the clinical significance of the Gaussian modeling, which should provide essential tools for further understanding the underlying physiological mechanisms of the artery pressure waveform.
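
    A minimal sketch of the three-Gaussian waveform decomposition described above, applied to a synthetic normalized waveform; real CAPW or RAPW beats would replace the simulated signal, and the printed width is the Gaussian width parameter, used here only as a stand-in for the half-width feature.

```python
# Minimal sketch: model a normalized pressure waveform as the sum of three
# positive Gaussians and read off peak amplitude, peak time and width.
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(t, a1, t1, w1, a2, t2, w2, a3, t3, w3):
    g = lambda a, tc, w: a * np.exp(-((t - tc) / w) ** 2)
    return g(a1, t1, w1) + g(a2, t2, w2) + g(a3, t3, w3)

t = np.linspace(0, 1, 200)                                   # one normalized beat
true = three_gaussians(t, 1.0, 0.15, 0.06, 0.5, 0.40, 0.10, 0.3, 0.65, 0.12)
waveform = true + 0.01 * np.random.default_rng(8).standard_normal(t.size)

p0 = [1.0, 0.15, 0.05, 0.5, 0.4, 0.1, 0.3, 0.65, 0.1]
lower = [0, 0, 0.01] * 3
upper = [2, 1, 0.5] * 3
params, _ = curve_fit(three_gaussians, t, waveform, p0=p0, bounds=(lower, upper))

for k in range(3):
    a, tc, w = params[3 * k:3 * k + 3]
    print(f"Gaussian {k + 1}: peak amplitude={a:.2f}, peak time={tc:.2f}, width={w:.2f}")
```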

  10. Consumers don't play dice, influence of social networks and advertisements

    NASA Astrophysics Data System (ADS)

    Groot, Robert D.

    2006-05-01

    Empirical data of supermarket sales show stylised facts that are similar to stock markets, with a broad (truncated) Lévy distribution of weekly sales differences in the baseline sales [R.D. Groot, Physica A 353 (2005) 501]. To investigate the cause of this, the influence of social interactions and advertisements are studied in an agent-based model of consumers in a social network. The influence of network topology was varied by using a small-world network, a random network and a Barabási-Albert network. The degree to which consumers value the opinion of their peers was also varied. On a small-world and random network we find a phase transition between an open market and a locked-in market that is similar to condensation in liquids. At the critical point, fluctuations become large and buying behaviour is strongly correlated. However, on the small world network the noise distribution at the critical point is Gaussian, and critical slowing down occurs which is not observed in supermarket sales. On a scale-free network, the model shows a transition between a gas-like phase and a glassy state, but at the transition point the noise amplitude is much larger than what is seen in supermarket sales. To explore the role of advertisements, a model is studied where imprints are placed on the minds of consumers that ripen when a decision for a product is made. The correct distribution of weekly sales returns follows naturally from this model, as well as the noise amplitude, the correlation time and cross-correlation of sales fluctuations. For particular parameter values, simulated sales correlation shows power-law decay in time. The model predicts that social interaction helps to prevent aversion, and that products are viewed more positively when their consumption rate is higher.

  11. Coded Cooperation for Multiway Relaying in Wireless Sensor Networks †

    PubMed Central

    Si, Zhongwei; Ma, Junyang; Thobaben, Ragnar

    2015-01-01

    Wireless sensor networks have been considered as an enabling technology for constructing smart cities. One important feature of wireless sensor networks is that the sensor nodes collaborate in some manner for communications. In this manuscript, we focus on the model of multiway relaying with full data exchange where each user wants to transmit and receive data to and from all other users in the network. We derive the capacity region for this specific model and propose a coding strategy through coset encoding. To obtain good performance with practical codes, we choose spatially-coupled LDPC (SC-LDPC) codes for the coded cooperation. In particular, for the message broadcasting from the relay, we construct multi-edge-type (MET) SC-LDPC codes by repeatedly applying coset encoding. Due to the capacity-achieving property of the SC-LDPC codes, we prove that the capacity region can theoretically be achieved by the proposed MET SC-LDPC codes. Numerical results with finite node degrees are provided, which show that the achievable rates approach the boundary of the capacity region in both binary erasure channels and additive white Gaussian channels. PMID:26131675

  12. A Semi-Supervised Learning Algorithm for Predicting Four Types MiRNA-Disease Associations by Mutual Information in a Heterogeneous Network.

    PubMed

    Zhang, Xiaotian; Yin, Jian; Zhang, Xu

    2018-03-02

    Increasing evidence suggests that dysregulation of microRNAs (miRNAs) may lead to a variety of diseases. Therefore, identifying disease-related miRNAs is a crucial problem. Currently, many computational approaches have been proposed to predict binary miRNA-disease associations. In this study, in order to predict underlying miRNA-disease association types, a semi-supervised model called the network-based label propagation algorithm is proposed to infer multiple types of miRNA-disease associations (NLPMMDA) by mutual information derived from the heterogeneous network. The NLPMMDA method integrates disease semantic similarity, miRNA functional similarity, and Gaussian interaction profile kernel similarity information of miRNAs and diseases to construct a heterogeneous network. NLPMMDA is a semi-supervised model which does not require verified negative samples. Leave-one-out cross validation (LOOCV) was implemented for four known types of miRNA-disease associations and demonstrated the reliable performance of our method. Moreover, case studies of lung cancer and breast cancer confirmed effective performance of NLPMMDA to predict novel miRNA-disease associations and their association types.
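
    The Gaussian interaction profile kernel mentioned above has a simple closed form; a minimal sketch (assuming a binary miRNA-disease association matrix A and the usual bandwidth normalization; variable names are illustrative) is:

      import numpy as np

      def gip_kernel(A, gamma_prime=1.0):
          # Gaussian interaction profile kernel: K(i, j) = exp(-gamma * ||A[i] - A[j]||^2),
          # with gamma normalized by the mean squared norm of the interaction profiles.
          norms_sq = np.sum(A**2, axis=1)
          gamma = gamma_prime / np.mean(norms_sq)
          sq_dist = norms_sq[:, None] + norms_sq[None, :] - 2 * A @ A.T
          return np.exp(-gamma * np.maximum(sq_dist, 0.0))

      A = np.random.randint(0, 2, size=(20, 15)).astype(float)    # toy miRNA-disease association matrix
      K_mirna = gip_kernel(A)        # miRNA-miRNA similarity from row profiles
      K_disease = gip_kernel(A.T)    # disease-disease similarity from column profiles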

  13. Coded Cooperation for Multiway Relaying in Wireless Sensor Networks.

    PubMed

    Si, Zhongwei; Ma, Junyang; Thobaben, Ragnar

    2015-06-29

    Wireless sensor networks have been considered as an enabling technology for constructing smart cities. One important feature of wireless sensor networks is that the sensor nodes collaborate in some manner for communications. In this manuscript, we focus on the model of multiway relaying with full data exchange where each user wants to transmit and receive data to and from all other users in the network. We derive the capacity region for this specific model and propose a coding strategy through coset encoding. To obtain good performance with practical codes, we choose spatially-coupled LDPC (SC-LDPC) codes for the coded cooperation. In particular, for the message broadcasting from the relay, we construct multi-edge-type (MET) SC-LDPC codes by repeatedly applying coset encoding. Due to the capacity-achieving property of the SC-LDPC codes, we prove that the capacity region can theoretically be achieved by the proposed MET SC-LDPC codes. Numerical results with finite node degrees are provided, which show that the achievable rates approach the boundary of the capacity region in both binary erasure channels and additive white Gaussian channels.

  14. Deep RNNs for video denoising

    NASA Astrophysics Data System (ADS)

    Chen, Xinyuan; Song, Li; Yang, Xiaokang

    2016-09-01

    Video denoising can be described as the problem of mapping a fixed-length sequence of noisy frames to a clean one. We propose a deep architecture based on Recurrent Neural Networks (RNNs) for video denoising. The model learns a patch-based end-to-end mapping between noisy and clean video sequences: it takes corrupted video sequences as input and outputs clean ones. Our deep network, which we refer to as deep Recurrent Neural Networks (deep RNNs or DRNNs), stacks RNN layers where each layer receives the hidden state of the previous layer as input. Experiments show that (i) the recurrent architecture extracts motion information across the temporal domain, which benefits video denoising; (ii) the deep architecture has enough capacity to express the mapping between corrupted input videos and clean output videos; and (iii) the model generalizes, learning different mappings for videos corrupted by different types of noise (e.g., Poisson-Gaussian noise). By training on large video databases, we are able to compete with some existing video denoising methods.

  15. Bootstrapping Least Squares Estimates in Biochemical Reaction Networks

    PubMed Central

    Linder, Daniel F.

    2015-01-01

    The paper proposes new computational methods for computing confidence bounds for the least squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large-volume limit of a reaction network, to the network's partially observed trajectory, treated as a continuous-time, pure jump Markov process. In the large-volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated, since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769
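
    The core least-squares step, fitting the deterministic ODE limit to an observed trajectory, can be sketched for a toy two-species mass-action network; the reaction system, noise level and the bootstrap comment at the end are illustrative assumptions, not the LINE 1 model from the paper.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      def rhs(t, x, k1, k2):
          # Toy mass-action network A -> B (rate k1), B -> A (rate k2).
          a, b = x
          return [-k1 * a + k2 * b, k1 * a - k2 * b]

      t_obs = np.linspace(0, 10, 30)
      true = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_obs, args=(0.8, 0.3)).y
      data = true + 0.02 * np.random.default_rng(4).normal(size=true.shape)   # noisy "observed" trajectory

      def residuals(k):
          sol = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_obs, args=tuple(k))
          return (sol.y - data).ravel()

      lse = least_squares(residuals, x0=[0.5, 0.5], bounds=(0, np.inf)).x     # point estimates of (k1, k2)
      # A parametric bootstrap would re-simulate noisy trajectories around the fitted ODE
      # (e.g., via the linear noise approximation) and refit k on each replicate to get bounds.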

  16. Effects of scale-dependent non-Gaussianity on cosmological structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LoVerde, Marilena; Miller, Amber; Shandera, Sarah

    2008-04-15

    The detection of primordial non-Gaussianity could provide a powerful means to test various inflationary scenarios. Although scale-invariant non-Gaussianity (often described by the f_NL formalism) is currently best constrained by the CMB, single-field models with changing sound speed can have strongly scale-dependent non-Gaussianity. Such models could evade the CMB constraints but still have important effects at scales responsible for the formation of cosmological objects such as clusters and galaxies. We compute the effect of scale-dependent primordial non-Gaussianity on cluster number counts as a function of redshift, using a simple ansatz to model scale-dependent features. We forecast constraints on these models achievable with forthcoming datasets. We also examine consequences for the galaxy bispectrum. Our results are relevant for the Dirac-Born-Infeld model of brane inflation, where the scale dependence of the non-Gaussianity is directly related to the geometry of the extra dimensions.

  17. Gaussian statistics of the cosmic microwave background: Correlation of temperature extrema in the COBE DMR two-year sky maps

    NASA Technical Reports Server (NTRS)

    Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.

    1995-01-01

    We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a(sub lm) are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.

  18. Comparison of non-Gaussian and Gaussian diffusion models of diffusion weighted imaging of rectal cancer at 3.0 T MRI.

    PubMed

    Zhang, Guangwen; Wang, Shuangshuang; Wen, Didi; Zhang, Jing; Wei, Xiaocheng; Ma, Wanling; Zhao, Weiwei; Wang, Mian; Wu, Guosheng; Zhang, Jinsong

    2016-12-09

    Water molecular diffusion in in vivo tissue is much more complicated than the Gaussian behavior assumed by conventional diffusion analysis. We aimed to compare non-Gaussian diffusion models of diffusion-weighted imaging (DWI), including intra-voxel incoherent motion (IVIM) and the stretched-exponential model (SEM), with the Gaussian diffusion model at 3.0 T MRI in patients with rectal cancer, and to determine the optimal model for investigating water diffusion properties and characterizing rectal carcinoma. Fifty-nine consecutive patients with pathologically confirmed rectal adenocarcinoma underwent DWI with 16 b-values on a 3.0 T MRI system. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models (IVIM-mono, IVIM-bi and SEM) in primary tumor and adjacent normal rectal tissue. Parameters of standard apparent diffusion coefficient (ADC), slow and fast ADC, fraction of fast ADC (f), α value and distributed diffusion coefficient (DDC) were generated and compared between the tumor and normal tissues. The SEM exhibited the best fit to the actual DWI signal in rectal cancer and the normal rectal wall (R² = 0.998 and 0.999, respectively). The DDC achieved a relatively high area under the curve (AUC = 0.980) in differentiating tumor from normal rectal wall. Non-Gaussian diffusion models could assess tissue properties more accurately than the ADC-derived Gaussian diffusion model. SEM may be used as a potential optimal model for characterization of rectal cancer.
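
    For reference, the stretched-exponential model for a single voxel reduces to S(b) = S0 * exp(-(b * DDC)^α) and can be fitted directly; the b-values and signal below are synthetic placeholders rather than patient data.

      import numpy as np
      from scipy.optimize import curve_fit

      def stretched_exponential(b, s0, ddc, alpha):
          # Stretched-exponential DWI model: S(b) = S0 * exp(-(b * DDC)**alpha).
          return s0 * np.exp(-(b * ddc)**alpha)

      b_values = np.array([0, 10, 25, 50, 100, 200, 400, 800, 1000, 1500, 2000], float)  # s/mm^2 (illustrative)
      signal = stretched_exponential(b_values, 1.0, 1.2e-3, 0.85)                        # synthetic signal
      popt, _ = curve_fit(stretched_exponential, b_values, signal,
                          p0=[1.0, 1.0e-3, 0.9], bounds=([0, 1e-5, 0.1], [2, 1e-1, 1.0]))
      s0_fit, ddc_fit, alpha_fit = popt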

  19. Text Authorship Identified Using the Dynamics of Word Co-Occurrence Networks

    PubMed Central

    Akimushkin, Camilo; Amancio, Diego Raphael; Oliveira, Osvaldo Novais

    2017-01-01

    Automatic identification of authorship in disputed documents has benefited from complex network theory as this approach does not require human expertise or detailed semantic knowledge. Networks modeling entire books can be used to discriminate texts from different sources and understand network growth mechanisms, but only a few studies have probed the suitability of networks in modeling small chunks of text to grasp stylistic features. In this study, we introduce a methodology based on the dynamics of word co-occurrence networks representing written texts to classify a corpus of 80 texts by 8 authors. The texts were divided into sections with equal number of linguistic tokens, from which time series were created for 12 topological metrics. Since 73% of all series were stationary (ARIMA(p, 0, q)) and the remaining were integrable of first order (ARIMA(p, 1, q)), probability distributions could be obtained for the global network metrics. The metrics exhibit bell-shaped non-Gaussian distributions, and therefore distribution moments were used as learning attributes. With an optimized supervised learning procedure based on a nonlinear transformation performed by Isomap, 71 out of 80 texts were correctly classified using the K-nearest neighbors algorithm, i.e. a remarkable 88.75% author matching success rate was achieved. Hence, purely dynamic fluctuations in network metrics can characterize authorship, thus paving the way for a robust description of large texts in terms of small evolving networks. PMID:28125703

  20. Evaluation of non-Gaussian diffusion in cardiac MRI.

    PubMed

    McClymont, Darryl; Teh, Irvin; Carruth, Eric; Omens, Jeffrey; McCulloch, Andrew; Whittington, Hannah J; Kohl, Peter; Grau, Vicente; Schneider, Jürgen E

    2017-09-01

    The diffusion tensor model assumes Gaussian diffusion and is widely applied in cardiac diffusion MRI. However, diffusion in biological tissue deviates from a Gaussian profile as a result of hindrance and restriction from cell and tissue microstructure, and may be quantified better by non-Gaussian modeling. The aim of this study was to investigate non-Gaussian diffusion in healthy and hypertrophic hearts. Thirteen rat hearts (five healthy, four sham, four hypertrophic) were imaged ex vivo. Diffusion-weighted images were acquired at b-values up to 10,000 s/mm². Models of diffusion were fit to the data and ranked based on the Akaike information criterion. The diffusion tensor was ranked best at b-values up to 2000 s/mm² but reflected the signal poorly in the high b-value regime, in which the best model was a non-Gaussian "beta distribution" model. Although there was considerable overlap in apparent diffusivities between the healthy, sham, and hypertrophic hearts, diffusion kurtosis and skewness in the hypertrophic hearts were more than 20% higher in the sheetlet and sheetlet-normal directions. Non-Gaussian diffusion models have a higher sensitivity for the detection of hypertrophy compared with the Gaussian model. In particular, diffusion kurtosis may serve as a useful biomarker for characterization of disease and remodeling in the heart. Magn Reson Med 78:1174-1186, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine.
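
    In the single-direction case the kurtosis representation used in this kind of analysis is ln S(b) = ln S0 - b*D + (1/6)*b^2*D^2*K, so D and the apparent kurtosis K can be read off a quadratic fit in b. The sketch below uses synthetic values only and restricts b to the range where the expansion is valid; it is not the authors' fitting pipeline.

      import numpy as np

      b = np.array([0, 250, 500, 750, 1000, 1500, 2000, 2500], float)   # s/mm^2 (illustrative)
      D_true, K_true, S0 = 1.0e-3, 0.8, 1.0
      S = S0 * np.exp(-b * D_true + (b**2) * (D_true**2) * K_true / 6.0)  # synthetic kurtosis signal

      coeffs = np.polyfit(b, np.log(S), 2)       # quadratic in b: coeffs = [c2, c1, c0]
      D_fit = -coeffs[1]                         # c1 = -D
      K_fit = 6.0 * coeffs[0] / D_fit**2         # c2 = D^2 * K / 6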

  1. The Gaussian atmospheric transport model and its sensitivity to the joint frequency distribution and parametric variability.

    PubMed

    Hamby, D M

    2002-01-01

    Reconstructed meteorological data are often used in some form of long-term wind trajectory models for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-average concentration model was found to be most sensitive to wind speed, followed by the vertical dispersion parameter (σz), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Precision of the frequency data appears to be most important to meteorological inputs when calculations are made for near-field receptors, increasing as stack height increases.

  2. Fat-tailed fluctuations in the size of organizations: the role of social influence.

    PubMed

    Mondani, Hernan; Holme, Petter; Liljeros, Fredrik

    2014-01-01

    Organizational growth processes have consistently been shown to exhibit a fatter-than-Gaussian growth-rate distribution in a variety of settings. Long periods of relatively small changes are interrupted by sudden changes at all size scales. Such extreme events can have important consequences for the development of biological and socio-economic systems. Existing models do not derive this aggregated pattern from agent actions at the micro level. We develop an agent-based simulation model on a social network. Our point of departure is a model by Schwarzkopf et al. on a scale-free network. We reproduce the fat-tailed pattern out of internal dynamics alone, and also find that it is robust with respect to network topology. Thus, the social network and the local interactions are a prerequisite for generating the pattern, but not the network topology itself. We further extend the model with a parameter δ that weights the relative fraction of an individual's neighbours belonging to a given organization, representing a contextual aspect of social influence. In the lower limit of this parameter, the fraction is irrelevant and choice of organization is random. In the upper limit of the parameter, the largest fraction quickly dominates, leading to a winner-takes-all situation. We recover the real pattern as an intermediate case between these two extremes.

  3. Reconstruction of Complex Network based on the Noise via QR Decomposition and Compressed Sensing.

    PubMed

    Li, Lixiang; Xu, Dafei; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian

    2017-11-08

    It is generally known that the states of network nodes are stable and have strong correlations in a linear network system. We find that without the control input, the method of compressed sensing cannot succeed in reconstructing complex networks in which the states of nodes are generated through the linear network system. However, noise can drive the dynamics between nodes to break the stability of the system state. Therefore, a new method integrating QR decomposition and compressed sensing is proposed to solve the reconstruction problem of complex networks with the assistance of input noise. The state matrix of the system is decomposed by QR decomposition. We construct the measurement matrix with the aid of Gaussian noise so that the sparse input matrix can be reconstructed by compressed sensing. We also discover that noise can build a bridge between the dynamics and the topological structure. Experiments are presented to show that the proposed method is more accurate and more efficient at reconstructing four model networks and six real networks, in comparisons between the proposed method and compressed sensing alone. In addition, the proposed method can reconstruct not only sparse complex networks but also dense ones.
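
    The reconstruction step itself amounts to sparse recovery of each node's incoming links from noise-driven state measurements. A minimal stand-in (generic Lasso-based compressed sensing on synthetic data, not the authors' QR-assisted procedure) looks like:

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(5)
      n_nodes, n_samples = 50, 30
      a_true = np.zeros(n_nodes)
      a_true[rng.choice(n_nodes, 4, replace=False)] = rng.normal(size=4)   # sparse incoming links of one node
      X = rng.normal(size=(n_samples, n_nodes))      # noise-driven node states act as the measurement matrix
      Y = X @ a_true + 0.01 * rng.normal(size=n_samples)

      a_hat = Lasso(alpha=0.01).fit(X, Y).coef_      # sparse estimate of that node's row of the coupling matrix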

  4. Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.

    PubMed

    Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip

    2015-11-01

    Carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such a release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique to obtain the dispersion parameters in order to achieve a quick estimation of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the necessity to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles for various atmospheric stability classes (ASCs), different 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over a flat horizontal terrain, both with low and high roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up. Optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
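
    The underlying expression being tuned here is the standard reflected point-source Gaussian plume formula. A sketch with power-law dispersion parameters follows; the coefficients are placeholders for illustration, not the optimised values reported in the paper.

      import numpy as np

      def gaussian_plume(Q, u, x, y, z, H, stability='D'):
          # Ground-reflected Gaussian plume concentration for a continuous point source.
          # Q: emission rate (g/s), u: wind speed (m/s), H: effective release height (m).
          params = {'D': (0.08, 0.90, 0.06, 0.85)}             # ay, by, az, bz (assumed placeholder values)
          ay, by, az, bz = params[stability]
          sigma_y, sigma_z = ay * x**by, az * x**bz            # power-law fits sigma = a * x**b
          lateral = np.exp(-y**2 / (2 * sigma_y**2))
          vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                      + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground-reflection term
          return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

      c = gaussian_plume(Q=50.0, u=3.0, x=500.0, y=0.0, z=1.5, H=2.0)   # g/m^3 at a downwind receptor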

  5. Detection of whale calls in noise: performance comparison between a beluga whale, human listeners, and a neural network.

    PubMed

    Erbe, C

    2000-07-01

    This article examines the masking by anthropogenic noise of beluga whale calls. Results from human masking experiments and a software backpropagation neural network are compared to the performance of a trained beluga whale. The goal was to find an accurate, reliable, and fast model to replace lengthy and expensive animal experiments. A beluga call was masked by three types of noise, an icebreaker's bubbler system and propeller noise, and ambient arctic ice-cracking noise. Both the human experiment and the neural network successfully modeled the beluga data in the sense that they classified the noises in the same order from strongest to weakest masking as the whale and with similar call-detection thresholds. The neural network slightly outperformed the humans. Both models were then used to predict the masking of a fourth type of noise, Gaussian white noise. Their prediction ability was judged by returning to the aquarium to measure masked-hearing thresholds of a beluga in white noise. Both models and the whale identified bubbler noise as the strongest masker, followed by ramming, then white noise. Natural ice-cracking noise masked the least. However, the humans and the neural network slightly overpredicted the amount of masking for white noise. These predictions neglect individual variation among belugas, because only one animal could be trained. Comparing the human model to the neural network model, the latter has the advantage of objectivity, reproducibility of results, and efficiency, particularly if the interference of a large number of signals and noise is to be examined.

  6. Dispersion in Rectangular Networks: Effective Diffusivity and Large-Deviation Rate Function

    NASA Astrophysics Data System (ADS)

    Tzella, Alexandra; Vanneste, Jacques

    2016-09-01

    The dispersion of a diffusive scalar in a fluid flowing through a network has many applications, including biological flows, porous media, water supply, and urban pollution. Motivated by this, we develop a large-deviation theory that predicts the evolution of the concentration of a scalar released in a rectangular network in the limit of large time t ≫ 1. This theory provides an approximation for the concentration that remains valid for large distances from the center of mass, specifically for distances up to O(t) and thus much beyond the O(t^{1/2}) range where a standard Gaussian approximation holds. A byproduct of the approach is a closed-form expression for the effective diffusivity tensor that governs this Gaussian approximation. Monte Carlo simulations of Brownian particles confirm the large-deviation results and demonstrate their effectiveness in describing the scalar distribution when t is only moderately large.

  7. Multiscale virtual particle based elastic network model (MVP-ENM) for normal mode analysis of large-sized biomolecules.

    PubMed

    Xia, Kelin

    2017-12-20

    In this paper, a multiscale virtual particle based elastic network model (MVP-ENM) is proposed for the normal mode analysis of large-sized biomolecules. The multiscale virtual particle (MVP) model is proposed for the discretization of biomolecular density data. With this model, large-sized biomolecular structures can be coarse-grained into virtual particles such that a balance between model accuracy and computational cost can be achieved. An elastic network is constructed by assuming "connections" between virtual particles. The connection is described by a special harmonic potential function, which considers the influence from both the mass distributions and distance relations of the virtual particles. Two independent models, i.e., the multiscale virtual particle based Gaussian network model (MVP-GNM) and the multiscale virtual particle based anisotropic network model (MVP-ANM), are proposed. It has been found that in the Debye-Waller factor (B-factor) prediction, the results from our MVP-GNM with a high resolution are as good as the ones from GNM. Even with low resolutions, our MVP-GNM can still capture the global behavior of the B-factor very well, with mismatches predominantly from the regions with large B-factor values. Further, it has been demonstrated that the low-frequency eigenmodes from our MVP-ANM are highly consistent with the ones from ANM even with very low resolutions and a coarse grid. Finally, the great advantage of the MVP-ANM for large-sized biomolecules has been demonstrated using two poliovirus structures.
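
    For comparison, the classic single-scale GNM B-factor calculation that MVP-GNM is benchmarked against can be written in a few lines; the coordinates and the 7 Å cutoff below are illustrative, not taken from the paper.

      import numpy as np

      def gnm_bfactors(coords, cutoff=7.0):
          # Classic Gaussian network model: build the Kirchhoff (connectivity) matrix from a
          # distance cutoff on C-alpha coordinates, then read mean-square fluctuations from the
          # diagonal of its pseudo-inverse (relative B-factors, up to a kT/gamma scale factor).
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
          kirchhoff = -(d < cutoff).astype(float)
          np.fill_diagonal(kirchhoff, 0.0)
          np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))
          fluctuations = np.diag(np.linalg.pinv(kirchhoff))
          return fluctuations / fluctuations.mean()

      coords = np.random.rand(100, 3) * 30.0     # toy coordinates standing in for a real structure
      b_rel = gnm_bfactors(coords)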

  8. A Bernoulli Gaussian Watermark for Detecting Integrity Attacks in Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weerakkody, Sean; Ozel, Omur; Sinopoli, Bruno

    We examine the merit of Bernoulli packet drops in actively detecting integrity attacks on control systems. The aim is to detect an adversary who delivers fake sensor measurements to a system operator in order to conceal their effect on the plant. Physical watermarks, or noisy additive Gaussian inputs, have been previously used to detect several classes of integrity attacks in control systems. In this paper, we consider the analysis and design of Gaussian physical watermarks in the presence of packet drops at the control input. On one hand, this enables analysis in a more general network setting. On the other hand, we observe that in certain cases, Bernoulli packet drops can improve detection performance relative to a purely Gaussian watermark. This motivates the joint design of a Bernoulli-Gaussian watermark which incorporates both an additive Gaussian input and a Bernoulli drop process. We characterize the effect of such a watermark on system performance as well as attack detectability in two separate design scenarios. Here, we consider a correlation detector for attack recognition. We then propose efficiently solvable optimization problems to intelligently select parameters of the Gaussian input and the Bernoulli drop process while addressing security and performance trade-offs. Finally, we provide numerical results which illustrate that a watermark with packet drops can indeed outperform a Gaussian watermark.

  9. 'A device for being able to book P&L': the organizational embedding of the Gaussian copula.

    PubMed

    MacKenzie, Donald; Spears, Taylor

    2014-06-01

    This article, the second of two articles on the Gaussian copula family of models, discusses the attitude of 'quants' (modellers) to these models, showing that contrary to some accounts, those quants were not 'model dopes' who uncritically accepted the outputs of the models. Although sometimes highly critical of Gaussian copulas - even 'othering' them as not really being models - they nevertheless nearly all kept using them, an outcome we explain with reference to the embedding of these models in inter- and intra-organizational processes: communication, risk control and especially the setting of bonuses. The article also examines the role of Gaussian copula models in the 2007-2008 global crisis and in a 2005 episode known as 'the correlation crisis'. We end with the speculation that all widely used derivatives models (and indeed the evaluation culture in which they are embedded) help generate inter-organizational co-ordination, and all that is special in this respect about the Gaussian copula is that its status as 'other' makes this role evident.

  10. Performance Evaluation of Satellite Communication Systems Operating in the Q/V/W Bands

    DTIC Science & Technology

    2013-06-30

    Only figure-caption fragments are recoverable from this report excerpt: Figure 3-21, decorrelation with distance of the cloud liquid water content (blue line = original MODIS data, red line = underlying Gaussian process) and of rainfall (NIMROD rain rate data); the spatial correlation of rainfall was obtained from an extensive set of rain fields collected by the NIMROD weather radar network [Luini and Capsoni, 2012].

  11. The effect of noise and lipid signals on determination of Gaussian and non-Gaussian diffusion parameters in skeletal muscle.

    PubMed

    Cameron, Donnie; Bouhrara, Mustapha; Reiter, David A; Fishbein, Kenneth W; Choi, Seongjin; Bergeron, Christopher M; Ferrucci, Luigi; Spencer, Richard G

    2017-07-01

    This work characterizes the effect of lipid and noise signals on muscle diffusion parameter estimation in several conventional and non-Gaussian models, the ultimate objectives being to characterize popular fat suppression approaches for human muscle diffusion studies, to provide simulations to inform experimental work and to report normative non-Gaussian parameter values. The models investigated in this work were the Gaussian monoexponential and intravoxel incoherent motion (IVIM) models, and the non-Gaussian kurtosis and stretched exponential models. These were evaluated via simulations, and in vitro and in vivo experiments. Simulations were performed using literature input values, modeling fat contamination as an additive baseline to data, whereas phantom studies used a phantom containing aliphatic and olefinic fats and muscle-like gel. Human imaging was performed in the hamstring muscles of 10 volunteers. Diffusion-weighted imaging was applied with spectral attenuated inversion recovery (SPAIR), slice-select gradient reversal and water-specific excitation fat suppression, alone and in combination. Measurement bias (accuracy) and dispersion (precision) were evaluated, together with intra- and inter-scan repeatability. Simulations indicated that noise in magnitude images resulted in <6% bias in diffusion coefficients and non-Gaussian parameters (α, K), whereas baseline fitting minimized fat bias for all models, except IVIM. In vivo, popular SPAIR fat suppression proved inadequate for accurate parameter estimation, producing non-physiological parameter estimates without baseline fitting and large biases when it was used. Combining all three fat suppression techniques and fitting data with a baseline offset gave the best results of all the methods studied for both Gaussian diffusion and, overall, for non-Gaussian diffusion. It produced consistent parameter estimates for all models, except IVIM, and highlighted non-Gaussian behavior perpendicular to muscle fibers (α ~ 0.95, K ~ 3.1). These results show that effective fat suppression is crucial for accurate measurement of non-Gaussian diffusion parameters, and will be an essential component of quantitative studies of human muscle quality. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  12. Adaptive Filtering Using Recurrent Neural Networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.

  13. Moving target detection method based on improved Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

    The Gaussian mixture model (GMM) is often employed to build the background model in background-subtraction methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model. According to the gray-level convergence of each pixel, the number of Gaussian distributions used to learn and update the background model is chosen adaptively. A morphological reconstruction method is adopted to eliminate shadows. Experiments show that the proposed method is robust, detects targets effectively, and adapts well; even in special cases such as large grayscale changes, it still performs well.
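
    A minimal GMM background-subtraction loop of the kind this method builds on can be put together with OpenCV's MOG2 model. The sketch only illustrates the baseline approach; the adaptive component selection and morphological-reconstruction steps proposed in the paper are not reproduced, and the input file name is hypothetical.

      import cv2

      cap = cv2.VideoCapture('input_video.avi')          # hypothetical input file
      subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                      detectShadows=True)
      kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          mask = subtractor.apply(frame)                 # per-pixel GMM foreground mask
          mask[mask == 127] = 0                          # drop pixels flagged as shadow (MOG2 default value)
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # clean small artefacts
      cap.release()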

  14. The properties of the anti-tumor model with coupling non-Gaussian noise and Gaussian colored noise

    NASA Astrophysics Data System (ADS)

    Guo, Qin; Sun, Zhongkui; Xu, Wei

    2016-05-01

    The anti-tumor model with correlation between multiplicative non-Gaussian noise and additive Gaussian-colored noise has been investigated in this paper. The behaviors of the stationary probability distribution demonstrate that the multiplicative non-Gaussian noise plays a dual role in the development of tumor and an appropriate additive Gaussian colored noise can lead to a minimum of the mean value of tumor cell population. The mean first passage time is calculated to quantify the effects of noises on the transition time of tumors between the stable states. An increase in both the non-Gaussian noise intensity and the departure from the Gaussian noise can accelerate the transition from the disease state to the healthy state. On the contrary, an increase in cross-correlated degree will slow down the transition. Moreover, the correlation time can enhance the stability of the disease state.

  15. Probability density and exceedance rate functions of locally Gaussian turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1989-01-01

    A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.

  16. Gaussian mixture models as flux prediction method for central receivers

    NASA Astrophysics Data System (ADS)

    Grobler, Annemarie; Gauché, Paul; Smit, Willie

    2016-05-01

    Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.

  17. Gaussian process regression for tool wear prediction

    NASA Astrophysics Data System (ADS)

    Kong, Dongdong; Chen, Yongjie; Li, Ning

    2018-05-01

    To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function based kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increment technique, proposed here for feature fusion. The tool wear predictive value and the corresponding confidence interval are both provided by the GPR model. Moreover, GPR performs better than artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy, since Gaussian noise can be modeled quantitatively in the GPR model. However, the existence of noise affects the stability of the confidence interval seriously. In this work, the proposed KPCA_IRBF technique helps to remove the noise and weaken its negative effects, so that the confidence interval is compressed greatly and smoothed, which is conducive to monitoring tool wear accurately. Moreover, the selection of the kernel parameter in KPCA_IRBF can be carried out easily over a much larger selectable region than with the conventional KPCA_RBF technique, which helps to improve the efficiency of model construction. Ten sets of cutting tests were conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately by the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.
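
    The GPR stage, with measurement noise modelled explicitly through the kernel and a confidence interval returned alongside the prediction, can be sketched with scikit-learn; the features and wear values below are random stand-ins for the KPCA_IRBF-fused features, not experimental data.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 4))                                   # fused feature vectors (synthetic)
      y = 0.3 * X[:, 0] - 0.1 * X[:, 1] + 0.02 * rng.normal(size=60) # flank wear width (synthetic, mm)

      kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)   # noise modelled explicitly
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

      X_new = rng.normal(size=(5, 4))
      mean, std = gpr.predict(X_new, return_std=True)
      lower, upper = mean - 1.96 * std, mean + 1.96 * std            # 95% confidence interval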

  18. Empirical research on complex networks modeling of combat SoS based on data from real war-game, Part I: Statistical characteristics

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kou, Yingxin; Li, Zhanwu; Xu, An; Wu, Cheng

    2018-01-01

    We build a complex networks model of a combat System-of-Systems (SoS) based on empirical data from a real war-game. The model is a combination of a command & control (C2) subnetwork, a sensors subnetwork, an influencers subnetwork and a logistical support subnetwork, each with its own components and statistical characteristics. The C2 subnetwork is the core of the whole combat SoS; it has a hierarchical structure with no modularity, and its robustness is strong enough to maintain normal operation after any two nodes are destroyed. The sensors and influencers subnetworks act like the sense organs and limbs of the combat SoS; both are flat modular networks whose degree distributions obey GEV and power-law distributions, respectively. The communication network is the combination of all subnetworks; it is an assortative small-world network with a core-periphery structure. The Intelligence & Communication Stations/Command Center integrated with C2 nodes in the first three levels act as hub nodes, while all fourth-level C2 nodes, sensors, influencers and logistical support nodes have communication capability and act as periphery nodes. Its degree distribution obeys an exponential distribution at the beginning, a Gaussian distribution in the middle, and a power-law distribution at the end, and its path lengths obey a GEV distribution. The betweenness, closeness and eigenvector centrality distributions are also analyzed to measure the vulnerability of nodes.

  19. Sparse network-based models for patient classification using fMRI

    PubMed Central

    Rosa, Maria J.; Portugal, Liana; Hahn, Tim; Fallgatter, Andreas J.; Garrido, Marta I.; Shawe-Taylor, John; Mourao-Miranda, Janaina

    2015-01-01

    Pattern recognition applied to whole-brain neuroimaging data, such as functional Magnetic Resonance Imaging (fMRI), has proved successful at discriminating psychiatric patients from healthy participants. However, predictive patterns obtained from whole-brain voxel-based features are difficult to interpret in terms of the underlying neurobiology. Many psychiatric disorders, such as depression and schizophrenia, are thought to be brain connectivity disorders. Therefore, pattern recognition based on network models might provide deeper insights and potentially more powerful predictions than whole-brain voxel-based approaches. Here, we build a novel sparse network-based discriminative modeling framework, based on Gaussian graphical models and L1-norm regularized linear Support Vector Machines (SVM). In addition, the proposed framework is optimized in terms of both predictive power and reproducibility/stability of the patterns. Our approach aims to provide better pattern interpretation than voxel-based whole-brain approaches by yielding stable brain connectivity patterns that underlie discriminative changes in brain function between the groups. We illustrate our technique by classifying patients with major depressive disorder (MDD) and healthy participants, in two (event- and block-related) fMRI datasets acquired while participants performed a gender discrimination and emotional task, respectively, during the visualization of emotional valent faces. PMID:25463459
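
    A stripped-down version of this pipeline, estimating a sparse Gaussian graphical model per subject and feeding the connectivity weights to an L1-regularized linear SVM, might look as follows; random data stand in for the fMRI time series, and the regularization settings are illustrative rather than those used in the paper.

      import numpy as np
      from sklearn.covariance import GraphicalLassoCV
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(1)
      time_series = [rng.normal(size=(120, 10)) for _ in range(20)]   # (timepoints, regions) per subject
      labels = np.array([0] * 10 + [1] * 10)                          # patient vs. control

      features = []
      for ts in time_series:
          precision = GraphicalLassoCV().fit(ts).precision_           # sparse Gaussian graphical model
          iu = np.triu_indices_from(precision, k=1)
          features.append(precision[iu])                              # upper-triangle connectivity features
      features = np.array(features)

      clf = LinearSVC(penalty='l1', dual=False, C=0.1).fit(features, labels)   # sparse discriminative pattern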

  20. Multivariate Bayesian analysis of Gaussian, right censored Gaussian, ordered categorical and binary traits using Gibbs sampling

    PubMed Central

    Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just

    2003-01-01

    A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531

  1. Gaussian Mixture Model of Heart Rate Variability

    PubMed Central

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
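
    Fitting such a three-component Gaussian mixture to a sequence of RR intervals is straightforward with scikit-learn; the RR data below are synthetic and only illustrate the procedure.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(2)
      rr = np.concatenate([rng.normal(0.80, 0.02, 500),     # synthetic RR intervals (seconds)
                           rng.normal(0.90, 0.04, 300),
                           rng.normal(1.00, 0.03, 200)]).reshape(-1, 1)

      gmm = GaussianMixture(n_components=3, covariance_type='full').fit(rr)
      weights = gmm.weights_                                 # mixing proportions of the three Gaussians
      means = gmm.means_.ravel()                             # component means
      stds = np.sqrt(gmm.covariances_.ravel())               # component standard deviations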

  2. Improved system identification using artificial neural networks and analysis of individual differences in responses of an identified neuron.

    PubMed

    Costalago Meruelo, Alicia; Simpson, David M; Veres, Sandor M; Newland, Philip L

    2016-03-01

    Mathematical modelling is used routinely to understand the coding properties and dynamics of responses of neurons and neural networks. Here we analyse the effectiveness of Artificial Neural Networks (ANNs) as a modelling tool for motor neuron responses. We used ANNs to model the synaptic responses of an identified motor neuron, the fast extensor motor neuron, of the desert locust in response to displacement of a sensory organ, the femoral chordotonal organ, which monitors movements of the tibia relative to the femur of the leg. The aim of the study was threefold: first to determine the potential value of ANNs as tools to model and investigate neural networks, second to understand the generalisation properties of ANNs across individuals and to different input signals and third, to understand individual differences in responses of an identified neuron. A metaheuristic algorithm was developed to design the ANN architectures. The performance of the models generated by the ANNs was compared with those generated through previous mathematical models of the same neuron. The results suggest that ANNs are significantly better than LNL and Wiener models in predicting specific neural responses to Gaussian White Noise, but not significantly different when tested with sinusoidal inputs. They are also able to predict responses of the same neuron in different individuals irrespective of which animal was used to develop the model, although notable differences between some individuals were evident. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Propagating waves can explain irregular neural dynamics.

    PubMed

    Keane, Adam; Gong, Pulin

    2015-01-28

    Cortical neurons in vivo fire quite irregularly. Previous studies about the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. On the other hand, these patterns only emerge from the network with balanced excitation and inhibition; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form dynamical coherent structures, such as propagating waves at the population level. Copyright © 2015 the authors 0270-6474/15/351591-15$15.00/0.

  4. Continuous-variable entanglement distillation of non-Gaussian mixed states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong Ruifang; Lassen, Mikael; Department of Physics, Technical University of Denmark, Building 309, DK-2800 Lyngby

    2010-07-15

    Many different quantum-information communication protocols such as teleportation, dense coding, and entanglement-based quantum key distribution are based on the faithful transmission of entanglement between distant locations in an optical network. The distribution of entanglement in such a network is, however, hampered by loss and noise that is inherent in all practical quantum channels. Thus, to enable faithful transmission one must resort to the protocol of entanglement distillation. In this paper we present a detailed theoretical analysis and an experimental realization of continuous variable entanglement distillation in a channel that is afflicted by different kinds of non-Gaussian noise. The continuous variable entangled states are generated by exploiting the third order nonlinearity in optical fibers, and the states are sent through a free-space laboratory channel in which the losses are altered to simulate a free-space atmospheric channel with varying losses. We use linear optical components, homodyne measurements, and classical communication to distill the entanglement, and we find that by using this method the entanglement can be probabilistically increased for some specific non-Gaussian noise channels.

  5. Random medium model for cusping of plane waves.

    PubMed

    Li, Jia; Korotkova, Olga

    2017-09-01

    We introduce a model for a three-dimensional (3D) Schell-type stationary medium whose degree of correlation of the scattering potential satisfies the Fractional Multi-Gaussian (FMG) function. Compared with the scattered profile produced by the Gaussian Schell-model (GSM) medium, the Fractional Multi-Gaussian Schell-model (FMGSM) medium gives rise to a sharp concave intensity apex in the scattered field. This implies that the FMGSM medium also accounts for a larger power in the bucket (PIB) than the Gaussian case in the forward scattering direction, hence being a better candidate than the GSM medium for generating highly focused (cusp-like) scattered profiles in the far zone. Compared to other mathematical models for the medium's correlation function that can produce similar cusped scattered profiles, the FMG function offers unprecedented tractability, being a weighted superposition of Gaussian functions. Our results provide useful applications to energy counter problems and particle manipulation by weakly scattered fields.

  6. Dynamic heterogeneity and non-Gaussian statistics for acetylcholine receptors on live cell membrane

    NASA Astrophysics Data System (ADS)

    He, W.; Song, H.; Su, Y.; Geng, L.; Ackerson, B. J.; Peng, H. B.; Tong, P.

    2016-05-01

    The Brownian motion of molecules at thermal equilibrium usually has a finite correlation time and will eventually be randomized after a long delay time, so that their displacement follows the Gaussian statistics. This is true even when the molecules have experienced a complex environment with a finite correlation time. Here, we report that the lateral motion of the acetylcholine receptors on live muscle cell membranes does not follow the Gaussian statistics for normal Brownian diffusion. From a careful analysis of a large volume of the protein trajectories obtained over a wide range of sampling rates and long durations, we find that the normalized histogram of the protein displacements shows an exponential tail, which is robust and universal for cells under different conditions. The experiment indicates that the observed non-Gaussian statistics and dynamic heterogeneity are inherently linked to the slow-active remodelling of the underlying cortical actin network.

  7. Novel transform for image description and compression with implementation by neural architectures

    NASA Astrophysics Data System (ADS)

    Ben-Arie, Jezekiel; Rao, Raghunath K.

    1991-10-01

    A general method for signal representation using nonorthogonal basis functions that are composed of Gaussians is described. The Gaussians can be combined into groups with a predetermined configuration that can approximate any desired basis function. The same configuration at different scales forms a set of self-similar wavelets. The general scheme is demonstrated by representing a natural signal employing an arbitrary basis function. The basic methodology is demonstrated by two novel schemes for efficient representation of 1-D and 2-D signals using Gaussian basis functions (BFs). Special methods are required here since the Gaussian functions are nonorthogonal. The first method employs a paradigm of maximum energy reduction interlaced with the A* heuristic search. The second method uses an adaptive lattice system to find the minimum-squared error of the BFs onto the signal, and a lateral-vertical suppression network to select the most efficient representation in terms of data compression.

  8. Orthogonal Gaussian process models

    DOE PAGES

    Plumlee, Matthew; Joseph, V. Roshan

    2017-01-01

    Gaussian process models are widely adopted for nonparametric/semi-parametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Though the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part to address this issue. The paper also discusses applications to multi-fidelity simulations using data examples.

  9. Orthogonal Gaussian process models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plumlee, Matthew; Joseph, V. Roshan

    Gaussian process models are widely adopted for nonparametric/semi-parametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Though the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part to address this issue. The paper also discusses applications to multi-fidelity simulations using data examples.

  10. Relaxation dynamics of a multihierarchical polymer network

    NASA Astrophysics Data System (ADS)

    Jurjiu, Aurel; Biter, Teodor Lucian; Turcu, Flaviu

    2017-01-01

    In this work, we study the relaxation dynamics of a multihierarchical polymer network built by replicating the Vicsek fractal in dendrimer shape. The relaxation dynamics is investigated in the framework of the generalized Gaussian structure model by employing both Rouse and Zimm approaches. In the Rouse-type approach, we show the iterative procedure whereby the whole eigenvalue spectrum of the connectivity matrix of the multihierarchical structure can be obtained. Remarkably, the general picture that emerges from both approaches, even though we have a mixed growth algorithm, is that the obtained multihierarchical structure preserves the individual relaxation behaviors of its components. The theoretical findings with respect to the splitting of the intermediate domain of the relaxation quantities are well supported by experimental results.

  11. Asynchronous sampled-data approach for event-triggered systems

    NASA Astrophysics Data System (ADS)

    Mahmoud, Magdi S.; Memon, Azhar M.

    2017-11-01

    While aperiodically triggered network control systems save a considerable amount of communication bandwidth, they also pose challenges such as coupling between control and event-condition design, optimisation of the available resources such as control, communication and computation power, and time-delays due to computation and communication network. With this motivation, the paper presents separate designs of control and event-triggering mechanism, thus simplifying the overall analysis, asynchronous linear quadratic Gaussian controller which tackles delays and aperiodic nature of transmissions, and a novel event mechanism which compares the cost of the aperiodic system against a reference periodic implementation. The proposed scheme is simulated on a linearised wind turbine model for pitch angle control and the results show significant improvement against the periodic counterpart.

  12. Reverse-engineering of gene networks for regulating early blood development from single-cell measurements.

    PubMed

    Wei, Jiangyong; Hu, Xiaohua; Zou, Xiufen; Tian, Tianhai

    2017-12-28

    Recent advances in omics technologies have created great opportunities to study large-scale regulatory networks inside the cell. In addition, single-cell experiments have measured the gene and protein activities in a large number of cells under the same experimental conditions. However, a significant challenge in computational biology and bioinformatics is how to derive quantitative information from the single-cell observations and how to develop sophisticated mathematical models to describe the dynamic properties of regulatory networks using the derived quantitative information. This work designs an integrated approach to reverse-engineer gene networks for regulating early blood development based on single-cell experimental observations. The wanderlust algorithm is initially used to develop the pseudo-trajectory for the activities of a number of genes. Since the gene expression data in the developed pseudo-trajectory show large fluctuations, we then use Gaussian process regression methods to smooth the gene expression data in order to obtain pseudo-trajectories with much smaller fluctuations. The proposed integrated framework consists of both bioinformatics algorithms to reconstruct the regulatory network and mathematical models using differential equations to describe the dynamics of gene expression. The developed approach is applied to study the network regulating early blood cell development. A graphic model is constructed for a regulatory network with forty genes, and a dynamic model using differential equations is developed for a network of nine genes. Numerical results suggest that the proposed model is able to match experimental data very well. We also examine networks with more regulatory relations, and numerical results show that more regulations may exist. We test the possibility of auto-regulation, but numerical simulations do not support positive auto-regulation. In addition, robustness is used as an important additional criterion to select candidate networks. The research results in this work show that the developed approach is an efficient and effective method to reverse-engineer gene networks using single-cell experimental observations.
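
    The smoothing step described above, Gaussian process regression applied to a noisy pseudo-trajectory, can be sketched as follows. This is a minimal illustration rather than the authors' pipeline: the pseudo-time grid, the synthetic expression values, and the RBF-plus-white-noise kernel are all assumptions.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical pseudo-time ordering and noisy expression values for one gene.
    pseudo_time = np.linspace(0.0, 1.0, 200)[:, None]
    rng = np.random.default_rng(0)
    expression = np.sin(3.0 * pseudo_time).ravel() + 0.3 * rng.standard_normal(200)

    # RBF kernel for the smooth trend plus a white-noise term for the fluctuations.
    kernel = 1.0 * RBF(length_scale=0.1) + WhiteKernel(noise_level=0.1)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(pseudo_time, expression)

    # Posterior mean = smoothed pseudo-trajectory; std quantifies residual uncertainty.
    smoothed, std = gpr.predict(pseudo_time, return_std=True)
    ```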

  13. Automatic selection of atomic fingerprints and reference configurations for machine-learning potentials

    NASA Astrophysics Data System (ADS)

    Imbalzano, Giulio; Anelli, Andrea; Giofré, Daniele; Klees, Sinja; Behler, Jörg; Ceriotti, Michele

    2018-06-01

    Machine learning of atomic-scale properties is revolutionizing molecular modeling, making it possible to evaluate inter-atomic potentials with first-principles accuracy, at a fraction of the costs. The accuracy, speed, and reliability of machine learning potentials, however, depend strongly on the way atomic configurations are represented, i.e., the choice of descriptors used as input for the machine learning method. The raw Cartesian coordinates are typically transformed in "fingerprints," or "symmetry functions," that are designed to encode, in addition to the structure, important properties of the potential energy surface like its invariances with respect to rotation, translation, and permutation of like atoms. Here we discuss automatic protocols to select a number of fingerprints out of a large pool of candidates, based on the correlations that are intrinsic to the training data. This procedure can greatly simplify the construction of neural network potentials that strike the best balance between accuracy and computational efficiency and has the potential to accelerate by orders of magnitude the evaluation of Gaussian approximation potentials based on the smooth overlap of atomic positions kernel. We present applications to the construction of neural network potentials for water and for an Al-Mg-Si alloy and to the prediction of the formation energies of small organic molecules using Gaussian process regression.

  14. Acoustic Sensor Network for Relative Positioning of Nodes

    PubMed Central

    De Marziani, Carlos; Ureña, Jesus; Hernandez, Álvaro; Mazo, Manuel; García, Juan Jesús; Jimenez, Ana; Rubio, María del Carmen Pérez; Álvarez, Fernando; Villadangos, José Manuel

    2009-01-01

    In this work, an acoustic sensor network for a relative localization system is analyzed by reporting the accuracy achieved in the position estimation. The proposed system has been designed for those applications where objects are not restricted to a particular environment and thus one cannot depend on any external infrastructure to compute their positions. The objects are capable of computing spatial relations among themselves using only acoustic emissions as a ranging mechanism. The object positions are computed by a multidimensional scaling (MDS) technique and, afterwards, a least-squares algorithm based on the Levenberg-Marquardt algorithm (LMA) is applied to refine the results. Regarding the position estimation, all the parameters involved in the computation of the temporal relations with the proposed ranging mechanism have been considered. The obtained results show that a fine-grained localization can be achieved considering a Gaussian distribution error in the proposed ranging mechanism. Furthermore, since acoustic sensors require a line-of-sight to work properly, the system has been tested by modeling the loss of this line-of-sight as a non-Gaussian error. A suitable position estimation has been achieved even when a bias is considered in up to 25% of the line-of-sight measurements among a set of nodes. PMID:22291520
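
    A rough sketch of the two-stage positioning idea (classical MDS on the range matrix followed by Levenberg-Marquardt refinement) is given below. It is not the authors' implementation: the node count, the noise level, and the use of scikit-learn's MDS and SciPy's least_squares are assumptions made for illustration.

    ```python
    import numpy as np
    from sklearn.manifold import MDS
    from scipy.optimize import least_squares

    # Hypothetical noisy pairwise acoustic ranges (metres) among 5 nodes.
    rng = np.random.default_rng(1)
    true_xy = rng.uniform(0.0, 10.0, size=(5, 2))
    dist = np.linalg.norm(true_xy[:, None] - true_xy[None, :], axis=-1)
    ranges = dist + 0.05 * rng.standard_normal(dist.shape)
    ranges = (ranges + ranges.T) / 2.0
    np.fill_diagonal(ranges, 0.0)

    # Stage 1: MDS gives an initial relative layout from the range matrix.
    init_xy = MDS(n_components=2, dissimilarity="precomputed",
                  random_state=0).fit_transform(ranges)

    # Stage 2: Levenberg-Marquardt refinement of the coordinates against the ranges.
    def residuals(flat_xy):
        xy = flat_xy.reshape(-1, 2)
        d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
        iu = np.triu_indices_from(d, k=1)
        return d[iu] - ranges[iu]

    refined_xy = least_squares(residuals, init_xy.ravel(), method="lm").x.reshape(-1, 2)
    ```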

  15. Determination of Spatio-Temporal Characteristics of D-region Electron Density during Annular Solar Eclipse from VLF Network Observations

    NASA Astrophysics Data System (ADS)

    Basak, T.; Hobara, Y.

    2015-12-01

    A major part of the path of the annular solar eclipse of May 20, 2012 (magnitude 0.9439) was over southern Japan. The D-region ionospheric changes associated with that eclipse led to several degrees of observable perturbation of sub-ionospheric very low frequency (VLF) radio signals. The University of Electro-Communications (UEC) operates a VLF observation network over Japan. The eclipse-associated signal changes were recorded in several receiving stations (Rx) simultaneously for the VLF signals coming from the NWC/19.8 kHz, JJI/22.2 kHz, JJY/40.0 kHz, NLK/24.8 kHz and other VLF transmitters (Tx). These temporal dependences of the VLF signal perturbation have been analyzed, and the spatio-temporal characteristics of the respective sub-ionospheric perturbations have already been studied by earlier workers using a 2D Finite Difference Time Domain simulation method. In this work, we determine the spatial scale, depth and temporal dependence of the lower ionospheric perturbation consistent with the umbral and penumbral motion. We consider the 2-parameter D-region ionospheric model with an exponential electron density profile. To model the solar obscuration effect, we assume a generalized space-time dependent 2-dimensional elliptical Gaussian distribution for the ionospheric parameters, namely the effective reflection height (h') and sharpness factor (β). The depth (Δh_max, Δβ_max), center of shadow (lat_o(t), lon_o(t)) and spatial scales (σ_lat, σ_lon) of that Gaussian distribution are used as model parameters. In the vicinity of the eclipse zone, we compute the VLF signal perturbations using the Long Wave Propagation Capability (LWPC) code for several signal propagation paths. The propagation path characteristics, such as ground and water conductivity and the geomagnetic effect on the ionosphere, are taken from standard LWPC prescriptions. The model parameters are tuned to obtain an optimum agreement between our computation and the observed positive and negative VLF perturbations. Thus, an appropriate set of parameters leads us to the possible determination of the spatial scale, depth and temporal dependence of the eclipse-associated D-region electron density perturbation solely from the VLF-network observations coupled with theoretical modeling.
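
    The space-time dependent elliptical Gaussian perturbation of the two ionospheric parameters h' and β can be written down compactly, as in the sketch below. The functional form follows the description above, but every numerical value (ambient h' and β, depth, shadow track, spatial scales) is an assumed placeholder, not a result of the study.

    ```python
    import numpy as np

    def eclipse_perturbation(lat, lon, t, depth_max, center_lat, center_lon,
                             sigma_lat, sigma_lon):
        """Space-time dependent 2D elliptical Gaussian perturbation of an
        ionospheric parameter; depth and shadow centre are functions of time."""
        return depth_max(t) * np.exp(
            -((lat - center_lat(t)) ** 2) / (2.0 * sigma_lat ** 2)
            - ((lon - center_lon(t)) ** 2) / (2.0 * sigma_lon ** 2))

    # Ambient values of h' and beta (assumed typical daytime values, not from the study).
    h0_km, beta0 = 71.0, 0.43

    # Perturbed reflection height at one grid point and time instant (all values assumed).
    dh = eclipse_perturbation(
        lat=35.0, lon=135.0, t=0.0,
        depth_max=lambda t: 3.0,                 # km
        center_lat=lambda t: 33.0 + 0.1 * t,     # shadow track in latitude
        center_lon=lambda t: 132.0 + 0.2 * t,    # shadow track in longitude
        sigma_lat=5.0, sigma_lon=8.0)            # spatial scales in degrees
    h_prime = h0_km + dh                         # eclipse raises h' toward night-time values
    ```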

  16. Real-time antenna fault diagnosis experiments at DSS 13

    NASA Technical Reports Server (NTRS)

    Mellstrom, J.; Pierson, C.; Smyth, P.

    1992-01-01

    Experimental results obtained when a previously described fault diagnosis system was run online in real time at the 34-m beam waveguide antenna at Deep Space Station (DSS) 13 are described. Experimental conditions and the quality of results are described. A neural network model and a maximum-likelihood Gaussian classifier are compared with and without a Markov component to model temporal context. At the rate of a state update every 6.4 seconds, over a period of roughly 1 hour, the neural-Markov system had zero errors (incorrect state estimates) while monitoring both faulty and normal operations. The overall results indicate that the neural-Markov combination is the most accurate model and has significant practical potential.

  17. Rolling bearing fault feature learning using improved convolutional deep belief network with compressed sensing

    NASA Astrophysics Data System (ADS)

    Shao, Haidong; Jiang, Hongkai; Zhang, Haizhou; Duan, Wenjing; Liang, Tianchen; Wu, Shuaipeng

    2018-02-01

    The vibration signals collected from rolling bearings are usually complex and non-stationary with heavy background noise. Therefore, it is a great challenge to efficiently learn the representative fault features of the collected vibration signals. In this paper, a novel method called improved convolutional deep belief network (CDBN) with compressed sensing (CS) is developed for feature learning and fault diagnosis of rolling bearings. Firstly, CS is adopted to reduce the amount of vibration data and improve analysis efficiency. Secondly, a new CDBN model is constructed with Gaussian visible units to enhance the feature learning ability for the compressed data. Finally, an exponential moving average (EMA) technique is employed to improve the generalization performance of the constructed deep model. The developed method is applied to analyze experimental rolling bearing vibration signals. The results confirm that the developed method is more effective than the traditional methods.

  18. Poly-Gaussian model of randomly rough surface in rarefied gas flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aksenova, Olga A.; Khalidov, Iskander A.

    2014-12-09

    Surface roughness is simulated by a model of a non-Gaussian random process. Our results for the scattering of rarefied gas atoms from a rough surface, obtained using a modified approach to the DSMC calculation of rarefied gas flow near a rough surface, are developed and generalized by applying the poly-Gaussian model, which represents the probability density as a mixture of Gaussian densities. The transformation of the scattering function due to the roughness is characterized by the roughness operator. Simulating the rough surface of the walls by a poly-Gaussian random field expressed as an integrated Wiener process, we derive a representation of the roughness operator that can be applied in numerical DSMC methods as well as in analytical investigations.

  19. Non-Gaussian PDF Modeling of Turbulent Boundary Layer Fluctuating Pressure Excitation

    NASA Technical Reports Server (NTRS)

    Steinwolf, Alexander; Rizzi, Stephen A.

    2003-01-01

    The purpose of the study is to investigate properties of the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the exterior of a supersonic transport aircraft. It is shown that fluctuating pressure PDFs differ from the Gaussian distribution even for surface conditions having no significant discontinuities. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations upstream of forward-facing step discontinuities and downstream of aft-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. Various analytical PDF distributions are used and further developed to model this behavior.

  20. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    NASA Astrophysics Data System (ADS)

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify a tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt method, requires little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model on-line for real-time control. It makes it easy for Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of the Linear Quadratic Gaussian control associated with this tip-tilt disturbance model identification method is verified with experimental data, run in replay mode by simulation.
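
    One common way to realize such an identification is to fit a parametric vibration-peak model to the measured tip-tilt power spectral density with the Levenberg-Marquardt algorithm. The sketch below does exactly that under assumed data and a generic second-order resonance shape; it illustrates the fitting step only, not the paper's specific disturbance model.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def vib_peak(f, amp, f0, damping):
        # Generic second-order resonance shape for a telescope vibration peak.
        return amp / ((f0 ** 2 - f ** 2) ** 2 + (2.0 * damping * f0 * f) ** 2)

    # Hypothetical measured tip-tilt power spectral density (frequencies in Hz).
    freqs = np.linspace(1.0, 200.0, 400)
    rng = np.random.default_rng(2)
    measured_psd = vib_peak(freqs, 1e6, 48.0, 0.02) + 1.0 \
        + 0.1 * rng.standard_normal(freqs.size)

    # Levenberg-Marquardt fit of (amplitude, centre frequency, damping, noise floor).
    def residuals(p):
        amp, f0, damping, floor = p
        return vib_peak(freqs, amp, f0, damping) + floor - measured_psd

    fit = least_squares(residuals, x0=[1e5, 45.0, 0.05, 0.5], method="lm")
    amp_hat, f0_hat, damping_hat, floor_hat = fit.x
    ```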

  1. Non-Gaussianities in multifield DBI inflation with a waterfall phase transition

    NASA Astrophysics Data System (ADS)

    Kidani, Taichi; Koyama, Kazuya; Mizuno, Shuntaro

    2012-10-01

    We study multifield Dirac-Born-Infeld (DBI) inflation models with a waterfall phase transition. This transition happens for a D3 brane moving in the warped conifold if there is an instability along angular directions. The transition converts the angular perturbations into the curvature perturbation. Thanks to this conversion, multifield models can evade the stringent constraints that strongly disfavor single field ultraviolet (UV) DBI inflation models in string theory. We explicitly demonstrate that our model satisfies current observational constraints on the spectral index and equilateral non-Gaussianity as well as the bound on the tensor to scalar ratio imposed in string theory models. In addition, we show that large local type non-Gaussianity is generated together with equilateral non-Gaussianity in this model.

  2. Noise Tolerance of Attractor and Feedforward Memory Models

    PubMed Central

    Lim, Sukbin; Goldman, Mark S.

    2017-01-01

    In short-term memory networks, transient stimuli are represented by patterns of neural activity that persist long after stimulus offset. Here, we compare the performance of two prominent classes of memory networks, feedback-based attractor networks and feedforward networks, in conveying information about the amplitude of a briefly presented stimulus in the presence of gaussian noise. Using Fisher information as a metric of memory performance, we find that the optimal form of network architecture depends strongly on assumptions about the forms of nonlinearities in the network. For purely linear networks, we find that feedforward networks outperform attractor networks because noise is continually removed from feedforward networks when signals exit the network; as a result, feedforward networks can amplify signals they receive faster than noise accumulates over time. By contrast, attractor networks must operate in a signal-attenuating regime to avoid the buildup of noise. However, if the amplification of signals is limited by a finite dynamic range of neuronal responses or if noise is reset at the time of signal arrival, as suggested by recent experiments, we find that attractor networks can out-perform feedforward ones. Under a simple model in which neurons have a finite dynamic range, we find that the optimal attractor networks are forgetful if there is no mechanism for noise reduction with signal arrival but nonforgetful (perfect integrators) in the presence of a strong reset mechanism. Furthermore, we find that the maximal Fisher information for the feedforward and attractor networks exhibits power law decay as a function of time and scales linearly with the number of neurons. These results highlight prominent factors that lead to trade-offs in the memory performance of networks with different architectures and constraints, and suggest conditions under which attractor or feedforward networks may be best suited to storing information about previous stimuli. PMID:22091664

  3. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Spiral Wave in Small-World Networks of Hodgkin-Huxley Neurons

    NASA Astrophysics Data System (ADS)

    Ma, Jun; Yang, Li-Jian; Wu, Ying; Zhang, Cai-Rong

    2010-09-01

    The effects of small-world connectivity and noise on the formation and transition of spiral waves in networks of Hodgkin-Huxley neurons are investigated in detail. Some interesting results are found in our numerical studies. (i) Quiescent neurons are activated to propagate electric signals to others by generating and developing a spiral wave from a spiral seed in a small area. (ii) A statistical factor is defined to describe the collective properties and the phase transition induced by the topology of the networks and the noise. (iii) A stable rotating spiral wave can be generated and remains robust when the rewiring probability is below a certain threshold; otherwise, a spiral wave cannot develop from the spiral seed, and breakup occurs for a stable rotating spiral wave. (iv) Gaussian white noise is introduced on the membrane of the neurons to study the noise-induced phase transition of spiral waves in small-world networks of neurons. It is confirmed that Gaussian white noise plays an active role in supporting and developing spiral waves in the networks of neurons, and the appearance of a smaller synchronization factor indicates a higher possibility of inducing a spiral wave.

  4. Statistical analysis of passenger-crowding in bus transport network of Harbin

    NASA Astrophysics Data System (ADS)

    Hu, Baoyu; Feng, Shumin; Li, Jinyang; Zhao, Hu

    2018-01-01

    Passenger flow data is indispensable but rare in the study of public transport networks. In this study, we focus on the passenger-crowding characteristics of the bus transport network of Harbin (BTN-H) based on passenger flow investigation. The three frequency histograms for all the uplinks and downlinks in Harbin are presented, including passengers on the bus at each section, crowding coefficients, and position parameters of crowded sections. The differences in crowding position are analyzed on each route. The distributions of degree and crowding degree (in directed space L) follow an exponential law. The new finding indicates that there are many stations with few crowded sections and a few stations with many crowded sections. The distributions of path length and crowded length (in directed space P) are presented based on the minimum transfer times, and it is found that they can be fitted by a composite Gaussian function and a Gaussian function, respectively. The stations and paths can be divided into three crowd levels. We conclude that BTN-H is crowded from a network-based perspective.

  5. Fault detection and diagnosis for non-Gaussian stochastic distribution systems with time delays via RBF neural networks.

    PubMed

    Yi, Qu; Zhan-ming, Li; Er-chao, Li

    2012-11-01

    A new fault detection and diagnosis (FDD) problem via the output probability density functions (PDFs) for non-Gaussian stochastic distribution systems (SDSs) is investigated. The PDFs can be approximated by radial basis function (RBF) neural networks. Different from conventional FDD problems, the measured information for FDD is the output stochastic distributions, and the stochastic variables involved are not confined to Gaussian ones. An RBF neural network technique is proposed so that the output PDFs can be formulated in terms of the dynamic weightings of the RBF neural network. In this work, a nonlinear adaptive observer-based fault detection and diagnosis algorithm is presented by introducing a tuning parameter so that the residual is as sensitive as possible to the fault. Stability and convergence analysis is performed for the error dynamic system in both fault detection and fault diagnosis. Finally, an illustrative example is given to demonstrate the efficiency of the proposed algorithm, and satisfactory results have been obtained. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
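
    The idea of expressing an output PDF through the weights of an RBF expansion can be illustrated with a static least-squares fit, as below. The grid, the Gaussian basis centres and widths, and the target PDF are assumptions; in the paper the weights are dynamic quantities driven by the system, which this sketch does not model.

    ```python
    import numpy as np

    # Grid over the output range and a bank of fixed Gaussian RBF basis functions.
    y = np.linspace(0.0, 10.0, 200)
    centres = np.linspace(1.0, 9.0, 8)
    width = 0.8
    phi = np.exp(-(y[:, None] - centres[None, :]) ** 2 / (2.0 * width ** 2))

    # Assumed target output PDF to be represented by the RBF expansion.
    target_pdf = np.exp(-(y - 4.0) ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

    # Least-squares weights: gamma(y) ~ sum_i v_i * phi_i(y).
    weights, *_ = np.linalg.lstsq(phi, target_pdf, rcond=None)
    approx_pdf = phi @ weights
    ```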

  6. A network of spiking neurons for computing sparse representations in an energy efficient way

    PubMed Central

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.

    2013-01-01

    Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise, specifically, the representation error decays as 1/t for Gaussian white noise. PMID:22920853

  7. A network of spiking neurons for computing sparse representations in an energy-efficient way.

    PubMed

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B

    2012-11-01

    Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for gaussian white noise.

  8. Multiview road sign detection via self-adaptive color model and shape context matching

    NASA Astrophysics Data System (ADS)

    Liu, Chunsheng; Chang, Faliang; Liu, Chengyun

    2016-09-01

    The multiview appearance of road signs in uncontrolled environments has made the detection of road signs a challenging problem in computer vision. We propose a road sign detection method to detect multiview road signs. This method is based on several algorithms, including the classical cascaded detector, the self-adaptive weighted Gaussian color model (SW-Gaussian model), and a shape context matching method. The classical cascaded detector is used to detect frontal road signs in video sequences and obtain the parameters for the SW-Gaussian model. The proposed SW-Gaussian model combines a two-dimensional Gaussian model with the normalized red channel, which largely enhances the contrast between red signs and the background. The proposed shape context matching method can match shapes under heavy noise and is utilized to detect road signs viewed from different directions. The experimental results show that, compared with previous detection methods, the proposed multiview detection method achieves a higher detection rate for signs viewed from different directions.

  9. Quantization and training of object detection networks with low-precision weights and activations

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie

    2018-01-01

    As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of the weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of the weights and activations. As the most computationally expensive convolutions can be replaced by efficient fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Evaluated on the tiny you-only-look-once (YOLO) and YOLO architectures, the proposed method achieves comparable accuracy to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
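
    A heavily simplified stand-in for the per-layer step-size selection is sketched below: a single Gaussian is fitted to a layer's weights, and the clipping range and uniform step size are derived from it. The bit width, the 3-sigma clipping rule, and the synthetic weights are assumptions; the paper's piecewise Gaussian modelling and activation handling are not reproduced here.

    ```python
    import numpy as np

    def quantize_layer(weights, bits=4, clip_sigmas=3.0):
        """Uniform symmetric quantization whose clipping range comes from a
        Gaussian fit (mean, std) to the weight distribution."""
        mu, sigma = weights.mean(), weights.std()
        clip = clip_sigmas * sigma                  # range derived from the fit
        n_levels = 2 ** bits - 1
        step = 2.0 * clip / n_levels                # quantization step size
        q = np.clip(weights - mu, -clip, clip)
        q = np.round(q / step) * step + mu          # snap to the nearest level
        return q, step

    rng = np.random.default_rng(3)
    w = 0.05 * rng.standard_normal(10_000)          # hypothetical conv-layer weights
    w_q, step = quantize_layer(w, bits=4)
    ```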

  10. Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes

    NASA Astrophysics Data System (ADS)

    Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.

    2016-12-01

    The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While considerable effort has been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolution characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model results in characterizing significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.

  11. Fluctuation Dynamics Analysis of gp120 Envelope Protein Reveals a Topologically Based Communication Network

    PubMed Central

    Shrivastava, Indira; LaLonde, Judith M.

    2012-01-01

    HIV infection is initiated by the binding of the viral glycoprotein gp120 to the cellular receptor CD4. Upon CD4 binding, gp120 undergoes a conformational change, permitting binding to the chemokine receptor. Crystal structures of the gp120 ternary complex reveal the CD4-bound conformation of gp120. We report here the application of the Gaussian Network Model (GNM) to the crystal structures of gp120 bound to CD4, or to a CD4 mimic, and 17b, to study the collective motions of the gp120 core and determine the communication propensities of the residue network. The GNM fluctuation profiles identify residues in the inner domain and outer domain that may facilitate conformational change or stability, respectively. Communication propensities delineate a residue network that is topologically suited for signal propagation from the Phe43 cavity throughout the gp120 outer domain. These results provide a new context for interpreting gp120 core envelope structure-function relationships. PMID:20718047
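
    The GNM calculation behind such fluctuation profiles is compact enough to sketch: build the Kirchhoff (connectivity) matrix from a C-alpha distance cutoff and read the mean-square fluctuations off the diagonal of its pseudo-inverse. The coordinates and cutoff below are assumptions standing in for the gp120 structures, and the kT/gamma prefactor is omitted.

    ```python
    import numpy as np

    def gnm_fluctuations(coords, cutoff=7.3):
        """Mean-square fluctuation profile of a Gaussian Network Model built
        from pairwise C-alpha distances and a contact cutoff (angstroms)."""
        d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
        kirchhoff = -(d < cutoff).astype(float)     # off-diagonal contacts
        np.fill_diagonal(kirchhoff, 0.0)
        np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))
        gamma_inv = np.linalg.pinv(kirchhoff)       # pseudo-inverse drops the zero mode
        return np.diag(gamma_inv)                   # relative residue fluctuations

    # Hypothetical C-alpha coordinates standing in for the gp120 core.
    rng = np.random.default_rng(4)
    ca = rng.uniform(0.0, 40.0, size=(120, 3))
    msf = gnm_fluctuations(ca)
    ```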

  12. A sparse structure learning algorithm for Gaussian Bayesian Network identification from high-dimensional data.

    PubMed

    Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric

    2013-06-01

    Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph--a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer's disease (AD) and reveal findings that could lead to advancements in AD research.

  13. A Sparse Structure Learning Algorithm for Gaussian Bayesian Network Identification from High-Dimensional Data

    PubMed Central

    Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric

    2014-01-01

    Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph (DAG)—a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer’s disease (AD) and reveal findings that could lead to advancements in AD research. PMID:22665720

  14. Flat-top beam for laser-stimulated pain

    NASA Astrophysics Data System (ADS)

    McCaughey, Ryan; Nadeau, Valerie; Dickinson, Mark

    2005-04-01

    One of the main problems during laser stimulation in human pain research is the risk of tissue damage caused by excessive heating of the skin. This risk has been reduced by using a laser beam with a flat-top (or super-Gaussian) intensity profile instead of the conventional Gaussian beam. A finite difference approximation to the heat conduction equation has been applied to model the temperature distribution in skin resulting from irradiation by flat-top and Gaussian profile CO2 laser beams. The model predicts that a 15 mm diameter, 15 W, 100 ms CO2 laser pulse with an order-6 super-Gaussian profile produces a maximum temperature 6 °C lower than a Gaussian beam with the same energy density. A super-Gaussian profile was created by passing a Gaussian beam through a pair of zinc selenide aspheric lenses, which refract the more intense central region of the beam towards the less intense periphery. The profiles of the lenses were determined by geometrical optics. In human pain trials the super-Gaussian beam required more power than the Gaussian beam to reach sensory and pain thresholds.
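
    The difference between the two profiles follows directly from the order-n super-Gaussian form exp[-2(r/w)^(2n)]. The sketch below compares an order-6 profile with an order-1 (Gaussian) profile carrying the same total power; the beam radius and power echo the figures quoted above, while the numerical normalisation grid is an implementation assumption.

    ```python
    import numpy as np

    def super_gaussian(r, power, radius, order):
        """Radial intensity of an order-n super-Gaussian beam, numerically
        normalised so that the integral over the plane equals the total power."""
        shape = lambda x: np.exp(-2.0 * (x / radius) ** (2 * order))
        rr = np.linspace(0.0, 5.0 * radius, 5000)
        norm = np.trapz(shape(rr) * 2.0 * np.pi * rr, rr)   # total power of unit-peak beam
        return power * shape(r) / norm

    r = np.linspace(0.0, 10e-3, 500)                         # metres
    flat_top = super_gaussian(r, power=15.0, radius=7.5e-3, order=6)
    gaussian = super_gaussian(r, power=15.0, radius=7.5e-3, order=1)
    # Same power, but the order-6 beam has a lower on-axis intensity and a
    # flatter footprint, which is what reduces the peak skin temperature.
    ```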

  15. Gaussian-input Gaussian mixture model for representing density maps and atomic models.

    PubMed

    Kawabata, Takeshi

    2018-07-01

    A new Gaussian mixture model (GMM) has been developed for better representations of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters. It accepts a set of 3D points with weights, corresponding to voxel or atomic centers. Although the standard algorithm works reasonably well, it has three problems. First, it ignores the size (voxel width or atomic radius) of the input, and thus it can lead to a GMM with a smaller spread than the input. Second, the algorithm has a singularity problem, as it sometimes stops the iterative procedure due to a Gaussian function with almost zero variance. Third, a map with a large number of voxels requires a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which considers the input atoms or voxels as a set of Gaussian functions. The standard EM algorithm of the GMM was extended to optimize the new GMM. The new GMM has a radius of gyration identical to that of the input, and does not suddenly stop due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) by merging neighboring voxels into an anisotropic Gaussian function. This provides a GMM with thousands of Gaussian functions in a short computation time. We have also introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
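
    For contrast with the Gaussian-input variant, the standard point-input GMM that the paper improves upon can be fitted in a few lines with an off-the-shelf EM implementation. The synthetic 3D points below are assumptions standing in for atom or voxel centres; this sketch ignores the input widths, which is exactly the first problem the paper addresses.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical 3D atom-centre coordinates (angstroms) forming two clusters.
    rng = np.random.default_rng(5)
    points = np.vstack([rng.normal([0.0, 0.0, 0.0], 2.0, size=(500, 3)),
                        rng.normal([8.0, 3.0, 1.0], 1.5, size=(500, 3))])

    # Standard (point-input) GMM fitted by EM with full covariance matrices.
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=0).fit(points)
    centres, covariances, mix_weights = gmm.means_, gmm.covariances_, gmm.weights_
    ```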

  16. Seismic waveform inversion using neural networks

    NASA Astrophysics Data System (ADS)

    De Wit, R. W.; Trampert, J.

    2012-12-01

    Full waveform tomography aims to extract all available information on Earth structure and seismic sources from seismograms. The strongly non-linear nature of this inverse problem is often addressed through simplifying assumptions for the physical theory or data selection, thus potentially neglecting valuable information. Furthermore, the assessment of the quality of the inferred model is often lacking. This calls for the development of methods that fully appreciate the non-linear nature of the inverse problem, whilst providing a quantification of the uncertainties in the final model. We propose to invert seismic waveforms in a fully non-linear way by using artificial neural networks. Neural networks can be viewed as powerful and flexible non-linear filters. They are very common in speech, handwriting and pattern recognition. Mixture Density Networks (MDN) allow us to obtain marginal posterior probability density functions (pdfs) of all model parameters, conditioned on the data. An MDN can approximate an arbitrary conditional pdf as a linear combination of Gaussian kernels. Seismograms serve as input, Earth structure parameters are the so-called targets and network training aims to learn the relationship between input and targets. The network is trained on a large synthetic data set, which we construct by drawing many random Earth models from a prior model pdf and solving the forward problem for each of these models, thus generating synthetic seismograms. As a first step, we aim to construct a 1D Earth model. Training sets are constructed using the Mineos package, which computes synthetic seismograms in a spherically symmetric non-rotating Earth by summing normal modes. We train a network on the body waveforms present in these seismograms. Once the network has been trained, it can be presented with new unseen input data, in our case the body waves in real seismograms. We thus obtain the posterior pdf which represents our final state of knowledge given the information in the training set and the real data.

  17. 'The formula that killed Wall Street': the Gaussian copula and modelling practices in investment banking.

    PubMed

    MacKenzie, Donald; Spears, Taylor

    2014-06-01

    Drawing on documentary sources and 114 interviews with market participants, this and a companion article discuss the development and use in finance of the Gaussian copula family of models, which are employed to estimate the probability distribution of losses on a pool of loans or bonds, and which were centrally involved in the credit crisis. This article, which explores how and why the Gaussian copula family developed in the way it did, employs the concept of 'evaluation culture', a set of practices, preferences and beliefs concerning how to determine the economic value of financial instruments that is shared by members of multiple organizations. We identify an evaluation culture, dominant within the derivatives departments of investment banks, which we call the 'culture of no-arbitrage modelling', and explore its relation to the development of Gaussian copula models. The article suggests that two themes from the science and technology studies literature on models (modelling as 'impure' bricolage, and modelling as articulating with heterogeneous objectives and constraints) help elucidate the history of Gaussian copula models in finance.

  18. Broad distribution spectrum from Gaussian to power law appears in stochastic variations in RNA-seq data.

    PubMed

    Awazu, Akinori; Tanabe, Takahiro; Kamitani, Mari; Tezuka, Ayumi; Nagano, Atsushi J

    2018-05-29

    Gene expression levels exhibit stochastic variations among genetically identical organisms under the same environmental conditions. In many recent transcriptome analyses based on RNA sequencing (RNA-seq), variations in gene expression levels among replicates were assumed to follow a negative binomial distribution, although the physiological basis of this assumption remains unclear. In this study, RNA-seq data were obtained from Arabidopsis thaliana under eight conditions (21-27 replicates), and the characteristics of gene-dependent empirical probability density function (ePDF) profiles of gene expression levels were analyzed. For A. thaliana and Saccharomyces cerevisiae, various types of ePDF of gene expression levels were obtained that were classified as Gaussian, power law-like containing a long tail, or intermediate. These ePDF profiles were well fitted with a Gauss-power mixing distribution function derived from a simple model of a stochastic transcriptional network containing a feedback loop. The fitting function suggested that gene expression levels with long-tailed ePDFs would be strongly influenced by feedback regulation. Furthermore, the features of gene expression levels are correlated with their functions, with the levels of essential genes tending to follow a Gaussian-like ePDF while those of genes encoding nucleic acid-binding proteins and transcription factors exhibit long-tailed ePDF.

  19. Dynamical analysis of contrastive divergence learning: Restricted Boltzmann machines with Gaussian visible units.

    PubMed

    Karakida, Ryo; Okada, Masato; Amari, Shun-Ichi

    2016-07-01

    The restricted Boltzmann machine (RBM) is an essential constituent of deep learning, but it is hard to train by using maximum likelihood (ML) learning, which minimizes the Kullback-Leibler (KL) divergence. Instead, contrastive divergence (CD) learning has been developed as an approximation of ML learning and widely used in practice. To clarify the performance of CD learning, in this paper, we analytically derive the fixed points where ML and CDn learning rules converge in two types of RBMs: one with Gaussian visible and Gaussian hidden units and the other with Gaussian visible and Bernoulli hidden units. In addition, we analyze the stability of the fixed points. As a result, we find that the stable points of the CDn learning rule coincide with those of the ML learning rule in a Gaussian-Gaussian RBM. We also reveal that larger principal components of the input data are extracted at the stable points. Moreover, in a Gaussian-Bernoulli RBM, we find that both ML and CDn learning can extract independent components at one of the stable points. Our analysis demonstrates that the same feature components as those extracted by ML learning are extracted simply by performing CD1 learning. Expanding this study should elucidate the specific solutions obtained by CD learning in other types of RBMs or in deep networks. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Probabilistic neural networks for diagnosis of Alzheimer's disease using conventional and wavelet coherence.

    PubMed

    Sankari, Ziad; Adeli, Hojjat

    2011-04-15

    Recently, the authors presented an EEG (electroencephalogram) coherence study of Alzheimer's disease (AD) and found statistically significant differences between AD and control groups. In this paper a probabilistic neural network (PNN) model is presented for classification of AD patients and healthy controls using features extracted in coherence and wavelet coherence studies of cortical connectivity in AD. The model is verified using EEGs obtained from 20 probable AD patients and 7 healthy control subjects based on a standard 10-20 electrode configuration on the scalp. It is shown that extracting features from EEG sub-bands using coherence, as a measure of cortical connectivity, can discriminate AD patients from healthy controls effectively when a mixed-band classification model is applied. For the data set used, a classification accuracy of 100% is achieved using the conventional coherence with a spread parameter of the Gaussian function in a particular range found in this research. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Analysis and simulation of wireless signal propagation applying geostatistical interpolation techniques

    NASA Astrophysics Data System (ADS)

    Kolyaie, S.; Yaghooti, M.; Majidi, G.

    2011-12-01

    This paper is part of an ongoing research effort to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques for coverage prediction. In the method presented here, raw data collected from drive testing a sample of roads in the study area are analysed and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different variables: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours, and second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For the result comparison, we have used check points coming from the same drive test data. Prediction values for the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps in finding an optimised and accurate model for coverage prediction.
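
    As a concrete illustration of the simpler of the two interpolators, a bare-bones Inverse Distance Weighting estimator is sketched below; the power and neighbour count mirror the tuning variables compared in the paper, while the drive-test samples themselves are synthetic assumptions.

    ```python
    import numpy as np

    def idw(sample_xy, sample_rssi, query_xy, power=2.0, n_neighbours=8):
        """Inverse Distance Weighting of drive-test signal levels at a query point."""
        d = np.linalg.norm(sample_xy - query_xy, axis=1)
        idx = np.argsort(d)[:n_neighbours]
        d, vals = d[idx], sample_rssi[idx]
        if d[0] < 1e-9:                       # query coincides with a sample point
            return float(vals[0])
        w = 1.0 / d ** power
        return float(np.sum(w * vals) / np.sum(w))

    # Hypothetical drive-test samples: positions (metres) and RSSI (dBm).
    rng = np.random.default_rng(6)
    xy = rng.uniform(0.0, 1000.0, size=(200, 2))
    rssi = -60.0 - 20.0 * np.log10(1.0 + np.linalg.norm(xy - 500.0, axis=1))
    estimate = idw(xy, rssi, query_xy=np.array([480.0, 510.0]))
    ```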

  2. Predictive control of intersegmental tarsal movements in an insect.

    PubMed

    Costalago-Meruelo, Alicia; Simpson, David M; Veres, Sandor M; Newland, Philip L

    2017-08-01

    In many animals intersegmental reflexes are important for postural and movement control but are still poorly understood. Mathematical methods can be used to model the responses to stimulation, and thus go beyond a simple description of responses to specific inputs. Here we analyse an intersegmental reflex of the foot (tarsus) of the locust hind leg, which raises the tarsus when the tibia is flexed and depresses it when the tibia is extended. A novel method is described to measure and quantify the intersegmental responses of the tarsus to a stimulus applied to the femoro-tibial chordotonal organ. An Artificial Neural Network, the Time Delay Neural Network, was applied to understand the properties and dynamics of the reflex responses. The aim of this study was twofold: first, to develop an accurate method to record and analyse the movement of an appendage, and second, to apply methods to model the responses using Artificial Neural Networks. The results show that Artificial Neural Networks provide more accurate predictions of tarsal movement than linear models when trained with an average reflex response to Gaussian white noise stimulation. Furthermore, the Artificial Neural Network model can predict the individual responses of each animal and the responses to other inputs such as a sinusoid. A detailed understanding of such a reflex response could be included in the design of orthoses or functional electrical stimulation treatments to improve walking in patients with neurological disorders, as well as in the bio-inspired design of robots.

  3. Driving profile modeling and recognition based on soft computing approach.

    PubMed

    Wahab, Abdul; Quek, Chai; Tan, Chin Keong; Takeda, Kazuya

    2009-04-01

    Advancements in biometrics-based authentication have led to its increasing prominence and are being incorporated into everyday tasks. Existing vehicle security systems rely only on alarms or smart card as forms of protection. A biometric driver recognition system utilizing driving behaviors is a highly novel and personalized approach and could be incorporated into existing vehicle security system to form a multimodal identification system and offer a greater degree of multilevel protection. In this paper, detailed studies have been conducted to model individual driving behavior in order to identify features that may be efficiently and effectively used to profile each driver. Feature extraction techniques based on Gaussian mixture models (GMMs) are proposed and implemented. Features extracted from the accelerator and brake pedal pressure were then used as inputs to a fuzzy neural network (FNN) system to ascertain the identity of the driver. Two fuzzy neural networks, namely, the evolving fuzzy neural network (EFuNN) and the adaptive network-based fuzzy inference system (ANFIS), are used to demonstrate the viability of the two proposed feature extraction techniques. The performances were compared against an artificial neural network (NN) implementation using the multilayer perceptron (MLP) network and a statistical method based on the GMM. Extensive testing was conducted and the results show great potential in the use of the FNN for real-time driver identification and verification. In addition, the profiling of driver behaviors has numerous other potential applications for use by law enforcement and companies dealing with buses and truck drivers.

  4. Gaussian temporal modulation for the behavior of multi-sinc Schell-model pulses in dispersive media

    NASA Astrophysics Data System (ADS)

    Liu, Xiayin; Zhao, Daomu; Tian, Kehan; Pan, Weiqing; Zhang, Kouwen

    2018-06-01

    A new class of pulse source whose correlation is modeled by the convolution of two legitimate temporal correlation functions is proposed. In particular, analytical formulas are derived for Gaussian temporally modulated multi-sinc Schell-model (MSSM) pulses generated by such a pulse source propagating in dispersive media. It is demonstrated that the average intensity of MSSM pulses on propagation is reshaped from a flat profile or a pulse train to a distribution with a Gaussian temporal envelope by adjusting the initial correlation width of the Gaussian pulse. The effects of the Gaussian temporal modulation on the temporal degree of coherence of the MSSM pulse are also analyzed. The results presented here show the potential of coherence modulation for pulse shaping and pulsed laser material processing.

  5. Circuit theory and model-based inference for landscape connectivity

    USGS Publications Warehouse

    Hanks, Ephraim M.; Hooten, Mevin B.

    2013-01-01

    Circuit theory has seen extensive recent use in the field of ecology, where it is often applied to study functional connectivity. The landscape is typically represented by a network of nodes and resistors, with the resistance between nodes a function of landscape characteristics. The effective distance between two locations on a landscape is represented by the resistance distance between the nodes in the network. Circuit theory has been applied to many other scientific fields for exploratory analyses, but parametric models for circuits are not common in the scientific literature. To model circuits explicitly, we demonstrate a link between Gaussian Markov random fields and contemporary circuit theory using a covariance structure that induces the necessary resistance distance. This provides a parametric model for second-order observations from such a system. In the landscape ecology setting, the proposed model provides a simple framework where inference can be obtained for effects that landscape features have on functional connectivity. We illustrate the approach through a landscape genetics study linking gene flow in alpine chamois (Rupicapra rupicapra) to the underlying landscape.
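
    The resistance distance mentioned above has a closed form in terms of the graph Laplacian, which makes the link to a Gaussian Markov random field covariance easy to see in code. The sketch below computes it for an assumed four-node conductance matrix; it illustrates the quantity itself, not the authors' inferential model.

    ```python
    import numpy as np

    def resistance_distance(conductance):
        """Effective resistance between all node pairs of a resistor network:
        R_ij = L+_ii + L+_jj - 2 L+_ij, with L+ the Laplacian pseudo-inverse."""
        laplacian = np.diag(conductance.sum(axis=1)) - conductance
        l_plus = np.linalg.pinv(laplacian)
        diag = np.diag(l_plus)
        return diag[:, None] + diag[None, :] - 2.0 * l_plus

    # Hypothetical 4-node landscape graph with symmetric conductances (1/resistance).
    c = np.array([[0.0, 1.0, 0.5, 0.0],
                  [1.0, 0.0, 1.0, 0.2],
                  [0.5, 1.0, 0.0, 1.0],
                  [0.0, 0.2, 1.0, 0.0]])
    r_dist = resistance_distance(c)
    ```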

  6. Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.

    2018-04-01

    Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show a central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, component counts of random cubical complexes while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.

  7. Diffusion weighted imaging in patients with rectal cancer: Comparison between Gaussian and non-Gaussian models

    PubMed Central

    Marias, Kostas; Lambregts, Doenja M. J.; Nikiforaki, Katerina; van Heeswijk, Miriam M.; Bakers, Frans C. H.; Beets-Tan, Regina G. H.

    2017-01-01

    Purpose: The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential Gaussian and non-Gaussian models, in diffusion weighted imaging of rectal cancer. Material and methods: Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) at a 1.5T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied to whole-tumor volumes of interest. Two different statistical criteria were used to assess their fitting performance: the adjusted R2 and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. Results: All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was observed in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and over the overall tumor area. Conclusion: No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior. PMID:28863161

  8. Diffusion weighted imaging in patients with rectal cancer: Comparison between Gaussian and non-Gaussian models.

    PubMed

    Manikis, Georgios C; Marias, Kostas; Lambregts, Doenja M J; Nikiforaki, Katerina; van Heeswijk, Miriam M; Bakers, Frans C H; Beets-Tan, Regina G H; Papanikolaou, Nikolaos

    2017-01-01

    The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential Gaussian and non-Gaussian models, in diffusion weighted imaging of rectal cancer. Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) at a 1.5T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied to whole-tumor volumes of interest. Two different statistical criteria were used to assess their fitting performance: the adjusted R2 and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was observed in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and over the overall tumor area. No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior.
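
    For a single voxel, the model comparison reduces to fitting the candidate signal-decay curves over the b-values and scoring them, for example with AIC. The sketch below fits the mono-exponential Gaussian (MG) and bi-exponential Gaussian (BG) models only; the signal values, starting guesses, and the residual-sum-of-squares form of AIC are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    b = np.array([0, 25, 50, 100, 500, 1000, 2000], dtype=float)   # s/mm^2

    def mono_gaussian(b, s0, adc):                  # MG model
        return s0 * np.exp(-b * adc)

    def bi_gaussian(b, s0, f, d_fast, d_slow):      # BG model
        return s0 * (f * np.exp(-b * d_fast) + (1.0 - f) * np.exp(-b * d_slow))

    # Hypothetical voxel signal; real values would come from the DWI series.
    signal = np.array([1000, 960, 930, 880, 620, 420, 210], dtype=float)

    def aic(y, y_fit, n_params):
        rss = np.sum((y - y_fit) ** 2)
        return y.size * np.log(rss / y.size) + 2 * n_params

    p_mg, _ = curve_fit(mono_gaussian, b, signal, p0=[1000.0, 1e-3])
    p_bg, _ = curve_fit(bi_gaussian, b, signal, p0=[1000.0, 0.2, 1e-2, 1e-3],
                        bounds=([0.0, 0.0, 0.0, 0.0], [np.inf, 1.0, 1.0, 1.0]),
                        maxfev=20000)
    aic_mg = aic(signal, mono_gaussian(b, *p_mg), 2)
    aic_bg = aic(signal, bi_gaussian(b, *p_bg), 4)   # lower AIC -> preferred model
    ```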

  9. The interplay of intrinsic and extrinsic bounded noises in biomolecular networks.

    PubMed

    Caravagna, Giulio; Mauri, Giancarlo; d'Onofrio, Alberto

    2013-01-01

    After long being considered a nuisance to be filtered out, it has recently become clear that biochemical noise plays a complex, often fully functional, role in a biomolecular network. The influence of intrinsic and extrinsic noise on biomolecular networks has been investigated intensively in the last ten years, though contributions on the co-presence of both are sparse. Extrinsic noise is usually modeled as an unbounded white or colored Gaussian stochastic process, even though realistic stochastic perturbations are clearly bounded. In this paper we consider Gillespie-like stochastic models of nonlinear networks, i.e. the intrinsic noise, where the model jump rates are affected by colored bounded extrinsic noises synthesized by a suitable biochemical state-dependent Langevin system. These systems are described by a master equation, and a simulation algorithm to analyze them is derived. This new modeling paradigm should enlarge the class of systems amenable to modeling. We investigated the influence of both the amplitude and the autocorrelation time of an extrinsic Sine-Wiener noise on: (i) the Michaelis-Menten approximation of noisy enzymatic reactions, which we show to be applicable also in the co-presence of both intrinsic and extrinsic noise, (ii) a model of an enzymatic futile cycle and (iii) a genetic toggle switch. In (ii) and (iii) we show that the presence of a bounded extrinsic noise induces qualitative modifications in the probability densities of the involved chemicals, where new modes emerge, thus suggesting a possible functional role of bounded noises.

  10. The effects of noise on binocular rivalry waves: a stochastic neural field model

    NASA Astrophysics Data System (ADS)

    Webber, Matthew A.; Bressloff, Paul C.

    2013-03-01

    We analyze the effects of extrinsic noise on traveling waves of visual perception in a competitive neural field model of binocular rivalry. The model consists of two one-dimensional excitatory neural fields, whose activity variables represent the responses to left-eye and right-eye stimuli, respectively. The two networks mutually inhibit each other, and slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We first show how, in the absence of any noise, the system supports a propagating composite wave consisting of an invading activity front in one network co-moving with a retreating front in the other network. Using a separation of time scales and perturbation methods previously developed for stochastic reaction-diffusion equations, we then show how extrinsic noise in the activity variables leads to a diffusive-like displacement (wandering) of the composite wave from its uniformly translating position at long time scales, and fluctuations in the wave profile around its instantaneous position at short time scales. We use our analysis to calculate the first-passage-time distribution for a stochastic rivalry wave to travel a fixed distance, which we find to be given by an inverse Gaussian. Finally, we investigate the effects of noise in the depression variables, which under an adiabatic approximation lead to quenched disorder in the neural fields during propagation of a wave.

  11. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    PubMed

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or sets of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces enhanced images that are better than or comparable to those of several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
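
    A minimal sketch of the partitioning step described above is given below: a GMM is fitted to the gray-level distribution and interval boundaries are taken where adjacent weighted components intersect. The variance-dependent weighting and the CDF-based gray-level mapping of the published algorithm are not reproduced, and the component count is an arbitrary choice for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_partition(gray_values, n_components=4):
    """Fit a GMM to gray levels and return interval boundaries where
    adjacent weight-scaled Gaussian components intersect (sketch only)."""
    x = np.asarray(gray_values, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
    mu = gmm.means_.ravel()
    sd = np.sqrt(gmm.covariances_.ravel())
    w = gmm.weights_.ravel()
    order = np.argsort(mu)
    mu, sd, w = mu[order], sd[order], w[order]

    grid = np.linspace(x.min(), x.max(), 1024)
    boundaries = []
    for i in range(n_components - 1):
        # weighted component densities evaluated on a dense gray-level grid
        pdf_i = w[i] * np.exp(-0.5 * ((grid - mu[i]) / sd[i]) ** 2) / sd[i]
        pdf_j = w[i + 1] * np.exp(-0.5 * ((grid - mu[i + 1]) / sd[i + 1]) ** 2) / sd[i + 1]
        # intersection = sign change of the density difference between the two means
        mask = (grid > mu[i]) & (grid < mu[i + 1])
        diff = pdf_i[mask] - pdf_j[mask]
        cross = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
        if cross.size:
            boundaries.append(grid[mask][cross[0]])
    return boundaries
```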

  12. Influence of current pulse shape on directly modulated system performance in metro area optical networks

    NASA Astrophysics Data System (ADS)

    Campos, Carmina del Rio; Horche, Paloma R.; Martin-Minguez, Alfredo

    2011-03-01

    Because the metro network market is very cost sensitive, directly modulated schemes appear attractive. In this paper a CWDM (Coarse Wavelength Division Multiplexing) system is studied in detail by means of optical communication system design software; a detailed study of the modulated current shape (exponential, sine and Gaussian) for 2.5 Gb/s CWDM Metropolitan Area Networks is performed to evaluate its tolerance to linear impairments such as signal-to-noise-ratio degradation and dispersion. Point-to-point links are investigated and optimum design parameters are obtained. Through extensive sets of simulation results, it is shown that some of these pulse shapes are more tolerant to dispersion than conventional Gaussian pulse shapes. In order to achieve a low Bit Error Rate (BER), different types of optical transmitters are considered, including strongly adiabatic and transient chirp dominated Directly Modulated Lasers (DMLs). We have used fibers with different dispersion characteristics, showing that the system performance depends strongly on the chosen DML-fiber pair.

  13. Noise-sustained synchronization between electrically coupled FitzHugh-Nagumo networks

    NASA Astrophysics Data System (ADS)

    Cascallares, Guadalupe; Sánchez, Alejandro D.; dell'Erba, Matías G.; Izús, Gonzalo G.

    2015-09-01

    We investigate the capability of electrical synapses to transmit noise-sustained network activity from one network to another. The particular setup we consider is two identical rings with excitable FitzHugh-Nagumo cell dynamics and nearest-neighbor antiphase intra-ring coupling, electrically coupled between corresponding nodes. The whole system is subjected to independent local additive Gaussian white noises with common intensity η, but only one ring is externally forced by a global adiabatic subthreshold harmonic signal. We then seek conditions for a particular noise level to promote synchronized stable firing patterns. By running numerical integrations with increasing η, we observe the excitation activity to become spatiotemporally self-organized, until η is so strong that it spoils synchronization between the networks for a given value of the electric coupling strength. By means of a four-cell model and calculating the stationary probability distribution, we obtain a (signal-dependent) non-equilibrium potential landscape which qualitatively explains the observed regimes, and whose barrier heights give a good estimate of the optimal noise intensity for synchronization between the networks.

  14. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
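
    The composite-kernel KELM step that the record above builds on can be sketched briefly. Assuming the usual closed-form KELM solution beta = (I/C + K)^-1 T with a fixed weighted sum of two base kernels, the sketch below trains and evaluates on toy data; the QPSO search over kernel weights and parameters is not shown, and all names and values are illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2, coef0=1.0):
    return (X @ Y.T + coef0) ** degree

def composite_kernel(X, Y, weights, gamma=0.5):
    # weighted sum of base kernels; the weights would be tuned by QPSO in the paper
    return weights[0] * rbf_kernel(X, Y, gamma) + weights[1] * poly_kernel(X, Y)

def kelm_train(X, T, weights, C=10.0):
    # kernel ELM closed form: beta = (I/C + K)^(-1) T
    K = composite_kernel(X, X, weights)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kelm_predict(X_train, X_test, beta, weights):
    return composite_kernel(X_test, X_train, weights) @ beta

# tiny illustrative run with one-hot class targets (stand-in for e-nose features)
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[y]
beta = kelm_train(X, T, weights=[0.7, 0.3])
pred = kelm_predict(X, X, beta, weights=[0.7, 0.3]).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```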

  15. Classification of Time Series Gene Expression in Clinical Studies via Integration of Biological Network

    PubMed Central

    Qian, Liwei; Zheng, Haoran; Zhou, Hong; Qin, Ruibin; Li, Jinlong

    2013-01-01

    The increasing availability of time series expression datasets, although promising, raises a number of new computational challenges. Accordingly, the development of suitable classification methods to make reliable and sound predictions is becoming a pressing issue. We propose, here, a new method to classify time series gene expression via integration of biological networks. We evaluated our approach on 2 different datasets and showed that the use of a hidden Markov model/Gaussian mixture models hybrid explores the time-dependence of the expression data, thereby leading to better prediction results. We demonstrated that the biclustering procedure identifies function-related genes as a whole, giving rise to high accordance in prognosis prediction across independent time series datasets. In addition, we showed that integration of biological networks into our method significantly improves prediction performance. Moreover, we compared our approach with several state-of-the-art algorithms and found that our method outperformed previous approaches with regard to various criteria. Finally, our approach achieved better prediction results on early-stage data, implying the potential of our method for practical prediction. PMID:23516469

  16. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Michael

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
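
    As a small illustration of the kind of Gaussian-process prediction discussed above, the sketch below performs simple kriging with a separable exponential space-time covariance. The covariance family, its parameters, and the synthetic observations are arbitrary choices for the example, not the covariance classes developed in the project.

```python
import numpy as np

def expcov(X1, X2, sill=1.0, range_s=2.0, range_t=5.0):
    """Separable exponential space-time covariance (illustrative choice)."""
    ds = np.abs(X1[:, None, :2] - X2[None, :, :2]).sum(-1)   # spatial lag (Manhattan for brevity)
    dt = np.abs(X1[:, None, 2] - X2[None, :, 2])              # temporal lag
    return sill * np.exp(-ds / range_s) * np.exp(-dt / range_t)

def krige(X_obs, y_obs, X_new, nugget=1e-3):
    # simple kriging / GP conditional mean with a zero prior mean
    K = expcov(X_obs, X_obs) + nugget * np.eye(len(X_obs))
    k = expcov(X_new, X_obs)
    return k @ np.linalg.solve(K, y_obs)

rng = np.random.default_rng(2)
X_obs = rng.uniform(0, 10, size=(100, 3))      # (x, y, t) observation locations
y_obs = np.sin(X_obs[:, 0]) + 0.1 * rng.normal(size=100)
X_new = np.array([[5.0, 5.0, 3.0]])
print(krige(X_obs, y_obs, X_new))
```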

  17. A reanalysis of "Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons".

    PubMed

    Engelken, Rainer; Farkhooi, Farzad; Hansel, David; van Vreeswijk, Carl; Wolf, Fred

    2016-01-01

    Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate and fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.

  18. ADAPTable Sensor Systems Phase 2. Topic 2: Reusable Core Software. Distributed Synchronization Software for the Sensor Nodes

    DTIC Science & Technology

    2015-03-01

    Wireless Sensor Network Using Unreliable GPS Signals (Daniel R. Fuhrmann, Joshua Stomberg, Saeid Nooshabadi, Dustin McIntire, William Merill) ... wireless sensor network, when the timing jitter is subject to an empirically determined bimodal non-Gaussian distribution. Specifically, we 1) estimate the ... over a nominal 19.2 MHz frequency with an adjustment made every four hours. Index Terms: clock synchronization, GPS, wireless sensor networks, Kalman

  19. Topology of large-scale structure in seeded hot dark matter models

    NASA Technical Reports Server (NTRS)

    Beaky, Matthew M.; Scherrer, Robert J.; Villumsen, Jens V.

    1992-01-01

    The topology of the isodensity surfaces in seeded hot dark matter models, in which static seed masses provide the density perturbations in a universe dominated by massive neutrinos, is examined. When smoothed with a Gaussian window, the linear initial conditions in these models show no trace of non-Gaussian behavior for r0 equal to or greater than 5 Mpc (h = 1/2), except for very low seed densities, which show a shift toward isolated peaks. An approximate analytic expression is given for the genus curve expected in linear density fields from randomly distributed seed masses. The evolved models have a Gaussian topology for r0 = 10 Mpc, but show a shift toward a cellular topology with r0 = 5 Mpc; Gaussian models with an identical power spectrum show the same behavior.

  20. Fast genomic predictions via Bayesian G-BLUP and multilocus models of threshold traits including censored Gaussian data.

    PubMed

    Kärkkäinen, Hanni P; Sillanpää, Mikko J

    2013-09-04

    Because of the increased availability of genome-wide sets of molecular markers along with the reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage of corresponding models for discrete or censored phenotypes. In this work, we consider a threshold approach for binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.

  1. Fast Genomic Predictions via Bayesian G-BLUP and Multilocus Models of Threshold Traits Including Censored Gaussian Data

    PubMed Central

    Kärkkäinen, Hanni P.; Sillanpää, Mikko J.

    2013-01-01

    Because of the increased availability of genome-wide sets of molecular markers along with the reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage of corresponding models for discrete or censored phenotypes. In this work, we consider a threshold approach for binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed. PMID:23821618

  2. Finite-time scaling at the Anderson transition for vibrations in solids

    NASA Astrophysics Data System (ADS)

    Beltukov, Y. M.; Skipetrov, S. E.

    2017-11-01

    A model in which a three-dimensional elastic medium is represented by a network of identical masses connected by springs of random strengths and allowed to vibrate only along a selected axis of the reference frame exhibits an Anderson localization transition. To study this transition, we assume that the dynamical matrix of the network is given by a product of a sparse random matrix with real, independent, Gaussian-distributed nonzero entries and its transpose. A finite-time scaling analysis of the system's response to an initial excitation allows us to estimate the critical parameters of the localization transition. The critical exponent is found to be ν =1.57 ±0.02 , in agreement with previous studies of the Anderson transition belonging to the three-dimensional orthogonal universality class.
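
    The construction of the dynamical matrix described above can be sketched as follows: a sparse random matrix with Gaussian-distributed nonzero entries is multiplied by its transpose and the vibrational spectrum is obtained by diagonalization. The matrix size and sparsity are illustrative, and the finite-time scaling analysis of the paper is not reproduced.

```python
import numpy as np
import scipy.sparse as sp

n = 500           # number of masses (illustrative size)
density = 0.02    # fraction of nonzero entries in the sparse factor (illustrative)

rng = np.random.default_rng(0)
A = sp.random(n, n, density=density, random_state=0,
              data_rvs=rng.standard_normal, format="csr")

# dynamical matrix M = A A^T is positive semi-definite by construction,
# so every squared vibrational frequency is non-negative
M = (A @ A.T).toarray()

omega2 = np.linalg.eigvalsh(M)          # squared eigenfrequencies, ascending
print("smallest:", omega2[:5], "largest:", omega2[-3:])
```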

  3. Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues

    NASA Astrophysics Data System (ADS)

    Adams, W. H.; Iyengar, Giridharan; Lin, Ching-Yung; Naphade, Milind Ramesh; Neti, Chalapathy; Nock, Harriet J.; Smith, John R.

    2003-12-01

    We present a learning-based approach to the semantic indexing of multimedia content using cues derived from audio, visual, and text features. We approach the problem by developing a set of statistical models for a predefined lexicon. Novel concepts are then mapped in terms of the concepts in the lexicon. To achieve robust detection of concepts, we exploit features from multiple modalities, namely, audio, video, and text. Concept representations are modeled using Gaussian mixture models (GMM), hidden Markov models (HMM), and support vector machines (SVM). Models such as Bayesian networks and SVMs are used in a late-fusion approach to model concepts that are not explicitly modeled in terms of features. Our experiments indicate promise in the proposed classification and fusion methodologies: our proposed fusion scheme achieves more than 10% relative improvement over the best unimodal concept detector.

  4. Non-gaussianity versus nonlinearity of cosmological perturbations.

    PubMed

    Verde, L

    2001-06-01

    Following the discovery of the cosmic microwave background, the hot big-bang model has become the standard cosmological model. In this theory, small primordial fluctuations are subsequently amplified by gravity to form the large-scale structure seen today. Different theories for unified models of particle physics lead to different predictions for the statistical properties of the primordial fluctuations, which can be divided into two classes: Gaussian and non-Gaussian. Convincing evidence for or against Gaussian initial conditions would rule out many scenarios and point us toward a physical theory for the origin of structures. The statistical distribution of cosmological perturbations, as we observe them, can deviate from the Gaussian distribution in several different ways. Even if perturbations start off Gaussian, nonlinear gravitational evolution can introduce non-Gaussian features. Additionally, our knowledge of the Universe comes principally from the study of luminous material such as galaxies, but galaxies might not be faithful tracers of the underlying mass distribution. The relationship between fluctuations in the mass and in the galaxy distribution (bias) is often assumed to be local, but could well be nonlinear. Moreover, galaxy catalogues use the redshift as the third spatial coordinate: the resulting redshift-space map of the galaxy distribution is nonlinearly distorted by peculiar velocities. Nonlinear gravitational evolution, biasing, and redshift-space distortion introduce non-Gaussianity, even in an initially Gaussian fluctuation field. I investigate the statistical tools that allow us, in principle, to disentangle the above different effects, and the observational datasets we require to do so in practice.

  5. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids

    PubMed Central

    Ali, S. M.; Mehmood, C. A; Khan, B.; Jawad, M.; Farid, U; Jadoon, J. K.; Ali, M.; Tareen, N. K.; Usman, S.; Majid, M.; Anwar, S. M.

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, owing to their stochastic nature. The stochastically varying consumer demands have put policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the deterministic and stochastic models of consumer demand. Sudden drifts in weather parameters affect the living standards of the consumers, which in turn influence the power demands. Considering the above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presents the Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent random consumer demands. Moreover, the Gaussian probability outcomes of the utility revenues are based on the varying consumer demands data pattern. Furthermore, Standard Monte Carlo (SMC) simulations are performed to validate the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provides a relationship between dependent (demand) and independent (weather data) variables for utility load management, generation control, and network expansion. PMID:27314229

  6. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids.

    PubMed

    Ali, S M; Mehmood, C A; Khan, B; Jawad, M; Farid, U; Jadoon, J K; Ali, M; Tareen, N K; Usman, S; Majid, M; Anwar, S M

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, owing to their stochastic nature. The stochastically varying consumer demands have put policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the deterministic and stochastic models of consumer demand. Sudden drifts in weather parameters affect the living standards of the consumers, which in turn influence the power demands. Considering the above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presents the Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent random consumer demands. Moreover, the Gaussian probability outcomes of the utility revenues are based on the varying consumer demands data pattern. Furthermore, Standard Monte Carlo (SMC) simulations are performed to validate the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provides a relationship between dependent (demand) and independent (weather data) variables for utility load management, generation control, and network expansion.

  7. A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario.

    PubMed

    Li, Bing; Cui, Wei; Wang, Bin

    2015-09-16

    Localization algorithms based on received signal strength indication (RSSI) are widely used in the field of target localization due to their convenient application and independence from hardware devices. Unfortunately, RSSI values are susceptible to fluctuations under the influence of non-line-of-sight (NLOS) conditions in indoor spaces. Existing algorithms often produce unreliable estimated distances, leading to low accuracy and low effectiveness in indoor target localization. Moreover, these approaches require extra prior knowledge about the propagation model. As such, we focus on the problem of localization in mixed LOS/NLOS scenarios and propose a novel localization algorithm: Gaussian mixed model based non-metric multidimensional scaling (GMDS). In GMDS, the RSSI is estimated using a Gaussian mixed model (GMM). The dissimilarity matrix is built to generate relative coordinates of the nodes by a multi-dimensional scaling (MDS) approach. Finally, based on the anchor nodes' actual coordinates and the target's relative coordinates, the target's actual coordinates can be computed via coordinate transformation. Our algorithm performs localization estimation well without being provided with prior knowledge. The experimental verification shows that GMDS effectively reduces the NLOS error, achieves higher accuracy in indoor mixed LOS/NLOS localization, and remains effective when single NLOS is extended to multiple NLOS.
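
    A rough sketch of the GMDS pipeline as described above is given below: noisy RSSI readings are smoothed with a GMM, a dissimilarity matrix feeds non-metric MDS, and the relative coordinates are aligned to the anchors by a Procrustes transformation. The RSSI-to-distance conversion, the two-component GMM, and all parameters are assumptions made for the illustration, not the authors' settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.manifold import MDS
from scipy.linalg import orthogonal_procrustes

def rssi_to_distance(rssi, p0=-40.0, n_path=2.5):
    # illustrative log-distance path-loss inversion (not the paper's model)
    return 10 ** ((p0 - rssi) / (10 * n_path))

def gmds_localize(rssi_samples, anchor_idx, anchor_xy):
    """rssi_samples: dict {(i, j): 1-D array of RSSI readings between nodes i, j}.
    Returns estimated coordinates for all nodes (minimal sketch of the
    GMM -> dissimilarity -> non-metric MDS -> anchor-alignment chain)."""
    n = max(max(i, j) for i, j in rssi_samples) + 1
    D = np.zeros((n, n))
    for (i, j), samples in rssi_samples.items():
        # smooth noisy readings with a GMM and keep the dominant component mean
        gmm = GaussianMixture(n_components=2, random_state=0).fit(
            np.asarray(samples, float).reshape(-1, 1))
        rssi_hat = gmm.means_.ravel()[np.argmax(gmm.weights_)]
        D[i, j] = D[j, i] = rssi_to_distance(rssi_hat)

    rel = MDS(n_components=2, metric=False, dissimilarity="precomputed",
              random_state=0).fit_transform(D)

    # align relative coordinates to the anchors (Procrustes with scaling)
    rel_c = rel - rel[anchor_idx].mean(axis=0)
    anc_c = anchor_xy - anchor_xy.mean(axis=0)
    R, s = orthogonal_procrustes(rel_c[anchor_idx], anc_c)
    scale = s / np.linalg.norm(rel_c[anchor_idx]) ** 2
    return rel_c @ R * scale + anchor_xy.mean(axis=0)
```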

  8. The series product for gaussian quantum input processes

    NASA Astrophysics Data System (ADS)

    Gough, John E.; James, Matthew R.

    2017-02-01

    We present a theory for connecting quantum Markov components into a network with quantum input processes in a Gaussian state (including thermal and squeezed). One would expect on physical grounds that the connection rules should be independent of the state of the input to the network. To compute statistical properties, we use a version of Wick's theorem involving fictitious vacuum fields (a Fock space based representation of the fields); while this aids computation and gives a rigorous formulation, the various representations need not be unitarily equivalent. In particular, a naive application of the connection rules would lead to the wrong answer. We establish the correct interconnection rules, and show that while the quantum stochastic differential equations of motion display explicitly the covariances (thermal and squeezing parameters) of the Gaussian input fields, we introduce the Wick-Stratonovich form which leads to a way of writing these equations that does not depend on these covariances and so corresponds to the universal equations written in terms of formal quantum input processes. We show that a wholly consistent theory of quantum open systems in series can be developed in this way, and as required physically, is universal and in particular representation-free.

  9. Radial restricted solid-on-solid and etching interface-growth models

    NASA Astrophysics Data System (ADS)

    Alves, Sidiney G.

    2018-03-01

    An approach to generate radial interfaces is presented. A recursively obtained radial network is used to implement discrete model rules designed originally for investigations on flat substrates. I used the restricted solid-on-solid and etching models to test the proposed scheme. The results indicate that the Kardar, Parisi, and Zhang conjecture is completely verified, leading to good agreement between the interface radius fluctuation distribution and the Gaussian unitary ensemble. The evolution of the radius agrees well with the generalized conjecture, and the two-point correlation function also exhibits good agreement with the covariance of the Airy2 process. The approach can be used to investigate the evolution of radial interfaces for many other universality classes.

  10. Radial restricted solid-on-solid and etching interface-growth models.

    PubMed

    Alves, Sidiney G

    2018-03-01

    An approach to generate radial interfaces is presented. A recursively obtained radial network is used to implement discrete model rules designed originally for investigations on flat substrates. I used the restricted solid-on-solid and etching models to test the proposed scheme. The results indicate that the Kardar, Parisi, and Zhang conjecture is completely verified, leading to good agreement between the interface radius fluctuation distribution and the Gaussian unitary ensemble. The evolution of the radius agrees well with the generalized conjecture, and the two-point correlation function also exhibits good agreement with the covariance of the Airy2 process. The approach can be used to investigate the evolution of radial interfaces for many other universality classes.

  11. Modeling of a VMJ PV array under Gaussian high intensity laser power beam condition

    NASA Astrophysics Data System (ADS)

    Eom, Jeongsook; Kim, Gunzung; Park, Yongwan

    2018-02-01

    The high intensity laser power beaming (HILPB) system is one of the most promising systems in the long-range wireless power transfer field. The vertical multi-junction photovoltaic (VMJ PV) array converts the HILPB into electricity to power the load or charge a battery. The output power of a VMJ PV array depends mainly on the irradiance values of the individual VMJ PV cells. To simulate an entire VMJ PV array, the irradiance profile of the Gaussian HILPB and the irradiance level of the VMJ PV cell are mathematically modeled first. The VMJ PV array is modeled as a network with dimension m*n, where m represents the number of VMJ PV cells in a column, and n represents the number of VMJ PV cells in a row. In order to validate the results obtained in modeling and simulation, a laboratory setup was developed using a 55 VMJ PV array. By using the output power model of the VMJ PV array, the receiver can establish an optimal power transmission path based on the received signal strength. When laser beams from multiple transmitters are aimed at a VMJ PV array at the same time, the received power is the sum of all the energy arriving at the array. The transmitter sends its power characteristics as optically coded laser pulses and delivers power as HILPB. Using the attenuated power model and the output power model of the VMJ PV array, the receiver can estimate the maximum receivable power from each transmitter and select the optimal transmitters.
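
    The irradiance bookkeeping described above can be sketched as follows: a 2-D Gaussian beam profile is sampled at the centre of each cell of an m x n array, and contributions from several transmitters are summed. Beam powers, waists, cell size, and the cell-centre approximation are all assumptions made for the illustration, not the paper's laboratory parameters.

```python
import numpy as np

def gaussian_irradiance(xx, yy, center, power, waist):
    """2-D Gaussian beam irradiance [W/m^2]: total power `power` with 1/e^2 radius `waist`."""
    r2 = (xx - center[0]) ** 2 + (yy - center[1]) ** 2
    return (2.0 * power / (np.pi * waist ** 2)) * np.exp(-2.0 * r2 / waist ** 2)

def array_irradiance(m, n, cell=0.01, beams=()):
    """Approximate per-cell irradiance of an m x n VMJ PV array by sampling
    the beam profile at each cell centre (cell size in metres, illustrative)."""
    x = (np.arange(n) + 0.5) * cell
    y = (np.arange(m) + 0.5) * cell
    xx, yy = np.meshgrid(x, y)
    total = np.zeros((m, n))
    for center, power, waist in beams:   # contributions from all transmitters add up
        total += gaussian_irradiance(xx, yy, center, power, waist)
    return total

# two illustrative transmitters aimed at different spots of a 5 x 11 array
beams = [((0.03, 0.02), 10.0, 0.02), ((0.08, 0.03), 5.0, 0.03)]
E = array_irradiance(5, 11, cell=0.01, beams=beams)
print("peak cell irradiance [W/m^2]:", E.max())
print("received power per cell [W]:")
print((E * 0.01 ** 2).round(3))
```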

  12. Bayesian nonparametric adaptive control using Gaussian processes.

    PubMed

    Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A

    2015-03-01

    Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element are fixed a priori, often through expert judgment. An example of such an adaptive element is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become noneffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. The GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.

  13. The Cramér-Rao Bounds and Sensor Selection for Nonlinear Systems with Uncertain Observations.

    PubMed

    Wang, Zhiguo; Shen, Xiaojing; Wang, Ping; Zhu, Yunmin

    2018-04-05

    This paper considers the problems of the posterior Cramér-Rao bound and sensor selection for multi-sensor nonlinear systems with uncertain observations. In order to effectively overcome the difficulties caused by uncertainty, we investigate two methods to derive the posterior Cramér-Rao bound. The first method is based on the recursive formula of the Cramér-Rao bound and the Gaussian mixture model. Nevertheless, it needs to compute a complex integral based on the joint probability density function of the sensor measurements and the target state. The computational burden of this method is relatively high, especially in large sensor networks. Inspired by the idea of the expectation maximization algorithm, the second method introduces some 0-1 latent variables to deal with the Gaussian mixture model. Since the regularity condition of the posterior Cramér-Rao bound is not satisfied for the discrete uncertain system, we use continuous variables to approximate the discrete latent variables. Then, a new Cramér-Rao bound can be achieved by a limiting process of the Cramér-Rao bound of the continuous system. This avoids the complex integral, which reduces the computational burden. Based on the new posterior Cramér-Rao bound, the optimal solution of the sensor selection problem can be derived analytically. Thus, it can be used to deal with sensor selection in large-scale sensor networks. Two typical numerical examples verify the effectiveness of the proposed methods.

  14. Analytical performance specifications for changes in assay bias (Δbias) for data with logarithmic distributions as assessed by effects on reference change values.

    PubMed

    Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György

    2016-11-01

    Background The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficient of variation for analytical imprecision (CV_A): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, combining Δbias and CV_A based on log-Gaussian distributions of CV_I expressed as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion The traditional and simple-to-apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CV_I and CV_A, is generally useful.
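
    For orientation, the two reference change value calculations being compared can be sketched with textbook formulas: the classical symmetric RCV under a Gaussian assumption and an asymmetric RCV obtained by working on the natural-log scale. This is a generic sketch, not the authors' model; in particular the Δbias term and the specific parameter values of the study are not included.

```python
import math

def rcv_gaussian(cv_a, cv_i, z=1.96):
    """Classical symmetric RCV (as a fraction) under the Gaussian assumption."""
    return math.sqrt(2.0) * z * math.sqrt(cv_a ** 2 + cv_i ** 2)

def rcv_lognormal(cv_a, cv_i, z=1.96):
    """Asymmetric RCV under a log-Gaussian model: variances are combined on the
    natural-log scale (sigma_ln^2 = ln(CV^2 + 1)) and transformed back.
    Generic log-normal RCV; the paper's Delta-bias treatment is not reproduced."""
    sigma_ln = math.sqrt(math.log(cv_a ** 2 + 1.0) + math.log(cv_i ** 2 + 1.0))
    up = math.exp(z * math.sqrt(2.0) * sigma_ln) - 1.0
    down = math.exp(-z * math.sqrt(2.0) * sigma_ln) - 1.0
    return up, down

# prolactin-like illustration: CV_A = 5%, CV_I = 20% (numbers are placeholders)
print("Gaussian RCV: +/- %.1f%%" % (100 * rcv_gaussian(0.05, 0.20)))
up, down = rcv_lognormal(0.05, 0.20)
print("log-Gaussian RCV: +%.1f%% / %.1f%%" % (100 * up, 100 * down))
```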

  15. GaussianCpG: a Gaussian model for detection of CpG island in human genome sequences.

    PubMed

    Yu, Ning; Guo, Xuan; Zelikovsky, Alexander; Pan, Yi

    2017-05-24

    As crucial markers in identifying biological elements and processes in mammalian genomes, CpG islands (CGI) play important roles in DNA methylation, gene regulation, epigenetic inheritance, gene mutation, chromosome inactivation and nucleosome retention. The generally accepted criteria for CGI rely on: (a) %G+C content is ≥ 50%, (b) the ratio of the observed CpG content and the expected CpG content is ≥ 0.6, and (c) the general length of the CGI is greater than 200 nucleotides. Most existing computational methods for the prediction of CpG islands are programmed on these rules. However, many experimentally verified CpG islands deviate from these artificial criteria. Experiments indicate that in many cases %G+C is < 50%, CpG_obs/CpG_exp varies, and the length of the CGI ranges from eight nucleotides to a few thousand nucleotides. This implies that CGI detection is not just a straightforward statistical task and that some unrevealed rules are probably hidden. A novel Gaussian model, GaussianCpG, is developed for the detection of CpG islands in the human genome. We analyze the energy distribution over the genomic primary structure for each CpG site and adopt the parameters from statistics of the human genome. The evaluation results show that the new model can predict CpG islands efficiently by balancing both sensitivity and specificity over known human CGI data sets. Compared with other models, GaussianCpG can achieve better performance in CGI detection. Our Gaussian model aims to simplify the complex interaction between nucleotides. The model is computed not by a linear statistical method but by Gaussian energy distribution and accumulation. The parameters of the Gaussian function are not arbitrarily designated but are deliberately chosen by optimizing the biological statistics. By using pseudopotential analysis on CpG islands, the novel model is validated on both real and artificial data sets.

  16. TH-C-BRD-04: Beam Modeling and Validation with Triple and Double Gaussian Dose Kernel for Spot Scanning Proton Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Takayanagi, T; Fujii, Y

    2014-06-15

    Purpose: To present the validity of our beam modeling with double and triple Gaussian dose kernels for spot scanning proton beams in the Nagoya Proton Therapy Center. This study investigates the conformance between the measurements and calculation results in absolute dose with two types of beam kernel. Methods: A dose kernel is one of the important input data required for the treatment planning software. The dose kernel is the 3D dose distribution of an infinitesimal pencil beam of protons in water and consists of integral depth doses and lateral distributions. We have adopted double and triple Gaussian models as the lateral distribution in order to take account of the large angle scattering due to nuclear reactions, by fitting the simulated in-water lateral dose profile for a needle proton beam at various depths. The fitted parameters were interpolated as a function of depth in water and were stored as a separate look-up table for each beam energy. The process of beam modeling is based on the method of MDACC [X.R.Zhu 2013]. Results: From the comparison between the absolute doses calculated by the double Gaussian model and those measured at the center of the SOBP, the difference increases up to 3.5% in the high-energy region because the large angle scattering due to nuclear reactions is not sufficiently considered at intermediate depths in the double Gaussian model. When employing triple Gaussian dose kernels, the measured absolute dose at the center of the SOBP agrees with the calculation within ±1% regardless of the SOBP width and maximum range. Conclusion: We have demonstrated the beam modeling results of dose distribution employing double and triple Gaussian dose kernels. The treatment planning system with the triple Gaussian dose kernel has been successfully verified and applied to patient treatment with a spot scanning technique in the Nagoya Proton Therapy Center.
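
    The kernel-fitting step mentioned above (a lateral dose profile represented by a sum of Gaussians) can be sketched as below, fitting two- and three-Gaussian sums to a profile by least squares and comparing the residuals. The profile here is synthetic and the starting values are arbitrary; it is not the Monte Carlo data or the fitting procedure used at the facility.

```python
import numpy as np
from scipy.optimize import curve_fit

def sum_gauss(r, *params):
    """Sum of N Gaussians; params = (w1, s1, w2, s2, ...) with weights w and sigmas s."""
    y = np.zeros_like(r, dtype=float)
    for w, s in zip(params[0::2], params[1::2]):
        y += w * np.exp(-0.5 * (r / s) ** 2)
    return y

# synthetic lateral dose profile with a broad low-dose "halo" (purely illustrative)
r = np.linspace(0, 60.0, 200)                      # radial distance, mm
profile = (np.exp(-0.5 * (r / 4.0) ** 2)
           + 0.02 * np.exp(-0.5 * (r / 15.0) ** 2)
           + 0.004 * np.exp(-0.5 * (r / 35.0) ** 2))

p2, _ = curve_fit(sum_gauss, r, profile, p0=[1.0, 4.0, 0.02, 20.0])
p3, _ = curve_fit(sum_gauss, r, profile, p0=[1.0, 4.0, 0.02, 15.0, 0.005, 35.0])

rms2 = np.sqrt(np.mean((profile - sum_gauss(r, *p2)) ** 2))
rms3 = np.sqrt(np.mean((profile - sum_gauss(r, *p3)) ** 2))
print("RMS residual, double Gaussian:", rms2)
print("RMS residual, triple Gaussian:", rms3)     # captures the low-dose halo better
```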

  17. Variational Gaussian approximation for Poisson data

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.

  18. Extinction time of a stochastic predator-prey model by the generalized cell mapping method

    NASA Astrophysics Data System (ADS)

    Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao

    2018-03-01

    The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.

  19. Modeling Multi-Variate Gaussian Distributions and Analysis of Higgs Boson Couplings with the ATLAS Detector

    NASA Astrophysics Data System (ADS)

    Krohn, Olivia; Armbruster, Aaron; Gao, Yongsheng; Atlas Collaboration

    2017-01-01

    Software tools developed for the purpose of modeling CERN LHC pp collision data to aid in its interpretation are presented. Some measurements are not adequately described by a Gaussian distribution; thus an interpretation assuming Gaussian uncertainties will inevitably introduce bias, necessitating analytical tools to recreate and evaluate non-Gaussian features. One example is the measurements of Higgs boson production rates in different decay channels, and the interpretation of these measurements. The ratios of data to Standard Model expectations (μ) for five arbitrary signals were modeled by building five Poisson distributions with mixed signal contributions such that the measured values of μ are correlated. Algorithms were designed to recreate probability distribution functions of μ as multi-variate Gaussians, where the standard deviation (σ) and correlation coefficients (ρ) are parametrized. There was good success with modeling 1-D likelihood contours of μ, and the multi-dimensional distributions were well modeled within 1σ, but the model began to diverge after 2σ due to unmerited assumptions in developing ρ. Future plans to improve the algorithms and develop a user-friendly analysis package will also be discussed. NSF International Research Experiences for Students

  20. Kalman filter with a linear state model for PDR+WLAN positioning and its application to assisting a particle filter

    NASA Astrophysics Data System (ADS)

    Raitoharju, Matti; Nurminen, Henri; Piché, Robert

    2015-12-01

    Indoor positioning based on wireless local area network (WLAN) signals is often enhanced using pedestrian dead reckoning (PDR) based on an inertial measurement unit. The state evolution model in PDR is usually nonlinear. We present a new linear state evolution model for PDR. In simulated-data and real-data tests of tightly coupled WLAN-PDR positioning, the positioning accuracy with this linear model is better than with the traditional models when the initial heading is not known, which is a common situation. The proposed method is computationally light and is also suitable for smoothing. Furthermore, we present modifications to WLAN positioning based on Gaussian coverage areas and show how a Kalman filter using the proposed model can be used for integrity monitoring and (re)initialization of a particle filter.
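
    As background to the record above, a generic linear Kalman filter predict/update cycle is sketched below for a position-velocity state fused with 2-D position fixes. The constant-velocity state model and all matrices are illustrative placeholders, not the paper's PDR state evolution model or its WLAN measurement model.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Kalman prediction for a linear state model x_k = F x_{k-1} + w, w ~ N(0, Q)."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Kalman update with a linear measurement z = H x + v, v ~ N(0, R)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

dt = 1.0
F = np.array([[1, 0, dt, 0],    # constant-velocity model (illustrative stand-in,
              [0, 1, 0, dt],    # not the paper's linear PDR state evolution)
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
Q = 0.1 * np.eye(4)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)      # WLAN provides a 2-D position fix
R = 4.0 * np.eye(2)

x, P = np.zeros(4), 10.0 * np.eye(4)
for z in [np.array([1.0, 0.5]), np.array([2.1, 1.1]), np.array([2.9, 1.4])]:
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, z, H, R)
print("final state estimate:", x)
```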

  1. Evaluation of the influence of double and triple Gaussian proton kernel models on accuracy of dose calculations for spot scanning technique.

    PubMed

    Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki

    2016-03-01

    The main purpose in this study was to present the results of beam modeling and how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for spot scanning technique. The accuracy of calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by TPS for the spot scanning technique. The dose distribution was calculated by convolving in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for spot scanning technique because the dose distribution was formed by cumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. The authors investigated the difference between double and triple Gaussian kernel models. The authors found that the difference between the two studied kernel models appeared at mid-depths and the accuracy of predicting the double Gaussian model deteriorated at the low-dose bump that appeared at mid-depths. When the authors employed the double Gaussian kernel model, the accuracy of calculations for the absolute dose at the center of the SOBP varied with irradiation conditions and the maximum difference was 3.4%. In contrast, the results obtained from calculations with the triple Gaussian kernel model indicated good agreement with the measurements within ±1.1%, regardless of the irradiation conditions. The difference between the results obtained with the two types of studied kernel models was distinct in the high energy region. The accuracy of calculations with the double Gaussian kernel model varied with the field size and SOBP width because the accuracy of prediction with the double Gaussian model was insufficient at the low-dose bump. The evaluation was only qualitative under limited volumetric irradiation conditions. Further accumulation of measured data would be needed to quantitatively comprehend what influence the double and triple Gaussian kernel models had on the accuracy of dose calculations.

  2. Exploring the Common Dynamics of Homologous Proteins. Application to the Globin Family

    PubMed Central

    Maguid, Sandra; Fernandez-Alberti, Sebastian; Ferrelli, Leticia; Echave, Julian

    2005-01-01

    We present a procedure to explore the global dynamics shared between members of the same protein family. The method allows the comparison of patterns of vibrational motion obtained by Gaussian network model analysis. After the identification of collective coordinates that were conserved during evolution, we quantify the common dynamics within a family. Representative vectors that describe these dynamics are defined using a singular value decomposition approach. As a test case, the globin heme-binding family is considered. The two lowest normal modes are shown to be conserved within this family. Our results encourage the development of models for protein evolution that take into account the conservation of dynamical features. PMID:15749782
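
    A minimal sketch of the Gaussian network model analysis underlying the study above: build the Kirchhoff (connectivity) matrix from Cα coordinates with a distance cutoff and take the lowest non-trivial eigenvectors as the slow modes. The comparison of modes across homologous structures via singular value decomposition is not shown, and the coordinates and cutoff here are illustrative.

```python
import numpy as np

def gnm_modes(coords, cutoff=7.0, n_modes=2):
    """Gaussian network model: build the Kirchhoff matrix from C-alpha
    coordinates (cutoff in Angstrom) and return the lowest non-trivial modes."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)      # -1 for contacting residue pairs
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))   # diagonal = contact degree

    vals, vecs = np.linalg.eigh(kirchhoff)
    # the first eigenvalue is ~0 (trivial mode for a connected network); skip it
    return vals[1:1 + n_modes], vecs[:, 1:1 + n_modes]

# illustrative fake C-alpha trace (replace with real PDB coordinates)
rng = np.random.default_rng(3)
ca = np.cumsum(rng.normal(scale=2.0, size=(150, 3)), axis=0)
freqs, modes = gnm_modes(ca)
print("lowest non-trivial eigenvalues:", freqs)
print("per-residue amplitude of the slowest mode:", np.round(modes[:, 0] ** 2, 4)[:10])
```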

  3. An Observing System Simulation Experiment Approach to Meteorological Network Assessment

    NASA Astrophysics Data System (ADS)

    Abbasnezhadi, K.; Rasmussen, P. F.; Stadnyk, T.; Boluwade, A.

    2016-12-01

    Proper knowledge of the spatiotemporal distribution of rainfall is important in order to conduct a careful investigation of water movement and storage throughout a catchment. Currently, the most accurate precipitation information available for the remote Boreal ecozones of northern Manitoba comes from the Canadian Precipitation Analysis (CaPA) data assimilation system. Throughout the Churchill River Basin (CRB), CaPA still does not have adequate skill due to the limited number of weather stations. A new approach to experimental network design was investigated based on the concept of an Observing System Simulation Experiment (OSSE). The OSSE-based network assessment procedure, which simulates the CaPA system, provides a scientific and hydrologically significant tool to assess the sensitivity of the CaPA precipitation analysis to observation network density throughout the CRB. To simulate the CaPA system, synthetic background and station data were generated by adding spatially uncorrelated and correlated Gaussian noise, respectively, to an assumed-true daily weather field synthesized by a gridded precipitation generator that emulates CaPA data. Given the true reference field on the one hand, and a set of pseudo-CaPA analyses associated with different network realizations on the other, a WATFLOOD hydrological model was employed to compare the modeled runoff. The simulations showed that as network density increases, the accuracy of CaPA precipitation products improves up to a certain limit, beyond which adding more stations to the network does not yield further improvement.
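
    The synthetic-observation step described above (spatially correlated Gaussian noise added to an assumed-true field) can be sketched with a Cholesky factorization of an exponential correlation model, as below. The correlation length, noise levels, station locations, and the gamma-distributed stand-in for the true field are all assumptions for the illustration.

```python
import numpy as np

def correlated_noise(station_xy, sigma=1.0, corr_length=50.0, rng=None):
    """Draw one realization of zero-mean Gaussian noise with exponential
    spatial correlation exp(-d / corr_length) between stations (distances in km)."""
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(station_xy[:, None, :] - station_xy[None, :, :], axis=-1)
    cov = sigma ** 2 * np.exp(-d / corr_length)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(station_xy)))
    return L @ rng.standard_normal(len(station_xy))

rng = np.random.default_rng(4)
stations = rng.uniform(0, 500, size=(30, 2))            # pseudo-station locations (km)
true_precip = rng.gamma(shape=2.0, scale=3.0, size=30)  # stand-in "true" daily field
# correlated noise -> pseudo station observations; uncorrelated noise -> pseudo background
pseudo_obs = np.clip(true_precip + correlated_noise(stations, sigma=1.5, rng=rng), 0, None)
background = true_precip + rng.normal(0, 2.0, size=30)
print(pseudo_obs.round(2))
```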

  4. Bayesian sensitivity analysis of bifurcating nonlinear models

    NASA Astrophysics Data System (ADS)

    Becker, W.; Worden, K.; Rowson, J.

    2013-01-01

    Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially-bifurcating models, which cannot be dealt with by using a single GP, although an open problem remains how to manage bifurcation boundaries that are not parallel to coordinate axes.
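
    A rough, non-Bayesian stand-in for the treed Gaussian process idea discussed above is sketched below: a shallow CART tree partitions the input space and an independent GP is fitted in each leaf, so a step-like (bifurcating) response can be captured at a partition boundary. The class name, tree depth, kernel, and toy response are all illustrative choices, not the cited method.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

class SimpleTreedGP:
    """Partition inputs with a shallow CART tree, then fit one GP per leaf.
    A rough, non-Bayesian stand-in for treed Gaussian processes."""

    def __init__(self, max_leaves=4):
        self.tree = DecisionTreeRegressor(max_leaf_nodes=max_leaves, min_samples_leaf=10)
        self.gps = {}

    def fit(self, X, y):
        self.tree.fit(X, y)
        leaves = self.tree.apply(X)
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-3),
                                          normalize_y=True)
            self.gps[leaf] = gp.fit(X[mask], y[mask])
        return self

    def predict(self, X):
        leaves = self.tree.apply(X)
        y = np.empty(len(X))
        for leaf, gp in self.gps.items():
            mask = leaves == leaf
            if mask.any():
                y[mask] = gp.predict(X[mask])
        return y

# bifurcating toy response: a step change at x = 0 that a single smooth GP struggles with
rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.where(X[:, 0] < 0, np.sin(3 * X[:, 0]), 2.0 + np.sin(3 * X[:, 0]))
y = y + 0.05 * rng.normal(size=200)
model = SimpleTreedGP().fit(X, y)
print(model.predict(np.array([[-0.5], [0.5]])))
```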

  5. A New Quality Control Method based on IRMCD for Wind Profiler Observation towards Future Assimilation Application

    NASA Astrophysics Data System (ADS)

    Chen, Min; Zhang, Yu

    2017-04-01

    A wind profiler network with a total of 65 profiling radars was operated by the MOC/CMA in China until July 2015. In this study, a quality control procedure is constructed to incorporate the profiler data from the wind-profiling network into the local data assimilation and forecasting system (BJRUC). The procedure applies a blacklisting check that removes stations with gross errors and an outlier check that rejects data with large deviations from the background. Instead of the bi-weighting method, which has been commonly implemented in outlier elimination for one-dimensional scalar observations, an outlier elimination method is developed based on the iterated reweighted minimum covariance determinant (IRMCD) for multivariate observations such as wind profiler data. A quality control experiment is performed separately for subsets of profiler data flagged with and without rain at every 00 UTC/12 UTC from 20 June to 30 September 2015. From the results, we find that with the quality control, the frequency distributions of the differences between the observations and the model background become more Gaussian-like and meet the Gaussian-distribution requirements for data assimilation. Further intensive assessment of each quality control step reveals that the stations rejected by blacklisting exhibit poor data quality, and the IRMCD rejects outliers in a robust and physically reasonable manner.

  6. A Gaussian framework for modeling effects of frequency-dependent attenuation, frequency-dependent scattering, and gating.

    PubMed

    Wear, Keith A

    2002-11-01

    For a wide range of applications in medical ultrasound, power spectra of received signals are approximately Gaussian. It has been established previously that an ultrasound beam with a Gaussian spectrum propagating through a medium with linear attenuation remains Gaussian. In this paper, Gaussian transformations are derived to model the effects of scattering (according to a power law, as is commonly applicable in soft tissues, especially over limited frequency ranges) and gating (with a Hamming window, a commonly used gate function). These approximations are shown to be quite accurate even for relatively broadband systems with fractional bandwidths approaching 100%. The theory is validated by experiments in phantoms consisting of glass particles suspended in agar.
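
    The Gaussian-under-linear-attenuation property cited above is easy to verify numerically: multiplying a Gaussian power spectrum by a frequency-linear attenuation factor leaves the width unchanged and shifts the centre frequency down by 2*beta*z*sigma^2 (the round-trip factor of 2 and all numeric values below are illustrative assumptions, not taken from the paper).

      import numpy as np

      f = np.linspace(0.0, 20.0, 20001)           # frequency axis, MHz
      f0, sigma = 7.5, 1.5                         # illustrative centre frequency and width (MHz)
      beta, z = 0.05, 4.0                          # attenuation slope (Np/MHz/cm, round trip) and depth (cm)

      spectrum = np.exp(-(f - f0) ** 2 / (2 * sigma ** 2))
      attenuated = spectrum * np.exp(-2 * beta * f * z)   # frequency-linear attenuation of power

      centroid = (f * attenuated).sum() / attenuated.sum()
      print("predicted downshift:", 2 * beta * z * sigma ** 2, "MHz")
      print("measured downshift :", f0 - centroid, "MHz")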

  7. Strong monogamy of bipartite and genuine multipartite entanglement: the Gaussian case.

    PubMed

    Adesso, Gerardo; Illuminati, Fabrizio

    2007-10-12

    We demonstrate the existence of general constraints on distributed quantum correlations, which impose a trade-off on bipartite and multipartite entanglement at once. For all N-mode Gaussian states under permutation invariance, we establish exactly a monogamy inequality, stronger than the traditional one, that by recursion defines a proper measure of genuine N-partite entanglement. Strong monogamy holds as well for subsystems of arbitrary size, and the emerging multipartite entanglement measure is found to be scale invariant. We unveil its operational connection with the optimal fidelity of continuous variable teleportation networks.

  8. Simulation of the usage of Gaussian mixture models for the purpose of modelling virtual mass spectrometry data.

    PubMed

    Plechawska, Małgorzata; Polańska, Joanna

    2009-01-01

    This article presents a method for the processing of mass spectrometry data. Mass spectra are modelled with Gaussian mixture models: every peak of the spectrum is represented by a single Gaussian whose parameters describe the location, height and width of the corresponding peak. An in-house implementation of the Expectation-Maximisation algorithm was used to perform all calculations. Errors were estimated with a virtual mass spectrometer, a tool originally designed to generate sets of spectra with defined parameters.
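
    A minimal stand-in for this kind of fit (using scikit-learn's EM-based GaussianMixture rather than the authors' own implementation) is to resample m/z values in proportion to intensity and estimate one Gaussian per peak; the two-peak synthetic spectrum below is purely illustrative.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      mz = np.linspace(1000, 1100, 2000)
      intensity = (300 * np.exp(-0.5 * ((mz - 1030) / 2.0) ** 2)
                   + 150 * np.exp(-0.5 * ((mz - 1070) / 3.0) ** 2))

      # draw pseudo-observations with probability proportional to intensity, then run EM
      samples = rng.choice(mz, size=20000, p=intensity / intensity.sum())
      gmm = GaussianMixture(n_components=2, random_state=0).fit(samples.reshape(-1, 1))

      for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
          print(f"peak at m/z {mu:.1f}, width {np.sqrt(var):.2f}, relative area {w:.2f}")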

  9. Formation of doughnut and super-Gaussian intensity distributions of laser radiation in the far field using a bimorph mirror

    NASA Astrophysics Data System (ADS)

    Lylova, A. N.; Sheldakova, Yu. V.; Kudryashov, A. V.; Samarkin, V. V.

    2018-01-01

    We consider the methods for modelling doughnut and super-Gaussian intensity distributions in the far field by means of deformable bimorph mirrors. A method for the rapid formation of a specified intensity distribution using a Shack-Hartmann sensor is proposed, and the results of the modelling of doughnut and super-Gaussian intensity distributions are presented.

  10. Revisiting non-Gaussianity from non-attractor inflation models

    NASA Astrophysics Data System (ADS)

    Cai, Yi-Fu; Chen, Xingang; Namjoo, Mohammad Hossein; Sasaki, Misao; Wang, Dong-Gang; Wang, Ziwei

    2018-05-01

    Non-attractor inflation is known as the only single field inflationary scenario that can violate the non-Gaussianity consistency relation with the Bunch-Davies vacuum state and generate large local non-Gaussianity. However, it is also known that non-attractor inflation by itself is incomplete and should be followed by a phase of slow-roll attractor. Moreover, there is a transition process between these two phases. In the past literature, this transition was approximated as instant and the evolution of non-Gaussianity in this phase was not fully studied. In this paper, we follow the detailed evolution of the non-Gaussianity through the transition phase into the slow-roll attractor phase, considering different types of transition. We find that the transition process has an important effect on the size of the local non-Gaussianity. We first compute the net contribution of the non-Gaussianities at the end of inflation in canonical non-attractor models. If the curvature perturbations keep evolving during the transition, such as in the case of a smooth transition or some sharp transition scenarios, the O(1) local non-Gaussianity generated in the non-attractor phase can be completely erased by the subsequent evolution, although the consistency relation remains violated. In extremal cases of sharp transition, where the super-horizon modes freeze immediately after the end of the non-attractor phase, the original non-attractor result can be recovered. We also study models with non-canonical kinetic terms, and find that the transition can typically contribute a suppression factor in the squeezed bispectrum, but the final local non-Gaussianity can still be made parametrically large.

  11. Inferring Single Neuron Properties in Conductance Based Balanced Networks

    PubMed Central

    Pool, Román Rossi; Mato, Germán

    2011-01-01

    Balanced states in large networks are a usual hypothesis for explaining the variability of neural activity in cortical systems. In this regime the statistics of the inputs are characterized by static and dynamic fluctuations, with the dynamic fluctuations following a Gaussian distribution. Such statistics allow the use of reverse correlation methods, by recording synaptic inputs and the spike trains of ongoing spontaneous activity without any additional input. By using this method, properties of the single neuron dynamics that are masked by the balanced state can be quantified. To show the feasibility of this approach we apply it to large networks of conductance based neurons. The networks are classified as Type I or Type II according to the bifurcations which neurons of the different populations undergo near the firing onset. We also analyze mixed networks, in which each population has a mixture of different neuronal types. We determine under which conditions the intrinsic noise generated by the network can be used to apply reverse correlation methods. We find that under realistic conditions we can ascertain with low error the types of neurons present in the network. We also find that data from neurons with similar firing rates can be combined to perform covariance analysis. We compare the results of these methods (which do not require any external input) to the standard procedure (which requires the injection of Gaussian noise into a single neuron). We find a good agreement between the two procedures. PMID:22016730

  12. Evolution of the frequency chirp of Gaussian pulses and beams when passing through a pulse compressor.

    PubMed

    Li, Derong; Lv, Xiaohua; Bowlan, Pamela; Du, Rui; Zeng, Shaoqun; Luo, Qingming

    2009-09-14

    The evolution of the frequency chirp of a laser pulse inside a classical pulse compressor is very different for plane waves and Gaussian beams, although after propagating through the last (4th) dispersive element, the two models give the same results. In this paper, we have analyzed the evolution of the frequency chirp of Gaussian pulses and beams using a method which directly obtains the spectral phase acquired by the compressor. We found the spatiotemporal couplings in the phase to be the fundamental reason for the difference in the frequency chirp acquired by a Gaussian beam and a plane wave. When the Gaussian beam propagates, an additional frequency chirp will be introduced if any spatiotemporal couplings (i.e. angular dispersion, spatial chirp or pulse front tilt) are present. However, if no couplings are present, the chirp of the Gaussian beam is the same as that of a plane wave. When the Gaussian beam is well collimated, the frequency chirps predicted by the plane-wave and Gaussian-beam models are in closer agreement. This work improves our understanding of pulse compressors and should be helpful for optimizing dispersion compensation schemes in many applications of femtosecond laser pulses.

  13. Superdiffusion in a non-Markovian random walk model with a Gaussian memory profile

    NASA Astrophysics Data System (ADS)

    Borges, G. M.; Ferreira, A. S.; da Silva, M. A. A.; Cressoni, J. C.; Viswanathan, G. M.; Mariz, A. M.

    2012-09-01

    Most superdiffusive non-Markovian random walk models assume that correlations are maintained at all time scales, e.g., fractional Brownian motion, Lévy walks, the Elephant walk and Alzheimer walk models. In the latter two models the random walker can always "remember" the initial times near t = 0. Assuming jump size distributions with finite variance, the question naturally arises: is superdiffusion possible if the walker is unable to recall the initial times? We give a conclusive answer to this general question by studying a non-Markovian model in which the walker's memory of the past is weighted by a Gaussian centered at time t/2, at which time the walker had one half of its present age, and with a standard deviation σt which grows linearly as the walker ages. For large widths we find that the model behaves similarly to the Elephant model, but for small widths this Gaussian memory profile model behaves like the Alzheimer walk model. We also report that the phenomenon of amnestically induced persistence, known to occur in the Alzheimer walk model, arises in the Gaussian memory profile model. We conclude that memory of the initial times is not a necessary condition for generating (log-periodic) superdiffusion. We show that the phenomenon of amnestically induced persistence extends to the case of a Gaussian memory profile.
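
    A minimal simulation of such a walk, under the usual Elephant-walk convention that a remembered step is repeated with probability p and reversed otherwise (p and the width ratio below are free parameters of the illustration, not values from the paper), might look as follows.

      import numpy as np

      def gaussian_memory_walk(n_steps=10_000, p=0.75, width_ratio=0.1, seed=0):
          """Non-Markovian walk whose memory of past steps is Gaussian-weighted around t/2."""
          rng = np.random.default_rng(seed)
          steps = np.empty(n_steps, dtype=int)
          steps[0] = 1 if rng.random() < 0.5 else -1
          for t in range(1, n_steps):
              # recall a previous time, Gaussian-weighted around t/2 with a width growing as t
              recalled = int(np.clip(rng.normal(loc=t / 2, scale=max(width_ratio * t, 1.0)), 0, t - 1))
              repeat = rng.random() < p                 # repeat the remembered step with probability p
              steps[t] = steps[recalled] if repeat else -steps[recalled]
          return np.cumsum(steps)

      x = gaussian_memory_walk()
      print("final displacement:", x[-1])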

  14. Robust radio interferometric calibration using the t-distribution

    NASA Astrophysics Data System (ADS)

    Kazemi, S.; Yatawatta, S.

    2013-10-01

    A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to the deviations of the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
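
    The robustness mechanism can be illustrated on a toy linear model: in the E-step of an EM/ECM scheme for Student's t noise, each residual receives the weight (nu + 1)/(nu + r^2/s^2), so outliers are automatically down-weighted in the subsequent weighted least-squares step. The real calibration problem is nonlinear and uses Levenberg-Marquardt in that step; the sketch below is only an analogue with illustrative parameter values.

      import numpy as np

      def robust_t_fit(a, y, nu=3.0, n_iter=50):
          """Linear model y ~= a @ theta with Student's t noise, via EM-style reweighting."""
          theta = np.linalg.lstsq(a, y, rcond=None)[0]            # ordinary least-squares start
          for _ in range(n_iter):
              r = y - a @ theta
              scale2 = np.mean(r ** 2)
              w = (nu + 1.0) / (nu + r ** 2 / scale2)             # E-step: down-weight outliers
              aw = a * w[:, None]
              theta = np.linalg.solve(a.T @ aw, aw.T @ y)         # M-step: weighted least squares
          return theta

      rng = np.random.default_rng(8)
      a = np.column_stack([np.ones(200), rng.normal(size=200)])
      y = a @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=200)
      y[:10] += 5.0                                               # a few strong outliers
      print("robust:", robust_t_fit(a, y), " plain LS:", np.linalg.lstsq(a, y, rcond=None)[0])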

  15. Testing the mutual information expansion of entropy with multivariate Gaussian distributions.

    PubMed

    Goethe, Martin; Fita, Ignacio; Rubi, J Miguel

    2017-12-14

    The mutual information expansion (MIE) represents an approximation of the configurational entropy in terms of low-dimensional integrals. It is frequently employed to compute entropies from simulation data of large systems, such as macromolecules, for which brute-force evaluation of the full configurational integral is intractable. Here, we test the validity of MIE for systems consisting of more than m = 100 degrees of freedom (dofs). The dofs are distributed according to multivariate Gaussian distributions which were generated from protein structures using a variant of the anisotropic network model. For the Gaussian distributions, we have semi-analytical access to the configurational entropy as well as to all contributions of MIE. This allows us to accurately assess the validity of MIE for different situations. We find that MIE diverges for systems containing long-range correlations which means that the error of consecutive MIE approximations grows with the truncation order n for all tractable n ≪ m. This fact implies severe limitations on the applicability of MIE, which are discussed in the article. For systems with correlations that decay exponentially with distance, MIE represents an asymptotic expansion of entropy, where the first successive MIE approximations approach the exact entropy, while MIE also diverges for larger orders. In this case, MIE serves as a useful entropy expansion when truncated up to a specific truncation order which depends on the correlation length of the system.
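
    For a multivariate Gaussian everything needed for the second-order MIE is available in closed form (marginal entropies 0.5*ln(2*pi*e*sigma_i^2) and pairwise mutual informations -0.5*ln(1 - rho_ij^2)), so the truncation error of H ~ sum_i H_i - sum_{i<j} I_ij can be checked directly; the 5-dimensional covariance below is illustrative, not one derived from an anisotropic network model.

      import numpy as np

      def gaussian_entropy(cov):
          d = cov.shape[0]
          return 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

      def mie_second_order(cov):
          """Second-order mutual information expansion: marginal entropies minus pairwise MI."""
          var = np.diag(cov)
          h = (0.5 * np.log(2 * np.pi * np.e * var)).sum()        # 1D marginal entropies
          corr = cov / np.sqrt(np.outer(var, var))
          d = cov.shape[0]
          for i in range(d):
              for j in range(i + 1, d):
                  h -= -0.5 * np.log(1.0 - corr[i, j] ** 2)       # subtract pairwise mutual information
          return h

      rng = np.random.default_rng(2)
      a = rng.normal(size=(5, 5))
      cov = a @ a.T + 5 * np.eye(5)                               # illustrative positive-definite covariance
      print("exact:", gaussian_entropy(cov), "  MIE-2:", mie_second_order(cov))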

  16. Infrared maritime target detection using a probabilistic single Gaussian model of sea clutter in Fourier domain

    NASA Astrophysics Data System (ADS)

    Zhou, Anran; Xie, Weixin; Pei, Jihong; Chen, Yapei

    2018-02-01

    For ship target detection in cluttered infrared image sequences, a robust detection method based on a probabilistic single Gaussian model of the sea background in the Fourier domain is put forward. The amplitude spectrum sequence at each frequency point of the pure seawater images in the Fourier domain, being more stable than the gray value sequence of each background pixel in the spatial domain, is modeled as Gaussian. Next, a probability-weighted matrix is built based on the stability of the pure seawater's total energy spectrum in the row direction, to make the Gaussian model more accurate. Then, the foreground frequency points are separated from the background frequency points by the model. Finally, false-alarm points are removed using the ships' shape features. The performance of the proposed method is tested by visual and quantitative comparisons with other methods.
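
    Stripped of the probability weighting and the shape-based false-alarm removal, the core of such a detector can be sketched as a per-frequency-bin Gaussian model of background amplitude spectra with a k-sigma test on new frames; the threshold k and the synthetic data below are illustrative assumptions.

      import numpy as np

      def train_background(frames):
          """frames: (T, H, W) pure-seawater images; per-bin mean/std of the amplitude spectrum."""
          spectra = np.abs(np.fft.fft2(frames, axes=(-2, -1)))
          return spectra.mean(axis=0), spectra.std(axis=0) + 1e-6

      def detect(frame, mu, sigma, k=4.0):
          """Boolean mask of frequency bins whose amplitude departs from the Gaussian model."""
          amp = np.abs(np.fft.fft2(frame))
          return np.abs(amp - mu) > k * sigma

      # usage (synthetic): background frames plus one frame with an added bright patch
      bg = np.random.default_rng(3).normal(size=(50, 64, 64))
      mu, sigma = train_background(bg)
      test = bg[0].copy(); test[20:26, 30:36] += 5.0
      print("flagged bins:", int(detect(test, mu, sigma).sum()))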

  17. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing.

    PubMed

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, which is changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with higher structural similarity index.

  18. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing

    PubMed Central

    Leong, Siow Hoo

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, which is changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with higher structural similarity index. PMID:28686634

  19. Non-parametric directionality analysis - Extension for removal of a single common predictor and application to time series.

    PubMed

    Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob

    2016-08-01

    The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions, and to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel non-parametric alternative for estimating directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.

    PubMed

    Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen

    2014-01-01

    Current network intrusion detection systems lack adaptability to the frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used where decision stumps are used as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.

  1. Advances in SCA and RF-DNA Fingerprinting Through Enhanced Linear Regression Attacks and Application of Random Forest Classifiers

    DTIC Science & Technology

    2014-09-18

    [Only fragments of this record were captured, chiefly an acronym glossary (e.g. WGN: White Gaussian Noise) from the thesis front matter, plus the note that a ...based tool called KillerBee was released in 2009 that increases the exposure of ZigBee and other IEEE 802.15.4-based Wireless Personal Area Networks (WPANs).]

  2. Using Bayesian neural networks to classify forest scenes

    NASA Astrophysics Data System (ADS)

    Vehtari, Aki; Heikkonen, Jukka; Lampinen, Jouko; Juujarvi, Jouni

    1998-10-01

    We present results that compare the performance of Bayesian learning methods for neural networks on the task of classifying forest scenes into trees and background. The classification task is demanding due to the texture richness of the trees, occlusions of the forest scene objects and diverse lighting conditions under operation. This makes it difficult to determine which image features are optimal for the classification. A natural way to proceed is to extract many different types of potentially suitable features and to evaluate their usefulness in later processing stages. One approach to coping with a large number of features is to use Bayesian methods to control the model complexity. Bayesian learning uses a prior on model parameters, combines this with evidence from the training data, and then integrates over the resulting posterior to make predictions. With this method, we can use large networks and many features without fear of overfitting. For this classification task we compare two Bayesian learning methods for multi-layer perceptron (MLP) neural networks: (1) the evidence framework of MacKay, which uses a Gaussian approximation to the posterior weight distribution and maximizes with respect to hyperparameters, and (2) a Markov chain Monte Carlo (MCMC) method due to Neal, in which the posterior distribution of the network parameters is numerically integrated using MCMC. As baseline classifiers for comparison we use (3) an MLP early-stopping committee, (4) K-nearest-neighbour and (5) Classification And Regression Tree.

  3. A Heavy Tailed Expectation Maximization Hidden Markov Random Field Model with Applications to Segmentation of MRI

    PubMed Central

    Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego

    2017-01-01

    A wide range of segmentation approaches assumes that intensity histograms extracted from magnetic resonance images (MRI) have a distribution for each brain tissue that can be modeled by a Gaussian distribution or a mixture of them. Nevertheless, intensity histograms of White Matter and Gray Matter are not symmetric and they exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF) in which the components are modeled using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model on synthetic data and brain MRI data. The proposed methodology presents two main advantages: firstly, it is more robust to outliers; secondly, we obtain results similar to those of the Gaussian model when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194

  4. Nanomechanical characterization of heterogeneous and hierarchical biomaterials and tissues using nanoindentation: the role of finite mixture models.

    PubMed

    Zadpoor, Amir A

    2015-03-01

    Mechanical characterization of biological tissues and biomaterials at the nano-scale is often performed using nanoindentation experiments. The different constituents of the characterized materials will then appear in the histogram that shows the probability of measuring a certain range of mechanical properties. An objective technique is needed to separate the probability distributions that are mixed together in such a histogram. In this paper, finite mixture models (FMMs) are proposed as a tool capable of performing such types of analysis. Finite Gaussian mixture models assume that the measured probability distribution is a weighted combination of a finite number of Gaussian distributions with separate mean and standard deviation values. Dedicated optimization algorithms are available for fitting such a weighted mixture model to experimental data. Moreover, certain objective criteria are available to determine the optimum number of Gaussian distributions. In this paper, FMMs are used for interpreting the probability distribution functions representing the distributions of the elastic moduli of osteoarthritic human cartilage and co-polymeric microspheres. As for cartilage experiments, FMMs indicate that at least three mixture components are needed for describing the measured histogram. While the mechanical properties of the softer mixture components, often assumed to be associated with Glycosaminoglycans, were found to be more or less constant regardless of whether two or three mixture components were used, those of the second mixture component (i.e. collagen network) considerably changed depending on the number of mixture components. Regarding the co-polymeric microspheres, the optimum number of mixture components estimated by the FMM theory, i.e. 3, nicely matches the number of co-polymeric components used in the structure of the polymer. The computer programs used for the presented analyses are made freely available online for other researchers to use. Copyright © 2014 Elsevier B.V. All rights reserved.
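
    A typical way to implement this kind of analysis (not necessarily the authors' released programs) is to fit Gaussian mixtures with an increasing number of components and pick the model minimizing an information criterion such as the BIC; the synthetic three-constituent modulus data below are purely illustrative.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(4)
      # illustrative indentation moduli (arbitrary units): three synthetic constituents
      moduli = np.concatenate([rng.normal(0.5, 0.1, 300),
                               rng.normal(2.0, 0.4, 300),
                               rng.normal(6.0, 1.0, 200)]).reshape(-1, 1)

      fits = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(moduli)
              for k in range(1, 6)}
      bic = {k: m.bic(moduli) for k, m in fits.items()}
      best = min(bic, key=bic.get)                    # BIC-optimal number of mixture components
      print("BIC-optimal number of components:", best)
      print("component means:", np.sort(fits[best].means_.ravel()))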

  5. Neural networks with local receptive fields and superlinear VC dimension.

    PubMed

    Schmitt, Michael

    2002-04-01

    Local receptive field neurons comprise such well-known and widely used unit types as radial basis function (RBF) neurons and neurons with center-surround receptive field. We study the Vapnik-Chervonenkis (VC) dimension of feedforward neural networks with one hidden layer of these units. For several variants of local receptive field neurons, we show that the VC dimension of these networks is superlinear. In particular, we establish the bound Ω(W log k) for any reasonably sized network with W parameters and k hidden nodes. This bound is shown to hold for discrete center-surround receptive field neurons, which are physiologically relevant models of cells in the mammalian visual system, for neurons computing a difference of Gaussians, which are popular in computational vision, and for standard RBF neurons, a major alternative to sigmoidal neurons in artificial neural networks. The result for RBF neural networks is of particular interest since it answers a question that has been open for several years. The results also give rise to lower bounds for networks with fixed input dimension. Regarding constants, all bounds are larger than those known thus far for similar architectures with sigmoidal neurons. The superlinear lower bounds contrast with linear upper bounds for single local receptive field neurons also derived here.

  6. Improved Gaussian Beam-Scattering Algorithm

    NASA Technical Reports Server (NTRS)

    Lock, James A.

    1995-01-01

    The localized model of the beam-shape coefficients for Gaussian beam-scattering theory by a spherical particle provides a great simplification in the numerical implementation of the theory. We derive an alternative form for the localized coefficients that is more convenient for computer computations and that provides physical insight into the details of the scattering process. We construct a FORTRAN program for Gaussian beam scattering with the localized model and compare its computer run time on a personal computer with that of a traditional Mie scattering program and with three other published methods for computing Gaussian beam scattering. We show that the analytical form of the beam-shape coefficients makes evident the fact that the excitation rate of morphology-dependent resonances is greatly enhanced for far off-axis incidence of the Gaussian beam.

  7. Recursive random forest algorithm for constructing multilayered hierarchical gene regulatory networks that govern biological pathways.

    PubMed

    Deng, Wenping; Zhang, Kui; Busov, Victor; Wei, Hairong

    2017-01-01

    Present knowledge indicates that a multilayered hierarchical gene regulatory network (ML-hGRN) often operates above a biological pathway. Although the ML-hGRN is very important for understanding how a pathway is regulated, there is almost no computational algorithm for directly constructing ML-hGRNs. A backward elimination random forest (BWERF) algorithm was developed for constructing the ML-hGRN operating above a biological pathway. For each pathway gene, BWERF used a random forest model to calculate the importance values of all transcription factors (TFs) for this pathway gene recursively, with a portion (e.g. 1/10) of the least important TFs excluded in each round of modeling; in each round, the importance values of all TFs for the pathway gene were updated and ranked, until only one TF remained in the list. After that, the importance values of a TF to all pathway genes were aggregated and fitted to a Gaussian mixture model to determine the TF retention for the regulatory layer immediately above the pathway layer. The acquired TFs at the secondary layer were then set as the new bottom layer to infer the next upper layer, and this process was repeated until an ML-hGRN with the expected number of layers was obtained. BWERF improved the accuracy of constructing ML-hGRNs because it used backward elimination to exclude noise genes, and aggregated the individual importance values when determining TF retention. We validated BWERF by using it to construct ML-hGRNs operating above the mouse pluripotency maintenance pathway and the Arabidopsis lignocellulosic pathway. Compared to GENIE3, BWERF showed an improvement in recognizing authentic TFs regulating a pathway. Compared to the bottom-up Gaussian graphical model algorithm we developed for constructing ML-hGRNs, BWERF can construct ML-hGRNs with significantly fewer edges, enabling biologists to choose the implicit edges for experimental validation.
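
    The backward-elimination loop for a single pathway gene can be sketched as follows, with a scikit-learn random forest as a stand-in; the aggregation of importances across pathway genes and the Gaussian-mixture cutoff of the paper are not shown, and all names are illustrative.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      def backward_elimination(tf_expr, gene_expr, drop_frac=0.1, seed=0):
          """tf_expr: (samples, n_tfs) expression matrix; returns TF index -> importance ranking."""
          remaining = list(range(tf_expr.shape[1]))
          ranking = {}
          while len(remaining) > 1:
              rf = RandomForestRegressor(n_estimators=300, random_state=seed)
              rf.fit(tf_expr[:, remaining], gene_expr)
              order = np.argsort(rf.feature_importances_)          # ascending importance
              n_drop = max(1, int(drop_frac * len(remaining)))
              for pos in order[:n_drop]:                           # record and drop the weakest TFs
                  ranking[remaining[pos]] = rf.feature_importances_[pos]
              remaining = [remaining[pos] for pos in order[n_drop:]]
          ranking[remaining[0]] = 1.0                              # the last surviving TF is ranked highest
          return ranking

      # usage: rank = backward_elimination(tf_matrix, pathway_gene_vector)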

  8. Gaussian graphical modeling reconstructs pathway reactions from high-throughput metabolomics data

    PubMed Central

    2011-01-01

    Background With the advent of high-throughput targeted metabolic profiling techniques, the question of how to interpret and analyze the resulting vast amount of data becomes more and more important. In this work we address the reconstruction of metabolic reactions from cross-sectional metabolomics data, that is without the requirement for time-resolved measurements or specific system perturbations. Previous studies in this area mainly focused on Pearson correlation coefficients, which however are generally incapable of distinguishing between direct and indirect metabolic interactions. Results In our new approach we propose the application of a Gaussian graphical model (GGM), an undirected probabilistic graphical model estimating the conditional dependence between variables. GGMs are based on partial correlation coefficients, that is pairwise Pearson correlation coefficients conditioned against the correlation with all other metabolites. We first demonstrate the general validity of the method and its advantages over regular correlation networks with computer-simulated reaction systems. Then we estimate a GGM on data from a large human population cohort, covering 1020 fasting blood serum samples with 151 quantified metabolites. The GGM is much sparser than the correlation network, shows a modular structure with respect to metabolite classes, and is stable to the choice of samples in the data set. On the example of human fatty acid metabolism, we demonstrate for the first time that high partial correlation coefficients generally correspond to known metabolic reactions. This feature is evaluated both manually by investigating specific pairs of high-scoring metabolites, and then systematically on a literature-curated model of fatty acid synthesis and degradation. Our method detects many known reactions along with possibly novel pathway interactions, representing candidates for further experimental examination. Conclusions In summary, we demonstrate strong signatures of intracellular pathways in blood serum data, and provide a valuable tool for the unbiased reconstruction of metabolic reactions from large-scale metabolomics data sets. PMID:21281499
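
    The central computation, converting an estimated precision matrix into partial correlations via rho_ij|rest = -P_ij / sqrt(P_ii * P_jj), can be sketched as below; a sparse GraphicalLassoCV estimator is used here as a convenient stand-in for the shrinkage estimator such population-scale data would normally require, and the three-metabolite chain is synthetic.

      import numpy as np
      from sklearn.covariance import GraphicalLassoCV

      def partial_correlations(data):
          """data: (samples, metabolites); returns the partial correlation matrix."""
          precision = GraphicalLassoCV().fit(data).precision_
          d = np.sqrt(np.diag(precision))
          pcorr = -precision / np.outer(d, d)
          np.fill_diagonal(pcorr, 1.0)
          return pcorr

      # usage (synthetic chain A -> B -> C: A and C correlate only through B)
      rng = np.random.default_rng(5)
      a = rng.normal(size=1000)
      b = a + 0.5 * rng.normal(size=1000)
      c = b + 0.5 * rng.normal(size=1000)
      pc = partial_correlations(np.column_stack([a, b, c]))
      print(np.round(pc, 2))   # the A-C entry should be near zero, unlike its Pearson correlation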

  9. Mechanisms of Seizure Propagation in 2-Dimensional Centre-Surround Recurrent Networks

    PubMed Central

    Hall, David; Kuhlmann, Levin

    2013-01-01

    Understanding how seizures spread throughout the brain is an important problem in the treatment of epilepsy, especially for implantable devices that aim to avert focal seizures before they spread to, and overwhelm, the rest of the brain. This paper presents an analysis of the speed of propagation in a computational model of seizure-like activity in a 2-dimensional recurrent network of integrate-and-fire neurons containing both excitatory and inhibitory populations and having a difference of Gaussians connectivity structure, an approximation to that observed in cerebral cortex. In the same computational model network, alternative mechanisms are explored in order to simulate the range of seizure-like activity propagation speeds (0.1–100 mm/s) observed in two animal-slice-based models of epilepsy: (1) low extracellular , which creates excess excitation and (2) introduction of gamma-aminobutyric acid (GABA) antagonists, which reduce inhibition. Moreover, two alternative connection topologies are considered: excitation broader than inhibition, and inhibition broader than excitation. It was found that the empirically observed range of propagation velocities can be obtained for both connection topologies. For the case of the GABA antagonist model simulation, consistent with other studies, it was found that there is an effective threshold in the degree of inhibition below which waves begin to propagate. For the case of the low extracellular model simulation, it was found that activity-dependent reductions in inhibition provide a potential explanation for the emergence of slowly propagating waves. This was simulated as a depression of inhibitory synapses, but it may also be achieved by other mechanisms. This work provides a localised network understanding of the propagation of seizures in 2-dimensional centre-surround networks that can be tested empirically. PMID:23967201

  10. Anomalous scaling of a passive scalar advected by the Navier-Stokes velocity field: two-loop approximation.

    PubMed

    Adzhemyan, L Ts; Antonov, N V; Honkonen, J; Kim, T L

    2005-01-01

    The field theoretic renormalization group and operator-product expansion are applied to the model of a passive scalar quantity advected by a non-Gaussian velocity field with finite correlation time. The velocity is governed by the Navier-Stokes equation, subject to an external random stirring force with the correlation function proportional to δ(t − t′) k^(4−d−2ε). It is shown that the scalar field is intermittent already for small ε, its structure functions display anomalous scaling behavior, and the corresponding exponents can be systematically calculated as series in ε. The practical calculation is accomplished to order ε² (two-loop approximation), including anisotropic sectors. As for the well-known Kraichnan rapid-change model, the anomalous scaling results from the existence in the model of composite fields (operators) with negative scaling dimensions, identified with the anomalous exponents. Thus the mechanism of the origin of anomalous scaling appears similar for the Gaussian model with zero correlation time and the non-Gaussian model with finite correlation time. It should be emphasized that, in contrast to Gaussian velocity ensembles with finite correlation time, the model and the perturbation theory discussed here are manifestly Galilean covariant. The relevance of these results for real passive advection and comparison with the Gaussian models and experiments are briefly discussed.

  11. Dynamic decomposition of spatiotemporal neural signals

    PubMed Central

    2017-01-01

    Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states in order to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals. PMID:28558039

  12. Distant Speech Recognition Using a Microphone Array Network

    NASA Astrophysics Data System (ADS)

    Nakano, Alberto Yoshihiro; Nakagawa, Seiichi; Yamamoto, Kazumasa

    In this work, spatial information consisting of the position and orientation angle of an acoustic source is estimated by an artificial neural network (ANN). The estimated position of a speaker in an enclosed space is used to refine the estimated time delays for a delay-and-sum beamformer, thus enhancing the output signal. On the other hand, the orientation angle is used to restrict the lexicon used in the recognition phase, assuming that the speaker faces a particular direction while speaking. To compensate for the effect of the transmission channel inside a short frame analysis window, a new cepstral mean normalization (CMN) method based on a Gaussian mixture model (GMM) is investigated and shows better performance than the conventional CMN for short utterances. The performance of the proposed method is evaluated through Japanese digit/command recognition experiments.

  13. Robust Target Tracking with Multi-Static Sensors under Insufficient TDOA Information.

    PubMed

    Shin, Hyunhak; Ku, Bonhwa; Nelson, Jill K; Ko, Hanseok

    2018-05-08

    This paper focuses on underwater target tracking based on a multi-static sonar network composed of passive sonobuoys and an active ping. In the multi-static sonar network, the location of the target can be estimated using TDOA (Time Difference of Arrival) measurements. However, since the sensor network may obtain insufficient and inaccurate TDOA measurements due to ambient noise and other harsh underwater conditions, target tracking performance can be significantly degraded. We propose a robust target tracking algorithm designed to operate in such a scenario. First, track management with track splitting is applied to reduce performance degradation caused by insufficient measurements. Second, a target location is estimated by a fusion of multiple TDOA measurements using a Gaussian Mixture Model (GMM). In addition, the target trajectory is refined by conducting a stack-based data association method based on multiple-frames measurements in order to more accurately estimate target trajectory. The effectiveness of the proposed method is verified through simulations.

  14. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575

  15. An improved pulse coupled neural network with spectral residual for infrared pedestrian segmentation

    NASA Astrophysics Data System (ADS)

    He, Fuliang; Guo, Yongcai; Gao, Chao

    2017-12-01

    Pulse coupled neural networks (PCNNs) have become a significant tool for infrared pedestrian segmentation, and a variety of related methods have been developed. However, existing models commonly suffer from poor adaptability to infrared noise, inaccurate segmentation results, and fairly complex parameter determination. This paper presents an improved PCNN model that integrates a simplified framework and spectral residual to alleviate these problems. In this model, firstly, the weight matrix of the feeding input field is designed using anisotropic Gaussian kernels (ANGKs) in order to suppress infrared noise effectively. Secondly, the normalized spectral residual saliency is introduced as the linking coefficient to markedly enhance the edges and structural characteristics of segmented pedestrians. Finally, an improved dynamic threshold based on the average gray values of the iterative segmentation is employed to simplify the original PCNN model. Experiments on the IEEE OTCBVS benchmark and on an infrared pedestrian image database built by our laboratory demonstrate the superiority of our model in both subjective visual effects and objective quantitative evaluations of information differences and segmentation errors, compared with other classic segmentation methods.

  16. High-speed millimeter communication through radio-over-free-space-optics network by mode-division multiplexing

    NASA Astrophysics Data System (ADS)

    Chaudhary, Sushank; Amphawan, Angela

    2017-11-01

    In an attempt to meet the goal of distributing millimeter-wave (mm-wave) signals, recent years have witnessed significant attention being given to combining radio-frequency and optical-fiber technologies. The future of radio-over-free-space-optics technology aims to build a universal platform for distributing millimeter waves for wireless local area networks without using expensive optical fibers. This work is focused on the simultaneous transmission of four independent OFDM-based channels, each carrying 20 Gbps-40 GHz data, by mode-division multiplexing of a Laguerre-Gaussian mode (with a vortex lens) and a Hermite-Gaussian mode, to realize a total transmission of 80 Gbps-160 GHz data over a 50-km free-space optical link. Moreover, the performance of the proposed system is also evaluated under the influence of various atmospheric turbulences, such as light fog, thin fog, and thick fog.

  17. Dynamic design of ecological monitoring networks for non-Gaussian spatio-temporal data

    USGS Publications Warehouse

    Wikle, C.K.; Royle, J. Andrew

    2005-01-01

    Many ecological processes exhibit spatial structure that changes over time in a coherent, dynamical fashion. This dynamical component is often ignored in the design of spatial monitoring networks. Furthermore, ecological variables related to processes such as habitat are often non-Gaussian (e.g. Poisson or log-normal). We demonstrate that a simulation-based design approach can be used in settings where the data distribution is from a spatio-temporal exponential family. The key random component in the conditional mean function from this distribution is then a spatio-temporal dynamic process. Given the computational burden of estimating the expected utility of various designs in this setting, we utilize an extended Kalman filter approximation to facilitate implementation. The approach is motivated by, and demonstrated on, the problem of selecting sampling locations to estimate July brood counts in the prairie pothole region of the U.S.

  18. 40Gbit/s MDM-WDM Laguerre-Gaussian Mode with Equalization for Multimode Fiber in Access Networks

    NASA Astrophysics Data System (ADS)

    Fazea, Yousef; Amphawan, Angela

    2018-04-01

    Modal dispersion is seen as the primary impairment in multimode fiber. Mode division multiplexing (MDM) has been recognized as a promising technology for considerably increasing the capacity and reach of multimode fiber, in conjunction with wavelength division multiplexing (WDM), for fiber-to-the-home. This paper shows the importance of an equalization technique, combined with control of the mode spacing in mode division multiplexing-wavelength division multiplexing of Laguerre-Gaussian modes, for alleviating modal dispersion in multimode fiber. The effects of channel spacing in a 20-channel MDM-WDM system were examined by controlling the azimuthal and radial mode numbers of the Laguerre-Gaussian modes. A data rate of 40 Gbit/s was achieved over a distance of 1,500 m for MDM-WDM.

  19. Curvaton scenario within the minimal supersymmetric standard model and predictions for non-Gaussianity.

    PubMed

    Mazumdar, Anupam; Nadathur, Seshadri

    2012-03-16

    We provide a model in which both the inflaton and the curvaton are obtained from within the minimal supersymmetric standard model, with known gauge and Yukawa interactions. Since now both the inflaton and curvaton fields are successfully embedded within the same sector, their decay products thermalize very quickly before the electroweak scale. This results in two important features of the model: first, there will be no residual isocurvature perturbations, and second, observable non-Gaussianities can be generated with the non-Gaussianity parameter f(NL)~O(5-1000) being determined solely by the combination of weak-scale physics and the standard model Yukawa interactions.

  20. Non-Gaussian lineshapes and dynamics of time-resolved linear and nonlinear (correlation) spectra.

    PubMed

    Dinpajooh, Mohammadhasan; Matyushov, Dmitry V

    2014-07-17

    Signatures of nonlinear and non-Gaussian dynamics in time-resolved linear and nonlinear (correlation) 2D spectra are analyzed in a model considering a linear plus quadratic dependence of the spectroscopic transition frequency on a Gaussian nuclear coordinate of the thermal bath (quadratic coupling). This new model is contrasted to the commonly assumed linear dependence of the transition frequency on the medium nuclear coordinates (linear coupling). The linear coupling model predicts equality between the Stokes shift and equilibrium correlation functions of the transition frequency and time-independent spectral width. Both predictions are often violated, and we are asking here the question of whether a nonlinear solvent response and/or non-Gaussian dynamics are required to explain these observations. We find that correlation functions of spectroscopic observables calculated in the quadratic coupling model depend on the chromophore's electronic state and the spectral width gains time dependence, all in violation of the predictions of the linear coupling models. Lineshape functions of 2D spectra are derived assuming Ornstein-Uhlenbeck dynamics of the bath nuclear modes. The model predicts asymmetry of 2D correlation plots and bending of the center line. The latter is often used to extract two-point correlation functions from 2D spectra. The dynamics of the transition frequency are non-Gaussian. However, the effect of non-Gaussian dynamics is limited to the third-order (skewness) time correlation function, without affecting the time correlation functions of higher order. The theory is tested against molecular dynamics simulations of a model polar-polarizable chromophore dissolved in a force field water.

  1. Gaussian signal relaxation around spin echoes: Implications for precise reversible transverse relaxation quantification of pulmonary tissue at 1.5 and 3 Tesla.

    PubMed

    Zapp, Jascha; Domsch, Sebastian; Weingärtner, Sebastian; Schad, Lothar R

    2017-05-01

    To characterize the reversible transverse relaxation in pulmonary tissue and to study the benefit of a quadratic exponential (Gaussian) model over the commonly used linear exponential model for increased quantification precision. A point-resolved spectroscopy sequence was used for comprehensive sampling of the relaxation around spin echoes. Measurements were performed in an ex vivo tissue sample and in healthy volunteers at 1.5 Tesla (T) and 3 T. The goodness of fit, by means of the reduced χ², and the precision of the fitted relaxation time, by means of its confidence interval, were compared between the two relaxation models. The Gaussian model provides enhanced descriptions of pulmonary relaxation, with a lower reduced χ² by average factors of 4 ex vivo and 3 in volunteers. The Gaussian model indicates higher sensitivity to tissue structure alteration, with the precision of the reversible transverse relaxation time measurements increased by average factors of 4 ex vivo and 3 in volunteers. The mean relaxation times of the Gaussian model in volunteers are T2,G' = (1.97 ± 0.27) msec at 1.5 T and T2,G' = (0.83 ± 0.21) msec at 3 T. Pulmonary signal relaxation was found to be accurately modeled as Gaussian, providing a potential biomarker T2,G' with high sensitivity. Magn Reson Med 77:1938-1945, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
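
    A comparison of the two relaxation models reduces to a nonlinear least-squares fit around the echo; the sketch below uses scipy.optimize.curve_fit on synthetic data with one possible quadratic-exponential parameterization, S(t) = S0*exp(-(t/T2,G')^2), and illustrative time constants rather than the paper's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def linear_exp(t, s0, t2p):            # S(t) = S0 * exp(-|t| / T2')
          return s0 * np.exp(-np.abs(t) / t2p)

      def gaussian_decay(t, s0, t2g):        # S(t) = S0 * exp(-(t / T2,G')**2)
          return s0 * np.exp(-(t / t2g) ** 2)

      t = np.linspace(-5e-3, 5e-3, 101)                          # seconds around the echo
      rng = np.random.default_rng(6)
      signal = gaussian_decay(t, 1.0, 2.0e-3) + 0.02 * rng.normal(size=t.size)

      for model, p0 in ((linear_exp, (1.0, 2e-3)), (gaussian_decay, (1.0, 2e-3))):
          popt, _ = curve_fit(model, t, signal, p0=p0)
          resid = signal - model(t, *popt)
          print(model.__name__, "fitted time constant = %.2f ms" % (popt[1] * 1e3),
                "SSE = %.4f" % (resid ** 2).sum())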

  2. Separation of the atmospheric variability into non-Gaussian multidimensional sources by projection pursuit techniques

    NASA Astrophysics Data System (ADS)

    Pires, Carlos A. L.; Ribeiro, Andreia F. S.

    2017-02-01

    We develop an expansion of space-distributed time series into statistically independent, uncorrelated subspaces (statistical sources) of low dimension that exhibit enhanced non-Gaussian probability distributions with geometrically simple chosen shapes (the projection pursuit rationale). The method relies upon a generalization of principal component analysis, which is optimal for Gaussian mixed signals, and of independent component analysis (ICA), which is optimized to split non-Gaussian scalar sources. The proposed method, supported by information theory concepts and methods, is independent subspace analysis (ISA), which looks for multi-dimensional, intrinsically synergetic subspaces such as dyads (2D) and triads (3D) that are not separable by ICA. Basically, we optimize rotated variables maximizing certain nonlinear correlations (contrast functions) arising from the non-Gaussianity of the joint distribution. As a by-product, it provides nonlinear variable changes 'unfolding' the subspaces into nearly Gaussian scalars that are easier to post-process. Moreover, the new variables still work as nonlinear data exploratory indices of the non-Gaussian variability of the analysed climatic and geophysical fields. The method (ISA, followed by nonlinear unfolding) is tested on three datasets. The first comes from the Lorenz'63 three-dimensional chaotic model and shows a clear separation into a non-Gaussian dyad plus an independent scalar. The second is a mixture of propagating waves of random correlated phases in which the emergence of triadic wave resonances imprints a statistical signature in terms of a non-Gaussian, non-separable triad. Finally, the method is applied to the monthly variability of a high-dimensional quasi-geostrophic (QG) atmospheric model of the Northern Hemispheric winter. We find that quite enhanced non-Gaussian dyads of parabolic shape perform much better than the unrotated variables as regards the separation of the model's four centroid regimes (positive and negative phases of the Arctic Oscillation and of the North Atlantic Oscillation). Triads are also likely in the QG model but of weaker expression than dyads, due to the imposed shape and dimension. The study emphasizes the existence of nonlinear dyadic and triadic teleconnections.

  3. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202

  4. Measuring Symmetry, Asymmetry and Randomness in Neural Network Connectivity

    PubMed Central

    Esposito, Umberto; Giugliano, Michele; van Rossum, Mark; Vasilaki, Eleni

    2014-01-01

    Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution and that introducing a random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community, by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists that investigate symmetry of network connectivity. PMID:25006663
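
    The exact symmetry measure is defined in the paper; the sketch below only illustrates the general recipe of a threshold-free index (here, the fraction of off-diagonal weight energy carried by the symmetric part of W) compared against its null distribution under uniform or Gaussian random weights.

    ```python
    # Illustration only: this is NOT the paper's measure, just a generic
    # threshold-free symmetry index with a Monte Carlo z-score against chance.
    import numpy as np

    def symmetry_index(W):
        """Fraction of off-diagonal 'energy' carried by the symmetric part of W."""
        off = ~np.eye(W.shape[0], dtype=bool)
        S = 0.5 * (W + W.T)
        A = 0.5 * (W - W.T)
        return (S[off] ** 2).sum() / ((S[off] ** 2).sum() + (A[off] ** 2).sum())

    def z_score_vs_random(W, sampler, n_null=2000, seed=0):
        rng = np.random.default_rng(seed)
        null = np.array([symmetry_index(sampler(rng, W.shape)) for _ in range(n_null)])
        return (symmetry_index(W) - null.mean()) / null.std()

    uniform = lambda rng, shape: rng.uniform(0, 1, shape)
    gauss = lambda rng, shape: rng.normal(1.0, 0.3, shape)

    rng = np.random.default_rng(1)
    W = rng.uniform(0, 1, (50, 50))
    W = 0.8 * 0.5 * (W + W.T) + 0.2 * W        # a partly symmetrised connectivity matrix
    print(z_score_vs_random(W, uniform), z_score_vs_random(W, gauss))
    ```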

  5. Measuring symmetry, asymmetry and randomness in neural network connectivity.

    PubMed

    Esposito, Umberto; Giugliano, Michele; van Rossum, Mark; Vasilaki, Eleni

    2014-01-01

    Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution and that introducing a random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community, by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists that investigate symmetry of network connectivity.

  6. Data from fitting Gaussian process models to various data sets using eight Gaussian process software packages.

    PubMed

    Erickson, Collin B; Ankenman, Bruce E; Sanchez, Susan M

    2018-06-01

    This data article provides the summary data from tests comparing various Gaussian process software packages. Each spreadsheet represents a single function or type of function using a particular input sample size. In each spreadsheet, a row gives the results for a particular replication using a single package. Within each spreadsheet there are the results from eight Gaussian process model-fitting packages on five replicates of the surface. There is also one spreadsheet comparing the results from two packages performing stochastic kriging. These data enable comparisons between the packages to determine which package will give users the best results.

  7. Analysis of low altitude atmospheric turbulence data measured in flight

    NASA Technical Reports Server (NTRS)

    Ganzer, V. M.; Joppa, R. G.; Vanderwees, G.

    1977-01-01

    All three components of turbulence were measured simultaneously in flight at each wing tip of a Beech D-18 aircraft. The flights were conducted at low altitude, 30.5-61.0 meters (100-200 ft), over water in the presence of wind-driven turbulence. Statistical properties of the flight-measured turbulence were compared with Gaussian and non-Gaussian turbulence models. Spatial characteristics of the turbulence were analyzed using the data from flights perpendicular and parallel to the wind. The probability density distributions of the vertical gusts show distinctly non-Gaussian characteristics. The distributions of the longitudinal and lateral gusts are generally Gaussian. In the inertial subrange, the power spectra at some points agree better with the Dryden spectrum, while at other points the von Karman spectrum is a better approximation. In the low frequency range the data show peaks or dips in the power spectral density. The cross spectra between the vertical gusts and the gusts in the direction of the mean wind were compared with a matched non-Gaussian model. The real component of the cross spectrum is in general close to the non-Gaussian model. The imaginary component, however, indicated a larger phase shift between these two gust components than was found in previous research.

  8. Smoothing the Marmousi Model

    NASA Astrophysics Data System (ADS)

    Žáček, K.

    Summary- The only way to make an excessively complex velocity model suitable for application of ray-based methods, such as the Gaussian beam or Gaussian packet methods, is to smooth it. We have smoothed the Marmousi model by choosing a coarser grid and by minimizing the second spatial derivatives of the slowness. This was done by minimizing the relevant Sobolev norm of slowness. We show that minimizing the relevant Sobolev norm of slowness is a suitable technique for preparing the optimum models for asymptotic ray theory methods. However, the price we pay for a model suitable for ray tracing is an increase of the difference between the smoothed and original model. Similarly, the estimated error in the travel time also increases due to the difference between the models. In smoothing the Marmousi model, we have found the estimated error of travel times at the verge of acceptability. Due to the low frequencies in the wavefield of the original Marmousi data set, we have found the Gaussian beams and Gaussian packets at the verge of applicability even in models sufficiently smoothed for ray tracing.
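
    A one-dimensional sketch of the smoothing idea, assuming a quadratic misfit plus a second-derivative (Sobolev-type) penalty on slowness; the actual Marmousi smoothing is performed in two dimensions on a coarsened grid.

    ```python
    # Minimal 1-D illustration: minimise ||m - s||^2 + lam * ||D2 m||^2, which gives
    # the linear system (I + lam * D2^T D2) m = s.  lam and the toy profile are assumptions.
    import numpy as np

    def smooth_slowness(s, lam=50.0):
        """Solve (I + lam * D2^T D2) m = s for the smoothed profile m."""
        n = len(s)
        D2 = np.zeros((n - 2, n))
        for i in range(n - 2):                 # second-difference operator
            D2[i, i:i + 3] = [1.0, -2.0, 1.0]
        return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, s)

    x = np.linspace(0, 1, 200)
    slowness = 0.5 + 0.1 * np.sign(np.sin(12 * np.pi * x))   # blocky toy model
    smoothed = smooth_slowness(slowness)
    print(float(np.abs(smoothed - slowness).max()))           # price paid for smoothness
    ```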

  9. Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions

    NASA Astrophysics Data System (ADS)

    Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio

    1993-02-01

    The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.

  10. Cost effectiveness of the stream-gaging program in North Dakota

    USGS Publications Warehouse

    Ryan, Gerald L.

    1989-01-01

    This report documents results of a cost-effectiveness study of the stream-gaging program in North Dakota. It is part of a nationwide evaluation of the stream-gaging program of the U.S. Geological Survey. One phase of evaluating cost effectiveness is to identify less costly alternative methods of simulating streamflow records. Statistical or hydrologic flow-routing methods were used as alternative methods to simulate streamflow records for 21 combinations of gaging stations from the 94-gaging-station network. Accuracy of the alternative methods was sufficient to consider discontinuing only one gaging station. Operation of the gaging-station network was evaluated by using associated uncertainty in streamflow records. The evaluation was limited to the nonwinter operation of 29 gaging stations in eastern North Dakota. The current (1987) travel routes and measurement frequencies require a budget of about $248,000 and result in an average equivalent Gaussian spread in streamflow records of 16.5 percent. Changes in routes and measurement frequencies could optimally reduce the average equivalent Gaussian spread to 14.7 percent. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget would increase the optimal average equivalent Gaussian spread from 14.7 to 20.4 percent, and a $400,000 budget could decrease it to 5.8 percent.

  11. SIRANERISK: Modelling dispersion of steady and unsteady pollutant releases in the urban canopy

    NASA Astrophysics Data System (ADS)

    Soulhac, L.; Lamaison, G.; Cierco, F.-X.; Ben Salem, N.; Salizzoni, P.; Mejean, P.; Armand, P.; Patryl, L.

    2016-09-01

    SIRANERISK is an operational model for simulating the dispersion of unsteady atmospheric releases of pollutant within and above an urban area. SIRANERISK is built on the same principles as the SIRANE model, and couples a street network model for pollutant transfer within the urban canopy with a Gaussian puff model for transfer above it. The performance of the model is analysed here by detailed comparisons with wind-tunnel experiments. These experiments concern the dispersion of steady and unsteady pollutant releases within and above obstacle arrays with varying geometrical configurations, representing different topologies of idealised urban districts. The overall good agreement between numerical and experimental data demonstrates the reliability of SIRANERISK as an operational tool for risk assessment and for the management of crises due to the accidental release of harmful airborne pollutants within a built environment.

  12. TMA Navigator: network inference, patient stratification and survival analysis with tissue microarray data

    PubMed Central

    Lubbock, Alexander L. R.; Katz, Elad; Harrison, David J.; Overton, Ian M.

    2013-01-01

    Tissue microarrays (TMAs) allow multiplexed analysis of tissue samples and are frequently used to estimate biomarker protein expression in tumour biopsies. TMA Navigator (www.tmanavigator.org) is an open access web application for analysis of TMA data and related information, accommodating categorical, semi-continuous and continuous expression scores. Non-biological variation, or batch effects, can hinder data analysis and may be mitigated using the ComBat algorithm, which is incorporated with enhancements for automated application to TMA data. Unsupervised grouping of samples (patients) is provided according to Gaussian mixture modelling of marker scores, with cardinality selected by Bayesian information criterion regularization. Kaplan–Meier survival analysis is available, including comparison of groups identified by mixture modelling using the Mantel-Cox log-rank test. TMA Navigator also supports network inference approaches useful for TMA datasets, which often constitute comparatively few markers. Tissue and cell-type specific networks derived from TMA expression data offer insights into the molecular logic underlying pathophenotypes, towards more effective and personalized medicine. Output is interactive, and results may be exported for use with external programs. Private anonymous access is available, and user accounts may be generated for easier data management. PMID:23761446
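
    A brief sketch of the unsupervised grouping step described above: Gaussian mixture modelling of marker scores with the number of components chosen by the Bayesian information criterion. scikit-learn is used purely for illustration, and the marker scores below are synthetic; TMA Navigator itself is a web application.

    ```python
    # Fit GMMs with 1..5 components and keep the one with the lowest BIC; the
    # resulting groups are the patient strata that would feed Kaplan-Meier analysis.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    scores = np.r_[rng.normal(1.0, 0.3, (60, 2)), rng.normal(3.0, 0.4, (40, 2))]  # toy marker scores

    models = [GaussianMixture(n_components=k, random_state=0).fit(scores) for k in range(1, 6)]
    best = min(models, key=lambda m: m.bic(scores))
    groups = best.predict(scores)
    print(best.n_components, np.bincount(groups))
    ```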

  13. Capacity of PPM on Gaussian and Webb Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Dolinar, S.; Pollara, F.; Hamkins, J.

    2000-01-01

    This paper computes and compares the capacities of M-ary PPM on various idealized channels that approximate the optical communication channel: (1) the standard additive white Gaussian noise (AWGN) channel; (2) a more general AWGN channel (AWGN2) allowing different variances in signal and noise slots; (3) a Webb-distributed channel (Webb2); (4) a Webb+Gaussian channel, modeling Gaussian thermal noise added to Webb-distributed channel outputs.

  14. Simulation and analysis of scalable non-Gaussian statistically anisotropic random functions

    NASA Astrophysics Data System (ADS)

    Riva, Monica; Panzeri, Marco; Guadagnini, Alberto; Neuman, Shlomo P.

    2015-12-01

    Many earth and environmental (as well as other) variables, Y, and their spatial or temporal increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture some key aspects of such scaling by treating Y or ΔY as standard sub-Gaussian random functions. We were however unable to reconcile two seemingly contradictory observations, namely that whereas sample frequency distributions of Y (or its logarithm) exhibit relatively mild non-Gaussian peaks and tails, those of ΔY display peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we overcame this difficulty by developing a new generalized sub-Gaussian model which captures both behaviors in a unified and consistent manner, exploring it on synthetically generated random functions in one dimension (Riva et al., 2015). Here we extend our generalized sub-Gaussian model to multiple dimensions, present an algorithm to generate corresponding random realizations of statistically isotropic or anisotropic sub-Gaussian functions and illustrate it in two dimensions. We demonstrate the accuracy of our algorithm by comparing ensemble statistics of Y and ΔY (such as, mean, variance, variogram and probability density function) with those of Monte Carlo generated realizations. We end by exploring the feasibility of estimating all relevant parameters of our model by analyzing jointly spatial moments of Y and ΔY obtained from a single realization of Y.

  15. Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.

    PubMed

    Mao, Tianqi; Wang, Zhaocheng; Wang, Qi

    2017-01-23

    Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, the existing literature only deals with a simplified channel model, which considers the effects of Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude-modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and that both detectors are capable of accurately demodulating the SPAD-based PAM signals.
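
    A toy sketch of the GAT-based detection path: the generalized Anscombe transform maps Poisson-Gaussian observations to approximately Gaussian variables, after which simple threshold (hard-decision) detection applies. The 4-PAM levels and noise parameters below are illustrative, not values from the paper.

    ```python
    # Generalized Anscombe transform for y = gain*Poisson + N(mu, sigma^2), followed by
    # midpoint thresholds in the transformed domain.  All numeric settings are assumptions.
    import numpy as np

    def generalized_anscombe(x, sigma, gain=1.0, mu=0.0):
        arg = gain * x + 3.0 / 8.0 * gain ** 2 + sigma ** 2 - gain * mu
        return 2.0 / gain * np.sqrt(np.maximum(arg, 0.0))

    rng = np.random.default_rng(0)
    levels = np.array([10.0, 30.0, 60.0, 100.0])          # 4-PAM mean photon counts
    tx = rng.integers(0, 4, 10000)
    rx = rng.poisson(levels[tx]) + rng.normal(0.0, 2.0, tx.size)   # SPAD counts + thermal noise

    z = generalized_anscombe(rx, sigma=2.0)
    zl = generalized_anscombe(levels, sigma=2.0)
    thresholds = 0.5 * (zl[:-1] + zl[1:])                 # midpoints in the Gaussianised domain
    detected = np.searchsorted(thresholds, z)
    print("symbol error rate:", (detected != tx).mean())
    ```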

  16. On the lorentzian versus Gaussian character of time-domain spin-echo signals from the brain as sampled by means of gradient-echoes: Implications for quantitative transverse relaxation studies.

    PubMed

    Mulkern, Robert V; Balasubramanian, Mukund; Mitsouras, Dimitrios

    2014-07-30

    To determine whether Lorentzian or Gaussian intra-voxel frequency distributions are better suited for modeling data acquired with gradient-echo sampling of single spin-echoes for the simultaneous characterization of irreversible and reversible relaxation rates. Clinical studies (e.g., of brain iron deposition) using such acquisition schemes have typically assumed Lorentzian distributions. Theoretical expressions of the time-domain spin-echo signal for intra-voxel Lorentzian and Gaussian distributions were used to fit data from a human brain scanned at both 1.5 Tesla (T) and 3T, resulting in maps of irreversible and reversible relaxation rates for each model. The relative merits of the Lorentzian versus Gaussian model were compared by means of quality-of-fit considerations. Lorentzian fits were equivalent to Gaussian fits primarily in regions of the brain where irreversible relaxation dominated. In the multiple brain regions where reversible relaxation effects become prominent, however, Gaussian fits were clearly superior. The widespread assumption that a Lorentzian distribution is suitable for quantitative transverse relaxation studies of the brain should be reconsidered, particularly at 3T and higher field strengths as reversible relaxation effects become more prominent. Gaussian distributions offer alternate fits of experimental data that should prove quite useful in general. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
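
    A toy illustration of the model comparison, assuming that near the spin echo the reversible decay is exp(-R2'|tau|) for a Lorentzian intra-voxel frequency distribution and exp(-0.5 (sigma tau)^2) for a Gaussian one; both forms are fitted to synthetic data and compared by residual sum of squares. This is not the authors' full acquisition model.

    ```python
    # Fit both reversible-decay shapes to synthetic echo-centred data and compare fits.
    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(tau, s0, r2p):
        return s0 * np.exp(-r2p * np.abs(tau))

    def gaussian(tau, s0, sig):
        return s0 * np.exp(-0.5 * (sig * tau) ** 2)

    rng = np.random.default_rng(0)
    tau = np.linspace(-0.02, 0.02, 41)                       # seconds around the echo
    data = gaussian(tau, 1.0, 120.0) + rng.normal(0, 0.01, tau.size)  # Gaussian-type voxel

    for name, f, p0 in [("Lorentzian", lorentzian, (1.0, 100.0)),
                        ("Gaussian", gaussian, (1.0, 100.0))]:
        p, _ = curve_fit(f, tau, data, p0=p0)
        print(name, "RSS =", float(((f(tau, *p) - data) ** 2).sum()))
    ```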

  17. Cough event classification by pretrained deep neural network.

    PubMed

    Liu, Jia-Ming; You, Mingyu; Wang, Zheng; Li, Guo-Zheng; Xu, Xianghuai; Qiu, Zhongmin

    2015-01-01

    Cough is an essential symptom in respiratory diseases. In the measurement of cough severity, an accurate and objective cough monitor is expected by the respiratory disease society. This paper aims to introduce a better-performing algorithm, the pretrained deep neural network (DNN), to the cough classification problem, which is a key step in the cough monitor. The deep neural network models are built in two steps, pretraining and fine-tuning, followed by a Hidden Markov Model (HMM) decoder to capture temporal information of the audio signals. By unsupervised pretraining of a deep belief network, a good initialization for a deep neural network is learned. The fine-tuning step is then a back-propagation pass tuning the neural network so that it can predict the observation probability associated with each HMM state, where the HMM states are originally obtained by forced alignment with a Gaussian Mixture Model Hidden Markov Model (GMM-HMM) on the training samples. Three cough HMMs and one noncough HMM are employed to model coughs and noncoughs respectively. The final decision is made based on the Viterbi decoding algorithm, which generates the most likely HMM sequence for each sample. A sample is labeled as cough if a cough HMM is found in the sequence. The experiments were conducted on a dataset collected from 22 patients with respiratory diseases. Patient-dependent (PD) and patient-independent (PI) experimental settings were used to evaluate the models. Five criteria, sensitivity, specificity, F1, macro average and micro average, are shown to depict different aspects of the models. From the overall evaluation criteria, the DNN-based methods are superior to the traditional GMM-HMM based method on F1 and micro average, with maximal 14% and 11% error reduction in PD and 7% and 10% in PI, while keeping similar performance on macro average. They also surpass the GMM-HMM model on specificity, with maximal 14% error reduction on both PD and PI. In this paper, we applied pretrained deep neural networks to the cough classification problem. Our results showed that, compared with the conventional GMM-HMM framework, the HMM-DNN achieves better overall performance on the cough classification task.

  18. Zn-metalloprotease sequences in extremophiles

    NASA Astrophysics Data System (ADS)

    Holden, T.; Dehipawala, S.; Golebiewska, U.; Cheung, E.; Tremberger, G., Jr.; Williams, E.; Schneider, P.; Gadura, N.; Lieberman, D.; Cheung, T.

    2010-09-01

    The Zn-metalloprotease family contains conserved amino acid structures such that the nucleotide fluctuation at the DNA level would exhibit correlated randomness as described by fractal dimension. A nucleotide sequence fractal dimension can be calculated from a numerical series consisting of the atomic numbers of each nucleotide. The structure's vibration modes can also be studied using a Gaussian Network Model. The vibration measure and fractal dimension values form a two-dimensional plot with a standard vector metric that can be used for comparison of structures. The preference for amino acid usage in extremophiles may suppress nucleotide fluctuations that could be analyzed in terms of fractal dimension and Shannon entropy. A protein level cold adaptation study of the thermolysin Zn-metalloprotease family using molecular dynamics simulation was reported recently and our results show that the associated nucleotide fluctuation suppression is consistent with a regression pattern generated from the sequences' fractal dimension and entropy values (R-square = 0.98, N = 5). It was observed that cold adaptation selected for high entropy and low fractal dimension values. Extension to the Archaemetzincin M54 family in extremophiles reveals a similar regression pattern (R-square = 0.98, N = 6). It was observed that the metalloprotease sequences of extremely halophilic organisms possess high fractal dimension and low entropy values as compared with non-halophiles. The zinc atom is usually bonded to the histidine residue, which shows limited levels of vibration in the Gaussian Network Model. The variability of the fractal dimension and entropy for a given protein structure suggests that extremophiles would have evolved after mesophiles, consistent with the biased usage of non-prebiotic amino acids by extremophiles. It may be argued that extremophiles have the capacity to offer extinction protection during drastic changes in astrobiological environments.

  19. Interactive Gaussian Graphical Models for Discovering Depth Trends in ChemCam Data

    NASA Astrophysics Data System (ADS)

    Oyen, D. A.; Komurlu, C.; Lanza, N. L.

    2018-04-01

    Interactive Gaussian graphical models discover surface compositional features on rocks in ChemCam targets. Our approach visualizes shot-to-shot relationships among LIBS observations, and identifies the wavelengths involved in the trend.
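
    A minimal sketch of the underlying estimation step, assuming a sparse Gaussian graphical model fitted with the graphical lasso over hypothetical shot-by-wavelength data; the interactive visualisation described in the abstract is not reproduced here.

    ```python
    # Graphical lasso on synthetic "shots x wavelength bands" data; the support of the
    # estimated precision matrix gives the edges of the Gaussian graphical model.
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    n_shots, n_channels = 30, 8                 # hypothetical: 30 LIBS shots, 8 wavelength bands
    depth_trend = np.linspace(0, 1, n_shots)[:, None]
    X = depth_trend @ rng.normal(size=(1, n_channels)) + 0.5 * rng.normal(size=(n_shots, n_channels))

    model = GraphicalLasso(alpha=0.05).fit(X)
    support = np.abs(model.precision_) > 1e-3   # edges of the graphical model
    print(support.sum() - n_channels)           # number of off-diagonal edges
    ```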

  20. Characterization of Adrenal Adenoma by Gaussian Model-Based Algorithm.

    PubMed

    Hsu, Larson D; Wang, Carolyn L; Clark, Toshimasa J

    2016-01-01

    We confirmed that computed tomography (CT) attenuation values of pixels in an adrenal nodule approximate a Gaussian distribution. Building on this and the previously described histogram analysis method, we created an algorithm that uses the mean and standard deviation to estimate the percentage of negative-attenuation pixels in an adrenal nodule, thereby allowing differentiation of adenomas and nonadenomas. The institutional review board approved both components of this study, in which we developed and then validated our criteria. In the first, we retrospectively assessed CT attenuation values of adrenal nodules for normality using a 2-sample Kolmogorov-Smirnov test. In the second, we evaluated a separate cohort of patients with adrenal nodules using both the conventional 10-HU mean attenuation method and our Gaussian model-based algorithm. We compared the sensitivities of the 2 methods using McNemar's test. A total of 183 of 185 observations (98.9%) demonstrated a Gaussian distribution in adrenal nodule pixel attenuation values. The sensitivity and specificity of our Gaussian model-based algorithm for identifying adrenal adenoma were 86.1% and 83.3%, respectively. The sensitivity and specificity of the mean attenuation method were 53.2% and 94.4%, respectively. The sensitivities of the 2 methods were significantly different (P value < 0.001). In conclusion, the CT attenuation values within an adrenal nodule follow a Gaussian distribution. Our Gaussian model-based algorithm can characterize adrenal adenomas with higher sensitivity than the conventional mean attenuation method. The use of our algorithm, which does not require additional postprocessing, may increase workflow efficiency and reduce unnecessary workup of benign nodules. Copyright © 2016 Elsevier Inc. All rights reserved.
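
    A short sketch of the Gaussian model-based estimate: if nodule attenuation is approximately Gaussian, the fraction of negative-HU pixels follows from the ROI mean and standard deviation alone. The 20% decision cut-off used below is illustrative, not the threshold validated in the study.

    ```python
    # Estimate P(pixel < 0 HU) from the ROI mean and SD under a Gaussian fit.
    from scipy.stats import norm

    def estimated_negative_fraction(mean_hu, sd_hu):
        """P(pixel < 0 HU) under a Gaussian fit to the ROI histogram."""
        return norm.cdf(0.0, loc=mean_hu, scale=sd_hu)

    for mean_hu, sd_hu in [(18.0, 25.0), (35.0, 20.0)]:
        frac = estimated_negative_fraction(mean_hu, sd_hu)
        call = "likely adenoma" if frac > 0.20 else "indeterminate"   # illustrative cut-off
        print(f"mean={mean_hu} HU, sd={sd_hu} HU -> {100*frac:.1f}% negative pixels ({call})")
    ```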

  1. Assimilating every-30-second 100-m-mesh radar observations for convective weather: implications to non-Gaussian PDF

    NASA Astrophysics Data System (ADS)

    Miyoshi, T.; Teramura, T.; Ruiz, J.; Kondo, K.; Lien, G. Y.

    2016-12-01

    Convective weather is known to be highly nonlinear and chaotic, and the location and timing of convective storms are hard to predict precisely. Our Big Data Assimilation (BDA) effort has been exploring the use of dense and frequent observations to avoid a non-Gaussian probability density function (PDF) and to apply an ensemble Kalman filter under the Gaussian error assumption. The phased array weather radar (PAWR) can observe a dense three-dimensional volume scan with 100-m range resolution and 100 elevation angles in only 30 seconds. The BDA system assimilates the PAWR reflectivity and Doppler velocity observations every 30 seconds into 100 ensemble members of a storm-scale numerical weather prediction (NWP) model at 100-m grid spacing. The 30-second-update, 100-m-mesh BDA system has been quite successful in multiple case studies of local severe rainfall events. However, with 1000 ensemble members, the reduced-resolution BDA system at 1-km grid spacing showed a significantly non-Gaussian PDF with every-30-second updates. With a 10240-member ensemble Kalman filter and a global NWP model at 112-km grid spacing, we found roughly 1000 members satisfactory to capture the non-Gaussian error structures. With these in mind, we explore how the density of observations in space and time affects the non-Gaussianity in an ensemble Kalman filter with a simple toy model. In this presentation, we will present the most up-to-date results of the BDA research, as well as the investigation with the toy model on the non-Gaussianity with dense and frequent observations.

  2. Estimation of the daily global solar radiation based on the Gaussian process regression methodology in the Saharan climate

    NASA Astrophysics Data System (ADS)

    Guermoui, Mawloud; Gairaa, Kacem; Rabehi, Abdelaziz; Djafer, Djelloul; Benkaciali, Said

    2018-06-01

    Accurate estimation of solar radiation is a major concern in renewable energy applications. Over the past few years, many machine learning paradigms have been proposed in order to improve estimation performance, mostly based on artificial neural networks, fuzzy logic, support vector machines and adaptive neuro-fuzzy inference systems. The aim of this work is the prediction of the daily global solar radiation received on a horizontal surface through the Gaussian process regression (GPR) methodology. A case study of the Ghardaïa region (Algeria) has been used in order to validate the above methodology. Several input combinations have been tested; it was found that the GPR model based on sunshine duration, minimum air temperature and relative humidity gives the best results in terms of mean absolute bias error (MBE), root mean square error (RMSE), relative root mean square error (rRMSE), and correlation coefficient (r). The obtained values of these indicators are 0.67 MJ/m2, 1.15 MJ/m2, 5.2%, and 98.42%, respectively.
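
    A minimal sketch of the GPR set-up with synthetic stand-ins for the sunshine-duration, minimum-temperature and relative-humidity inputs; the study itself uses measured data for Ghardaïa and reports MBE, RMSE, rRMSE and r.

    ```python
    # GP regression with an anisotropic RBF + noise kernel on toy inputs/targets.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    n = 200
    X = np.c_[rng.uniform(0, 12, n),      # sunshine duration (h)
              rng.uniform(0, 25, n),      # minimum air temperature (deg C)
              rng.uniform(10, 90, n)]     # relative humidity (%)
    H = 5 + 1.6 * X[:, 0] + 0.2 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(0, 1.0, n)  # MJ/m^2 (toy)

    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=[5.0, 10.0, 40.0]) + WhiteKernel(),
                                   normalize_y=True).fit(X[:150], H[:150])
    pred, std = gpr.predict(X[150:], return_std=True)
    rmse = float(np.sqrt(np.mean((pred - H[150:]) ** 2)))
    print("hold-out RMSE (MJ/m^2):", round(rmse, 2))
    ```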

  3. Vegetation Monitoring with Gaussian Processes and Latent Force Models

    NASA Astrophysics Data System (ADS)

    Camps-Valls, Gustau; Svendsen, Daniel; Martino, Luca; Campos, Manuel; Luengo, David

    2017-04-01

    Monitoring vegetation by biophysical parameter retrieval from Earth observation data is a challenging problem, where machine learning is currently a key player. Neural networks, kernel methods, and Gaussian Process (GP) regression have excelled in parameter retrieval tasks at both local and global scales. GP regression is based on solid Bayesian statistics, yields efficient and accurate parameter estimates, and provides interesting advantages over competing machine learning approaches, such as confidence intervals. However, GP models are hampered by a lack of interpretability, which has prevented their widespread adoption by a larger community. In this presentation we will summarize some of our latest developments to address this issue. We will review the main characteristics of GPs and their advantages in standard vegetation monitoring applications. Then, three advanced GP models will be introduced. First, we will derive sensitivity maps for the GP predictive function that allow us to obtain feature rankings from the model and to assess the influence of examples on the solution. Second, we will introduce a Joint GP (JGP) model that combines in situ measurements and simulated radiative transfer data in a single GP model. The JGP regression provides more sensible confidence intervals for the predictions, respects the physics of the underlying processes, and allows for transferability across time and space. Finally, a latent force model (LFM) for GP modeling that encodes ordinary differential equations to blend data-driven modeling and physical models of the system is presented. The LFM performs multi-output regression, adapts to the signal characteristics, is able to cope with missing data in the time series, and provides explicit latent functions that allow system analysis and evaluation. Empirical evidence of the performance of these models will be presented through illustrative examples.

  4. Optimal observation network design for conceptual model discrimination and uncertainty reduction

    NASA Astrophysics Data System (ADS)

    Pham, Hai V.; Tsai, Frank T.-C.

    2016-02-01

    This study expands the Box-Hill discrimination function to design an optimal observation network to discriminate conceptual models and, in turn, identify a most favored model. The Box-Hill discrimination function measures the expected decrease in Shannon entropy (for model identification) before and after the optimal design for one additional observation. This study modifies the discrimination function to account for multiple future observations that are assumed spatiotemporally independent and Gaussian-distributed. Bayesian model averaging (BMA) is used to incorporate existing observation data and quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. In addition, the BMA method is adopted to predict future observation data in a statistical sense. The design goal is to find optimal locations and least data via maximizing the Box-Hill discrimination function value subject to a posterior model probability threshold. The optimal observation network design is illustrated using a groundwater study in Baton Rouge, Louisiana, to collect additional groundwater heads from USGS wells. The sources of uncertainty creating multiple groundwater models are geological architecture, boundary condition, and fault permeability architecture. Impacts of considering homoscedastic and heteroscedastic future observation data and the sources of uncertainties on potential observation areas are analyzed. Results show that heteroscedasticity should be considered in the design procedure to account for various sources of future observation uncertainty. After the optimal design is obtained and the corresponding data are collected for model updating, total variances of head predictions can be significantly reduced by identifying a model with a superior posterior model probability.

  5. Approach to the unfolding and folding dynamics of add A-riboswitch upon adenine dissociation using a coarse-grained elastic network model

    NASA Astrophysics Data System (ADS)

    Li, Chunhua; Lv, Dashuai; Zhang, Lei; Yang, Feng; Wang, Cunxin; Su, Jiguo; Zhang, Yang

    2016-07-01

    Riboswitches are noncoding mRNA segments that regulate gene expression by altering their structures in response to specific metabolite binding. We proposed a coarse-grained Gaussian network model (GNM) to examine the unfolding and folding dynamics of the adenosine deaminase (add) A-riboswitch upon adenine dissociation, in which the RNA is modeled as a nucleotide chain with an interaction network formed by connecting adjoining atomic contacts. It was shown that adenine binding is critical to the folding of the add A-riboswitch, while removal of the ligand results in a drastic increase of the thermodynamic fluctuations, especially in the junction regions between helix domains. Under the assumption that the native contacts with the highest thermodynamic fluctuations break first, iterative GNM simulations showed that the unfolding process of the adenine-free add A-riboswitch starts with denaturation of the terminal helix stem, followed by the loops and junctions involving the ligand binding pocket, and then the central helix domains. Despite the simplified coarse-grained modeling, the unfolding dynamics and pathways are in close agreement with results from atomic-level MD simulations and from NMR and single-molecule force spectroscopy experiments. Overall, the study demonstrates a new avenue to investigate the binding and folding dynamics of the add A-riboswitch molecule, which can be readily extended to other RNA molecules.
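
    A compact sketch of the GNM machinery referred to above: build a Kirchhoff (connectivity) matrix from contacts within a cutoff and read site fluctuations from the diagonal of its pseudo-inverse. The coordinates below are a random toy chain, not the add A-riboswitch structure, and the cutoff value is an assumption.

    ```python
    # GNM fluctuations from a contact map: Kirchhoff = degree matrix - adjacency,
    # mean-square fluctuations ~ diagonal of its pseudo-inverse.
    import numpy as np

    def gnm_fluctuations(coords, cutoff=10.0):
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        contact = (d < cutoff) & ~np.eye(len(coords), dtype=bool)
        kirchhoff = np.diag(contact.sum(1)) - contact.astype(float)
        return np.diag(np.linalg.pinv(kirchhoff))

    rng = np.random.default_rng(0)
    chain = np.cumsum(rng.normal(0, 3.0, (71, 3)), axis=0)   # toy 71-site chain
    fluct = gnm_fluctuations(chain)
    print("most mobile sites:", np.argsort(fluct)[-5:])      # candidates to 'break first'
    ```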

  6. Experimental study of the focusing properties of a Gaussian Schell-model vortex beam

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Zhu, Shijun; Cai, Yangjian

    2011-08-01

    We carry out an experimental and theoretical study of the focusing properties of a Gaussian Schell-model (GSM) vortex beam. It is found that we can shape the beam profile of the focused GSM vortex beam by varying its initial spatial coherence width. Focused dark hollow, flat-topped, and Gaussian beam spots can be obtained in our experiment, which will be useful for trapping particles. The experimental results agree well with the theoretical results.

  7. Event rate and reaction time performance in ADHD: Testing predictions from the state regulation deficit hypothesis using an ex-Gaussian model.

    PubMed

    Metin, Baris; Wiersema, Jan R; Verguts, Tom; Gasthuys, Roos; van Der Meere, Jacob J; Roeyers, Herbert; Sonuga-Barke, Edmund

    2016-01-01

    According to the state regulation deficit (SRD) account, ADHD is associated with a problem using effort to maintain an optimal activation state under demanding task settings such as very fast or very slow event rates. This leads to a prediction of disrupted performance at event rate extremes reflected in higher Gaussian response variability that is a putative marker of activation during motor preparation. In the current study, we tested this hypothesis using ex-Gaussian modeling, which distinguishes Gaussian from non-Gaussian variability. Twenty-five children with ADHD and 29 typically developing controls performed a simple Go/No-Go task under four different event-rate conditions. There was an accentuated quadratic relationship between event rate and Gaussian variability in the ADHD group compared to the controls. The children with ADHD had greater Gaussian variability at very fast and very slow event rates but not at moderate event rates. The results provide evidence for the SRD account of ADHD. However, given that this effect did not explain all group differences (some of which were independent of event rate) other cognitive and/or motivational processes are also likely implicated in ADHD performance deficits.
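
    A small sketch of ex-Gaussian fitting with simulated reaction times; scipy's exponnorm distribution is parameterized by K = tau/sigma, from which the Gaussian (mu, sigma) and exponential (tau) components are recovered.

    ```python
    # Fit mu, sigma (Gaussian part) and tau (exponential part) to a simulated RT sample.
    import numpy as np
    from scipy.stats import exponnorm

    rng = np.random.default_rng(0)
    rts = rng.normal(420, 45, 300) + rng.exponential(120, 300)   # ms, Gaussian + exponential

    K, loc, scale = exponnorm.fit(rts)
    mu, sigma, tau = loc, scale, K * scale
    print(f"mu={mu:.0f} ms, sigma={sigma:.0f} ms (Gaussian variability), tau={tau:.0f} ms")
    ```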

  8. INPUFF: A SINGLE SOURCE GAUSSIAN PUFF DISPERSION ALGORITHM. USER'S GUIDE

    EPA Science Inventory

    INPUFF is a Gaussian INtegrated PUFF model. The Gaussian puff diffusion equation is used to compute the contribution to the concentration at each receptor from each puff every time step. Computations in INPUFF can be made for a single point source at up to 25 receptor locations. ...
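
    A hedged sketch of the Gaussian puff contribution evaluated at a receptor, with ground reflection; the dispersion parameters sigma_y and sigma_z are supplied directly here, whereas INPUFF computes them from its own dispersion parameterizations as the puff travels.

    ```python
    # Gaussian puff concentration at a receptor from one puff, with ground reflection.
    import numpy as np

    def puff_concentration(q, xr, yr, zr, xc, yc, h, sig_y, sig_z):
        """Concentration at receptor (xr, yr, zr) from a puff of mass q centred at (xc, yc, h)."""
        sig_x = sig_y                                            # common simplification
        norm = q / ((2 * np.pi) ** 1.5 * sig_x * sig_y * sig_z)
        horiz = np.exp(-0.5 * (((xr - xc) / sig_x) ** 2 + ((yr - yc) / sig_y) ** 2))
        vert = np.exp(-0.5 * ((zr - h) / sig_z) ** 2) + np.exp(-0.5 * ((zr + h) / sig_z) ** 2)
        return norm * horiz * vert

    # One puff released at the origin, advected 500 m downwind, sampled at a receptor.
    print(puff_concentration(q=1.0, xr=500.0, yr=20.0, zr=1.5,
                             xc=500.0, yc=0.0, h=10.0, sig_y=35.0, sig_z=18.0))
    ```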

  9. Sparse covariance estimation in heterogeneous samples*

    PubMed Central

    Rodríguez, Abel; Lenkoski, Alex; Dobra, Adrian

    2015-01-01

    Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogenous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era. PMID:26925189

  10. A new paradigm for the molecular basis of rubber elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, David E.; Barber, John L.

    The molecular basis for rubber elasticity is arguably the oldest and one of the most important questions in the field of polymer physics. The theoretical investigation of rubber elasticity began in earnest almost a century ago with the development of analytic thermodynamic models, based on simple, highly-symmetric configurations of so-called Gaussian chains, i.e. polymer chains that obey Markov statistics. Numerous theories have been proposed over the past 90 years based on the ansatz that the elastic force for individual network chains arises from the entropy change associated with the distribution of end-to-end distances of a free polymer chain. There are serious philosophical objections to this assumption and others, such as the assumption that all network nodes undergo affine motion and that all of the network chains have the same length. Recently, a new paradigm for elasticity in rubber networks has been proposed that is based on mechanisms that originate at the molecular level. Using conventional statistical mechanics analyses, quantum chemistry, and molecular dynamics simulations, the fundamental entropic and enthalpic chain extension forces for polyisoprene (natural rubber) have been determined, along with estimates for the basic force constants. Concurrently, the complex morphology of natural rubber networks (the joint probability density distributions that relate the chain end-to-end distance to its contour length) has also been captured in a numerical model. When molecular chain forces are merged with the network structure in this model, it is possible to study the mechanical response to tensile and compressive strains of a representative volume element of a polymer network. As strain is imposed on a network, pathways of connected taut chains, that completely span the network along the strain axis, emerge. Although these chains represent only a few percent of the total, they account for nearly all of the elastic stress at high strain. Here we provide a brief review of previous elasticity theories and their deficiencies, and present a new paradigm with an emphasis on experimental comparisons.

  11. A new paradigm for the molecular basis of rubber elasticity

    DOE PAGES

    Hanson, David E.; Barber, John L.

    2015-02-19

    The molecular basis for rubber elasticity is arguably the oldest and one of the most important questions in the field of polymer physics. The theoretical investigation of rubber elasticity began in earnest almost a century ago with the development of analytic thermodynamic models, based on simple, highly-symmetric configurations of so-called Gaussian chains, i.e. polymer chains that obey Markov statistics. Numerous theories have been proposed over the past 90 years based on the ansatz that the elastic force for individual network chains arises from the entropy change associated with the distribution of end-to-end distances of a free polymer chain. There are serious philosophical objections to this assumption and others, such as the assumption that all network nodes undergo affine motion and that all of the network chains have the same length. Recently, a new paradigm for elasticity in rubber networks has been proposed that is based on mechanisms that originate at the molecular level. Using conventional statistical mechanics analyses, quantum chemistry, and molecular dynamics simulations, the fundamental entropic and enthalpic chain extension forces for polyisoprene (natural rubber) have been determined, along with estimates for the basic force constants. Concurrently, the complex morphology of natural rubber networks (the joint probability density distributions that relate the chain end-to-end distance to its contour length) has also been captured in a numerical model. When molecular chain forces are merged with the network structure in this model, it is possible to study the mechanical response to tensile and compressive strains of a representative volume element of a polymer network. As strain is imposed on a network, pathways of connected taut chains, that completely span the network along the strain axis, emerge. Although these chains represent only a few percent of the total, they account for nearly all of the elastic stress at high strain. Here we provide a brief review of previous elasticity theories and their deficiencies, and present a new paradigm with an emphasis on experimental comparisons.

  12. A compound memristive synapse model for statistical learning through STDP in spiking neural networks

    PubMed Central

    Bill, Johannes; Legenstein, Robert

    2014-01-01

    Memristors have recently emerged as promising circuit elements to mimic the function of biological synapses in neuromorphic computing. The fabrication of reliable nanoscale memristive synapses, that feature continuous conductance changes based on the timing of pre- and postsynaptic spikes, has however turned out to be challenging. In this article, we propose an alternative approach, the compound memristive synapse, that circumvents this problem by the use of memristors with binary memristive states. A compound memristive synapse employs multiple bistable memristors in parallel to jointly form one synapse, thereby providing a spectrum of synaptic efficacies. We investigate the computational implications of synaptic plasticity in the compound synapse by integrating the recently observed phenomenon of stochastic filament formation into an abstract model of stochastic switching. Using this abstract model, we first show how standard pulsing schemes give rise to spike-timing dependent plasticity (STDP) with a stabilizing weight dependence in compound synapses. In a next step, we study unsupervised learning with compound synapses in networks of spiking neurons organized in a winner-take-all architecture. Our theoretical analysis reveals that compound-synapse STDP implements generalized Expectation-Maximization in the spiking network. Specifically, the emergent synapse configuration represents the most salient features of the input distribution in a Mixture-of-Gaussians generative model. Furthermore, the network's spike response to spiking input streams approximates a well-defined Bayesian posterior distribution. We show in computer simulations how such networks learn to represent high-dimensional distributions over images of handwritten digits with high fidelity even in presence of substantial device variations and under severe noise conditions. Therefore, the compound memristive synapse may provide a synaptic design principle for future neuromorphic architectures. PMID:25565943

  13. SU-F-T-158: Experimental Characterization of Field Size Dependence of Dose and Lateral Beam Profiles of Scanning Proton and Carbon Ion Beams for Empirical Model in Air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Hsi, W; Zhao, J

    2016-06-15

    Purpose: The Gaussian model for the lateral profiles in air is crucial for an accurate treatment planning system. The field size dependence of dose and the lateral beam profiles of scanning proton and carbon ion beams are due mainly to particles undergoing multiple Coulomb scattering in the beam line components and to secondary particles produced by nuclear interactions in the target, both of which depend upon the energy and species of the beam. In this work, lateral profile shape parameters were fitted to measurements of the field size dependence of dose at the center of the field in air. Methods: Previous studies have employed empirical fits to measured profile data to significantly reduce the QA time required for measurements. Following this approach to derive the weights and sigmas of lateral profiles in air, empirical model formulations were simulated for three selected energies for both proton and carbon beams. Results: The 20%-80% lateral penumbras predicted by the double-Gaussian model for protons and the single-Gaussian model for carbon with the error functions agreed with the measurements within 1 mm. The standard deviation between the measured and fitted field size dependence of dose for the empirical model in air has a maximum of 0.74% for protons with the double Gaussian, and of 0.57% for carbon with the single Gaussian. Conclusion: We have demonstrated that the double Gaussian model of lateral beam profiles is significantly better than the single Gaussian model for protons, while a single Gaussian model is sufficient for carbon. The empirical equation may be used to double check the separately obtained model that is currently used by the planning system. The empirical model in air for the dose of spot scanning proton and carbon ion beams cannot be directly used for irregularly shaped patient fields, but can provide reference values for clinical use and quality assurance.
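
    A sketch of the single- versus double-Gaussian lateral-profile fits compared above, on a synthetic profile whose low-amplitude halo mimics the nuclear-interaction tail; all amplitudes and sigmas are illustrative.

    ```python
    # Fit single and double Gaussians to a synthetic lateral profile and compare residuals.
    import numpy as np
    from scipy.optimize import curve_fit

    def single_gauss(x, a, s):
        return a * np.exp(-0.5 * (x / s) ** 2)

    def double_gauss(x, a, s1, w, s2):
        return a * ((1 - w) * np.exp(-0.5 * (x / s1) ** 2) + w * np.exp(-0.5 * (x / s2) ** 2))

    rng = np.random.default_rng(0)
    x = np.linspace(-40, 40, 161)                              # mm off-axis
    profile = double_gauss(x, 1.0, 4.0, 0.08, 15.0) + rng.normal(0, 0.002, x.size)

    p1, _ = curve_fit(single_gauss, x, profile, p0=(1.0, 5.0))
    p2, _ = curve_fit(double_gauss, x, profile, p0=(1.0, 4.0, 0.1, 12.0))
    for name, model, p in [("single", single_gauss, p1), ("double", double_gauss, p2)]:
        print(name, "max residual:", float(np.abs(model(x, *p) - profile).max()))
    ```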

  14. Improving stability of prediction models based on correlated omics data by using network approaches.

    PubMed

    Tissier, Renaud; Houwing-Duistermaat, Jeanine; Rodríguez-Girondo, Mar

    2018-01-01

    Building prediction models based on complex omics datasets such as transcriptomics, proteomics, metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model and application of these methods yields unstable results. We propose a novel strategy for model selection where the obtained models also perform well in terms of overall predictability. Several three step approaches are considered, where the steps are 1) network construction, 2) clustering to empirically derive modules or pathways, and 3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally we illustrate the advantages of our approach by application of the methodology to two problems, namely prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM) and prediction of response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell lines pharmacogenomics dataset.

  15. On the numbers of images of two stochastic gravitational lensing models

    NASA Astrophysics Data System (ADS)

    Wei, Ang

    2017-02-01

    We study two gravitational lensing models with Gaussian randomness: the continuous mass fluctuation model and the floating black hole model. The lens equations of these models are related to certain random harmonic functions. Using Rice's formula and Gaussian techniques, we obtain the expected numbers of zeros of these functions, which indicate the amounts of images in the corresponding lens systems.

  16. Solute Concentration at a Pumping Well in Non-Gaussian Random Aquifers under Time-Varying Operational Schedules

    NASA Astrophysics Data System (ADS)

    Libera, A.; de Barros, F.; Riva, M.; Guadagnini, A.

    2016-12-01

    Managing contaminated groundwater systems is an arduous task for multiple reasons. First, subsurface hydraulic properties are heterogeneous and the high costs associated with site characterization lead to data scarcity (therefore, model predictions are uncertain). Second, it is common for water agencies to schedule groundwater extraction through a temporal sequence of pumping rates to maximize the benefits to anthropogenic activities and minimize the environmental footprint of the withdrawal operations. The temporal variability in pumping rates and aquifer heterogeneity affect dilution rates of contaminant plumes and chemical concentration breakthrough curves (BTCs) at the well. While contaminant transport under steady-state pumping is widely studied, the manner in which a given time-varying pumping schedule affects contaminant plume behavior has been tackled only marginally. At the same time, most studies focus on the impact of Gaussian random hydraulic conductivity (K) fields on transport. Here, we systematically analyze the significance of the random space function (RSF) model characterizing K, in the presence of distinct pumping operations, for the uncertainty of the concentration BTC at the operating well. We juxtapose Monte Carlo based numerical results associated with two models: (a) a recently proposed Generalized Sub-Gaussian model which allows capturing non-Gaussian statistical scaling features of RSFs such as hydraulic conductivity, and (b) the commonly used Gaussian field approximation. Our novel results include an appraisal of the coupled effect of (a) the model employed to depict the random spatial variability of K and (b) the transient flow regime, as induced by a temporally varying pumping schedule, on the concentration BTC at the operating well. We systematically quantify the sensitivity of the uncertainty in the contaminant BTC to the RSF model adopted for K (non-Gaussian or Gaussian) in the presence of diverse well pumping schedules. Results contribute to determining the conditions under which either of these two key factors prevails over the other.

  17. Numerical modeling of Gaussian beam propagation and diffraction in inhomogeneous media based on the complex eikonal equation

    NASA Astrophysics Data System (ADS)

    Huang, Xingguo; Sun, Hui

    2018-05-01

    The Gaussian beam is an important complex geometrical-optics technique for modeling seismic wave propagation and diffraction in the subsurface with complex geological structure. Current methods for Gaussian beam modeling rely on dynamic ray tracing and evanescent wave tracking. However, the dynamic ray tracing method is based on the paraxial ray approximation, and the evanescent wave tracking method cannot describe strongly evanescent fields. This leads to inaccuracy of the computed wave fields in regions with strong medium inhomogeneity. To address this problem, we compute Gaussian beam wave fields using the complex phase obtained by directly solving the complex eikonal equation. In this method, the fast marching method, which is widely used for phase calculation, is combined with a Gauss-Newton optimization algorithm to obtain the complex phase at the regular grid points. The main theoretical challenge in combining this method with Gaussian beam modeling is to address the irregular boundary near the curved central ray. To cope with this challenge, we present a non-uniform finite difference operator and a modified fast marching method. The numerical results confirm the proposed approach.

  18. Benchmarking Measures of Network Controllability on Canonical Graph Models

    NASA Astrophysics Data System (ADS)

    Wu-Yan, Elena; Betzel, Richard F.; Tang, Evelyn; Gu, Shi; Pasqualetti, Fabio; Bassett, Danielle S.

    2018-03-01

    The control of networked dynamical systems opens the possibility for new discoveries and therapies in systems biology and neuroscience. Recent theoretical advances provide candidate mechanisms by which a system can be driven from one pre-specified state to another, and computational approaches provide tools to test those mechanisms in real-world systems. Despite already having been applied to study network systems in biology and neuroscience, the practical performance of these tools and associated measures on simple networks with pre-specified structure has yet to be assessed. Here, we study the behavior of four control metrics (global, average, modal, and boundary controllability) on eight canonical graphs (including Erdős-Rényi, regular, small-world, random geometric, Barabási-Albert preferential attachment, and several modular networks) with different edge weighting schemes (Gaussian, power-law, and two nonparametric distributions from brain networks, as examples of real-world systems). We observe that differences in global controllability across graph models are more salient when edge weight distributions are heavy-tailed as opposed to normal. In contrast, differences in average, modal, and boundary controllability across graph models (as well as across nodes in the graph) are more salient when edge weight distributions are less heavy-tailed. Across graph models and edge weighting schemes, average and modal controllability are negatively correlated with one another across nodes; yet, across graph instances, the relation between average and modal controllability can be positive, negative, or nonsignificant. Collectively, these findings demonstrate that controllability statistics (and their relations) differ across graphs with different topologies and that these differences can be muted or accentuated by differences in the edge weight distributions. More generally, our numerical studies motivate future analytical efforts to better understand the mathematical underpinnings of the relationship between graph topology and control, as well as efforts to design networks with specific control profiles.

  19. Stochastic inflation lattice simulations - Ultra-large scale structure of the universe

    NASA Technical Reports Server (NTRS)

    Salopek, D. S.

    1991-01-01

    Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients, a^(-1), small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.

  20. Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models

    NASA Astrophysics Data System (ADS)

    Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing

    2018-06-01

    The problem of low rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.

  1. Modeling the controllable pH-responsive swelling and pore size of networked alginate based biomaterials.

    PubMed

    Chan, Ariel W; Neufeld, Ronald J

    2009-10-01

    Semisynthetic network alginate polymer (SNAP), synthesized by acetalization of linear alginate with di-aldehyde, is a pH-responsive tetrafunctionally linked 3D gel network, and has potential application in oral delivery of protein therapeutics and active biologicals, and as a tissue bioscaffold for regenerative medicine. A constitutive polyelectrolyte gel model based on non-Gaussian polymer elasticity, Flory-Huggins liquid lattice theory, and non-ideal Donnan membrane equilibria was derived to describe SNAP gel swelling in dilute and ionic solutions containing uni-univalent, uni-bivalent, bi-univalent or bi-bivalent electrolytes. Flory-Huggins interaction parameters as a function of ionic strength and the characteristic ratio of alginates of various molecular weights were determined experimentally to numerically predict SNAP hydrogel swelling. The SNAP hydrogel swells pronouncedly, to 1000 times its compact polymer volume, in dilute solution, while behaving as a neutral polymer with limited swelling in high ionic strength or low pH solutions. The derived model accurately describes the pH-responsive swelling of the SNAP hydrogel in acid and alkaline solutions over a wide range of ionic strengths. The pore sizes of the synthesized SNAP hydrogels of various crosslink densities were estimated from the derived model to be in the range of 30-450 nm, comparable to those measured by thermoporometry and by diffusion of bovine serum albumin. The derived equilibrium swelling model can characterize hydrogel structure such as molecular weight between crosslinks and crosslinking density, or can be used as a predictive model for swelling, pore size and mechanical properties when gel structural information is known, and can potentially be applied to other point-linked network polyelectrolytes such as hyaluronic acid gel.

  2. EM in high-dimensional spaces.

    PubMed

    Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim

    2005-06-01

    This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.
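
    A minimal sketch of the core idea, under the assumption that one first projects the N samples onto the (N - 1)-dimensional subspace they span and then runs ordinary EM there; the paper's algorithm handles the covariances differently, so this is only a stand-in.

```python
# Sketch: fit a Gaussian mixture in the (N-1)-dimensional span of N high-dimensional samples.
# Illustrative only; the paper's EM variant differs in how covariances are handled.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
N, D = 60, 500                       # fewer samples than feature dimensions
X = np.vstack([rng.normal(0, 1, (N // 2, D)),
               rng.normal(3, 1, (N // 2, D))])

mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
basis = Vt[:N - 1]                   # orthonormal basis of the sample span
Z = (X - mu) @ basis.T               # coordinates in the (N-1)-dimensional subspace

gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(Z)
print("component weights:", gmm.weights_)
print("cluster sizes    :", np.bincount(gmm.predict(Z)))
```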

  3. Experimental study of the focusing properties of a Gaussian Schell-model vortex beam.

    PubMed

    Wang, Fei; Zhu, Shijun; Cai, Yangjian

    2011-08-15

    We carry out an experimental and theoretical study of the focusing properties of a Gaussian Schell-model (GSM) vortex beam. It is found that we can shape the beam profile of the focused GSM vortex beam by varying its initial spatial coherence width. Focused dark hollow, flat-topped, and Gaussian beam spots can be obtained in our experiment, which will be useful for trapping particles. The experimental results agree well with the theoretical results. © 2011 Optical Society of America

  4. Study of Gaussian Doped Double Gate JunctionLess (GD-DG-JL) transistor including source drain depletion length: Model for sub-threshold behavior

    NASA Astrophysics Data System (ADS)

    Kumari, Vandana; Kumar, Ayush; Saxena, Manoj; Gupta, Mridula

    2018-01-01

    The sub-threshold model formulation of Gaussian Doped Double Gate JunctionLess (GD-DG-JL) FET including source/drain depletion length is reported in the present work under the assumption that the ungated regions are fully depleted. To provide deeper insight into the device performance, the impact of Gaussian straggle, channel length, oxide and channel thickness and high-k gate dielectric has been studied using extensive TCAD device simulation.

  5. A model for simulation of flow in singular and interconnected channels

    USGS Publications Warehouse

    Schaffranek, Raymond W.; Baltzer, R.A.; Goldberg, D.E.

    1981-01-01

    A one-dimensional numerical model is presented for simulating the unsteady flow in singular riverine or estuarine reaches and in networks of reaches composed of interconnected channels. The model is both general and flexible in that it can be used to simulate a wide range of flow conditions for various channel configurations. The channel geometry of the network to be modeled should be sufficiently simple so as to lend itself to characterization in one spatial dimension. The flow must be substantially homogeneous in density, and hydrostatic pressure must prevail everywhere in the network channels. The slope of each channel bottom ought to be mild and reasonably constant over its length so that the flow remains subcritical. The model accommodates tributary inflows and diversions and includes the effects of wind shear on the water surface as a forcing function in the flow equations. Water-surface elevations and flow discharges are computed at channel junctions, as well as at specified intermediate locations within the network channels. The one-dimensional branch-network flow model uses a four-point, implicit, finite-difference approximation of the unsteady-flow equations. The flow equations are linearized over a time step, and branch transformations are formulated that describe the relationship between the unknowns at the end points of the channels. The resultant matrix of branch-transformation equations and required boundary-condition equations is solved by Gaussian elimination using a maximum pivot strategy. Five example applications of the flow model are illustrated. The applications cover such diverse conditions as a singular upland river reach in which unsteady flow results from hydropower regulations, coastal rivers composed of sequentially connected reaches subject to unsteady tide-driven flow, and a multiply connected network of channels whose flow is principally governed by wind tides and seiches in adjoining lakes. The report includes a listing of the FORTRAN IV computer program and a description of the input data requirements. Model supporting programs for the processing and input of initial and boundary-value data are identified, various model output formats are illustrated, and instructions are given to permit the production of graphical output using the line printer, electromechanical pen plotters, cathode-ray-tube display units, or microfilm recorders.
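
    The linear-algebra step named above (Gaussian elimination with a maximum pivot strategy) can be sketched as follows; the coefficient matrix and right-hand side are placeholders rather than actual branch-transformation equations.

```python
# Sketch of Gaussian elimination with maximum (partial) pivoting, the linear solver step
# described for the branch-network equations; A and b are placeholders.
import numpy as np

def gauss_solve(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))     # row with the largest available pivot
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):              # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_solve(A, b))                         # expected: [2, 3, -1]
print(np.linalg.solve(A, b))                     # cross-check
```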

  6. Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry

    PubMed Central

    Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna

    2015-01-01

    Mixture modeling of mass spectra is an approach with many potential applications including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite highlighting potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach had so far not been developed to a stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of the fragments are then aggregated to form the mixture model of the whole spectrum. We compare the proposed algorithm to existing peak detection algorithms and demonstrate the improvements in peak detection efficiency obtained by Gaussian mixture modeling. We also show applications of the algorithm to real proteomic datasets of low and high resolution. PMID:26230717
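
    A hedged sketch of the partition-then-decompose idea on a synthetic spectrum: fragments are cut where the intensity drops below a threshold, each fragment gets its own small Gaussian mixture, and the components are pooled. The valley threshold and per-fragment component count are assumptions, not the paper's algorithm.

```python
# Sketch of partition-then-GMM decomposition of a 1-D spectrum.
# The valley-based splitting rule and per-fragment component count are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
mz = np.linspace(1000, 1200, 2000)
peaks = [(1030, 3, 1.0), (1050, 4, 0.6), (1120, 3, 0.8), (1160, 5, 0.5)]
signal = sum(a * np.exp(-0.5 * ((mz - c) / w) ** 2) for c, w, a in peaks) + 0.01

# 1) partition at stretches where intensity drops below a small threshold
quiet = signal < 0.05
edges = np.flatnonzero(np.diff(quiet.astype(int))) + 1
fragments = [f for f in np.split(np.arange(len(mz)), edges) if not quiet[f[0]]]

# 2) fit a small mixture inside each fragment and aggregate the components
components = []
for idx in fragments:
    p = signal[idx] / signal[idx].sum()                 # resample m/z proportional to intensity
    sample = rng.choice(mz[idx], size=4000, p=p)[:, None]
    gmm = GaussianMixture(n_components=2, random_state=0).fit(sample)
    for m, v, w in zip(gmm.means_.ravel(), gmm.covariances_.ravel(), gmm.weights_):
        components.append((m, np.sqrt(v), w * signal[idx].sum()))

for m, s, w in sorted(components):
    print(f"mean={m:7.1f}  sigma={s:4.1f}  weight={w:8.1f}")
```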

  7. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    PubMed

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide a better fit and improved short-range and long-range predictions compared with Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
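
    The copula construction can be sketched in a few lines: map the series to normal scores through a cdf/inverse-cdf transformation, fit an autoregression on the scores, simulate, and map back through the empirical quantiles. Here the empirical cdf and an AR(1) stand in for the paper's nonparametric Bayesian marginal and richer dynamics.

```python
# Sketch of a Gaussian-copula AR model: empirical marginal + AR(1) normal-score dynamics.
# The empirical cdf stands in for the paper's nonparametric Bayesian marginal prior.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.gamma(shape=2.0, scale=1.0, size=3000)          # skewed stand-in series
y = 0.7 * np.roll(y, 1) + 0.3 * y                       # crude serial dependence

# 1) transform to normal scores via cdf / inverse cdf
ranks = stats.rankdata(y) / (len(y) + 1.0)
z = stats.norm.ppf(ranks)

# 2) fit AR(1) on the scores
phi = np.corrcoef(z[:-1], z[1:])[0, 1]

# 3) simulate new scores and back-transform through the empirical quantiles
z_sim = np.zeros(len(y))
for t in range(1, len(y)):
    z_sim[t] = phi * z_sim[t - 1] + np.sqrt(1 - phi ** 2) * rng.normal()
y_sim = np.quantile(y, stats.norm.cdf(z_sim))

print(f"AR(1) coefficient on normal scores: {phi:.2f}")
print(f"original skewness {stats.skew(y):.2f}, simulated skewness {stats.skew(y_sim):.2f}")
```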

  8. Cluster analysis of word frequency dynamics

    NASA Astrophysics Data System (ADS)

    Maslennikova, Yu S.; Bochkarev, V. V.; Belashova, I. A.

    2015-01-01

    This paper describes the analysis and modelling of word usage frequency time series. In a previous study, it was hypothesized that all word usage frequencies have uniform dynamics approaching the shape of a Gaussian function. This assumption can be checked using the frequency dictionaries of the Google Books Ngram database. This database includes 5.2 million books published between 1500 and 2008. The corpus contains over 500 billion words in American English, British English, French, German, Spanish, Russian, Hebrew, and Chinese. We clustered time series of word usage frequencies using a Kohonen neural network. The similarity between input vectors was estimated using several algorithms. As a result of the neural network training procedure, more than ten different forms of time series were found. They describe the dynamics of word usage frequencies from birth to death of individual words. Different groups of word forms were found to have different dynamics of word usage frequency variations.
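
    A minimal numpy sketch of a one-dimensional Kohonen map trained on synthetic frequency trajectories; the trajectory families, map size, and learning schedule are illustrative assumptions rather than the study's settings.

```python
# Minimal 1-D Kohonen self-organizing map on synthetic word-frequency trajectories.
# Trajectory families and hyperparameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 100)

def gaussian_curve():  return np.exp(-0.5 * ((t - rng.uniform(0.3, 0.7)) / 0.1) ** 2)
def rising_curve():    return 1.0 / (1.0 + np.exp(-(t - rng.uniform(0.4, 0.6)) * 20))
def decaying_curve():  return np.exp(-t / rng.uniform(0.1, 0.3))

X = np.array([f() + 0.05 * rng.normal(size=t.size)
              for f in rng.choice([gaussian_curve, rising_curve, decaying_curve], 300)])
X /= np.linalg.norm(X, axis=1, keepdims=True)          # compare shapes, not amplitudes

n_nodes = 10
W = rng.normal(0, 0.1, (n_nodes, t.size))              # codebook vectors
for epoch in range(30):
    lr = 0.5 * np.exp(-epoch / 10)                     # decaying learning rate
    radius = max(3.0 * np.exp(-epoch / 10), 0.5)       # decaying neighbourhood radius
    for x in X[rng.permutation(len(X))]:
        bmu = np.argmin(np.linalg.norm(W - x, axis=1)) # best-matching unit
        h = np.exp(-0.5 * ((np.arange(n_nodes) - bmu) / radius) ** 2)
        W += lr * h[:, None] * (x - W)

assignments = np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)
print("time series per map node:", np.bincount(assignments, minlength=n_nodes))
```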

  9. The formation of cosmic structure in a texture-seeded cold dark matter cosmogony

    NASA Technical Reports Server (NTRS)

    Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III

    1992-01-01

    The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.

  10. Non-Gaussian microwave background fluctuations from nonlinear gravitational effects

    NASA Technical Reports Server (NTRS)

    Salopek, D. S.; Kunstatter, G. (Editor)

    1991-01-01

    Whether the statistics of primordial fluctuations for structure formation are Gaussian or otherwise may be determined if the Cosmic Background Explorer (COBE) Satellite makes a detection of the cosmic microwave-background temperature anisotropy δT_CMB/T_CMB. Non-Gaussian fluctuations may be generated in the chaotic inflationary model if two scalar fields interact nonlinearly with gravity. Theoretical contour maps are calculated for the resulting Sachs-Wolfe temperature fluctuations at large angular scales (greater than 3 degrees). In the long-wavelength approximation, one can confidently determine the nonlinear evolution of quantum noise with gravity during the inflationary epoch because: (1) different spatial points are no longer in causal contact; and (2) quantum gravity corrections are typically small, so it is sufficient to model the system using classical random fields. If the potential for two scalar fields V(φ_1, φ_2) possesses a sharp feature, then non-Gaussian fluctuations may arise. An explicit model is given where cold spots in δT_CMB/T_CMB maps are suppressed as compared to the Gaussian case. The fluctuations are essentially scale-invariant.

  11. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  12. Proceedings of the Organization of 1990 Meeting of International Neural Network Society Jointed with IEEE Held in Washington, DC on January 15 - 19, 1990. Volume 1. Theory Track Neural and cognitive Sciences Track

    DTIC Science & Technology

    1990-11-30

    signal flow, x_i. The "learning" of such statistics could result from synaptic modification rules similar to those known to exist in the brain [7,10,11] ... in Figure 1 had been established. If the series are approximated by a Gaussian process, the information flow from X to Y can be expressed by the ... Based on this model, the information flow in different directions was calculated using Eq. (1). RESULTS: Figure 2 illustrates the information flow

  13. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
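
    A toy sketch of the workflow under simplifying assumptions: an inexpensive stand-in forward model is emulated by a small MLP, the surrogate error is estimated on held-out samples, and that error is added to the data-noise variance inside a Metropolis sampler.

```python
# Toy sketch: replace an "expensive" forward model by an MLP surrogate, quantify the
# surrogate error, and fold it into the likelihood of a Metropolis sampler.
# The forward model g(m), network size and noise levels are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

def forward(m):                       # stand-in for an expensive traveltime computation
    return np.array([m[0] + 0.3 * np.sin(3.0 * m[1]),
                     m[1] + 0.3 * np.sin(3.0 * m[0])])

M = rng.uniform(-1, 1, (3000, 2))
D = np.array([forward(m) for m in M])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                   random_state=0).fit(M[:2500], D[:2500])
sigma_model = np.std(net.predict(M[2500:]) - D[2500:], axis=0)   # surrogate (modeling) error
sigma_obs = 0.05

m_true = np.array([0.3, -0.6])
d_obs = forward(m_true) + sigma_obs * rng.normal(size=2)
sigma_tot2 = sigma_obs ** 2 + sigma_model ** 2                   # combined error budget

def log_like(m):
    r = net.predict(m[None, :])[0] - d_obs
    return -0.5 * np.sum(r ** 2 / sigma_tot2)

m, chain = np.zeros(2), []
ll = log_like(m)
for _ in range(5000):                                            # Metropolis random walk
    prop = m + 0.1 * rng.normal(size=2)
    if np.all(np.abs(prop) <= 1):                                # uniform prior on [-1, 1]^2
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            m, ll = prop, ll_prop
    chain.append(m.copy())

chain = np.array(chain[1000:])
print("true model          :", m_true)
print("posterior mean (NN) :", np.round(chain.mean(axis=0), 2))
```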

  14. Computational thermochemistry: Automated generation of scale factors for vibrational frequencies calculated by electronic structure model chemistries

    NASA Astrophysics Data System (ADS)

    Yu, Haoyu S.; Fiedler, Lucas J.; Alecu, I. M.; Truhlar, Donald G.

    2017-01-01

    We present a Python program, FREQ, for determining the optimal scale factors for harmonic vibrational frequencies, fundamental vibrational frequencies, and zero-point vibrational energies obtained from electronic structure calculations. The program utilizes a previously published scale factor optimization model (Alecu et al., 2010) to efficiently obtain all three scale factors from a set of computed harmonic vibrational frequencies. To obtain the three scale factors, the user only needs to provide zero-point energies of 15 or 6 selected molecules. If the user has access to the Gaussian 09 or Gaussian 03 program, we provide the option to run the program by entering the keywords for a given method and basis set in Gaussian 09 or Gaussian 03. Four other Python programs, input.py, input6, pbs.py, and pbs6.py, are also provided for generating Gaussian 09 or Gaussian 03 input and PBS files. The program can also be used with data from any other electronic structure package. A manual describing how to use the program is included in the code package.
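
    The least-squares step behind such scale factors can be sketched directly: minimizing the sum of squared differences between scaled computed frequencies and reference values gives a closed-form factor. The frequencies below are made-up placeholders, not the reference set used by FREQ.

```python
# Sketch of the least-squares scale factor lambda minimizing sum_i (lambda*omega_i - nu_i)^2.
# The frequencies below are made-up placeholders, not the reference set used by FREQ.
import numpy as np

omega = np.array([3050.0, 1725.0, 1610.0, 1450.0, 1180.0, 980.0])   # computed harmonics (cm^-1)
nu    = np.array([2930.0, 1660.0, 1555.0, 1395.0, 1140.0, 945.0])   # reference values (cm^-1)

lam = np.sum(omega * nu) / np.sum(omega ** 2)        # closed-form least-squares solution
rmse = np.sqrt(np.mean((lam * omega - nu) ** 2))
print(f"scale factor = {lam:.4f}, RMSE = {rmse:.1f} cm^-1")
```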

  15. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models

    PubMed Central

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphics processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348

  16. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models.

    PubMed

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphics processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful.

  17. Non-Gaussian spatiotemporal simulation of multisite daily precipitation: downscaling framework

    NASA Astrophysics Data System (ADS)

    Ben Alaya, M. A.; Ouarda, T. B. M. J.; Chebana, F.

    2018-01-01

    Probabilistic regression approaches for downscaling daily precipitation are very useful. They provide the whole conditional distribution at each forecast step to better represent the temporal variability. The question addressed in this paper is: how can spatiotemporal characteristics of multisite daily precipitation be simulated from probabilistic regression models? Recent publications point out the complexity of multisite properties of daily precipitation and highlight the need for using a non-Gaussian flexible tool. This work proposes a reasonable compromise between simplicity and flexibility avoiding model misspecification. A suitable nonparametric bootstrapping (NB) technique is adopted. A downscaling model which merges a vector generalized linear model (VGLM, as a probabilistic regression tool) and the proposed bootstrapping technique is introduced to simulate realistic multisite precipitation series. The model is applied to data sets from the southern part of the province of Quebec, Canada. It is shown that the model is capable of reproducing both at-site properties and the spatial structure of daily precipitation. Results indicate the superiority of the proposed NB technique over a multivariate autoregressive Gaussian framework (i.e., a Gaussian copula).

  18. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.

    PubMed

    Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten

    2017-10-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.

  19. Future constraints on angle-dependent non-Gaussianity from large radio surveys

    NASA Astrophysics Data System (ADS)

    Raccanelli, Alvise; Shiraishi, Maresuke; Bartolo, Nicola; Bertacca, Daniele; Liguori, Michele; Matarrese, Sabino; Norris, Ray P.; Parkinson, David

    2017-03-01

    We investigate how well future large-scale radio surveys could measure different shapes of primordial non-Gaussianity; in particular we focus on angle-dependent non-Gaussianity arising from primordial anisotropic sources, whose bispectrum has an angle dependence between the three wavevectors that is characterized by Legendre polynomials P_L and expansion coefficients c_L. We provide forecasts for measurements of the galaxy power spectrum, finding that Large-Scale Structure (LSS) data could allow measurements of primordial non-Gaussianity that would be competitive with, or improve upon, current constraints set by CMB experiments, for all the shapes considered. We argue that the best constraints will come from the possibility to assign redshift information to radio galaxy surveys, and investigate a few possible scenarios for the EMU and SKA surveys. A realistic (futuristic) modeling could provide constraints of f_NL^loc ≈ 1 (0.5) for the local shape, f_NL of O(10) (O(1)) for the orthogonal, equilateral and folded shapes, and c_(L=1) ≈ 80 (2), c_(L=2) ≈ 400 (10) for angle-dependent non-Gaussianity, showing that only futuristic galaxy surveys will be able to set strong constraints on these models. Nevertheless, the more futuristic forecasts show the potential of LSS analyses to considerably improve current constraints on non-Gaussianity, and so on models of the primordial Universe. Finally, we find the minimum requirements that would be needed to reach σ(c_(L=1)) = 10, which can be considered as a typical (lower) value predicted by some (inflationary) models.

  20. TRACING CO-REGULATORY NETWORK DYNAMICS IN NOISY, SINGLE-CELL TRANSCRIPTOME TRAJECTORIES.

    PubMed

    Cordero, Pablo; Stuart, Joshua M

    2017-01-01

    The availability of gene expression data at the single cell level makes it possible to probe the molecular underpinnings of complex biological processes such as differentiation and oncogenesis. Promising new methods have emerged for reconstructing a progression 'trajectory' from static single-cell transcriptome measurements. However, it remains unclear how to adequately model the appreciable level of noise in these data to elucidate gene regulatory network rewiring. Here, we present a framework called Single Cell Inference of MorphIng Trajectories and their Associated Regulation (SCIMITAR) that infers progressions from static single-cell transcriptomes by employing a continuous parametrization of Gaussian mixtures in high-dimensional curves. SCIMITAR yields rich models from the data that highlight genes with expression and co-expression patterns that are associated with the inferred progression. Further, SCIMITAR extracts regulatory states from the implicated trajectory-evolving co-expression networks. We benchmark the method on simulated data to show that it yields accurate cell ordering and gene network inferences. Applied to the interpretation of a single-cell human fetal neuron dataset, SCIMITAR finds progression-associated genes in cornerstone neural differentiation pathways missed by standard differential expression tests. Finally, by leveraging the rewiring of gene-gene co-expression relations across the progression, the method reveals the rise and fall of co-regulatory states and trajectory-dependent gene modules. These analyses implicate new transcription factors in neural differentiation including putative co-factors for the multi-functional NFAT pathway.

  1. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    PubMed Central

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a newly proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
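
    A hedged sketch comparing the two classical estimators on a simulated FBG reflection spectrum with a weak side lobe; the spectral shape, threshold, and noise level are assumptions for illustration only.

```python
# Sketch comparing centroid and Gaussian-fit peak detection on a simulated FBG spectrum.
# The asymmetric side-lobe model and noise level are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)
wl = np.linspace(1549.0, 1551.0, 400)                       # wavelength grid (nm)
true_peak = 1550.05

def gaussian(x, a, x0, sigma, offset):
    return a * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + offset

# simulated reflection spectrum: main lobe plus a weak asymmetric side lobe + noise
spec = (gaussian(wl, 1.0, true_peak, 0.08, 0.0)
        + gaussian(wl, 0.15, true_peak + 0.25, 0.05, 0.0)
        + 0.01 * rng.normal(size=wl.size))

# centroid estimate above a threshold
mask = spec > 0.05
centroid = np.sum(wl[mask] * spec[mask]) / np.sum(spec[mask])

# Gaussian-fit estimate over the full spectrum
popt, _ = curve_fit(gaussian, wl, spec, p0=[1.0, 1550.0, 0.1, 0.0])

print(f"true peak            : {true_peak:.4f} nm")
print(f"centroid estimate    : {centroid:.4f} nm (error {1e3 * (centroid - true_peak):+.1f} pm)")
print(f"gaussian-fit estimate: {popt[1]:.4f} nm (error {1e3 * (popt[1] - true_peak):+.1f} pm)")
```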

  2. Bayesian Regression with Network Prior: Optimal Bayesian Filtering Perspective

    PubMed Central

    Qian, Xiaoning; Dougherty, Edward R.

    2017-01-01

    The recently introduced intrinsically Bayesian robust filter (IBRF) provides fully optimal filtering relative to a prior distribution over an uncertainty class of joint random process models, whereas formerly the theory was limited to model-constrained Bayesian robust filters, for which optimization was limited to the filters that are optimal for models in the uncertainty class. This paper extends the IBRF theory to the situation where there are both a prior on the uncertainty class and sample data. The result is optimal Bayesian filtering (OBF), where optimality is relative to the posterior distribution derived from the prior and the data. The IBRF theories for effective characteristics and canonical expansions extend to the OBF setting. A salient focus of the present work is to demonstrate the advantages of Bayesian regression within the OBF setting over the classical Bayesian approach in the context of linear Gaussian models. PMID:28824268

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lange, R.; Dickerson, M.A.; Peterson, K.R.

    Two numerical models for the calculation of air concentration and ground deposition of airborne effluent releases are compared. The Particle-in-Cell (PIC) model and the Straight-Line Airflow Gaussian model were used for the simulation. Two sites were selected for comparison: the Hudson River Valley, New York, and the area around the Savannah River Plant, South Carolina. Input for the models was synthesized from meteorological data gathered in previous studies by various investigators. It was found that the PIC model more closely simulated the three-dimensional effects of the meteorology and topography. Overall, the Gaussian model calculated higher concentrations under stable conditions, with better agreement between the two methods during neutral to unstable conditions. In addition, because of its consideration of exposure from the returning plume after flow reversal, the PIC model calculated air concentrations over larger areas than did the Gaussian model.

  4. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on the Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed as the same continuous GRBF model, thus image degradation is simplified as the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation time, graphics processing unit multithreading or an increased spacing of control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be implemented efficiently by the method, which also provides a useful reference for the study of three-dimensional microscopic image deconvolution.
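
    The key identity is that a Gaussian basis function convolved with a Gaussian PSF is again a Gaussian whose variance is the sum of the two variances, so the blurred signal is linear in the control-point weights. A one-dimensional sketch under assumed grid spacing, widths, and ridge regularization:

```python
# 1-D sketch of GRBF deconvolution: a Gaussian basis blurred by a Gaussian PSF is again
# Gaussian (variances add), so the weights solve a linear least-squares problem.
# Grid spacing, basis width and regularization are illustrative assumptions.
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

x = np.linspace(0, 10, 500)
centers = np.linspace(0, 10, 80)                 # control points of the GRBF model
s_basis, s_psf = 0.15, 0.30

# ground-truth object expressed on the GRBF basis
w_true = np.zeros(centers.size)
w_true[[20, 28, 55]] = [1.0, 0.6, 0.8]
obj = gauss(x[:, None], centers[None, :], s_basis) @ w_true

# blurred, noisy observation: basis functions widen to sqrt(s_basis^2 + s_psf^2)
s_blur = np.hypot(s_basis, s_psf)
A = gauss(x[:, None], centers[None, :], s_blur)
rng = np.random.default_rng(7)
blurred = A @ w_true + 0.01 * rng.normal(size=x.size)

# ridge-regularized least squares for the weights, then rebuild the sharp object
lam = 1e-3
w_hat = np.linalg.solve(A.T @ A + lam * np.eye(centers.size), A.T @ blurred)
recon = gauss(x[:, None], centers[None, :], s_basis) @ w_hat
print("relative reconstruction error:",
      np.linalg.norm(recon - obj) / np.linalg.norm(obj))
```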

  5. Bayesian spatial transformation models with applications in neuroimaging data

    PubMed Central

    Miranda, Michelle F.; Zhu, Hongtu; Ibrahim, Joseph G.

    2013-01-01

    Summary The aim of this paper is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. Our STMs include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov Random Field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. PMID:24128143

  6. Gaussian Finite Element Method for Description of Underwater Sound Diffraction

    NASA Astrophysics Data System (ADS)

    Huang, Dehua

    A new method for solving diffraction problems is presented in this dissertation. It is based on the use of Gaussian diffraction theory. The Rayleigh integral is used to prove the core of Gaussian theory: the diffraction field of a Gaussian is described by a Gaussian function. The parabolic approximation used by previous authors is not necessary to this proof. Comparison of the Gaussian beam expansion and Fourier series expansion reveals that the Gaussian expansion is a more general and more powerful technique. The method combines the Gaussian beam superposition technique (Wen and Breazeale, J. Acoust. Soc. Am. 83, 1752-1756 (1988)) and the Finite element solution to the parabolic equation (Huang, J. Acoust. Soc. Am. 84, 1405-1413 (1988)). Computer modeling shows that the new method is capable of solving for the sound field even in an inhomogeneous medium, whether the source is a Gaussian source or a distributed source. It can be used for horizontally layered interfaces or irregular interfaces. Calculated results are compared with experimental results by use of a recently designed and improved Gaussian transducer in a laboratory water tank. In addition, the power of the Gaussian Finite element method is demonstrated by comparing numerical results with experimental results from use of a piston transducer in a water tank.

  7. Statistical analysis of textural features for improved classification of oral histopathological images.

    PubMed

    Muthu Rama Krishnan, M; Shah, Pratik; Chakraborty, Chandan; Ray, Ajoy K

    2012-04-01

    The objective of this paper is to provide an improved technique, which can assist oncopathologists in correct screening of oral precancerous conditions, specially oral submucous fibrosis (OSF), with significant accuracy on the basis of collagen fibres in the sub-epithelial connective tissue. The proposed scheme comprises collagen fibre segmentation, textural feature extraction and selection, screening performance enhancement under Gaussian transformation, and finally classification. In this study, collagen fibres are segmented on the R, G, B color channels using a back-propagation neural network from 60 normal and 59 OSF histological images, followed by histogram specification for reducing the stain intensity variation. Textural features of the collagen area are then extracted using fractal approaches, viz., differential box counting and the Brownian motion curve. Feature selection is done using the Kullback-Leibler (KL) divergence criterion, and the screening performance is evaluated based on various statistical tests to confirm Gaussian nature. Here, the screening performance is enhanced under Gaussian transformation of the non-Gaussian features using a hybrid distribution. Moreover, the routine screening is designed based on two statistical classifiers, viz., Bayesian classification and support vector machines (SVM), to classify normal and OSF. It is observed that SVM with a linear kernel function provides better classification accuracy (91.64%) as compared to the Bayesian classifier. The addition of fractal features of collagen under Gaussian transformation improves the Bayesian classifier's performance from 80.69% to 90.75%. Results are studied and discussed.

  8. Using an artificial neural network to classify multicomponent emission lines with integral field spectroscopy from SAMI and S7

    NASA Astrophysics Data System (ADS)

    Hampton, E. J.; Medling, A. M.; Groves, B.; Kewley, L.; Dopita, M.; Davies, R.; Ho, I.-T.; Kaasinen, M.; Leslie, S.; Sharp, R.; Sweet, S. M.; Thomas, A. D.; Allen, J.; Bland-Hawthorn, J.; Brough, S.; Bryant, J. J.; Croom, S.; Goodwin, M.; Green, A.; Konstantantopoulos, I. S.; Lawrence, J.; López-Sánchez, Á. R.; Lorente, N. P. F.; McElroy, R.; Owers, M. S.; Richards, S. N.; Shastri, P.

    2017-09-01

    Integral field spectroscopy (IFS) surveys are changing how we study galaxies and are making vastly more spectroscopic data available than before. The large number of resulting spectra makes visual inspection of emission line fits an infeasible option. Here, we present a demonstration of an artificial neural network (ANN) that determines the number of Gaussian components needed to describe the complex emission line velocity structures observed in galaxies after they have been fit with lzifu. We apply our ANN to IFS data for the S7 survey, conducted using the Wide Field Spectrograph on the ANU 2.3 m Telescope, and the SAMI Galaxy Survey, conducted using the SAMI instrument on the 4 m Anglo-Australian Telescope. We use the spectral fitting code lzifu (Ho et al. 2016a) to fit the emission line spectra of individual spaxels from S7 and SAMI data cubes with 1-, 2- and 3-Gaussian components. We demonstrate that using an ANN is comparable to astronomers performing the same visual inspection task of determining the best number of Gaussian components to describe the physical processes in galaxies. The advantage of our ANN is that it is capable of processing the spectra for thousands of galaxies in minutes, as compared to the years this task would take individual astronomers to complete by visual inspection.
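
    A toy sketch of the classification idea: synthetic emission-line profiles built from 1, 2 or 3 Gaussian components are used to train a small classifier that predicts the component count. The line parameters and network size are assumptions and do not reproduce the paper's ANN or lzifu outputs.

```python
# Toy sketch: classify how many Gaussian components an emission line needs.
# Synthetic profiles stand in for LZIFU fit outputs; all parameters are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
v = np.linspace(-500, 500, 120)                       # velocity grid (km/s)

def make_profile(n_comp):
    prof = np.zeros_like(v)
    for _ in range(n_comp):
        amp   = rng.uniform(0.3, 1.0)
        cen   = rng.uniform(-200, 200)
        sigma = rng.uniform(30, 120)
        prof += amp * np.exp(-0.5 * ((v - cen) / sigma) ** 2)
    return prof / prof.max() + 0.03 * rng.normal(size=v.size)

labels = rng.integers(1, 4, size=6000)
X = np.array([make_profile(k) for k in labels])

Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=400, random_state=0).fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")
```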

  9. Weakly anomalous diffusion with non-Gaussian propagators

    NASA Astrophysics Data System (ADS)

    Cressoni, J. C.; Viswanathan, G. M.; Ferreira, A. S.; da Silva, M. A. A.

    2012-08-01

    A poorly understood phenomenon seen in complex systems is diffusion characterized by Hurst exponent H≈1/2 but with non-Gaussian statistics. Motivated by such empirical findings, we report an exact analytical solution for a non-Markovian random walk model that gives rise to weakly anomalous diffusion with H=1/2 but with a non-Gaussian propagator.

  10. Moving vehicles segmentation based on Gaussian motion model

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Fang, Xiang Z.; Lin, Wei Y.

    2005-07-01

    Moving object segmentation is a challenge in computer vision. This paper focuses on the segmentation of moving vehicles in dynamic scenes. We analyse the psychology of human vision and present a framework for segmenting moving vehicles on the highway. The proposed framework consists of two parts. Firstly, we propose an adaptive background update method in which the background is updated according to the change of illumination conditions and thus can adapt sensitively to changing illumination. Secondly, we construct a Gaussian motion model to segment moving vehicles, in which the motion vectors of the moving pixels are modeled as a Gaussian distribution and an on-line EM algorithm is used to update the model. The Gaussian distribution of the adaptive model is evaluated to determine which motion vectors result from moving vehicles and which from other moving objects such as waving trees. Finally, the pixels whose motion vectors result from moving vehicles are segmented. Experimental results of several typical scenes show that the proposed model can detect the moving vehicles correctly and is immune to the influence of moving objects such as waving trees and camera vibration.
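
    A hedged sketch of the motion-model step only: a single Gaussian over motion vectors is updated online, and vectors outside a chi-square gate are rejected as non-vehicle motion. The learning rate, gate, and synthetic flow stream are assumptions; the paper uses an on-line EM and real optical flow.

```python
# Sketch of an online-updated Gaussian model over motion vectors: vectors consistent with
# the learned Gaussian are kept as vehicle motion, outliers (e.g. waving trees) are rejected.
# Learning rate and gating threshold are assumptions.
import numpy as np

rng = np.random.default_rng(9)
mean = np.zeros(2)
cov = 25.0 * np.eye(2)
alpha, gate = 0.05, 9.21               # learning rate; chi-square(2) 99% gate

def update(mean, cov, flow):
    d = flow - mean
    maha = d @ np.linalg.solve(cov, d)
    is_vehicle = maha < gate
    if is_vehicle:                     # only coherent vectors update the model
        mean = mean + alpha * d
        cov = (1 - alpha) * cov + alpha * np.outer(d, d)
    return mean, cov, is_vehicle

# synthetic stream: coherent vehicle flow (~[12, 1] px/frame) mixed with random clutter
n_vehicle = 0
for t in range(2000):
    if rng.uniform() < 0.7:
        flow = np.array([12.0, 1.0]) + rng.normal(0, 1.5, 2)     # vehicle-like motion
    else:
        flow = rng.normal(0, 6.0, 2)                             # tree sway / camera jitter
    mean, cov, is_vehicle = update(mean, cov, flow)
    n_vehicle += is_vehicle

print("learned mean flow:", np.round(mean, 1))
print("fraction classified as vehicle motion:", n_vehicle / 2000)
```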

  11. Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity.

    PubMed

    Dummer, Benjamin; Wieland, Stefan; Lindner, Benjamin

    2014-01-01

    A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells are in most cases far from being Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study a self-consistent statistics of input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme, in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input that has similar statistics to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedures converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, 2000) and show that in the asynchronous regime close to a state of balanced synaptic input from the network, our iterative schemes provide excellent approximations to the autocorrelation of spike trains in the recurrent network.
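
    A compact sketch of the Gaussian variant of the iteration: in each generation a leaky integrate-and-fire neuron is driven by Gaussian noise whose power spectrum is shaped like the spike-train spectrum measured in the previous generation. All parameters are illustrative and much smaller than in the cited network simulations.

```python
# Sketch of the Gaussian-input iteration: each generation drives a single LIF neuron with
# Gaussian noise whose power spectrum is shaped like the previous generation's spike-train
# spectrum. Parameters are illustrative, not those of the cited network simulations.
import numpy as np

rng = np.random.default_rng(10)
dt, T, n_trials = 1e-3, 10.0, 10
n = int(T / dt)
freqs = np.fft.rfftfreq(n, dt)

def colored_noise(spectrum):
    """Unit-variance Gaussian noise with power spectrum proportional to `spectrum`."""
    amp = np.sqrt(np.maximum(spectrum, 1e-12))
    phases = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    x = np.fft.irfft(amp * phases, n)
    return (x - x.mean()) / x.std()

def lif_spike_train(noise, mu=0.8, sigma=1.0, tau=0.02):
    v, spikes = 0.0, np.zeros(n)
    for i in range(n):
        v += dt / tau * (mu - v) + sigma * np.sqrt(dt / tau) * noise[i]
        if v >= 1.0:                           # threshold crossing -> spike and reset
            v, spikes[i] = 0.0, 1.0 / dt
    return spikes

spectrum = np.ones(freqs.size)                 # generation 0: white Gaussian input
for generation in range(4):
    psds = []
    for _ in range(n_trials):
        spikes = lif_spike_train(colored_noise(spectrum))
        psds.append(np.abs(np.fft.rfft(spikes - spikes.mean())) ** 2 * dt / n)
    new_spectrum = np.mean(psds, axis=0)
    change = np.mean(np.abs(new_spectrum - spectrum)) / np.mean(spectrum)
    spectrum = new_spectrum
    print(f"generation {generation}: relative change in spectrum = {change:.3f}")
```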

  12. Diffusion of Super-Gaussian Profiles

    ERIC Educational Resources Information Center

    Rosenberg, C.-J.; Anderson, D.; Desaix, M.; Johannisson, P.; Lisak, M.

    2007-01-01

    The present analysis describes an analytically simple and systematic approximation procedure for modelling the free diffusive spreading of initially super-Gaussian profiles. The approach is based on a self-similar ansatz for the evolution of the diffusion profile, and the parameter functions involved in the modelling are determined by suitable…

  13. Kinetic and energy production analysis of pyrolysis of lignocellulosic biomass using a three-parallel Gaussian reaction model.

    PubMed

    Chen, Tianju; Zhang, Jinzhi; Wu, Jinhu

    2016-07-01

    The kinetics and energy production of pyrolysis of a lignocellulosic biomass were investigated using a three-parallel Gaussian distribution method in this work. Pyrolysis experiments on pine sawdust were performed using a thermogravimetric-mass spectrometry (TG-MS) analyzer. A three-parallel Gaussian distributed activation energy model (DAEM)-reaction model was used to describe the thermal decomposition behaviors of the three components, hemicellulose, cellulose and lignin. The first, second and third pseudocomponents represent the fractions of hemicellulose, cellulose and lignin, respectively. It was found that the model is capable of predicting the pyrolysis behavior of the pine sawdust. The activation energy distribution peaks for the three pseudocomponents were centered at 186.8, 197.5 and 203.9 kJ mol^(-1) for the pine sawdust, respectively. The evolution profiles of H2, CH4, CO, and CO2 were well predicted using the three-parallel Gaussian distribution model. In addition, the chemical composition of the bio-oil was also obtained by a pyrolysis-gas chromatography/mass spectrometry instrument (Py-GC/MS). Copyright © 2016 Elsevier Ltd. All rights reserved.
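
    A simplified sketch of the peak-deconvolution view: a synthetic DTG curve is fit as a sum of three Gaussian-shaped pseudo-component peaks with scipy. This stands in for, and is much simpler than, the full three-parallel Gaussian DAEM; all temperatures and fractions are made up.

```python
# Simplified sketch: deconvolve a synthetic DTG curve into three Gaussian-shaped
# pseudo-component peaks (hemicellulose, cellulose, lignin). This is a stand-in for the
# full three-parallel Gaussian DAEM; all temperatures and fractions are made up.
import numpy as np
from scipy.optimize import curve_fit

T = np.linspace(150, 650, 500)                          # temperature (deg C)

def three_gaussians(T, *p):
    a1, c1, w1, a2, c2, w2, a3, c3, w3 = p
    return (a1 * np.exp(-0.5 * ((T - c1) / w1) ** 2) +
            a2 * np.exp(-0.5 * ((T - c2) / w2) ** 2) +
            a3 * np.exp(-0.5 * ((T - c3) / w3) ** 2))

rng = np.random.default_rng(11)
true_p = [0.30, 300, 30,   0.55, 355, 22,   0.15, 430, 70]   # hemicellulose, cellulose, lignin
dtg = three_gaussians(T, *true_p) + 0.01 * rng.normal(size=T.size)

p0 = [0.2, 290, 40,  0.5, 360, 30,  0.1, 450, 80]
popt, _ = curve_fit(three_gaussians, T, dtg, p0=p0)
for name, (a, c, w) in zip(["hemicellulose", "cellulose", "lignin"],
                           np.reshape(popt, (3, 3))):
    print(f"{name:13s}: peak at {c:5.1f} C, width {w:4.1f} C, amplitude {a:.2f}")
```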

  14. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498

  15. Geographically weighted regression model on poverty indicator

    NASA Astrophysics Data System (ADS)

    Slamet, I.; Nugroho, N. F. T. A.; Muslich

    2017-12-01

    In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java. We consider a Gaussian kernel as the weighting function. GWR uses the diagonal matrix resulting from the Gaussian kernel function as the weight matrix in the regression model. The kernel weights are used to handle spatial effects in the data so that a model can be obtained for each location. The purpose of this paper is to model poverty percentage data in Central Java province using GWR with a Gaussian kernel weighting function and to determine the influencing factors in each regency/city in Central Java province. Based on the research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for poverty percentage data in Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. The coefficient of determination R2 is 68.64%. Districts and cities fall into two categories, each influenced by a different set of significant factors.
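
    A minimal sketch of the GWR step described above: at each location a weighted least-squares fit is computed with Gaussian kernel weights w_i = exp(-d_i^2 / (2 b^2)). Coordinates, covariates and bandwidth are synthetic placeholders, not the Central Java data.

```python
# Minimal sketch of geographically weighted regression with a Gaussian kernel:
# at each location a weighted least-squares fit uses w_i = exp(-d_i^2 / (2 b^2)).
# Coordinates, covariates and bandwidth are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(12)
n = 200
coords = rng.uniform(0, 100, (n, 2))                 # regency/city centroids (arbitrary units)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.column_stack([10 + 0.05 * coords[:, 0],      # spatially varying coefficients
                             2 - 0.02 * coords[:, 1],
                             np.full(n, 1.5)])
y = np.sum(X * beta_true, axis=1) + rng.normal(0, 0.5, n)

def gwr_coefficients(coords, X, y, bandwidth=25.0):
    betas = np.zeros((len(y), X.shape[1]))
    for i in range(len(y)):
        d2 = np.sum((coords - coords[i]) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # Gaussian kernel weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return betas

betas = gwr_coefficients(coords, X, y)
print("local intercept range:", np.round([betas[:, 0].min(), betas[:, 0].max()], 2))
print("global OLS intercept :", round(np.linalg.lstsq(X, y, rcond=None)[0][0], 2))
```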

  16. Robust Linear Models for Cis-eQTL Analysis.

    PubMed

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
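
    A short sketch contrasting ordinary least squares with a robust (Huber) fit on a dosage model with heavy-tailed noise and a few outliers; effect sizes and noise levels are made up, and the Huber loss is one of several robust choices.

```python
# Sketch contrasting ordinary and robust (Huber) regression for a cis-eQTL-style dosage
# model with heavy-tailed expression noise; effect sizes and noise levels are made up.
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor

rng = np.random.default_rng(13)
n, beta_true = 500, 0.4
dosage = rng.binomial(2, 0.3, n).astype(float)             # allelic dosage 0/1/2
noise = rng.standard_t(df=3, size=n)                       # heavy-tailed residuals
expression = 1.0 + beta_true * dosage + noise
expression[rng.choice(n, 10, replace=False)] += 8.0        # a few outlying samples

ols = LinearRegression().fit(dosage[:, None], expression)
rob = HuberRegressor().fit(dosage[:, None], expression)
print(f"true effect {beta_true:.2f} | OLS {ols.coef_[0]:.2f} | Huber {rob.coef_[0]:.2f}")
```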

  17. Statistical description of turbulent transport for flux driven toroidal plasmas

    NASA Astrophysics Data System (ADS)

    Anderson, J.; Imadera, K.; Kishimoto, Y.; Li, J. Q.; Nordman, H.

    2017-06-01

    A novel methodology to analyze non-Gaussian probability distribution functions (PDFs) of intermittent turbulent transport in global full-f gyrokinetic simulations is presented. In this work, the auto-regressive integrated moving average (ARIMA) model is applied to time series data of intermittent turbulent heat transport to separate noise and oscillatory trends, allowing for the extraction of non-Gaussian features of the PDFs. It was shown that non-Gaussian tails of the PDFs from first principles based gyrokinetic simulations agree with an analytical estimation based on a two fluid model.
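
    A hedged sketch of the trend/noise separation step: an ARIMA model is fit to a synthetic intermittent series and the heavy, non-Gaussian tails are read off the residuals. The series, model order, and diagnostics are assumptions for illustration.

```python
# Sketch of the trend/noise separation step: fit an ARIMA model to a heat-flux-like series
# and inspect the non-Gaussian tails of the residuals. Data are synthetic.
import numpy as np
from scipy import stats
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(14)
n = 4000
innov = rng.standard_t(df=3, size=n)          # intermittent, heavy-tailed innovations
x = np.zeros(n)
for t in range(1, n):                         # AR(1)-like transport signal
    x[t] = 0.8 * x[t - 1] + innov[t]

res = ARIMA(x, order=(1, 0, 1)).fit()
residuals = res.resid

print(f"excess kurtosis of raw series : {stats.kurtosis(x):.2f}")
print(f"excess kurtosis of residuals  : {stats.kurtosis(residuals):.2f}")
print(f"fraction beyond 3 sigma       : {np.mean(np.abs(residuals) > 3 * residuals.std()):.4f}"
      f"  (Gaussian reference: {2 * stats.norm.sf(3):.4f})")
```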

  18. Gaussian process regression for sensor networks under localization uncertainty

    USGS Publications Warehouse

    Jadaliha, M.; Xu, Yunfei; Choi, Jongeun; Johnson, N.S.; Li, Weiming

    2013-01-01

    In this paper, we formulate Gaussian process regression with observations under localization uncertainty due to resource-constrained sensor networks. In our formulation, the effects of observations, measurement noise, localization uncertainty, and prior distributions are all correctly incorporated in the posterior predictive statistics. The analytically intractable posterior predictive statistics are proposed to be approximated by two techniques, viz., Monte Carlo sampling and Laplace's method. Such approximation techniques have been carefully tailored to our problems and their approximation error and complexity are analyzed. A simulation study demonstrates that the proposed approaches perform much better than approaches that do not properly account for localization uncertainty. Finally, we applied the proposed approaches to experimentally collected data from a dye concentration field over a section of a river and a temperature field of an outdoor swimming pool to provide proof-of-concept tests and evaluate the proposed schemes in real situations. In both simulation and experimental results, the proposed methods outperform the quick-and-dirty solutions often used in practice.
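
    A toy sketch of the Monte Carlo approximation: instead of trusting the reported sensor locations, Gaussian-process predictions are averaged over plausible true locations sampled from the localization-uncertainty model. The field, kernel, and uncertainty levels are synthetic placeholders.

```python
# Sketch of the Monte Carlo approximation: average GP predictions over samples of the
# uncertain sensor locations instead of using the noisy locations directly.
# Field, kernel and uncertainty levels are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(15)
f = lambda s: np.sin(0.8 * s[:, 0]) * np.cos(0.5 * s[:, 1])   # latent field

true_loc = rng.uniform(0, 10, (60, 2))
loc_sigma = 0.4                                               # localization uncertainty
reported_loc = true_loc + loc_sigma * rng.normal(size=true_loc.shape)
y = f(true_loc) + 0.05 * rng.normal(size=len(true_loc))

grid = np.column_stack([g.ravel() for g in np.meshgrid(np.linspace(0, 10, 25),
                                                       np.linspace(0, 10, 25))])
kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.05)

# naive: trust the reported locations
naive = GaussianProcessRegressor(kernel=kernel).fit(reported_loc, y).predict(grid)

# Monte Carlo: average predictions over plausible true locations
preds = []
for _ in range(30):
    sampled = reported_loc + loc_sigma * rng.normal(size=reported_loc.shape)
    preds.append(GaussianProcessRegressor(kernel=kernel).fit(sampled, y).predict(grid))
mc = np.mean(preds, axis=0)

truth = f(grid)
print(f"RMSE, naive GP       : {np.sqrt(np.mean((naive - truth) ** 2)):.3f}")
print(f"RMSE, MC-averaged GP : {np.sqrt(np.mean((mc - truth) ** 2)):.3f}")
```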

  19. Topological structures in the equities market network

    PubMed Central

    Leibon, Gregory; Pauls, Scott; Rockmore, Daniel; Savell, Robert

    2008-01-01

    We present a new method for articulating scale-dependent topological descriptions of the network structure inherent in many complex systems. The technique is based on “partition decoupled null models,” a new class of null models that incorporate the interaction of clustered partitions into a random model and generalize the Gaussian ensemble. As an application, we analyze a correlation matrix derived from 4 years of close prices of equities in the New York Stock Exchange (NYSE) and National Association of Securities Dealers Automated Quotation (NASDAQ). In this example, we expose (i) a natural structure composed of 2 interacting partitions of the market that both agrees with and generalizes standard notions of scale (e.g., sector and industry) and (ii) structure in the first partition that is a topological manifestation of a well-known pattern of capital flow called “sector rotation.” Our approach gives rise to a natural form of multiresolution analysis of the underlying time series that naturally decomposes the basic data in terms of the effects of the different scales at which it clusters. We support our conclusions and show the robustness of the technique with a successful analysis on a simulated network with an embedded topological structure. The equities market is a prototypical complex system, and we expect that our approach will be of use in understanding a broad class of complex systems in which correlation structures are resident.

  20. Speech Enhancement Using Gaussian Scale Mixture Models

    PubMed Central

    Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.

    2011-01-01

    This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided a higher signal-to-noise ratio (SNR) and those reconstructed from the estimated log-spectra produced a lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139

  1. Implementation of Dynamic Extensible Adaptive Locally Exchangeable Measures (IDEALEM) v 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sim, Alex; Lee, Dongeun; Wu, K. John

    2016-03-04

    Handling large streaming data is essential for various applications such as network traffic analysis, social networks, energy cost trends, and environment modeling. However, it is in general intractable to store, compute, search, and retrieve large streaming data. This software addresses a fundamental issue, which is to reduce the size of large streaming data and still obtain accurate statistical analysis. As an example, when a high-speed network such as a 100 Gbps network is monitored, the collected measurement data grow so rapidly that polynomial time algorithms (e.g., Gaussian processes) become intractable. One possible solution to reduce the storage of vast amounts of measured data is to store a random sample, such as one out of 1000 network packets. However, such static sampling methods (linear sampling) have drawbacks: (1) they are not scalable for high-rate streaming data, and (2) there is no guarantee of reflecting the underlying distribution. In this software, we implemented a dynamic sampling algorithm, based on the recent technology of relational dynamic Bayesian online locally exchangeable measures, that reduces the storage of data records at a large scale and still provides accurate analysis of large streaming data. The software can be used for both online and offline data records.

  2. Theory and generation of conditional, scalable sub-Gaussian random fields

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2016-03-01

    Many earth and environmental (as well as a host of other) variables, Y, and their spatial (or temporal) increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture key aspects of such non-Gaussian scaling by treating Y and/or ΔY as sub-Gaussian random fields (or processes). This however left unaddressed the empirical finding that whereas sample frequency distributions of Y tend to display relatively mild non-Gaussian peaks and tails, those of ΔY often reveal peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we proposed a generalized sub-Gaussian model (GSG) which resolves this apparent inconsistency between the statistical scaling behaviors of observed variables and their increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. Most importantly, we demonstrated the feasibility of estimating all parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments, ΔY. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of corresponding isotropic or anisotropic random fields, introduce two approximate versions of this algorithm to reduce CPU time, and explore them on one and two-dimensional synthetic test cases.

  3. Different approach to the modeling of nonfree particle diffusion

    NASA Astrophysics Data System (ADS)

    Buhl, Niels

    2018-03-01

    A new approach to the modeling of nonfree particle diffusion is presented. The approach uses a general setup based on geometric graphs (networks of curves), which means that particle diffusion in anything from arrays of barriers and pore networks to general geometric domains can be considered and that the (free random walk) central limit theorem can be generalized to cover also the nonfree case. The latter gives rise to a continuum-limit description of the diffusive motion where the effect of partially absorbing barriers is accounted for in a natural and non-Markovian way that, in contrast to the traditional approach, quantifies the absorptivity of a barrier in terms of a dimensionless parameter in the range 0 to 1. The generalized theorem gives two general analytic expressions for the continuum-limit propagator: an infinite sum of Gaussians and an infinite sum of plane waves. These expressions entail the known method-of-images and Laplace eigenfunction expansions as special cases and show how the presence of partially absorbing barriers can lead to phenomena such as line splitting and band gap formation in the plane wave wave-number spectrum.

  4. Hot Spots in a Network of Functional Sites

    PubMed Central

    Ozbek, Pemra; Soner, Seren; Haliloglu, Turkan

    2013-01-01

    Understanding how proteins interact is of significant interest, as interaction is a key phenomenon in biological function. Using dynamic fluctuations in high-frequency modes, we show that the Gaussian Network Model (GNM) predicts hot spot residues with success rates of S 8–58%, C 84–95%, P 5–19% and A 81–92% on unbound structures and S 8–51%, C 97–99%, P 14–50% and A 94–97% on complex structures for sensitivity, specificity, precision and accuracy, respectively. High specificity and accuracy rates obtained with a single property on unbound protein structures suggest that hot spots are predefined in the dynamics of unbound structures and form the binding core of interfaces, whereas the prediction of other functional residues with similar dynamic behavior explains the lower precision values. The latter is demonstrated with three case studies: ubiquitin, hen egg-white lysozyme and the M2 proton channel. The dynamic fluctuations suggest a pseudo-network of residues with high-frequency fluctuations, which could be plausible for the mechanism of biological interactions and allosteric regulation. PMID:24023934
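
    The four reported figures of merit can be computed from a binary confusion matrix as in the short sketch below; the prediction and annotation arrays are purely illustrative, not GNM output.

```python
import numpy as np

def rates(pred, truth):
    # Confusion-matrix counts for boolean prediction vs. annotation vectors.
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sensitivity = tp / (tp + fn)                    # S
    specificity = tn / (tn + fp)                    # C
    precision   = tp / (tp + fp)                    # P
    accuracy    = (tp + tn) / (tp + tn + fp + fn)   # A
    return sensitivity, specificity, precision, accuracy

truth = np.array([1, 0, 0, 1, 0, 0, 0, 1], dtype=bool)  # annotated hot spots (toy data)
pred  = np.array([1, 0, 1, 0, 0, 0, 0, 1], dtype=bool)  # high-frequency-mode prediction (toy data)
print(rates(pred, truth))
```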

  5. Synchronisation under shocks: The Lévy Kuramoto model

    NASA Astrophysics Data System (ADS)

    Roberts, Dale; Kalloniatis, Alexander C.

    2018-04-01

    We study the Kuramoto model of identical oscillators on Erdős-Rényi (ER) and Barabási-Albert (BA) scale-free networks, examining the dynamics when perturbed by a Lévy noise. Lévy noise exhibits heavier tails than Gaussian noise while allowing for their tempering in a controlled manner. This allows us to understand how 'shocks' influence individual-oscillator and collective system behaviour of a paradigmatic complex system. Skewed α-stable Lévy noise, equivalent to fractional diffusion perturbations, is considered, but overlaid by exponential tempering of rate λ. In an earlier paper we found that synchrony takes a variety of forms for identical Kuramoto oscillators subject to stable Lévy noise, not seen in the Gaussian case, and changing with α: a noise-induced drift, a smooth α-dependence of the cross-over point of synchronisation between ER and BA networks, and a severe loss of synchronisation at low values of α. In the presence of tempering we observe, both analytically and numerically, a dramatic change to the α < 1 behaviour, where synchronisation is sustained over a larger range of values of the 'noise strength' σ, improved compared to the α > 1 tempered cases. Analytically, we study the system close to the phase-synchronised fixed point and solve the tempered fractional Fokker-Planck equation. There we observe that densities show stronger support in the basin of attraction at low α for fixed coupling, σ and tempering λ. We then perform numerical simulations for networks of size N = 1000 and average degree d̄ = 10. There, we compute the order parameter r as a function of σ for fixed α and λ and observe values of r ≈ 1 over larger ranges of σ for α < 1 and λ ≠ 0. In addition, we observe drift of both positive and negative slopes for different α and λ when native frequencies are equal, and confirm that synchronisation is sustained down to low values of α. We propose a mechanism for this in terms of the basic shape of the tempered stable Lévy densities for various α and how it feeds into the Kuramoto oscillator dynamics, and illustrate this with examples of specific paths.
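
    A minimal sketch, assuming an Euler scheme with untempered α-stable increments (tempering is omitted for brevity), of identical Kuramoto oscillators on an ER graph and the resulting order parameter r; network size, coupling, and noise strength are illustrative values.

```python
import numpy as np
import networkx as nx
from scipy.stats import levy_stable

rng = np.random.default_rng(2)
N, p, K, sigma, alpha = 200, 0.05, 2.0, 0.2, 1.5
A = nx.to_numpy_array(nx.erdos_renyi_graph(N, p, seed=2))

theta = rng.uniform(0, 2 * np.pi, N)
dt, steps = 0.01, 2000
for _ in range(steps):
    # Kuramoto coupling: sum_j A_ij sin(theta_j - theta_i)
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    # alpha-stable increments scale as dt**(1/alpha)
    noise = sigma * dt ** (1 / alpha) * levy_stable.rvs(alpha, 0.0, size=N, random_state=rng)
    theta = theta + dt * K * coupling + noise

r = np.abs(np.exp(1j * theta).mean())   # Kuramoto order parameter
print(f"order parameter r = {r:.3f}")
```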

  6. Stable radiation pressure acceleration of ions by suppressing transverse Rayleigh-Taylor instability with multiple Gaussian pulses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, M. L.; Liu, B.; Hu, R. H.

    In the case of a thin plasma slab accelerated by the radiation pressure of an ultra-intense laser pulse, the development of Rayleigh-Taylor instability (RTI) will destroy the acceleration structure and terminate the acceleration process much sooner than the theoretical limit. In this paper, a new scheme using multiple Gaussian pulses for ion acceleration in a radiation pressure acceleration regime is investigated with particle-in-cell simulation. We found that with multiple Gaussian pulses, the instability can be efficiently suppressed and the divergence of the ion bunch greatly reduced, resulting in a longer acceleration time and a much more collimated ion bunch with higher energy than when using a single Gaussian pulse. An analytical model is developed to describe the suppression of RTI at the laser-plasma interface. The model shows that the suppression of RTI is due to the introduction of the long-wavelength RTI mode by the multiple Gaussian pulses.

  7. An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.

    PubMed

    Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe

    2014-03-01

    The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally designed to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.
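
    A short sketch of the mixed Poisson-Gaussian observation model that PG-URE targets; the gain and Gaussian standard deviation are assumed values, and the estimator itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def pg_noise(x, gain=0.5, sigma=2.0):
    # Signal-dependent Poisson part (scaled by the detector gain) plus additive Gaussian noise.
    poisson_part = gain * rng.poisson(np.clip(x, 0, None) / gain)
    gaussian_part = rng.normal(0.0, sigma, size=x.shape)
    return poisson_part + gaussian_part

# Toy 1-D "image": a Gaussian-shaped fluorescence spot.
clean = 100.0 * np.exp(-((np.arange(256) - 128) ** 2) / (2 * 20.0 ** 2))
noisy = pg_noise(clean)
mse = np.mean((noisy - clean) ** 2)   # the quantity PG-URE estimates without ground truth
print(f"MSE of the raw noisy signal: {mse:.2f}")
```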

  8. Leading non-Gaussian corrections for diffusion orientation distribution function.

    PubMed

    Jensen, Jens H; Helpern, Joseph A; Tabesh, Ali

    2014-02-01

    An analytical representation of the leading non-Gaussian corrections for a class of diffusion orientation distribution functions (dODFs) is presented. This formula is constructed from the diffusion and diffusional kurtosis tensors, both of which may be estimated with diffusional kurtosis imaging (DKI). By incorporating model-independent non-Gaussian diffusion effects, it improves on the Gaussian approximation used in diffusion tensor imaging (DTI). This analytical representation therefore provides a natural foundation for DKI-based white matter fiber tractography, which has potential advantages over conventional DTI-based fiber tractography in generating more accurate predictions for the orientations of fiber bundles and in being able to directly resolve intra-voxel fiber crossings. The formula is illustrated with numerical simulations for a two-compartment model of fiber crossings and for human brain data. These results indicate that the inclusion of the leading non-Gaussian corrections can significantly affect fiber tractography in white matter regions, such as the centrum semiovale, where fiber crossings are common. 2013 John Wiley & Sons, Ltd.

  9. Leading Non-Gaussian Corrections for Diffusion Orientation Distribution Function

    PubMed Central

    Jensen, Jens H.; Helpern, Joseph A.; Tabesh, Ali

    2014-01-01

    An analytical representation of the leading non-Gaussian corrections for a class of diffusion orientation distribution functions (dODFs) is presented. This formula is constructed out of the diffusion and diffusional kurtosis tensors, both of which may be estimated with diffusional kurtosis imaging (DKI). By incorporating model-independent non-Gaussian diffusion effects, it improves upon the Gaussian approximation used in diffusion tensor imaging (DTI). This analytical representation therefore provides a natural foundation for DKI-based white matter fiber tractography, which has potential advantages over conventional DTI-based fiber tractography in generating more accurate predictions for the orientations of fiber bundles and in being able to directly resolve intra-voxel fiber crossings. The formula is illustrated with numerical simulations for a two-compartment model of fiber crossings and for human brain data. These results indicate that the inclusion of the leading non-Gaussian corrections can significantly affect fiber tractography in white matter regions, such as the centrum semiovale, where fiber crossings are common. PMID:24738143

  10. Assessing Human Activity in Elderly People Using Non-Intrusive Load Monitoring.

    PubMed

    Alcalá, José M; Ureña, Jesús; Hernández, Álvaro; Gualda, David

    2017-02-11

    The ageing of the population, and its increasing wish to live independently, are motivating the development of welfare and healthcare models. Existing approaches based on direct health monitoring using body sensor networks (BSN) are precise and accurate. Nonetheless, their intrusiveness causes non-acceptance. New approaches seek indirect monitoring through activities of daily living (ADLs), which proves to be a suitable solution. ADL monitoring systems use many heterogeneous sensors, are less intrusive, and are less expensive than BSN; however, the deployment and maintenance of wireless sensor networks (WSN) prevent their widespread acceptance. In this work, a novel technique to monitor human activity, based on non-intrusive load monitoring (NILM), is presented. The proposal uses only smart meter data, which leads to minimum intrusiveness and a potential massive deployment at minimal cost. This could be the key to developing sustainable healthcare models for smart homes, capable of complying with elderly people's demands. This study also uses the Dempster-Shafer theory to provide a daily score of normality with regard to regular behavior. This approach has been evaluated using real datasets and, additionally, a benchmarking against a Gaussian mixture model approach is presented.

  11. Assessing Human Activity in Elderly People Using Non-Intrusive Load Monitoring

    PubMed Central

    Alcalá, José M.; Ureña, Jesús; Hernández, Álvaro; Gualda, David

    2017-01-01

    The ageing of the population, and its increasing wish to live independently, are motivating the development of welfare and healthcare models. Existing approaches based on direct health monitoring using body sensor networks (BSN) are precise and accurate. Nonetheless, their intrusiveness causes non-acceptance. New approaches seek indirect monitoring through activities of daily living (ADLs), which proves to be a suitable solution. ADL monitoring systems use many heterogeneous sensors, are less intrusive, and are less expensive than BSN; however, the deployment and maintenance of wireless sensor networks (WSN) prevent their widespread acceptance. In this work, a novel technique to monitor human activity, based on non-intrusive load monitoring (NILM), is presented. The proposal uses only smart meter data, which leads to minimum intrusiveness and a potential massive deployment at minimal cost. This could be the key to developing sustainable healthcare models for smart homes, capable of complying with elderly people's demands. This study also uses the Dempster-Shafer theory to provide a daily score of normality with regard to regular behavior. This approach has been evaluated using real datasets and, additionally, a benchmarking against a Gaussian mixture model approach is presented. PMID:28208672

  12. The modelling of carbon-based supercapacitors: Distributions of time constants and Pascal Equivalent Circuits

    NASA Astrophysics Data System (ADS)

    Fletcher, Stephen; Kirkpatrick, Iain; Dring, Roderick; Puttock, Robert; Thring, Rob; Howroyd, Simon

    2017-03-01

    Supercapacitors are an emerging technology with applications in pulse power, motive power, and energy storage. However, their carbon electrodes show a variety of non-ideal behaviours that have so far eluded explanation. These include Voltage Decay after charging, Voltage Rebound after discharging, and Dispersed Kinetics at long times. In the present work, we establish that a vertical ladder network of RC components can reproduce all these puzzling phenomena. Both software and hardware realizations of the network are described. In general, porous carbon electrodes contain random distributions of resistance R and capacitance C, with a wider spread of log R values than log C values. To understand what this implies, a simplified model is developed in which log R is treated as a Gaussian random variable while log C is treated as a constant. From this model, a new family of equivalent circuits is developed in which the continuous distribution of log R values is replaced by a discrete set of log R values drawn from a geometric series. We call these Pascal Equivalent Circuits. Their behaviour is shown to resemble closely that of real supercapacitors. The results confirm that distributions of RC time constants dominate the behaviour of real supercapacitors.
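
    The simplified model described above can be sketched as follows: log R treated as a Gaussian random variable with log C constant, plus a discrete set of R values drawn from a geometric series in the spirit of the Pascal Equivalent Circuits. All numerical values below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
C = 1.0                                                # farads, held constant
mu_lnR, sd_lnR = 2.0, 1.0                              # assumed Gaussian ln R distribution
log_R = rng.normal(loc=mu_lnR, scale=sd_lnR, size=100000)
tau = np.exp(log_R) * C                                # distribution of RC time constants

# Discrete set of R values from a geometric series, weighted by the Gaussian density of ln R.
k = np.arange(-4, 5)
R_discrete = np.exp(mu_lnR) * 2.0 ** k                 # geometric series around the median R
weights = np.exp(-0.5 * ((np.log(R_discrete) - mu_lnR) / sd_lnR) ** 2)
weights /= weights.sum()

print("median time constant:", np.median(tau))
print(dict(zip(np.round(R_discrete, 2), np.round(weights, 3))))
```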

  13. Modeling of dispersion near roadways based on the vehicle-induced turbulence concept

    NASA Astrophysics Data System (ADS)

    Sahlodin, Ali M.; Sotudeh-Gharebagh, Rahmat; Zhu, Yifang

    A mathematical model is developed for dispersion near roadways by incorporating vehicle-induced turbulence (VIT) into Gaussian dispersion modeling using computational fluid dynamics (CFD). The model is based on the Gaussian plume equation in which roadway is regarded as a series of point sources. The Gaussian dispersion parameters are modified by simulation of the roadway using CFD in order to evaluate turbulent kinetic energy (TKE) as a measure of VIT. The model was evaluated against experimental carbon monoxide concentrations downwind of two major freeways reported in the literature. Good agreements were achieved between model results and the literature data. A significant difference was observed between the model results with and without considering VIT. The difference is rather high for data very close to the freeways. This model, after evaluation with additional data, may be used as a framework for predicting dispersion and deposition from any roadway for different traffic (vehicle type and speed) conditions.
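
    A hedged sketch of the point-source summation underlying such a model: a road segment discretized into point sources, each contributing a standard Gaussian plume. In the model described above the dispersion parameters would be modified using the CFD-derived vehicle-induced turbulence; here they, along with the emission rate and wind speed, are fixed illustrative constants.

```python
import numpy as np

def point_plume(q, u, y, z, sy, sz, h=0.5):
    # Ground-reflected Gaussian plume from a single point source at effective height h.
    lateral  = np.exp(-y**2 / (2 * sy**2))
    vertical = np.exp(-(z - h)**2 / (2 * sz**2)) + np.exp(-(z + h)**2 / (2 * sz**2))
    return q / (2 * np.pi * u * sy * sz) * lateral * vertical

src_y = np.arange(-100.0, 100.0, 5.0)      # point sources along a 200 m road segment
receptor_y = 0.0                           # receptor position along the road axis
conc = sum(point_plume(q=1e-3, u=2.0, y=receptor_y - ys, z=1.5, sy=8.0, sz=4.0)
           for ys in src_y)                # sum of point-source contributions
print(f"modelled concentration (arbitrary units): {conc:.3e}")
```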

  14. Brownian motion under dynamic disorder: effects of memory on the decay of the non-Gaussianity parameter

    NASA Astrophysics Data System (ADS)

    Tyagi, Neha; Cherayil, Binny J.

    2018-03-01

    The increasingly widespread occurrence in complex fluids of particle motion that is both Brownian and non-Gaussian has recently been found to be successfully modeled by a process (frequently referred to as ‘diffusing diffusivity’) in which the white noise that governs Brownian diffusion is itself stochastically modulated by either Ornstein–Uhlenbeck dynamics or by two-state noise. But the model has so far not been able to account for an aspect of non-Gaussian Brownian motion that is also commonly observed: a non-monotonic decay of the parameter that quantifies the extent of deviation from Gaussian behavior. In this paper, we show that the inclusion of memory effects in the model—via a generalized Langevin equation—can rationalise this phenomenon.

  15. Cosmic microwave background power asymmetry from non-Gaussian modulation.

    PubMed

    Schmidt, Fabian; Hui, Lam

    2013-01-04

    Non-Gaussianity in the inflationary perturbations can couple observable scales to modes of much longer wavelength (even superhorizon), leaving as a signature a large-angle modulation of the observed cosmic microwave background power spectrum. This provides an alternative origin for a power asymmetry that is otherwise often ascribed to a breaking of statistical isotropy. The non-Gaussian modulation effect can be significant even for typical ~10(-5) perturbations while respecting current constraints on non-Gaussianity if the squeezed limit of the bispectrum is sufficiently infrared divergent. Just such a strongly infrared-divergent bispectrum has been claimed for inflation models with a non-Bunch-Davies initial state, for instance. Upper limits on the observed cosmic microwave background power asymmetry place stringent constraints on the duration of inflation in such models.

  16. Degeneracy of energy levels of pseudo-Gaussian oscillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iacob, Theodor-Felix; Iacob, Felix, E-mail: felix@physics.uvt.ro; Lute, Marina

    2015-12-07

    We study the main spectral properties of the isotropic radial pseudo-Gaussian oscillators. The study focuses on the degeneracy of the energy levels with respect to the orbital angular momentum quantum number. In a previous work [6] we showed that the pseudo-Gaussian oscillators belong to the class of quasi-exactly solvable models, and an exact solution was found.

  17. Recent advances in scalable non-Gaussian geostatistics: The generalized sub-Gaussian model

    NASA Astrophysics Data System (ADS)

    Guadagnini, Alberto; Riva, Monica; Neuman, Shlomo P.

    2018-07-01

    Geostatistical analysis has been introduced over half a century ago to allow quantifying seemingly random spatial variations in earth quantities such as rock mineral content or permeability. The traditional approach has been to view such quantities as multivariate Gaussian random functions characterized by one or a few well-defined spatial correlation scales. There is, however, mounting evidence that many spatially varying quantities exhibit non-Gaussian behavior over a multiplicity of scales. The purpose of this minireview is not to paint a broad picture of the subject and its treatment in the literature. Instead, we focus on very recent advances in the recognition and analysis of this ubiquitous phenomenon, which transcends hydrology and the Earth sciences, brought about largely by our own work. In particular, we use porosity data from a deep borehole to illustrate typical aspects of such scalable non-Gaussian behavior, describe a very recent theoretical model that (for the first time) captures all these behavioral aspects in a comprehensive manner, show how this allows generating random realizations of the quantity conditional on sampled values, point toward ways of incorporating scalable non-Gaussian behavior in hydrologic analysis, highlight the significance of doing so, and list open questions requiring further research.

  18. On the Optimum Architecture of the Biologically Inspired Hierarchical Temporal Memory Model Applied to the Hand-Written Digit Recognition

    NASA Astrophysics Data System (ADS)

    Štolc, Svorad; Bajla, Ivan

    2010-01-01

    In the paper we describe the basic functions of the Hierarchical Temporal Memory (HTM) network, based on a novel biologically inspired model of the large-scale structure of the mammalian neocortex. The focus of this paper is a systematic exploration of how to optimize important controlling parameters of the HTM model applied to the classification of hand-written digits from the USPS database. The statistical properties of this database are analyzed using a permutation test which employs a randomization distribution of the training and testing data. Based on the notion of homogeneous usage of input image pixels, a methodology for HTM parameter optimization is proposed. In order to study the effects of two substantial architectural parameters, the patch size and the overlap, in more detail, we have restricted ourselves to single-level HTM networks. A novel method for constructing the training sequences by ordering series of static images is developed. A novel method for estimating the parameter maxDist, based on the box counting method, is proposed. The parameter sigma of the inference Gaussian is optimized by maximizing the entropy of the belief distribution. Both optimization algorithms can equally be applied to multi-level HTM networks. The influences of the parameters transitionMemory and requestedGroupCount on the HTM network performance have been explored. Altogether, we have investigated 2736 different HTM network configurations. The obtained classification accuracy results have been benchmarked against the published results of several conventional classifiers.

  19. Pollution source localization in an urban water supply network based on dynamic water demand.

    PubMed

    Yan, Xuesong; Zhu, Zhixin; Li, Tian

    2017-10-27

    Urban water supply networks are susceptible to intentional or accidental chemical and biological pollution, which poses a threat to the health of consumers. In recent years, drinking-water pollution incidents have occurred frequently, seriously endangering social stability and security. Real-time monitoring of water quality can be effectively implemented by placing sensors in the water supply network. However, locating the source of pollution from the detection data obtained by water quality sensors is a challenging problem. The difficulty lies in the limited number of sensors, the large number of water supply network nodes, and the dynamic user demand for water, which make pollution source localization an uncertain, large-scale, and dynamic optimization problem. In this paper, we mainly study the dynamics of the pollution source localization problem. Previous studies of pollution source localization assume that hydraulic inputs (e.g., the water demand of consumers) are known. However, because of the inherent variability of urban water demand, the problem is essentially a dynamic one driven by fluctuating consumer water demand. In this paper, the water demand is considered to be stochastic in nature and is described using a Gaussian model or an autoregressive model. On this basis, an optimization algorithm based on these two dynamic water demand models is proposed to locate the pollution source. The objective of the proposed algorithm is to find the locations and concentrations of pollution sources that minimize the discrepancy between the simulated and detected sensor values. Simulation experiments were conducted using two urban water supply networks of different sizes, and the experimental results were compared with those of the standard genetic algorithm.
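
    The two stochastic water-demand models mentioned above can be sketched as follows; the base demand, noise level, AR(1) coefficient, and time resolution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
T, base, sigma, phi = 96, 10.0, 1.0, 0.8   # 96 steps, e.g. 15-minute intervals over one day

# (1) Independent Gaussian fluctuations around a base demand.
demand_gauss = base + sigma * rng.normal(size=T)

# (2) AR(1) model: temporally correlated fluctuations around the base demand.
demand_ar = np.empty(T)
demand_ar[0] = base
for t in range(1, T):
    demand_ar[t] = base + phi * (demand_ar[t - 1] - base) + sigma * rng.normal()
```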

  20. Bayesian spatial transformation models with applications in neuroimaging data.

    PubMed

    Miranda, Michelle F; Zhu, Hongtu; Ibrahim, Joseph G

    2013-12-01

    The aim of this article is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. The proposed STM include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov random field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. © 2013, The International Biometric Society.

  1. Testing for the Gaussian nature of cosmological density perturbations through the three-point temperature correlation function

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1993-01-01

    One of the crucial aspects of density perturbations that are produced by the standard inflation scenario is that they are Gaussian where seeds produced by topological defects tend to be non-Gaussian. The three-point correlation function of the temperature anisotropy of the cosmic microwave background radiation (CBR) provides a sensitive test of this aspect of the primordial density field. In this paper, this function is calculated in the general context of various allowed non-Gaussian models. It is shown that the Cosmic Background Explorer and the forthcoming South Pole and balloon CBR anisotropy data may be able to provide a crucial test of the Gaussian nature of the perturbations.

  2. Analysis of fluid flow and solute transport through a single fracture with variable apertures intersecting a canister: Comparison between fractal and Gaussian fractures

    NASA Astrophysics Data System (ADS)

    Liu, L.; Neretnieks, I.

    Canisters with spent nuclear fuel will be deposited in fractured crystalline rock in the Swedish concept for a final repository. The fractures intersect the canister holes at different angles and they have variable apertures and therefore locally varying flowrates. Our previous model, which assumed fractures with a constant aperture and a 90° intersection angle, is now extended to arbitrary intersection angles and stochastically variable apertures. It is shown that the previous basic model can be simply amended to account for these effects. More importantly, it has been found that the distributions of the volumetric and the equivalent flow rates are all close to the Normal for both fractal and Gaussian fractures, with the mean of the distribution of the volumetric flow rate being determined solely by the hydraulic aperture, and that of the equivalent flow rate being determined by the mechanical aperture. Moreover, the standard deviation of the volumetric flow rates of the many realizations increases with increasing roughness and spatial correlation length of the aperture field, and so does that of the equivalent flow rates. Thus, two simple statistical relations can be developed to describe the stochastic properties of fluid flow and solute transport through a single fracture with spatially variable apertures. This obviates, then, the need to simulate each fracture that intersects a canister in great detail, and allows the use of complex fractures also in very large fracture network models used in performance assessment.

  3. Estimation of sum-to-one constrained parameters with non-Gaussian extensions of ensemble-based Kalman filters: application to a 1D ocean biogeochemical model

    NASA Astrophysics Data System (ADS)

    Simon, E.; Bertino, L.; Samuelsen, A.

    2011-12-01

    Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the resulting non-Gaussian distribution of the variables. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous works [1] demonstrated that the Gaussian anamorphosis extensions of ensemble-based Kalman filters were relevant tools to perform combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized in the 1D coupled model GOTM-NORWECOM with Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon E., Bertino L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007 [2] Gelman A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4, 1, 36-54, 1995.
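
    A possible sketch of the spherical-coordinates change of variables for sum-to-one parameters: unconstrained angles map to a unit vector whose squared components are non-negative and sum to one. This illustrates only one of the two reformulations discussed above, and the specific mapping below is an assumption.

```python
import numpy as np

def angles_to_simplex(angles):
    # n angles -> n+1 grazing preferences that are non-negative and sum to one.
    # Components of a unit vector in spherical coordinates, squared.
    s = np.concatenate(([1.0], np.cumprod(np.sin(angles))))
    c = np.concatenate((np.cos(angles), [1.0]))
    return (s * c) ** 2

p = angles_to_simplex(np.array([0.7, 1.1, 0.4]))
print(p, p.sum())   # the preferences sum to 1 by construction
```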

  4. Poisson-Gaussian Noise Reduction Using the Hidden Markov Model in Contourlet Domain for Fluorescence Microscopy Images

    PubMed Central

    Yang, Sejung; Lee, Byung-Uk

    2015-01-01

    In certain image acquisition processes, such as fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model and adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models with a framework that neatly describes the spatial and interscale dependencies that characterize the transform coefficients of natural images. In this paper, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm by cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach. PMID:26352138

  5. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
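
    A minimal sketch of the key ingredient described above, estimating a non-Gaussian probability density from samples with a kernel density estimator; the bimodal synthetic data stand in for source amplitudes and are not MEG measurements.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
# Bimodal, clearly non-Gaussian synthetic sample.
samples = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(1, 1.0, 500)])

kde = gaussian_kde(samples)          # multivariate KDE also works, with data of shape (d, n)
grid = np.linspace(-5, 5, 200)
pdf_hat = kde(grid)                  # estimated pdf, with no Gaussian assumption
print("estimated density at the two modes:", kde([-2.0, 1.0]))
```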

  6. Gaussian or non-Gaussian logconductivity distribution at the MADE site: What is its impact on the breakthrough curve?

    PubMed

    Fiori, Aldo; Volpi, Elena; Zarlenga, Antonio; Bohling, Geoffrey C

    2015-08-01

    The impact of the logconductivity (Y=ln K) distribution fY on transport at the MADE site is analyzed. Our principal interest is in non-Gaussian fY characterized by heavier tails than the Gaussian. Both the logconductivity moments and fY itself are inferred, taking advantage of the detailed measurements of Bohling et al. (2012). The resulting logconductivity distribution displays heavier tails than the Gaussian, although the departure from Gaussianity is not significant. The effect of the logconductivity distribution on the breakthrough curve (BTC) is studied through an analytical, physically based model. It is found that the non-Gaussianity of the MADE logconductivity distribution does not strongly affect the BTC. Counterintuitively, assuming heavier tailed distributions for Y, with same variance, leads to BTCs which are more symmetrical than those for the Gaussian fY, with less pronounced preferential flow. Results indicate that the impact of strongly non-Gaussian, heavy tailed distributions on solute transport in heterogeneous porous formations can be significant, especially in the presence of high heterogeneity, resulting in reduced preferential flow and retarded peak arrivals. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. A Non-Gaussian Stock Price Model: Options, Credit and a Multi-Timescale Memory

    NASA Astrophysics Data System (ADS)

    Borland, L.

    We review a recently proposed model of stock prices, based on a statistical feedback model that results in a non-Gaussian distribution of price changes. Applications to option pricing and the pricing of debt are discussed. A generalization to account for feedback effects over multiple timescales is also presented. This model reproduces most of the stylized facts (i.e., statistical anomalies) observed in real financial markets.

  8. Mean First Passage Time and Stochastic Resonance in a Transcriptional Regulatory System with Non-Gaussian Noise

    NASA Astrophysics Data System (ADS)

    Kang, Yan-Mei; Chen, Xi; Lin, Xu-Dong; Tan, Ning

    The mean first passage time (MFPT) in a phenomenological gene transcriptional regulatory model with non-Gaussian noise is analytically investigated based on the singular perturbation technique. The effect of the non-Gaussian noise on the phenomenon of stochastic resonance (SR) is then disclosed based on a new combination of adiabatic elimination and linear response approximation. Compared with the results in the Gaussian noise case, it is found that bounded non-Gaussian noise inhibits the transition between different concentrations of protein, while heavy-tailed non-Gaussian noise accelerates the transition. It is also found that the optimal noise intensity for SR in the heavy-tailed noise case is smaller, while the optimal noise intensity in the bounded noise case is larger. These observations can be explained by the heavy-tailed noise easing random transitions.

  9. Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin

    2014-05-01

    We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to present day.

  10. Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas

    PubMed Central

    Bedford, Tim; Daneshkhah, Alireza

    2015-01-01

    Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing the work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets. PMID:26332240

  11. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    PubMed

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD and spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model possesses a CV result (R² = 0.81) that reflects higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
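
    For orientation, the sketch below fits a plain Gaussian-process regression of PM2.5 on AOD and coordinates and cross-validates it with scikit-learn; this is not the paper's Bayesian hierarchical model, and the data, kernel, and length scales are synthetic assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 300
X = np.column_stack([rng.uniform(0, 1, n),        # AOD
                     rng.uniform(100, 120, n),    # longitude
                     rng.uniform(20, 45, n)])     # latitude
y = 80 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 5, n)   # toy PM2.5 values

kernel = 1.0 * RBF(length_scale=[0.5, 5.0, 5.0]) + WhiteKernel(noise_level=25.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
cv_r2 = cross_val_score(gpr, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", cv_r2.mean())
```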

  12. Landsat 7 thermal-IR image sharpening using an artificial neural network and sensor model

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Schowengerdt, R.A.; ,

    2001-01-01

    The enhanced thematic mapper (plus) (ETM+) instrument on Landsat 7 shares the same basic design as the TM sensors on Landsats 4 and 5, with some significant improvements. In common are six multispectral bands with a 30-m ground-projected instantaneous field of view (GIFOV). However, the thermal-IR (TIR) band now has a 60-m GIFOV, instead of 120-m. Also, a 15-m panchromatic band has been added. The artificial neural network (NN) image sharpening method described here uses data from the higher spatial resolution ETM+ bands to enhance (sharpen) the spatial resolution of the TIR imagery. It is based on an assumed correlation, over multiple scales of resolution, between image edge contrast patterns in the TIR band and several other spectral bands. A multilayer, feedforward NN is trained to approximate TIR data at 60 m, given degraded (from 30-m to 60-m) spatial resolution input from spectral bands 7, 5, and 2. After training, the NN output for full-resolution input generates an approximation of a TIR image at 30-m resolution. Two methods are used to degrade the spatial resolution of the imagery used for NN training, and the corresponding sharpening results are compared. One degradation method uses a published sensor transfer function (TF) for Landsat 5 to simulate sensor coarser resolution imagery from higher resolution imagery. For comparison, the second degradation method is simply Gaussian low-pass filtering and subsampling, wherein the Gaussian filter approximates the full width at half maximum amplitude characteristics of the TF-based spatial filter. Two fixed-size NNs (that is, number of weights and processing elements) were trained separately with the degraded resolution data, and the sharpening results compared. The comparison evaluates the relative influence of the degradation technique employed and whether or not it is desirable to incorporate a sensor TF model. Preliminary results indicate some improvements for the sensor model-based technique. Further evaluation using a higher resolution reference image and strict application of the sensor model to the data is recommended.
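
    The second degradation method can be sketched as Gaussian low-pass filtering followed by 2x subsampling (30 m to 60 m); the filter width below is an assumed value, not matched to the published transfer function.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(band, factor=2, sigma=1.0):
    # Gaussian low-pass filter, then subsample by the given factor.
    smoothed = gaussian_filter(band.astype(float), sigma=sigma)
    return smoothed[::factor, ::factor]

rng = np.random.default_rng(8)
band2 = rng.integers(0, 255, size=(256, 256))   # stand-in for a 30-m spectral band
band2_60m = degrade(band2)                      # simulated 60-m resolution input for NN training
print(band2.shape, "->", band2_60m.shape)
```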

  13. Multi-PON access network using a coarse AWG for smooth migration from TDM to WDM PON

    NASA Astrophysics Data System (ADS)

    Shachaf, Y.; Chang, C.-H.; Kourtessis, P.; Senior, J. M.

    2007-06-01

    An interoperable access network architecture based on a coarse array waveguide grating (AWG) is described, displaying dynamic wavelength assignment to manage the network load across multiple PONs. The multi-PON architecture utilizes coarse Gaussian channels of an AWG to facilitate scalability and a smooth migration path between TDM and WDM PONs. Network simulations of a cross-operational protocol platform confirmed successful routing of individual PON clusters through 7 nm-wide passband windows of the AWG. Furthermore, the polarization-dependent wavelength shift and phase errors of the device proved not to constrain the routing performance. Optical transmission tests at 2.5 Gbit/s for distances up to 20 km are demonstrated.

  14. Inferring network structure in non-normal and mixed discrete-continuous genomic data.

    PubMed

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2018-03-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. © 2017, The International Biometric Society.

  15. Inferring network structure in non-normal and mixed discrete-continuous genomic data

    PubMed Central

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2017-01-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. PMID:28437848

  16. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
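
    A one-dimensional sketch of Box-Cox Gaussianization via maximum likelihood using SciPy; the method above handles multivariate posteriors and generalized transformations, which are not reproduced here, and the skewed sample is synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
sample = rng.lognormal(mean=0.0, sigma=0.6, size=5000)   # skewed, non-Gaussian sample

transformed, lmbda = stats.boxcox(sample)                # maximum-likelihood Box-Cox parameter
print("optimal lambda:", lmbda)
print("normality test p-value after transformation:", stats.normaltest(transformed).pvalue)
```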

  17. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.

  18. Multi-variate joint PDF for non-Gaussianities: exact formulation and generic approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verde, Licia; Jimenez, Raul; Alvarez-Gaume, Luis

    2013-06-01

    We provide an exact expression for the multi-variate joint probability distribution function of non-Gaussian fields primordially arising from local transformations of a Gaussian field. This kind of non-Gaussianity is generated in many models of inflation. We apply our expression to the non-Gaussianity estimation from Cosmic Microwave Background maps and the halo mass function where we obtain analytical expressions. We also provide analytic approximations and their range of validity. For the Cosmic Microwave Background we give a fast way to compute the PDF which is valid up to more than 7σ for f_NL values (both true and sampled) not ruled out by current observations, which consists of expressing the PDF as a combination of bispectrum and trispectrum of the temperature maps. The resulting expression is valid for any kind of non-Gaussianity and is not limited to the local type. The above results may serve as the basis for a fully Bayesian analysis of the non-Gaussianity parameter.
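
    The local transformation that generates this type of non-Gaussianity can be sketched directly as Phi = phi + f_NL (phi^2 - <phi^2>), with phi a Gaussian field; the field below is uncorrelated white noise and the f_NL value is arbitrary, chosen only to make the induced skewness visible.

```python
import numpy as np

rng = np.random.default_rng(10)
f_nl = 30.0
phi = 1e-5 * rng.normal(size=2**16)                # Gaussian field with ~1e-5 amplitude
Phi = phi + f_nl * (phi**2 - np.mean(phi**2))      # local-type non-Gaussian field

skewness = np.mean((Phi - Phi.mean())**3) / np.std(Phi)**3
print("sample skewness induced by the local transformation:", skewness)
```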

  19. Gaussian-Beam Laser-Resonator Program

    NASA Technical Reports Server (NTRS)

    Cross, Patricia L.; Bair, Clayton H.; Barnes, Norman

    1989-01-01

    Gaussian Beam Laser Resonator Program models laser resonators by use of Gaussian-beam-propagation techniques. Used to determine radii of beams as functions of position in laser resonators. Algorithm used in program has three major components. First, ray-transfer matrix for laser resonator must be calculated. Next, initial parameters of beam calculated. Finally, propagation of beam through optical elements computed. Written in Microsoft FORTRAN (Version 4.01).
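
    The three steps listed above can be sketched with the complex beam parameter q and ray-transfer (ABCD) matrices, propagating via q' = (A q + B)/(C q + D); the wavelength, waist, and example resonator elements below are assumptions for illustration, not the program's own routines.

```python
import numpy as np

wavelength = 1.064e-6                       # m (assumed)
w0 = 0.5e-3                                 # initial waist radius in m (assumed)
q = 1j * np.pi * w0**2 / wavelength         # complex beam parameter at the waist

free_space = lambda d: np.array([[1.0, d], [0.0, 1.0]])      # ABCD matrix: propagation
thin_lens  = lambda f: np.array([[1.0, 0.0], [-1.0 / f, 1.0]])  # ABCD matrix: thin lens/mirror

# Example pass: 0.3 m of free space, a 0.5 m focal-length element, 0.3 m back.
M = free_space(0.3) @ thin_lens(0.5) @ free_space(0.3)
A, B, C, D = M.ravel()
q_out = (A * q + B) / (C * q + D)

# Beam radius from the imaginary part of 1/q.
w = np.sqrt(-wavelength / (np.pi * np.imag(1.0 / q_out)))
print(f"beam radius after one pass: {w * 1e3:.3f} mm")
```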

  20. Analysis of Flow and Transport in non-Gaussian Heterogeneous Formations Using a Generalized Sub-Gaussian Model

    NASA Astrophysics Data System (ADS)

    Guadagnini, A.; Riva, M.; Neuman, S. P.

    2016-12-01

    Environmental quantities such as log hydraulic conductivity (or transmissivity), Y(x) = ln K(x), and their spatial (or temporal) increments, ΔY, are known to be generally non-Gaussian. Documented evidence of such behavior includes symmetry of increment distributions at all separation scales (or lags) between incremental values of Y with sharp peaks and heavy tails that decay asymptotically as lag increases. This statistical scaling occurs in porous as well as fractured media characterized by either one or a hierarchy of spatial correlation scales. In hierarchical media one observes a range of additional statistical ΔY scaling phenomena, all of which are captured comprehensibly by a novel generalized sub-Gaussian (GSG) model. In this model Y forms a mixture Y(x) = U(x) G(x) of single- or multi-scale Gaussian processes G having random variances, U being a non-negative subordinator independent of G. Elsewhere we developed ways to generate unconditional and conditional random realizations of isotropic or anisotropic GSG fields which can be embedded in numerical Monte Carlo flow and transport simulations. Here we present and discuss expressions for probability distribution functions of Y and ΔY as well as their lead statistical moments. We then focus on a simple flow setting of mean uniform steady state flow in an unbounded, two-dimensional domain, exploring ways in which non-Gaussian heterogeneity affects stochastic flow and transport descriptions. Our expressions represent (a) lead order autocovariance and cross-covariance functions of hydraulic head, velocity and advective particle displacement as well as (b) analogues of preasymptotic and asymptotic Fickian dispersion coefficients. We compare them with corresponding expressions developed in the literature for Gaussian Y.
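
    A rough sketch of an unconditional GSG realization Y(x) = U(x) G(x); the smoothed-noise Gaussian field G and the lognormal subordinator U below are illustrative choices, not the specific construction used by the authors.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(11)
n = 256
G = gaussian_filter(rng.normal(size=(n, n)), sigma=8.0)   # correlated Gaussian field
G /= G.std()
U = rng.lognormal(mean=0.0, sigma=0.5, size=(n, n))       # non-negative subordinator (assumed form)
Y = U * G                                                  # GSG field

# Increments of Y are heavier-tailed than those of the Gaussian field G.
dY = np.diff(Y, axis=0).ravel()
print("excess kurtosis of Y increments:", np.mean(dY**4) / np.mean(dY**2)**2 - 3)
```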

  1. Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.

    2016-12-01

    We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
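
    A minimal sketch of the grid-search idea, not the authors' implementation: for each trial period a sinusoid is removed by linear least squares, a Gaussian process with an RBF plus white-noise kernel is fitted to the residuals, and the period with the highest GP marginal likelihood is kept. The synthetic epochs and kernel settings are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1000, 60))            # sparse, irregular epochs (days)
true_period = 310.0
y = 1.5 * np.sin(2 * np.pi * t / true_period) + 0.3 * rng.standard_normal(t.size)

def score_period(period):
    # Linear least squares for the sinusoidal basis at this trial period.
    X = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    # GP on the residuals; its marginal likelihood scores the trial period.
    gp = GaussianProcessRegressor(kernel=RBF(200.0) + WhiteKernel(0.1))
    gp.fit(t[:, None], resid)
    return gp.log_marginal_likelihood_value_

grid = np.linspace(100, 500, 201)
best = grid[np.argmax([score_period(p) for p in grid])]
print(f"estimated period ~ {best:.1f} d (true {true_period} d)")
```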

  2. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
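
    The two building blocks can be sketched separately, under the assumption of purely synthetic data: kernel PCA compresses a high-dimensional parameter field into a few latent coordinates, and an unadjusted Langevin update then explores a toy log-posterior defined on those coordinates. This illustrates the ingredients only, not the coupled LMCMC-KPCA methodology of the paper.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
# Pretend these are 500 prior realizations of a 400-dimensional random field.
prior_samples = rng.standard_normal((500, 400)) @ rng.standard_normal((400, 400)) * 0.05

kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1e-3, fit_inverse_transform=True)
Z = kpca.fit_transform(prior_samples)            # low-dimensional feature space

def log_post_grad(z, z_obs):
    # Toy Gaussian log-posterior gradient in feature space (illustrative only).
    return -(z - z_obs)

z = Z[0].copy()
z_obs = Z.mean(axis=0)
eps = 1e-2
for _ in range(1000):                            # unadjusted Langevin dynamics
    z = z + eps * log_post_grad(z, z_obs) + np.sqrt(2 * eps) * rng.standard_normal(z.size)

field_estimate = kpca.inverse_transform(z[None, :])   # map the latent sample back to the field
print(field_estimate.shape)
```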

  3. Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian

    NASA Astrophysics Data System (ADS)

    Teneng, Dean

    2013-09-01

    We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R and select the best models by the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate normal inverse Gaussian parameters (by maximum likelihood; a computational problem) for JPY/CHF, but CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible in the other direction. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
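
    A minimal sketch of the fitting-and-testing step, using SciPy rather than the R packages used in the paper; the simulated series below merely stands in for the daily closing prices.

```python
# Fit a normal inverse Gaussian distribution and check it with a KS test.
import numpy as np
from scipy import stats

# Stand-in for a foreign exchange closing-price series (illustrative only).
prices = stats.norminvgauss.rvs(a=2.0, b=0.3, loc=100.0, scale=5.0,
                                size=1000, random_state=2)

a, b, loc, scale = stats.norminvgauss.fit(prices)
ks_stat, p_value = stats.kstest(prices, "norminvgauss", args=(a, b, loc, scale))
print(f"NIG fit: a={a:.2f}, b={b:.2f}, loc={loc:.2f}, scale={scale:.2f}, KS p={p_value:.3f}")
```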

  4. Model-independent test for scale-dependent non-Gaussianities in the cosmic microwave background.

    PubMed

    Räth, C; Morfill, G E; Rossmanith, G; Banday, A J; Górski, K M

    2009-04-03

    We present a model-independent method to test for scale-dependent non-Gaussianities in combination with scaling indices as test statistics. Therefore, surrogate data sets are generated, in which the power spectrum of the original data is preserved, while the higher order correlations are partly randomized by applying a scale-dependent shuffling procedure to the Fourier phases. We apply this method to the Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background and find signatures for non-Gaussianities on large scales. Further tests are required to elucidate the origin of the detected anomalies.
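
    The basic surrogate construction, before the scale-dependent refinement described above, can be sketched as follows: the Fourier amplitudes (and hence the power spectrum) are preserved while the phases are randomized, which destroys higher-order correlations.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    n = x.size
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, spec.size)
    phases[0] = 0.0                      # keep the mean (zero-frequency term) real
    if n % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist term real
    surrogate_spec = np.abs(spec) * np.exp(1j * phases)
    return np.fft.irfft(surrogate_spec, n=n)

rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(1024))         # any test series
s = phase_randomized_surrogate(x, rng)
# The power spectrum is preserved up to numerical error; higher-order structure is not.
assert np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s)))
```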

  5. Probing the cosmological initial conditions using the CMB

    NASA Astrophysics Data System (ADS)

    Yadav, Amit P. S.

    In the last few decades, advances in observational cosmology have given us a standard model of cosmology. The basic cosmological parameters have been laid out to high precision. Cosmologists have started asking questions about the nature of the cosmological initial conditions. Many ambitious experiments such as the Planck satellite, EBEX, ACT, CAPMAP, QUaD, BICEP, SPIDER, QUIET, and GEM are underway. Experiments like these will provide us with a wealth of information about CMB polarization, CMB lensing, and polarization foregrounds. These experiments will be complemented with great observational campaigns to map the 3D structure in the Universe and new particle physics constraints from the Large Hadron Collider. In my graduate work I have made explicit how observations of the CMB temperature and E-polarization anisotropies can be combined to provide optimal constraints on models of the early universe at the highest energies. I have developed new ways of constraining models of the early universe using CMB temperature and polarization data. Inflation is one of the most promising theories of the early universe. Different inflationary models predict different amounts of non-Gaussian perturbations. Although any non-Gaussianity predicted by the canonical inflation model is very small, there exist models which can generate significant amounts of non-Gaussianities. Hence any characterization of non-Gaussianity of the primordial perturbations constrains the models of inflation. The information in the bispectrum (or higher order moments) is completely independent of the power spectrum constraints on the amplitude of the primordial power spectrum (A), the scalar spectral index of the primordial power spectrum n_s, and the running of the primordial power spectrum. My work has made it possible to extract the bispectrum information from large, high resolution CMB temperature and polarization data. We have demonstrated that the primordial adiabatic perturbations can be reconstructed using CMB temperature and E-polarization information (Yadav and Wandelt 2005). One of the main motivations of reconstructing the primordial perturbations is to study the primordial non-Gaussianities. Since the amplitude of primordial non-Gaussianity is very small, any enhancement in sensitivity to the primordial features is useful because it improves the characterization of the primordial non-Gaussianity. Our reconstruction allows us to be more sensitive to the primordial features, whereas most of the current probes of non-Gaussianity do not specifically select for them. We have also developed a fast cubic (bispectrum) estimator of non-Gaussianity f_NL of the local type, using combined temperature and E-polarization data (Yadav et al. 2007). The estimator is computationally efficient, scaling as O(N^(3/2)) compared to the O(N^(5/2)) scaling of the brute-force bispectrum calculation for sky maps with N pixels. For the Planck satellite, this translates into a speed-up by factors of millions, reducing the required computing time from thousands of years to just hours and thus making f_NL estimation feasible. The speed of our estimator allows us to study its statistical properties using Monte Carlo simulations. Our estimator in its original form was optimal for homogeneous noise. In order to apply our estimator to realistic data, the estimator needed to be able to deal with inhomogeneous noise. We have generalized the fast polarized estimator to deal with inhomogeneous noise. The generalized estimator is also computationally efficient, scaling as O(N^(3/2)). Furthermore, we have studied and characterized our estimators in the presence of realistic noise, finite resolution, incomplete sky coverage, and using non-Gaussian CMB maps (Yadav et al. 2008a). We have also developed a numerical code to generate CMB temperature and polarization non-Gaussian maps starting from a given primordial non-Gaussianity (f_NL) (Liguori et al. 2007). In the process of non-Gaussian CMB map making, the code also generates corresponding non-Gaussian primordial curvature perturbations. We use these curvature perturbations to quantify the quality of the tomographic reconstruction method described in (Yadav and Wandelt 2005). We check whether the tomographic reconstruction method preserves the non-Gaussian features, especially the phase information, in the reconstructed curvature perturbations (Yadav et al. in preparation). Finally, using our estimator we found (Yadav and Wandelt 2008) evidence for primordial non-Gaussianity of the local type (f_NL) in the temperature anisotropy of the Cosmic Microwave Background. Analyzing the bispectrum of the WMAP 3-year data up to l_max = 750 we find 27 < f_NL < 147 (95% CL). This amounts to a rejection of f_NL = 0 at 2.8σ, disfavoring canonical single-field slow-roll inflation. The signal is robust to variations in l_max, frequency, and masks. No known foreground, instrument systematic, or secondary anisotropy explains it. We explore the impact of several analysis choices on the quoted significance and find 2.5σ to be conservative.

  6. Investigating the relation between the geometric properties of river basins and the filtering parameters for regional land hydrology applications using GRACE models

    NASA Astrophysics Data System (ADS)

    Piretzidis, Dimitrios; Sideris, Michael G.

    2016-04-01

    This study investigates the possibilities of local hydrology signal extraction using GRACE data and conventional filtering techniques. The impact of the basin shape has also been studied in order to derive empirical rules for tuning the GRACE filter parameters. GRACE CSR Release 05 monthly solutions were used from April 2002 to August 2015 (161 monthly solutions in total). SLR data were also used to replace the GRACE C2,0 coefficient, and a de-correlation filter with optimal parameters for CSR Release 05 data was applied to attenuate the correlation errors of monthly mass differences. For basins located at higher latitudes, the effect of Glacial Isostatic Adjustment (GIA) was taken into account using the ICE-6G model. The study focuses on three geometric properties, i.e., the area, the convexity and the width in the longitudinal direction, of 100 basins with global distribution. Two experiments have been performed. The first one deals with the determination of the Gaussian smoothing radius that minimizes the gaussianity of GRACE equivalent water height (EWH) over the selected basins. The EWH kurtosis was selected as a metric of gaussianity. The second experiment focuses on the derivation of the Gaussian smoothing radius that minimizes the RMS difference between GRACE data and a hydrology model. The GLDAS 1.0 Noah hydrology model was chosen, which shows good agreement with GRACE data according to previous studies. Early results show that there is an apparent relation between the geometric attributes of the basins examined and the Gaussian radius derived from the two experiments. The kurtosis analysis experiment tends to underestimate the optimal Gaussian radius, which is close to 200-300 km in many cases. Empirical rules for the selection of the Gaussian radius have been also developed for sub-regional scale basins.

  7. Modelling daily water temperature from air temperature for the Missouri River.

    PubMed

    Zhu, Senlin; Nyarko, Emmanuel Karlo; Hadzima-Nyarko, Marijana

    2018-01-01

    The bio-chemical and physical characteristics of a river are directly affected by water temperature, which thereby affects the overall health of aquatic ecosystems. Accurately estimating water temperature is a complex problem. Modelling of river water temperature is usually based on a suitable mathematical model and field measurements of various atmospheric factors. In this article, the air-water temperature relationship of the Missouri River is investigated by developing three different machine learning models (Artificial Neural Network (ANN), Gaussian Process Regression (GPR), and Bootstrap Aggregated Decision Trees (BA-DT)). Standard models (linear regression, non-linear regression, and stochastic models) are also developed and compared to the machine learning models. Among the three standard models, the stochastic model clearly outperforms the standard linear and nonlinear models. All three machine learning models give comparable results and outperform the stochastic model, with GPR having slightly better results for stations No. 2 and 3, while BA-DT has slightly better results for station No. 1. The machine learning models are very effective tools which can be used for the prediction of daily river temperature.
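
    A hedged sketch of the GPR variant on synthetic data, not the authors' exact configuration or the Missouri River records: daily water temperature is regressed on air temperature with a Gaussian process, which also returns predictive uncertainties.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
air = rng.uniform(-10, 35, 300)                          # daily air temperature, deg C (synthetic)
water = 4.0 + 0.7 * air + 2.0 * np.sin(air / 8.0) + rng.normal(0, 1.0, air.size)

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(air[:, None], water)

test_air = np.linspace(-10, 35, 10)[:, None]
mean, std = gpr.predict(test_air, return_std=True)      # prediction with uncertainty
print(np.round(mean, 1), np.round(std, 2))
```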

  8. Gaussian solitary waves and compactons in Fermi–Pasta–Ulam lattices with Hertzian potentials

    PubMed Central

    James, Guillaume; Pelinovsky, Dmitry

    2014-01-01

    We consider a class of fully nonlinear Fermi–Pasta–Ulam (FPU) lattices, consisting of a chain of particles coupled by fractional power nonlinearities of order α>1. This class of systems incorporates a classical Hertzian model describing acoustic wave propagation in chains of touching beads in the absence of precompression. We analyse the propagation of localized waves when α is close to unity. Solutions varying slowly in space and time are searched with an appropriate scaling, and two asymptotic models of the chain of particles are derived consistently. The first one is a logarithmic Korteweg–de Vries (KdV) equation and possesses linearly orbitally stable Gaussian solitary wave solutions. The second model consists of a generalized KdV equation with Hölder-continuous fractional power nonlinearity and admits compacton solutions, i.e. solitary waves with compact support. When α is close to unity, we numerically establish the asymptotically Gaussian shape of exact FPU solitary waves with near-sonic speed and analytically check the pointwise convergence of compactons towards the limiting Gaussian profile. PMID:24808748

  9. Ewald Summation Approach to Potential Models of Aqueous Electrolytes Involving Gaussian Charges and Induced Dipoles: Formal and Simulation Results

    DOE PAGES

    Chialvo, Ariel A.; Vlcek, Lukas

    2014-11-01

    We present a detailed derivation of the complete set of expressions required for the implementation of an Ewald summation approach to handle the long-range electrostatic interactions of polar and ionic model systems involving Gaussian charges and induced dipole moments, with a particular application to the isobaric-isothermal molecular dynamics simulation of our Gaussian Charge Polarizable (GCP) water model and its extension to aqueous electrolyte solutions. The set comprises the individual components of the potential energy, electrostatic potential, electrostatic field and gradient, the electrostatic force and the corresponding virial. Moreover, we show how the derived expressions converge to known point-based electrostatic counterparts when the parameters defining the Gaussian charge and induced-dipole distributions are extrapolated to their limiting point values. Finally, we illustrate the Ewald implementation against the current reaction field approach by isothermal-isobaric molecular dynamics of ambient GCP water, for which we compare the outcomes of the thermodynamic, microstructural, and polarization behavior.

  10. Bayesian seismic inversion based on rock-physics prior modeling for the joint estimation of acoustic impedance, porosity and lithofacies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio

    We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
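
    For the single-Gaussian case, the closed-form posterior follows the standard linear-Gaussian update. The sketch below shows that update with a random stand-in for the linearized convolutional operator, purely to illustrate why the analytical solution is fast; sizes and covariances are assumptions.

```python
# Linear forward model G, Gaussian prior N(mu, C), Gaussian noise N(0, Ce):
# the posterior over the model vector is Gaussian with the closed-form
# mean and covariance computed below.
import numpy as np

rng = np.random.default_rng(5)
n_model, n_data = 50, 40
G = rng.standard_normal((n_data, n_model)) * 0.1     # stand-in linear forward operator
mu = np.zeros(n_model)                               # prior mean
C = np.eye(n_model)                                  # prior covariance
Ce = 0.05 * np.eye(n_data)                           # noise covariance
d = G @ rng.standard_normal(n_model) + rng.multivariate_normal(np.zeros(n_data), Ce)

K = C @ G.T @ np.linalg.inv(G @ C @ G.T + Ce)        # gain matrix
post_mean = mu + K @ (d - G @ mu)
post_cov = C - K @ G @ C
print(post_mean[:5], np.diag(post_cov)[:5])
```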

  11. The Gaussian Laser Angular Distribution in HYDRA's 3D Laser Ray Trace Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sepke, Scott M.

    In this note, the angular distribution of rays launched by the 3D LZR ray trace package is derived for Gaussian beams (npower = 2) with bm model = ±3. Beams with bm model = +3 have a nearly flat distribution, and beams with bm model = -3 have a nearly linear distribution when the spot size is large compared to the wavelength.

  12. Efficient Multi-Source Data Fusion for Decentralized Sensor Networks

    DTIC Science & Technology

    2006-10-01


  13. Visibility Graph Based Time Series Analysis.

    PubMed

    Stephen, Mutua; Gu, Changgui; Yang, Huijie

    2015-01-01

    Network based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both its microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide us rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.
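
    A reference implementation of the natural visibility criterion underlying such mappings, assuming the standard rule rather than the authors' segment-wise network-of-networks construction: two samples are linked if no intermediate sample rises above the straight line joining them.

```python
import numpy as np

def visibility_graph(y, t=None):
    # O(n^2) natural visibility graph: nodes a and b are connected if every
    # intermediate sample c satisfies
    # y[c] < y[b] + (y[a] - y[b]) * (t[b] - t[c]) / (t[b] - t[a]).
    t = np.arange(len(y), dtype=float) if t is None else np.asarray(t, dtype=float)
    edges = []
    for a in range(len(y)):
        for b in range(a + 1, len(y)):
            visible = all(
                y[c] < y[b] + (y[a] - y[b]) * (t[b] - t[c]) / (t[b] - t[a])
                for c in range(a + 1, b)
            )
            if visible:
                edges.append((a, b))
    return edges

series = np.random.default_rng(6).standard_normal(50).cumsum()  # a random-walk test series
print(len(visibility_graph(series)), "edges")
```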

  14. Gaussian process inference for estimating pharmacokinetic parameters of dynamic contrast-enhanced MR images.

    PubMed

    Wang, Shijun; Liu, Peter; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Summers, Ronald M

    2012-01-01

    In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.

  15. Tests for Gaussianity of the MAXIMA-1 cosmic microwave background map.

    PubMed

    Wu, J H; Balbi, A; Borrill, J; Ferreira, P G; Hanany, S; Jaffe, A H; Lee, A T; Rabii, B; Richards, P L; Smoot, G F; Stompor, R; Winant, C D

    2001-12-17

    Gaussianity of the cosmological perturbations is one of the key predictions of standard inflation, but it is violated by other models of structure formation such as cosmic defects. We present the first test of the Gaussianity of the cosmic microwave background (CMB) on subdegree angular scales, where deviations from Gaussianity are most likely to occur. We apply the methods of moments, cumulants, the Kolmogorov test, the χ² test, and Minkowski functionals in eigen, real, Wiener-filtered, and signal-whitened spaces, to the MAXIMA-1 CMB anisotropy data. We find that the data, which probe angular scales between 10 arcmin and 5 deg, are consistent with Gaussianity. These results show consistency with the standard inflation and place constraints on the existence of cosmic defects.

  16. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single environment for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects. Copyright © 2016 Crop Science Society of America.
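
    A minimal sketch of the Gaussian-kernel ingredient on simulated markers: a kernel matrix K = exp(-h d²) is built from marker distances and used in kernel ridge regression as a stand-in for the Bayesian RKHS models. The median-distance bandwidth heuristic is an assumption, and the kernel-averaging (KA) and empirical-Bayes (EB) bandwidth treatments that distinguish the paper's models are not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(7)
X = rng.choice([0.0, 1.0, 2.0], size=(200, 500))      # marker matrix (lines x SNPs), synthetic
y = X[:, :20] @ rng.normal(0, 0.3, 20) + rng.normal(0, 1.0, 200)

D2 = cdist(X, X, "sqeuclidean")
h = 1.0 / np.median(D2[D2 > 0])                       # heuristic bandwidth (assumption)
K = np.exp(-h * D2)                                   # Gaussian kernel matrix

model = KernelRidge(alpha=1.0, kernel="precomputed").fit(K, y)
print("in-sample correlation:", np.corrcoef(model.predict(K), y)[0, 1].round(3))
```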

  17. Gaussian States Minimize the Output Entropy of One-Mode Quantum Gaussian Channels

    NASA Astrophysics Data System (ADS)

    De Palma, Giacomo; Trevisan, Dario; Giovannetti, Vittorio

    2017-04-01

    We prove the long-standing conjecture stating that Gaussian thermal input states minimize the output von Neumann entropy of one-mode phase-covariant quantum Gaussian channels among all the input states with a given entropy. Phase-covariant quantum Gaussian channels model the attenuation and the noise that affect any electromagnetic signal in the quantum regime. Our result is crucial to prove the converse theorems for both the triple trade-off region and the capacity region for broadcast communication of the Gaussian quantum-limited amplifier. Our result extends to the quantum regime the entropy power inequality that plays a key role in classical information theory. Our proof exploits a completely new technique based on the recent determination of the p →q norms of the quantum-limited amplifier [De Palma et al., arXiv:1610.09967]. This technique can be applied to any quantum channel.

  18. Gaussian States Minimize the Output Entropy of One-Mode Quantum Gaussian Channels.

    PubMed

    De Palma, Giacomo; Trevisan, Dario; Giovannetti, Vittorio

    2017-04-21

    We prove the long-standing conjecture stating that Gaussian thermal input states minimize the output von Neumann entropy of one-mode phase-covariant quantum Gaussian channels among all the input states with a given entropy. Phase-covariant quantum Gaussian channels model the attenuation and the noise that affect any electromagnetic signal in the quantum regime. Our result is crucial to prove the converse theorems for both the triple trade-off region and the capacity region for broadcast communication of the Gaussian quantum-limited amplifier. Our result extends to the quantum regime the entropy power inequality that plays a key role in classical information theory. Our proof exploits a completely new technique based on the recent determination of the p→q norms of the quantum-limited amplifier [De Palma et al., arXiv:1610.09967]. This technique can be applied to any quantum channel.

  19. Flexible link functions in nonparametric binary regression with Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Lin, Lizhen; Dey, Dipak K

    2016-09-01

    In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction in the future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. © 2015, The International Biometric Society.

  20. Flexible Link Functions in Nonparametric Binary Regression with Gaussian Process Priors

    PubMed Central

    Li, Dan; Lin, Lizhen; Dey, Dipak K.

    2015-01-01

    Summary In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction in the future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. PMID:26686333

  1. Loop corrections to primordial non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Boran, Sibel; Kahya, E. O.

    2018-02-01

    We discuss quantum gravitational loop effects to observable quantities such as curvature power spectrum and primordial non-Gaussianity of cosmic microwave background (CMB) radiation. We first review the previously shown case where one gets a time dependence for zeta-zeta correlator due to loop corrections. Then we investigate the effect of loop corrections to primordial non-Gaussianity of CMB. We conclude that, even with a single scalar inflaton, one might get a huge value for non-Gaussianity which would exceed the observed value by at least 30 orders of magnitude. Finally we discuss the consequences of this result for scalar driven inflationary models.

  2. Spatial Copula Model for Imputing Traffic Flow Data from Remote Microwave Sensors.

    PubMed

    Ma, Xiaolei; Luan, Sen; Du, Bowen; Yu, Bin

    2017-09-21

    Issues of missing data have become increasingly serious with the rapid increase in the usage of traffic sensors. Analyses of the Beijing ring expressway have shown that up to 50% of microwave sensors present missing values. The imputation of missing traffic data is an urgent problem, although a precise solution is not easily achieved given the significant number of missing portions. In this study, copula-based models are proposed for the spatial interpolation of traffic flow from remote traffic microwave sensors. Most existing interpolation methods rely only on covariance functions to depict spatial correlation and are unsuitable for coping with anomalies because of their Gaussian assumption. Copula theory overcomes this issue and provides a connection between the correlation function and the marginal distribution function of traffic flow. To validate copula-based models, a comparison with three kriging methods is conducted. Results indicate that copula-based models outperform kriging methods, especially on roads with irregular traffic patterns. Copula-based models demonstrate significant potential to impute missing data in large-scale transportation networks.

  3. Improved Classification of Lung Cancer Using Radial Basis Function Neural Network with Affine Transforms of Voss Representation.

    PubMed

    Adetiba, Emmanuel; Olugbara, Oludayo O

    2015-01-01

    Lung cancer is one of the diseases responsible for a large number of cancer related death cases worldwide. The recommended standard for screening and early detection of lung cancer is the low dose computed tomography. However, many patients diagnosed die within one year, which makes it essential to find alternative approaches for screening and early detection of lung cancer. We present computational methods that can be implemented in a functional multi-genomic system for classification, screening and early detection of lung cancer victims. Samples of top ten biomarker genes previously reported to have the highest frequency of lung cancer mutations and sequences of normal biomarker genes were respectively collected from the COSMIC and NCBI databases to validate the computational methods. Experiments were performed based on the combinations of Z-curve and tetrahedron affine transforms, Histogram of Oriented Gradient (HOG), Multilayer perceptron and Gaussian Radial Basis Function (RBF) neural networks to obtain an appropriate combination of computational methods to achieve improved classification of lung cancer biomarker genes. Results show that a combination of affine transforms of Voss representation, HOG genomic features and Gaussian RBF neural network perceptibly improves classification accuracy, specificity and sensitivity of lung cancer biomarker genes as well as achieving low mean square error.
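
    A generic Gaussian RBF network classifier can be sketched as k-means centres, a Gaussian hidden layer and a linear read-out. This is a stand-in on synthetic features, not the authors' pipeline or their Voss/HOG genomic descriptors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=20, random_state=8)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=8)

# Hidden-unit centres from k-means; a single shared width from the mean distance.
centres = KMeans(n_clusters=15, n_init=10, random_state=8).fit(X_tr).cluster_centers_
sigma = np.mean(np.linalg.norm(X_tr[:, None, :] - centres[None, :, :], axis=2))

def rbf_features(X):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian hidden-unit activations

clf = LogisticRegression(max_iter=1000).fit(rbf_features(X_tr), y_tr)
print("test accuracy:", clf.score(rbf_features(X_te), y_te))
```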

  4. The Common Patterns of Nature

    PubMed Central

    Frank, Steven A.

    2010-01-01

    We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern, and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. PMID:19538344

  5. Accounting for Non-Gaussian Sources of Spatial Correlation in Parametric Functional Magnetic Resonance Imaging Paradigms I: Revisiting Cluster-Based Inferences.

    PubMed

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Sathian, K

    2018-02-01

    In a recent study, Eklund et al. employed resting-state functional magnetic resonance imaging data as a surrogate for null functional magnetic resonance imaging (fMRI) datasets and posited that cluster-wise family-wise error (FWE) rate-corrected inferences made by using parametric statistical methods in fMRI studies over the past two decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; this was principally because the spatial autocorrelation functions (sACF) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggested otherwise. Here, we show that accounting for non-Gaussian signal components such as those arising from resting-state neural activity as well as physiological responses and motion artifacts in the null fMRI datasets yields first- and second-level general linear model analysis residuals with nearly uniform and Gaussian sACF. Further comparison with nonparametric permutation tests indicates that cluster-based FWE corrected inferences made with Gaussian spatial noise approximations are valid.

  6. Blended particle filters for large-dimensional chaotic dynamical systems

    PubMed Central

    Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.

    2014-01-01

    A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886
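
    The benchmark forward model is easy to reproduce; the sketch below integrates the 40-dimensional Lorenz 96 system with the standard chaotic forcing F = 8 (only the dynamics, not the blended filter itself).

```python
import numpy as np

def lorenz96_step(x, dt=0.01, F=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with periodic indices.
    def rhs(x):
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    # Classical 4th-order Runge-Kutta step.
    k1 = rhs(x)
    k2 = rhs(x + 0.5 * dt * k1)
    k3 = rhs(x + 0.5 * dt * k2)
    k4 = rhs(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

x = 8.0 + 0.01 * np.random.default_rng(12).standard_normal(40)   # 40-dimensional state
for _ in range(2000):
    x = lorenz96_step(x)
print(np.round(x[:5], 2))
```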

  7. A 2D Gaussian-Beam-Based Method for Modeling the Dichroic Surfaces of Quasi-Optical Systems

    NASA Astrophysics Data System (ADS)

    Elis, Kevin; Chabory, Alexandre; Sokoloff, Jérôme; Bolioli, Sylvain

    2016-08-01

    In this article, we propose an approach in the spectral domain to treat the interaction of a field with a dichroic surface in two dimensions. For a Gaussian beam illumination of the surface, the reflected and transmitted fields are approximated by one reflected and one transmitted Gaussian beam. Their characteristics are determined by means of a matching in the spectral domain, which requires a second-order approximation of the dichroic surface response when excited by plane waves. This approximation is of the same order as the one used in Gaussian beam shooting algorithms to model curved interfaces associated with lenses, reflectors, etc. The method uses general analytical formulations for the GBs that depend either on a paraxial or a far-field approximation. Numerical experiments are conducted to test the efficiency of the method in terms of accuracy and computation time. They include a parametric study and a case for which the illumination is provided by a horn antenna. For the latter, the incident field is first expressed as a sum of Gaussian beams by means of Gabor frames.

  8. Time-domain least-squares migration using the Gaussian beam summation method

    NASA Astrophysics Data System (ADS)

    Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo

    2018-04-01

    With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.

  9. Time-domain least-squares migration using the Gaussian beam summation method

    NASA Astrophysics Data System (ADS)

    Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo

    2018-07-01

    With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modelling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modelling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a pre-conditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.

  10. Direct Importance Estimation with Gaussian Mixture Models

    NASA Astrophysics Data System (ADS)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
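
    For contrast, a naive two-step baseline, which is explicitly not GM-KLIEP, fits separate Gaussian mixtures to the two samples and divides the densities; KLIEP-type methods instead model the importance ratio directly to avoid amplifying the errors of this division. The data below are synthetic assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
x_num = rng.normal(0.5, 1.0, (1000, 2))              # samples from the numerator density
x_den = rng.normal(0.0, 1.2, (1000, 2))              # samples from the denominator density

# Fit one GMM per sample, then take the ratio of the two estimated densities.
gmm_num = GaussianMixture(n_components=3, random_state=0).fit(x_num)
gmm_den = GaussianMixture(n_components=3, random_state=0).fit(x_den)

x_test = rng.normal(0.0, 1.0, (5, 2))
importance = np.exp(gmm_num.score_samples(x_test) - gmm_den.score_samples(x_test))
print(np.round(importance, 3))
```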

  11. Changes in flexibility upon binding: Application of the self-consistent pair contact probability method to protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Canino, Lawrence S.; Shen, Tongye; McCammon, J. Andrew

    2002-12-01

    We extend the self-consistent pair contact probability method to the evaluation of the partition function for a protein complex at thermodynamic equilibrium. Specifically, we adapt the method for multichain models and introduce a parametrization for amino acid-specific pairwise interactions. This method is similar to the Gaussian network model but allows for the adjusting of the strengths of native state contacts. The method is first validated on a high resolution x-ray crystal structure of bovine Pancreatic Phospholipase A2 by comparing calculated B-factors with reported values. We then examine binding-induced changes in flexibility in protein-protein complexes, comparing computed results with those obtained from x-ray crystal structures and molecular dynamics simulations. In particular, we focus on the mouse acetylcholinesterase:fasciculin II and the human α-thrombin:thrombomodulin complexes.
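
    The closely related Gaussian network model calculation that such methods are compared against can be sketched as follows: build the Kirchhoff (contact) matrix from C-alpha coordinates with a distance cutoff and read predicted B-factors off the diagonal of its pseudo-inverse, up to the prefactor 8π²kT/(3γ). The coordinates below are random stand-ins, and this is the plain GNM, not the self-consistent pair contact probability method itself.

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0):
    # Kirchhoff matrix: -1 for residue pairs within the cutoff, degree on the diagonal.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    kirchhoff = -(d < cutoff).astype(float)
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))
    gamma_inv = np.linalg.pinv(kirchhoff)             # pseudo-inverse drops the zero mode
    return np.diag(gamma_inv)                         # proportional to predicted B-factors

# Random-walk "backbone" standing in for C-alpha coordinates (angstroms).
coords = np.cumsum(np.random.default_rng(9).normal(0, 2.0, (100, 3)), axis=0)
b = gnm_bfactors(coords)
print("relative B-factors of first 5 residues:", np.round(b[:5] / b.mean(), 2))
```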

  12. A simple mathematical model of gradual Darwinian evolution: emergence of a Gaussian trait distribution in adaptation along a fitness gradient.

    PubMed

    Biktashev, Vadim N

    2014-04-01

    We consider a simple mathematical model of gradual Darwinian evolution in continuous time and continuous trait space, due to intraspecific competition for common resource in an asexually reproducing population in constant environment, while far from evolutionary stable equilibrium. The model admits exact analytical solution. In particular, Gaussian distribution of the trait emerges from generic initial conditions.

  13. Transition to Chaos in Random Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Kadmon, Jonathan; Sompolinsky, Haim

    2015-10-01

    Firing patterns in the central nervous system often exhibit strong temporal irregularity and considerable heterogeneity in time-averaged response properties. Previous studies suggested that these properties are the outcome of the intrinsic chaotic dynamics of the neural circuits. Indeed, simplified rate-based neuronal networks with synaptic connections drawn from Gaussian distribution and sigmoidal nonlinearity are known to exhibit chaotic dynamics when the synaptic gain (i.e., connection variance) is sufficiently large. In the limit of an infinitely large network, there is a sharp transition from a fixed point to chaos, as the synaptic gain reaches a critical value. Near the onset, chaotic fluctuations are slow, analogous to the ubiquitous, slow irregular fluctuations observed in the firing rates of many cortical circuits. However, the existence of a transition from a fixed point to chaos in neuronal circuit models with more realistic architectures and firing dynamics has not been established. In this work, we investigate rate-based dynamics of neuronal circuits composed of several subpopulations with randomly diluted connections. Nonzero connections are either positive for excitatory neurons or negative for inhibitory ones, while single neuron output is strictly positive with output rates rising as a power law above threshold, in line with known constraints in many biological systems. Using dynamic mean field theory, we find the phase diagram depicting the regimes of stable fixed-point, unstable-dynamic, and chaotic-rate fluctuations. We focus on the latter and characterize the properties of systems near this transition. We show that dilute excitatory-inhibitory architectures exhibit the same onset to chaos as the single population with Gaussian connectivity. In these architectures, the large mean excitatory and inhibitory inputs dynamically balance each other, amplifying the effect of the residual fluctuations. Importantly, the existence of a transition to chaos and its critical properties depend on the shape of the single-neuron nonlinear input-output transfer function, near firing threshold. In particular, for nonlinear transfer functions with a sharp rise near threshold, the transition to chaos disappears in the limit of a large network; instead, the system exhibits chaotic fluctuations even for small synaptic gain. Finally, we investigate transition to chaos in network models with spiking dynamics. We show that when synaptic time constants are slow relative to the mean inverse firing rates, the network undergoes a transition from fast spiking fluctuations with constant rates to a state where the firing rates exhibit chaotic fluctuations, similar to the transition predicted by rate-based dynamics. Systems with finite synaptic time constants and firing rates exhibit a smooth transition from a regime dominated by stationary firing rates to a regime of slow rate fluctuations. This smooth crossover obeys scaling properties, similar to crossover phenomena in statistical mechanics. The theoretical results are supported by computer simulations of several neuronal architectures and dynamics. Consequences for cortical circuit dynamics are discussed. These results advance our understanding of the properties of intrinsic dynamics in realistic neuronal networks and their functional consequences.
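
    The classical single-population limit referred to above can be reproduced in a few lines: rate dynamics dx/dt = -x + J·tanh(x) with Gaussian couplings of variance g²/N decay to a fixed point for g < 1 and settle into chaotic rate fluctuations for g > 1. The diluted excitatory-inhibitory, power-law transfer-function networks studied in the paper are not reproduced here.

```python
import numpy as np

def simulate(g, N=500, T=200.0, dt=0.1, seed=10):
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), (N, N))      # Gaussian couplings, variance g^2/N
    x = rng.normal(0.0, 1.0, N)
    traj = []
    for _ in range(int(T / dt)):
        x = x + dt * (-x + J @ np.tanh(x))           # Euler step of the rate dynamics
        traj.append(x[0])
    return np.array(traj)

for g in (0.5, 1.5):
    tail = simulate(g)[-500:]
    print(f"g = {g}: std of late-time activity = {tail.std():.3f}")
```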

  14. Discrete-time neural network for fast solving large linear L1 estimation problems and its application to image restoration.

    PubMed

    Xia, Youshen; Sun, Changyin; Zheng, Wei Xing

    2012-05-01

    There is growing interest in solving linear L1 estimation problems for sparsity of the solution and robustness against non-Gaussian noise. This paper proposes a discrete-time neural network which can calculate large linear L1 estimation problems fast. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. Then, the proposed neural network is efficiently applied to image restoration. Numerical results show that the proposed neural network is not only efficient in solving degenerate problems resulting from the nonunique solutions of the linear L1 estimation problems but also needs much less computational time than the related algorithms in solving both linear L1 estimation and image restoration problems.

  15. Spatio-temporal modelling of wind speed variations and extremes in the Caribbean and the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Rychlik, Igor; Mao, Wengang

    2018-02-01

    The wind speed variability in the North Atlantic has been successfully modelled using a spatio-temporal transformed Gaussian field. However, this type of model does not correctly describe the extreme wind speeds attributed to tropical storms and hurricanes. In this study, the transformed Gaussian model is further developed to include the occurrence of severe storms. In this new model, random components are added to the transformed Gaussian field to model rare events with extreme wind speeds. The resulting random field is locally stationary and homogeneous. The localized dependence structure is described by time- and space-dependent parameters. The parameters have a natural physical interpretation. To exemplify its application, the model is fitted to the ECMWF ERA-Interim reanalysis data set. The model is applied to compute long-term wind speed distributions and return values, e.g., 100- or 1000-year extreme wind speeds, and to simulate random wind speed time series at a fixed location or spatio-temporal wind fields around that location.

  16. Separation of the low-frequency atmospheric variability into non-Gaussian multidimensional sources by Independent Subspace Analysis

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Ribeiro, Andreia

    2016-04-01

    An efficient nonlinear method for the statistical source separation of space-distributed, non-Gaussian data is proposed. The method relies on the so-called Independent Subspace Analysis (ISA) and is tested on a long time series of the stream-function field of an atmospheric quasi-geostrophic three-level model (QG3) simulating the winter monthly variability of the Northern Hemisphere. ISA generalizes Independent Component Analysis (ICA) by looking for multidimensional, minimally dependent, uncorrelated and non-Gaussian statistical sources among the rotated projections or subspaces of the multivariate probability distribution of the leading principal components of the working field, whereas ICA is restricted to scalar sources. The rationale of the technique relies upon projection pursuit, looking for data projections of enhanced interest. In order to accomplish the decomposition, we maximize measures of the sources' non-Gaussianity through contrast functions given by squares of nonlinear, cross-cumulant-based correlations involving the variables spanning the sources. Sources are therefore sought that match certain nonlinear data structures. The maximized contrast function is built in such a way that it provides the minimization of the mean square of the residuals of certain nonlinear regressions. The resulting residuals, followed by spherization, provide a new set of nonlinear variable changes that are at once uncorrelated, quasi-independent and quasi-Gaussian, representing an advantage with respect to the Independent Components (scalar sources) obtained by ICA, where the non-Gaussianity is concentrated in the non-Gaussian scalar sources. The new scalar sources obtained by the above process encompass the attractor's curvature, thus providing improved nonlinear indices of the model's low-frequency atmospheric variability, which is useful since large-scale circulation indices are nonlinearly correlated. The non-Gaussian tested sources (dyads and triads, of two and three dimensions respectively) lead to a dense data concentration along certain curves or surfaces, near which the clusters' centroids of the joint probability density function tend to be located. That favors a better splitting of the QG3 atmospheric model's weather regimes: the positive and negative phases of the Arctic Oscillation and the positive and negative phases of the North Atlantic Oscillation. The leading non-Gaussian dyad of the model is associated with a positive correlation between 1) the squared anomaly of the extratropical jet stream and 2) the meridional jet-stream meandering. Triadic sources coming from maximized third-order cross cumulants between pairwise uncorrelated components reveal situations of triadic wave resonance and nonlinear triadic teleconnections, which are only possible thanks to joint non-Gaussianity. This kind of triadic synergy is accounted for by an information-theoretic measure: the Interaction Information. The dominant triad of the model occurs between anomalies of 1) the North Pole pressure, 2) the jet-stream intensity at the eastern North American boundary and 3) the jet-stream intensity at the eastern Asian boundary. Publication supported by project FCT UID/GEO/50019/2013 - Instituto Dom Luiz.

  17. Assessment of the non-Gaussianity and non-linearity levels of simulated sEMG signals on stationary segments.

    PubMed

    Messaoudi, Noureddine; Bekka, Raïs El'hadi; Ravier, Philippe; Harba, Rachid

    2017-02-01

    The purpose of this paper was to evaluate the effects of the longitudinal single differential (LSD), the longitudinal double differential (LDD) and the normal double differential (NDD) spatial filters, the electrode shape, and the inter-electrode distance (IED) on the non-Gaussianity and non-linearity levels of simulated surface EMG (sEMG) signals when the maximum voluntary contraction (MVC) varied from 10% to 100% in steps of 10%. The effects of recruitment range thresholds (RR), the firing rate (FR) strategy and the peak firing rate (PFR) of motor units were also considered. A cylindrical multilayer model of the volume conductor and a model of motor unit (MU) recruitment and firing rate were used to simulate sEMG signals in a pool of 120 MUs for 5 s. Firstly, the stationarity of the sEMG signals was tested by the runs, the reverse arrangements (RA) and the modified reverse arrangements (MRA) tests. Then the non-Gaussianity was characterised with bicoherence and kurtosis, and the non-linearity level was evaluated with a linearity test. The kurtosis analysis showed that the sEMG signals detected by the LSD filter were the most Gaussian and those detected by the NDD filter were the least Gaussian. In addition, the sEMG signals detected by the LSD filter were the most linear. For a given filter, the sEMG signals detected using rectangular electrodes were more Gaussian and more linear than those detected with circular electrodes. Moreover, the sEMG signals are less non-Gaussian and more linear with the reverse onion-skin firing rate strategy than with the onion-skin strategy. The levels of sEMG signal Gaussianity and linearity increased with the increase of the IED, RR and PFR. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Quantification of Greenhouse Gas Emission Rates from strong Point Sources by Space-borne IPDA Lidar Measurements: Results from a Sensitivity Analysis Study

    NASA Astrophysics Data System (ADS)

    Ehret, G.; Kiemle, C.; Rapp, M.

    2017-12-01

    The practical implementation of the Paris Agreement (COP21) would profit greatly from an independent, reliable and global measurement system of greenhouse gas emissions, in particular of CO2, in order to complement and cross-check national efforts. Most fossil-fuel CO2 emissions emanate from large sources such as cities and power plants. These emissions increase the local CO2 abundance in the atmosphere by 1-10 parts per million (ppm), a signal significantly larger than the variability from natural sources and sinks over the local source domain. Despite their magnitude, these signals are only sparsely sampled by the ground-based network, which calls for satellite measurements. However, none of the existing and forthcoming passive satellite instruments operating in the NIR spectral domain can measure CO2 emissions at night, in low-sunlight conditions, or in high-latitude regions in winter. The resulting sparse coverage of passive spectrometers is a serious limitation, particularly for the Northern Hemisphere, since these regions exhibit substantial emissions during the winter as well as at other times of the year. In contrast, CO2 measurements by an Integrated Path Differential Absorption (IPDA) Lidar are largely immune to these limitations, and initial results from airborne applications look promising. In this study, we discuss the implications for a space-borne IPDA Lidar system. A Gaussian plume model will be used to simulate the CO2 distribution of large power plants downstream of the source. The space-borne measurements are simulated by applying a simple forward model based on a Gaussian error distribution. Besides the sampling frequency, the sampling geometry (e.g. measurement distance to the emitting source) and the error of the measurement itself strongly affect the flux inversion performance. We will discuss the results by incorporating Gaussian plume and mass-budget approaches to quantify the emission rates.
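
    For concreteness, the sketch below implements a textbook ground-reflecting Gaussian plume forward model of the kind used in such sensitivity studies, evaluated along a hypothetical crosswind ground track. The emission rate, wind speed, stack height and the linear dispersion-width growth rates are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def gaussian_plume(x, y, z, Q, u, H, a=0.08, b=0.06):
    """Ground-reflecting Gaussian plume concentration (kg/m^3) at downwind
    distance x (m), crosswind offset y (m) and height z (m).
    Q: emission rate (kg/s), u: wind speed (m/s), H: effective source height (m).
    a, b: illustrative linear dispersion growth rates, sigma = coeff * x
    (placeholders, not calibrated Pasquill-Gifford values)."""
    sigma_y = a * x
    sigma_z = b * x
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # image source = ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical ground track crossing the plume 5 km downwind of a source
# emitting 1000 kg CO2 per second in a 5 m/s wind.
y_track = np.linspace(-2000, 2000, 81)
c = gaussian_plume(x=5000.0, y=y_track, z=0.0, Q=1000.0, u=5.0, H=200.0)
print(f"peak near-surface enhancement: {c.max():.2e} kg/m^3")
```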

  19. Searching for gravitational-waves from compact binary coalescences while dealing with challenges of real data and simulated waveforms

    NASA Astrophysics Data System (ADS)

    Dayanga, Waduthanthree Thilina

    Albert Einstein's general theory of relativity predicts the existence of gravitational waves (GWs). Direct detection of GWs will provide an enormous amount of new information about physics, astronomy and cosmology. Scientists around the world are currently working towards the first direct detection of GWs. The global network of ground-based GW detectors is currently preparing for its first advanced-detector science runs. In this thesis we focus on the detection of GWs from compact binary coalescence (CBC) systems. The ability to accurately model CBC GW waveforms makes them the most promising source for the first direct detection of GWs. In this thesis we address several challenges associated with detecting CBC signals buried in ground-based GW detector data, for past and future searches. The data analysis techniques we employ to detect GW signals assume that detector noise is Gaussian and stationary. In reality, however, detector data is neither Gaussian nor stationary. To estimate the performance loss due to these features, we compare the efficiencies of detecting CBC signals in simulated Gaussian and real data. Additionally, we demonstrate the effectiveness of multi-detector signal-based consistency tests such as the null stream. Despite the non-Gaussian and non-stationary features of real detector data, effective data-quality studies and signal-based vetoes allow us to approach the performance achieved on Gaussian and stationary data. As we move towards the advanced-detector era, it is important to be prepared for future CBC searches. In this thesis we investigate the performance of non-spinning binary black hole (BBH) searches in simulated Gaussian data using advanced-detector noise curves predicted for 2015-2016. In the same study, we analyze the GW detection probabilities of the latest pN-NR hybrid waveforms submitted to the second Numerical Injection Analysis (NINJA-2) project. The main motivation for this study is to understand the ability of the template waveforms currently available in the LIGO Algorithm Library (LAL), such as the EOBNR waveform family, to detect realistic BBH signals. The results demonstrate that, although detection efficiency is little affected, parameter estimation can be challenging in future searches. Many authors have suggested and demonstrated that coherent searches are the most sensitive way of detecting GW signals with a network of multiple detectors. Owing to their computational expense, coherent search methods were not employed in recent LIGO and Virgo science data searches. In this thesis we demonstrate how to employ coherent searches for current CBC searches in a computationally feasible way. To this end, we thoroughly investigate many aspects of coherent searches using an all-sky blind hierarchical coherent pipeline. Most importantly, we present insights extracted by running the hierarchical coherent pipeline on LIGO and Virgo data, including the challenges that need to be addressed before moving to all-sky, all-time, fully coherent searches. Estimating the GW search background plays a critical role in data analysis. We are still exploring the best way to estimate the background of a CBC GW search when one or more signals are present in the data. In this thesis we address this to a certain extent through the NINJA-2 mock data challenge. However, due to limitations of the methods and of computing power, for triple-coincident GW candidates we only consider the loudest two interferometers for background estimation purposes.
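
    The null-stream consistency idea mentioned above can be illustrated with a toy example: for two co-located, co-aligned detectors, a common GW signal adds coherently in the summed stream but cancels in the difference stream, so excess power in the sum without matching excess power in the null stream is evidence for a common (astrophysical) origin. The sketch below is only a conceptual illustration with white Gaussian noise and an arbitrary toy waveform; it is not the hierarchical coherent pipeline analyzed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def excess_power(x):
    """Mean squared amplitude: ~1 for unit-variance white Gaussian noise,
    larger when a signal is present."""
    return np.sum(x**2) / len(x)

# Two co-located, co-aligned detectors: the same toy signal h(t) appears in both
# data streams, while the noise realizations are independent.
n = 4096
t = np.arange(n) / 1024.0
h = 3.0 * np.sin(2 * np.pi * 100 * t**1.3) * np.exp(-((t - 2.0) / 0.3) ** 2)  # toy chirp-like burst
d1 = rng.normal(size=n) + h
d2 = rng.normal(size=n) + h

coherent = (d1 + d2) / np.sqrt(2)   # signal adds coherently
null = (d1 - d2) / np.sqrt(2)       # common signal cancels; only noise remains

print(f"excess power, coherent stream: {excess_power(coherent):.2f}")
print(f"excess power, null stream:     {excess_power(null):.2f}")
```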

  20. Identifying stochastic oscillations in single-cell live imaging time series using Gaussian processes

    PubMed Central

    Manning, Cerys; Rattray, Magnus

    2017-01-01

    Multiple biological processes are driven by oscillatory gene expression at different time scales. Pulsatile dynamics are thought to be widespread, and single-cell live imaging of gene expression has led to a surge of dynamic, possibly oscillatory, data for different gene networks. However, the regulation of gene expression at the level of an individual cell involves reactions between finite numbers of molecules, and this can result in inherent randomness in expression dynamics, which blurs the boundary between aperiodic fluctuations and noisy oscillators. This poses a new challenge to the experimentalist because neither intuition nor pre-existing methods work well for identifying oscillatory activity in noisy biological time series. Thus, there is an acute need for an objective statistical method for classifying whether an experimentally derived noisy time series is periodic. Here, we present a new data analysis method that combines mechanistic stochastic modelling with the powerful methods of non-parametric regression with Gaussian processes. Our method can distinguish oscillatory gene expression from random fluctuations of non-oscillatory expression in single-cell time series, despite peak-to-peak variability in the period and amplitude of single-cell oscillations. We show that our method outperforms the Lomb-Scargle periodogram in correctly classifying cells as oscillatory or non-oscillatory, both in data simulated from a simple genetic oscillator model and in experimental data. Analysis of bioluminescent live-cell imaging shows a significantly greater number of oscillatory cells when luciferase is driven by a Hes1 promoter (10/19), which has previously been reported to oscillate, than by the constitutive MoMuLV 5’ LTR (MMLV) promoter (0/25). The method can be applied to data from any gene network, both to quantify the proportion of oscillating cells within a population and to measure the period and quality of oscillations. It is publicly available as a MATLAB package. PMID:28493880
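
    A minimal sketch of the model-comparison idea behind such a method: score the same single-cell trace under an aperiodic covariance (Ornstein-Uhlenbeck) and a quasi-periodic covariance (exponentially damped cosine) and compare the Gaussian-process log marginal likelihoods. The toy trace, the fixed (unoptimized) hyperparameters and the kernels below are assumptions for illustration only; the published method goes further, fitting hyperparameters and calibrating the detection statistic carefully.

```python
import numpy as np

def ou_kernel(t, decay, sigma2):
    """Ornstein-Uhlenbeck covariance: aperiodic, exponentially decaying correlations."""
    dt = np.abs(t[:, None] - t[None, :])
    return sigma2 * np.exp(-decay * dt)

def ouosc_kernel(t, decay, omega, sigma2):
    """Quasi-periodic covariance: exponential decay times a cosine at angular frequency omega."""
    dt = np.abs(t[:, None] - t[None, :])
    return sigma2 * np.exp(-decay * dt) * np.cos(omega * dt)

def gp_log_marginal_likelihood(y, K, noise_var):
    """Log marginal likelihood of zero-mean data y under GP covariance K plus white noise."""
    n = len(y)
    L = np.linalg.cholesky(K + noise_var * np.eye(n))
    v = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ v - np.sum(np.log(np.diag(L))) - 0.5 * n * np.log(2 * np.pi)

# Toy single-cell trace: a damped oscillation (period 2.5 h) plus measurement noise.
rng = np.random.default_rng(3)
t = np.linspace(0, 25, 120)   # hours
y = np.sin(2 * np.pi * t / 2.5) * np.exp(-0.02 * t) + 0.3 * rng.normal(size=len(t))
y = y - y.mean()

ll_aperiodic = gp_log_marginal_likelihood(y, ou_kernel(t, decay=0.5, sigma2=1.0), 0.09)
ll_oscillatory = gp_log_marginal_likelihood(
    y, ouosc_kernel(t, decay=0.5, omega=2 * np.pi / 2.5, sigma2=1.0), 0.09)
print(f"log-likelihood ratio (oscillatory - aperiodic): {ll_oscillatory - ll_aperiodic:.1f}")
```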
