Sample records for hidden variable model

  1. Comment on 'All quantum observables in a hidden-variable model must commute simultaneously'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagata, Koji

    Malley argued [Phys. Rev. A 69, 022118 (2004)] that all quantum observables in a hidden-variable model for quantum events must commute simultaneously. In this comment, we show that Malley's theorem is indeed valid under the hidden-variable theoretical assumptions introduced by Kochen and Specker. However, we give an example in which a local hidden-variable (LHV) model for quantum events preserves the noncommutativity of quantum observables. It turns out that Malley's theorem does not, in general, apply to LHV models for quantum events.

  2. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    PubMed

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous-time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
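
    The envelope construction rests on a standard ingredient that is easy to sketch: a kernel estimate of the transition density of the observed series, compared pointwise with a parametric candidate. The AR(1) process, Gaussian kernel, bandwidth, and evaluation point below are illustrative choices for a minimal sketch, not the paper's actual procedure.

```python
import math, random

random.seed(0)

# Simulate an observable AR(1) chain: x_{t+1} = rho * x_t + eps, eps ~ N(0, 1).
rho, n = 0.5, 20000
x = [0.0]
for _ in range(n):
    x.append(rho * x[-1] + random.gauss(0.0, 1.0))

def gauss_kernel(u, h):
    return math.exp(-0.5 * (u / h) ** 2) / (h * math.sqrt(2.0 * math.pi))

def transition_density_hat(xs, x0, y0, h=0.25):
    """Kernel estimate of p(y0 | x0) built from consecutive pairs (x_t, x_{t+1})."""
    num = den = 0.0
    for xt, xt1 in zip(xs[:-1], xs[1:]):
        w = gauss_kernel(xt - x0, h)
        num += w * gauss_kernel(xt1 - y0, h)
        den += w
    return num / den

# Parametric candidate: the true N(rho * x0, 1) transition density at (x0, y0).
x0, y0 = 0.0, 0.0
p_hat = transition_density_hat(x, x0, y0)
p_par = math.exp(-0.5 * (y0 - rho * x0) ** 2) / math.sqrt(2.0 * math.pi)
print(p_hat, p_par)
```

    A real specification test would compare the two over a grid and calibrate the envelope width; here the point is only that the nonparametric and parametric values can be compared directly.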

  3. Nonparametric model validations for hidden Markov models with applications in financial econometrics

    PubMed Central

    Zhao, Zhibiao

    2011-01-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous-time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601

  4. p-adic stochastic hidden variable model

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrew

    1998-03-01

    We propose a stochastic hidden-variable model in which the hidden variables have a p-adic probability distribution ρ(λ), while the conditional probability distributions P(U,λ), U=A,A',B,B', are ordinary probabilities defined on the basis of Kolmogorov's measure-theoretic axiomatics. The frequency definition of p-adic probability is quite similar to the ordinary frequency definition of probability: p-adic frequency probability is defined as the limit of the relative frequencies νn, but in the p-adic metric. We study a model with p-adic stochastics at the level of the hidden-variable description; responses of macroapparatuses, of course, have to be described by ordinary stochastics. Thus our model describes a mixture of the p-adic stochastics of the microworld and the ordinary stochastics of macroapparatuses. In this model the probabilities for physical observables are ordinary probabilities, and at the same time Bell's inequality is violated.

  5. State Space Model with hidden variables for reconstruction of gene regulatory networks.

    PubMed

    Wu, Xi; Li, Peng; Wang, Nan; Gong, Ping; Perkins, Edward J; Deng, Youping; Zhang, Chaoyang

    2011-01-01

    State Space Model (SSM) is a relatively new approach to inferring gene regulatory networks. It requires less computational time than Dynamic Bayesian Networks (DBN). There are two types of variables in the linear SSM, observed variables and hidden variables. SSM uses an iterative method, namely Expectation-Maximization, to infer regulatory relationships from microarray datasets. The hidden variables cannot be directly observed from experiments. How to determine the number of hidden variables has a significant impact on the accuracy of network inference. In this study, we used SSM to infer gene regulatory networks (GRNs) from synthetic time series datasets, investigated Bayesian Information Criterion (BIC) and Principal Component Analysis (PCA) approaches to determining the number of hidden variables in SSM, and evaluated the performance of SSM in comparison with DBN. True GRNs and synthetic gene expression datasets were generated using GeneNetWeaver. Both DBN and linear SSM were used to infer GRNs from the synthetic datasets. The inferred networks were compared with the true networks. Our results show that inference precision varied with the number of hidden variables. For some regulatory networks, the inference precision of DBN was higher, but SSM performed better in other cases. Although the overall performance of the two approaches is comparable, SSM is much faster and capable of inferring much larger networks than DBN. This study provides useful information in handling the hidden variables and improving the inference precision.
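
    The PCA-based choice of the number of hidden variables can be sketched minimally: keep the smallest number of principal components that explains a fixed fraction of the variance. The synthetic data, the 95% threshold, and the function name below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "expression" matrix driven by 2 hidden regulators plus small noise.
n_genes, n_times, k_true = 30, 50, 2
hidden = rng.normal(size=(k_true, n_times))    # hidden regulator activities
loadings = rng.normal(size=(n_genes, k_true))  # gene-specific responses
data = loadings @ hidden + 0.05 * rng.normal(size=(n_genes, n_times))

def n_hidden_by_pca(m, var_threshold=0.95):
    """Smallest number of principal components explaining var_threshold of the variance."""
    centered = m - m.mean(axis=1, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(frac, var_threshold) + 1)

k_est = n_hidden_by_pca(data)
print(k_est)
```

    With a clear low-rank signal the variance-fraction rule recovers the planted number of regulators; on real microarray data the cutoff is a tuning choice, which is why the study also compares against BIC.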

  6. Clustering coefficients of protein-protein interaction networks

    NASA Astrophysics Data System (ADS)

    Miller, Gerald A.; Shi, Yi Y.; Qian, Hong; Bomsztyk, Karol

    2007-05-01

    The properties of certain networks are determined by hidden variables that are not explicitly measured. The conditional probability (propagator) that a vertex with a given value of the hidden variable is connected to k other vertices determines all measurable properties. We study hidden variable models and find an averaging approximation that enables us to obtain a general analytical result for the propagator. Analytic results showing the validity of the approximation are obtained. We apply hidden variable models to protein-protein interaction networks (PINs) in which the hidden variable is the association free energy, determined by distributions that depend on biochemistry and evolution. We compute degree distributions as well as clustering coefficients of several PINs of different species; good agreement with measured data is obtained. For the human interactome two different parameter sets give the same degree distributions, but the computed clustering coefficients differ by a factor of about 2. This shows that degree distributions are not sufficient to determine the properties of PINs.
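
    A hidden-variable network model of this kind is easy to simulate: draw a hidden variable per vertex and connect each pair with a probability that depends on the two hidden values. The exponential fitness distribution and logistic connection kernel below are illustrative stand-ins for the paper's free-energy-based choices.

```python
import math, random

random.seed(1)

n = 300
# Hidden variable per vertex (illustrative: exponentially distributed "fitness").
h = [random.expovariate(1.0) for _ in range(n)]

def p_connect(hi, hj, mu=4.0):
    # Illustrative kernel: connection probability grows with hi + hj.
    return 1.0 / (1.0 + math.exp(mu - hi - hj))

adj = [set() for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p_connect(h[i], h[j]):
            adj[i].add(j)
            adj[j].add(i)

def clustering(v):
    """Local clustering coefficient: fraction of neighbour pairs that are linked."""
    nb = list(adj[v])
    k = len(nb)
    if k < 2:
        return 0.0
    links = sum(1 for a in range(k) for b in range(a + 1, k) if nb[b] in adj[nb[a]])
    return 2.0 * links / (k * (k - 1))

mean_deg = sum(len(a) for a in adj) / n
mean_clust = sum(clustering(v) for v in range(n)) / n
print(mean_deg, mean_clust)
```

    The paper's point can be seen in this setting too: different kernel parameters can give similar degree distributions yet different clustering coefficients.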

  7. All quantum observables in a hidden-variable model must commute simultaneously

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malley, James D.

    Under a standard set of assumptions for a hidden-variable model for quantum events we show that all observables must commute simultaneously. This seems to be an ultimate statement about the inapplicability of the usual hidden-variable model for quantum events. And, despite Bell's complaint that a key condition of von Neumann's was quite unrealistic, we show that these conditions, under which von Neumann produced the first no-go proof, are entirely equivalent to those introduced by Bell and by Kochen and Specker. As these conditions are also equivalent to those under which the Bell-Clauser-Horne inequalities are derived, we see that the experimental violations of the inequalities demonstrate only that quantum observables do not commute.

  8. Context-invariant quasi hidden variable (qHV) modelling of all joint von Neumann measurements for an arbitrary Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loubenets, Elena R.

    We prove the existence, for each Hilbert space, of two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations, but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dimH≥3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper also point to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].

  9. Bell's theorem and the problem of decidability between the views of Einstein and Bohr.

    PubMed

    Hess, K; Philipp, W

    2001-12-04

    Einstein, Podolsky, and Rosen (EPR) designed a gedanken experiment suggesting that a theory more complete than quantum mechanics is possible. The EPR design was later realized in various forms, with experimental results close to the quantum mechanical prediction. The experimental results by themselves have no bearing on the EPR claim that quantum mechanics must be incomplete, nor on the existence of hidden parameters. However, the well-known inequalities of Bell are based on the assumption that local hidden parameters exist and, when combined with conflicting experimental results, do appear to prove that local hidden parameters cannot exist. This fact leaves only instantaneous actions at a distance (called "spooky" by Einstein) to explain the experiments. The Bell inequalities are based on a mathematical model of the EPR experiments. They have no experimental confirmation, because they contradict the results of all EPR experiments. In addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions; for instance, he assumes that the hidden parameters are governed by a single probability measure independent of the analyzer settings. We argue that Bell's mathematical model excludes a large set of local hidden variables and a large variety of probability densities. Our set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does permit derivation of the quantum result and is consistent with all known experiments.

  10. Communication cost of simulating Bell correlations.

    PubMed

    Toner, B F; Bacon, D

    2003-10-31

    What classical resources are required to simulate quantum correlations? For the simplest and most important case of local projective measurements on an entangled Bell pair state, we show that exact simulation is possible using local hidden variables augmented by just one bit of classical communication. Certain quantum teleportation experiments, which teleport a single qubit, therefore admit a local hidden variables model.
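
    The one-bit protocol of Toner and Bacon can be sketched directly. The sign conventions below follow a common presentation of the protocol, under the assumption that the target is the singlet correlation E(a,b) = -a·b; the Monte Carlo check and the chosen angle are illustrative.

```python
import math, random

random.seed(0)

def rand_unit():
    """Uniform random unit vector on the sphere (normalized Gaussian triple)."""
    v = [random.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(c * c for c in v))
    return [c / norm for c in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sgn(x):
    return 1 if x >= 0 else -1

def run_round(a_hat, b_hat):
    lam1, lam2 = rand_unit(), rand_unit()              # shared local hidden variables
    a_out = -sgn(dot(a_hat, lam1))                     # Alice's +/-1 outcome
    c = sgn(dot(a_hat, lam1)) * sgn(dot(a_hat, lam2))  # the single communicated bit
    b_out = sgn(dot(b_hat, [l1 + c * l2 for l1, l2 in zip(lam1, lam2)]))
    return a_out * b_out

theta = math.pi / 3                                    # 60 degrees between settings
a_hat = [0.0, 0.0, 1.0]
b_hat = [math.sin(theta), 0.0, math.cos(theta)]

n = 100000
corr = sum(run_round(a_hat, b_hat) for _ in range(n)) / n
print(corr, -math.cos(theta))
```

    The sampled correlation should track -cos(theta) for any pair of settings, which is exactly the singlet statistic that no communication-free local hidden variable model can reproduce.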

  11. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    PubMed

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
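
    The particle-filtering view of inference described above can be sketched for a two-state hidden Markov model with Poisson spike counts, checked against the exact forward filter. All rates, transition probabilities, and sizes below are illustrative, not the paper's network model.

```python
import math, random

random.seed(0)

# Two hidden world states with Poisson emission rates (illustrative values).
rates = [2.0, 8.0]
trans = [[0.9, 0.1], [0.2, 0.8]]   # trans[i][j] = P(z_{t+1} = j | z_t = i)
p0 = [0.5, 0.5]

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def sample_poisson(lam):
    # Knuth's method; adequate for small rates.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Simulate a hidden state path and the observed spike counts.
T = 20
z = [0]
for _ in range(T - 1):
    z.append(0 if random.random() < trans[z[-1]][0] else 1)
obs = [sample_poisson(rates[s]) for s in z]

def exact_filter(ys):
    """Forward algorithm: P(z_t = 1 | y_1..y_t) for each t."""
    out, b = [], p0[:]
    for t, y in enumerate(ys):
        if t > 0:
            b = [sum(b[i] * trans[i][j] for i in range(2)) for j in range(2)]
        b = [b[j] * poisson_pmf(y, rates[j]) for j in range(2)]
        s = sum(b)
        b = [v / s for v in b]
        out.append(b[1])
    return out

def particle_filter(ys, n=5000):
    """Bootstrap particle filter: each particle is one sampled hidden state."""
    parts = [0 if random.random() < p0[0] else 1 for _ in range(n)]
    out = []
    for t, y in enumerate(ys):
        if t > 0:
            parts = [0 if random.random() < trans[s][0] else 1 for s in parts]
        w = [poisson_pmf(y, rates[s]) for s in parts]
        tot = sum(w)
        out.append(sum(wi for wi, s in zip(w, parts) if s == 1) / tot)
        parts = random.choices(parts, weights=w, k=n)  # multinomial resampling
    return out

exact = exact_filter(obs)
approx = particle_filter(obs)
max_err = max(abs(a - b) for a, b in zip(exact, approx))
print(max_err)
```

    In the paper's picture the resampled particles play the role of spikes: the population of sampled states approximates the same posterior that the forward recursion computes exactly.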

  12. Local hidden-variable model for a recent experimental test of quantum nonlocality and local contextuality

    NASA Astrophysics Data System (ADS)

    La Cour, Brian R.

    2017-07-01

    An experiment has recently been performed to demonstrate quantum nonlocality by establishing contextuality in one of a pair of photons encoding four qubits; however, low detection efficiencies and use of the fair-sampling hypothesis leave these results open to possible criticism due to the detection loophole. In this Letter, a physically motivated local hidden-variable model is considered as a possible mechanism for explaining the experimentally observed results. The model, though not intrinsically contextual, acquires this quality upon post-selection of coincident detections.

  13. Measurement problem and local hidden variables with entangled photons

    NASA Astrophysics Data System (ADS)

    Muchowski, Eugen

    2017-12-01

    It is shown that there is no remote action with polarization measurements of photons in singlet state. A model is presented introducing a hidden parameter which determines the polarizer output. This model is able to explain the polarization measurement results with entangled photons. It is not ruled out by Bell's Theorem.

  14. Infinite hidden conditional random fields for human behavior analysis.

    PubMed

    Bousmalis, Konstantinos; Zafeiriou, Stefanos; Morency, Louis-Philippe; Pantic, Maja

    2013-01-01

    Hidden conditional random fields (HCRFs) are discriminative latent variable models that have been shown to successfully learn the hidden structure of a given classification problem (provided an appropriate validation of the number of hidden states). In this brief, we present the infinite HCRF (iHCRF), which is a nonparametric model based on hierarchical Dirichlet processes and is capable of automatically learning the optimal number of hidden states for a classification task. We show how we learn the model hyperparameters with an effective Markov-chain Monte Carlo sampling technique, and we explain the process that underlies our iHCRF model with the Restaurant Franchise Rating Agencies analogy. We show that the iHCRF is able to converge to a correct number of represented hidden states, and outperforms the best finite HCRFs--chosen via cross-validation--for the difficult tasks of recognizing instances of agreement, disagreement, and pain. Moreover, the iHCRF manages to achieve this performance in significantly less total training, validation, and testing time.

  15. Sharp Contradiction for Local-Hidden-State Model in Quantum Steering.

    PubMed

    Chen, Jing-Ling; Su, Hong-Yi; Xu, Zhen-Peng; Pati, Arun Kumar

    2016-08-26

    In quantum theory, no-go theorems are important as they rule out the existence of a particular physical model under consideration. For instance, the Greenberger-Horne-Zeilinger (GHZ) theorem serves as a no-go theorem for the nonexistence of local hidden variable models by presenting a full contradiction for the multipartite GHZ states. However, the elegant GHZ argument for Bell's nonlocality does not go through for the bipartite Einstein-Podolsky-Rosen (EPR) state. Recent studies of quantum nonlocality have shown that the more precise description of EPR's original scenario is "steering", i.e., the nonexistence of local hidden state models. Here, we present a simple GHZ-like contradiction for any bipartite pure entangled state, thus proving a no-go theorem for the nonexistence of local hidden state models in the EPR paradox. This also indicates that the very simple steering paradox presented here is indeed the closest form to the original spirit of the EPR paradox.

  16. Experimental demonstration of nonbilocal quantum correlations.

    PubMed

    Saunders, Dylan J; Bennet, Adam J; Branciard, Cyril; Pryde, Geoff J

    2017-04-01

    Quantum mechanics admits correlations that cannot be explained by local realistic models. The most studied models are the standard local hidden variable models, which satisfy the well-known Bell inequalities. To date, most works have focused on bipartite entangled systems. We consider correlations between three parties connected via two independent entangled states. We investigate the new type of so-called "bilocal" models, which correspondingly involve two independent hidden variables. These models describe scenarios that naturally arise in quantum networks, where several independent entanglement sources are used. Using photonic qubits, we build such a linear three-node quantum network and demonstrate nonbilocal correlations by violating a Bell-like inequality tailored for bilocal models. Furthermore, we show that the demonstration of nonbilocality is more noise-tolerant than that of standard Bell nonlocality in our three-party quantum network.

  17. Variable complexity online sequential extreme learning machine, with applications to streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lima, Aranildo R.; Hsieh, William W.; Cannon, Alex J.

    2017-12-01

    In situations where new data arrive continually, online learning algorithms are computationally much less costly than batch learning ones in maintaining the model up-to-date. The extreme learning machine (ELM), a single hidden layer artificial neural network with random weights in the hidden layer, is solved by linear least squares, and has an online learning version, the online sequential ELM (OSELM). As more data become available during online learning, information on the longer time scale becomes available, so ideally the model complexity should be allowed to change, but the number of hidden nodes (HN) remains fixed in OSELM. A variable complexity VC-OSELM algorithm is proposed to dynamically add or remove HN in the OSELM, allowing the model complexity to vary automatically as online learning proceeds. The performance of VC-OSELM was compared with OSELM in daily streamflow predictions at two hydrological stations in British Columbia, Canada, with VC-OSELM significantly outperforming OSELM in mean absolute error, root mean squared error and Nash-Sutcliffe efficiency at both stations.
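
    The batch ELM at the core of OSELM fits in a few lines: a fixed random hidden layer followed by a linear least-squares solve for the output weights. The network size, activation, and toy regression task below are illustrative; OSELM's recursive online update is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(x, y, n_hidden=50):
    """Extreme learning machine: random fixed hidden layer, least-squares output layer."""
    w = rng.normal(size=(1, n_hidden))            # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases (never trained)
    h = np.tanh(x[:, None] @ w + b)               # hidden-layer activations
    beta, *_ = np.linalg.lstsq(h, y, rcond=None)  # linear least-squares solve
    return w, b, beta

def elm_predict(x, w, b, beta):
    return np.tanh(x[:, None] @ w + b) @ beta

x_train = rng.uniform(-3.0, 3.0, 200)
y_train = np.sin(x_train)
w, b, beta = elm_fit(x_train, y_train)

x_test = np.linspace(-3.0, 3.0, 101)
rmse = float(np.sqrt(np.mean((elm_predict(x_test, w, b, beta) - np.sin(x_test)) ** 2)))
print(rmse)
```

    Because only the output layer is solved for, varying the model complexity, as VC-OSELM does, amounts to adding or removing columns of the hidden-activation matrix.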

  18. Hidden Markov models incorporating fuzzy measures and integrals for protein sequence identification and alignment.

    PubMed

    Bidargaddi, Niranjan P; Chetty, Madhu; Kamruzzaman, Joarder

    2008-06-01

    Profile hidden Markov models (HMMs) based on classical HMMs have been widely applied for protein sequence identification. The formulation of the forward and backward variables in profile HMMs is made under the statistical independence assumption of probability theory. We propose a fuzzy profile HMM to overcome the limitations of that assumption and to achieve an improved alignment for protein sequences belonging to a given family. The proposed model fuzzifies the forward and backward variables by incorporating Sugeno fuzzy measures and Choquet integrals, thus further extending the generalized HMM. Based on the fuzzified forward and backward variables, we propose a fuzzy Baum-Welch parameter estimation algorithm for profiles. The strong correlations and the sequence preference involved in protein structures make this fuzzy-architecture-based model a suitable candidate for building profiles of a given family, since fuzzy sets can handle uncertainties better than classical methods.
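
    The classical forward variable that the fuzzy model generalizes can be sketched on a small HMM, with the recursion checked against an exact sum over all state paths. The parameters below are illustrative, and the Choquet-integral fuzzification itself is not shown.

```python
import itertools

# Tiny 2-state HMM over 2 symbols; all parameters are illustrative.
pi = [0.6, 0.4]                  # initial state distribution
A = [[0.7, 0.3], [0.4, 0.6]]     # transition probabilities A[i][j]
B = [[0.9, 0.1], [0.2, 0.8]]     # emission probabilities B[state][symbol]
obs = [0, 1, 1, 0]

def forward_likelihood(ys):
    """Classical forward recursion: alpha_t(j) = sum_i alpha_{t-1}(i) A[i][j] * B[j][y_t]."""
    alpha = [pi[j] * B[j][ys[0]] for j in range(2)]
    for y in ys[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(2)) * B[j][y] for j in range(2)]
    return sum(alpha)

def brute_force_likelihood(ys):
    """Sum over every state path -- exponential cost, but exact for checking."""
    total = 0.0
    for path in itertools.product(range(2), repeat=len(ys)):
        p = pi[path[0]] * B[path[0]][ys[0]]
        for t in range(1, len(ys)):
            p *= A[path[t - 1]][path[t]] * B[path[t]][ys[t]]
        total += p
    return total

print(forward_likelihood(obs), brute_force_likelihood(obs))
```

    The fuzzy profile HMM replaces the weighted sums in this recursion with Choquet integrals over Sugeno measures, relaxing the additivity that the classical forward variable assumes.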

  19. EMG-based speech recognition using hidden markov models with global control variables.

    PubMed

    Lee, Ki-Seung

    2008-03-01

    It is well known that a strong relationship exists between human voices and the movement of articulatory facial muscles. In this paper, we utilize this knowledge to implement an automatic speech recognition scheme which uses solely surface electromyogram (EMG) signals. The sequence of EMG signals for each word is modelled by a hidden Markov model (HMM) framework. The main objective of the work involves building a model for state observation density when multichannel observation sequences are given. The proposed model reflects the dependencies between each of the EMG signals, which are described by introducing a global control variable. We also develop an efficient model training method, based on a maximum likelihood criterion. In a preliminary study, 60 isolated words were used as recognition variables. EMG signals were acquired from three articulatory facial muscles. The findings indicate that such a system may have the capacity to recognize speech signals with an accuracy of up to 87.07%, which is superior to the independent probabilistic model.

  20. Sharp Contradiction for Local-Hidden-State Model in Quantum Steering

    PubMed Central

    Chen, Jing-Ling; Su, Hong-Yi; Xu, Zhen-Peng; Pati, Arun Kumar

    2016-01-01

    In quantum theory, no-go theorems are important as they rule out the existence of a particular physical model under consideration. For instance, the Greenberger-Horne-Zeilinger (GHZ) theorem serves as a no-go theorem for the nonexistence of local hidden variable models by presenting a full contradiction for the multipartite GHZ states. However, the elegant GHZ argument for Bell’s nonlocality does not go through for the bipartite Einstein-Podolsky-Rosen (EPR) state. Recent studies of quantum nonlocality have shown that the more precise description of EPR’s original scenario is “steering”, i.e., the nonexistence of local hidden state models. Here, we present a simple GHZ-like contradiction for any bipartite pure entangled state, thus proving a no-go theorem for the nonexistence of local hidden state models in the EPR paradox. This also indicates that the very simple steering paradox presented here is indeed the closest form to the original spirit of the EPR paradox. PMID:27562658

  21. Sharp Contradiction for Local-Hidden-State Model in Quantum Steering

    NASA Astrophysics Data System (ADS)

    Chen, Jing-Ling; Su, Hong-Yi; Xu, Zhen-Peng; Pati, Arun Kumar

    2016-08-01

    In quantum theory, no-go theorems are important as they rule out the existence of a particular physical model under consideration. For instance, the Greenberger-Horne-Zeilinger (GHZ) theorem serves as a no-go theorem for the nonexistence of local hidden variable models by presenting a full contradiction for the multipartite GHZ states. However, the elegant GHZ argument for Bell’s nonlocality does not go through for the bipartite Einstein-Podolsky-Rosen (EPR) state. Recent studies of quantum nonlocality have shown that the more precise description of EPR’s original scenario is “steering”, i.e., the nonexistence of local hidden state models. Here, we present a simple GHZ-like contradiction for any bipartite pure entangled state, thus proving a no-go theorem for the nonexistence of local hidden state models in the EPR paradox. This also indicates that the very simple steering paradox presented here is indeed the closest form to the original spirit of the EPR paradox.

  22. Violation of Leggett-type inequalities in the spin-orbit degrees of freedom of a single photon

    NASA Astrophysics Data System (ADS)

    Cardano, Filippo; Karimi, Ebrahim; Marrucci, Lorenzo; de Lisio, Corrado; Santamato, Enrico

    2013-09-01

    We report the experimental violation of Leggett-type inequalities for a hybrid entangled state of spin and orbital angular momentum of a single photon. These inequalities give a physical criterion to verify the possible validity of a class of hidden-variable theories, originally named “crypto nonlocal,” that are not excluded by the violation of Bell-type inequalities. In our case, the tested theories assume the existence of hidden variables associated with independent degrees of freedom of the same particle, while admitting the possibility of an influence between the two measurements, i.e., the so-called contextuality of observables. We observe a violation of the Leggett inequalities for a range of experimental inputs, with a maximum violation of seven standard deviations, thus ruling out this class of hidden-variable models with a high level of confidence.

  23. Studies of regional-scale climate variability and change. Hidden Markov models and coupled ocean-atmosphere modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghil, M.; Kravtsov, S.; Robertson, A. W.

    2008-10-14

    This project was a continuation of previous work under DOE CCPP funding, in which we had developed a twin approach of probabilistic network (PN) models (sometimes called dynamic Bayesian networks) and intermediate-complexity coupled ocean-atmosphere models (ICMs) to identify the predictable modes of climate variability and to investigate their impacts on the regional scale. We had developed a family of PNs (similar to hidden Markov models) to simulate historical records of daily rainfall, and used them to downscale GCM seasonal predictions. Using an idealized atmospheric model, we had established a novel mechanism through which ocean-induced sea-surface temperature (SST) anomalies might influence large-scale atmospheric circulation patterns on interannual and longer time scales; we had found similar patterns in a hybrid coupled ocean-atmosphere-sea-ice model. The goal of this continuation project was to build on these ICM results and PN model development to address prediction of rainfall and temperature statistics at the local scale, associated with global climate variability and change, and to investigate the impact of the latter on coupled ocean-atmosphere modes. Our main results from the grant consist of extensive further development of the hidden Markov models for rainfall simulation and downscaling, together with the development of associated software; new intermediate coupled models; a new methodology of inverse modeling for linking ICMs with observations and GCM results; and observational studies of decadal and multi-decadal natural climate variability, informed by ICM results.

  24. Central Compact Objects in Kes 79 and RCW 103 as `Hidden' Magnetars with Crustal Activity

    NASA Astrophysics Data System (ADS)

    Popov, S. B.; Kaurov, A. A.; Kaminker, A. D.

    2015-05-01

    We propose that observations of `hidden' magnetars in central compact objects can be used to probe the crustal activity of neutron stars with large internal magnetic fields. Estimates based on calculations by Perna & Pons, Pons & Rea and Kaminker et al. suggest that central compact objects which are proposed to be `hidden' magnetars must demonstrate flux variations on a time scale of months to years. However, the most prominent candidate for a `hidden' magnetar - CXO J1852.6+0040 in Kes 79 - shows constant (within error bars) flux. This can be interpreted as lower variable crustal activity than in typical magnetars. Alternatively, CXO J1852.6+0040 may have been in a high state of variable activity during the whole period of observations. We then consider the source 1E161348-5055 in RCW 103 as another candidate. Employing simple 2D modelling, we argue that the properties of this source can be explained by crustal activity of the magnetar type. Thus, this object may be added to the three known candidates for `hidden' magnetars among central compact objects discussed in the literature.

  25. Multivariate generalized hidden Markov regression models with random covariates: Physical exercise in an elderly population.

    PubMed

    Punzo, Antonio; Ingrassia, Salvatore; Maruotti, Antonello

    2018-04-22

    A time-varying latent variable model is proposed to jointly analyze multivariate mixed-support longitudinal data. The proposal can be viewed as an extension of hidden Markov regression models with fixed covariates (HMRMFCs), which are the state of the art for modelling longitudinal data, with a special focus on the underlying clustering structure. HMRMFCs are inadequate for applications in which a clustering structure can be identified in the distribution of the covariates, as their clustering is independent of the covariates' distribution. Here, hidden Markov regression models with random covariates are introduced by explicitly specifying state-specific distributions for the covariates, with the aim of improving the recovery of the clusters in the data relative to the fixed-covariates paradigm. The class of hidden Markov regression models with random covariates is defined with a focus on the exponential family, in a generalized linear model framework. Model identifiability conditions are sketched, an expectation-maximization algorithm is outlined for parameter estimation, and various implementation and operational issues are discussed. Properties of the estimators of the regression coefficients, as well as of the hidden path parameters, are evaluated through simulation experiments and compared with those of HMRMFCs. The method is applied to physical activity data. Copyright © 2018 John Wiley & Sons, Ltd.

  26. Hidden Variable Theories and Quantum Nonlocality

    ERIC Educational Resources Information Center

    Boozer, A. D.

    2009-01-01

    We clarify the meaning of Bell's theorem and its implications for the construction of hidden variable theories by considering an example system consisting of two entangled spin-1/2 particles. Using this example, we present a simplified version of Bell's theorem and describe several hidden variable theories that agree with the predictions of…
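
    For the two entangled spin-1/2 particles, the gap between quantum mechanics and any local hidden variable theory reduces to arithmetic on the singlet correlation E(a,b) = -cos(a-b): the CHSH combination of four correlations reaches 2√2, while every local hidden variable theory is bounded by 2. The angles below are the standard optimal settings; this is a textbook illustration, not the article's specific construction.

```python
import math

def E(a, b):
    # Quantum singlet-state correlation for analyzer angles a and b (radians).
    return -math.cos(a - b)

# Standard optimal CHSH setting angles.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)
```

    Any assignment of predetermined ±1 outcomes makes the same combination at most 2 in absolute value, which is the content of the CHSH form of Bell's theorem.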

  27. Experimental demonstration of nonbilocal quantum correlations

    PubMed Central

    Saunders, Dylan J.; Bennet, Adam J.; Branciard, Cyril; Pryde, Geoff J.

    2017-01-01

    Quantum mechanics admits correlations that cannot be explained by local realistic models. The most studied models are the standard local hidden variable models, which satisfy the well-known Bell inequalities. To date, most works have focused on bipartite entangled systems. We consider correlations between three parties connected via two independent entangled states. We investigate the new type of so-called “bilocal” models, which correspondingly involve two independent hidden variables. These models describe scenarios that naturally arise in quantum networks, where several independent entanglement sources are used. Using photonic qubits, we build such a linear three-node quantum network and demonstrate nonbilocal correlations by violating a Bell-like inequality tailored for bilocal models. Furthermore, we show that the demonstration of nonbilocality is more noise-tolerant than that of standard Bell nonlocality in our three-party quantum network. PMID:28508045

  28. Nonlinear dynamical modes of climate variability: from curves to manifolds

    NASA Astrophysics Data System (ADS)

    Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander

    2016-04-01

    The necessity of efficient dimensionality reduction methods capturing the dynamical properties of a system from observed data is evident. A recent study shows that nonlinear dynamical mode (NDM) expansion is able to solve this problem and provide adequate phase variables in climate data analysis [1]. A single NDM is a logical extension of a linear spatio-temporal structure (like an empirical orthogonal function pattern): it is constructed as a nonlinear transformation of a hidden scalar time series to the space of observed variables, i.e. a projection of the observed dataset onto a nonlinear curve. Both the hidden time series and the parameters of the curve are learned simultaneously using a Bayesian approach. The only prior information about the hidden signal is the assumption of its smoothness. The optimal nonlinearity degree and smoothness are found using the Bayesian evidence technique. In this work we extend the approach further and look for vector hidden signals instead of scalar ones, with the same smoothness restriction. As a result we resolve multidimensional manifolds instead of sums of curves. The dimension of the hidden manifold is also optimized using Bayesian evidence. The efficiency of the extension is demonstrated on model examples. Results of application to climate data are demonstrated and discussed. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510

  9. Final Technical Report for Collaborative Research: Regional climate-change projections through next-generation empirical and dynamical models, DE-FG02-07ER64429

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smyth, Padhraic

    2013-07-22

This is the final report for a DOE-funded research project describing the outcome of research on non-homogeneous hidden Markov models (NHMMs) and coupled ocean-atmosphere (O-A) intermediate-complexity models (ICMs) to identify the potentially predictable modes of climate variability, and to investigate their impacts on the regional scale. The main results consist of extensive development of the hidden Markov models for rainfall simulation and downscaling, specifically within the non-stationary climate change context, together with the development of parallelized software; application of NHMMs to downscaling of rainfall projections over India; identification and analysis of decadal climate signals in data and models; and studies of climate variability in terms of the dynamics of atmospheric flow regimes.

  10. Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.

    PubMed

    Zhang, Yue; Berhane, Kiros

    2016-01-01

    We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family for taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children Health Study (CHS) to jointly model questionnaire based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.

  11. Pitowsky's Kolmogorovian Models and Super-determinism.

    PubMed

    Kellner, Jakob

    2017-01-01

In an attempt to demonstrate that local hidden variables are mathematically possible, Pitowsky constructed "spin-[Formula: see text] functions" and later "Kolmogorovian models", which employ a nonstandard notion of probability. We describe Pitowsky's analysis and argue (with the benefit of hindsight) that his notion of hidden variables is in fact just super-determinism (and accordingly physically not relevant). Pitowsky's first construction uses the Continuum Hypothesis. Farah and Magidor took this as an indication that at some stage physics might give arguments for or against adopting specific new axioms of set theory. We would rather argue that it supports the opposing view, i.e., the widespread intuition "if you need a non-measurable function, it is physically irrelevant".

  12. Applying Data Mining Techniques to Extract Hidden Patterns about Breast Cancer Survival in an Iranian Cohort Study.

    PubMed

    Khalkhali, Hamid Reza; Lotfnezhad Afshar, Hadi; Esnaashari, Omid; Jabbari, Nasrollah

    2016-01-01

Breast cancer survival has been analyzed by many standard data mining algorithms, a group of which belong to the decision tree category. The ability of decision tree algorithms to visualize and formulate hidden patterns among study variables was the main reason to apply, in the current study, an algorithm from this category that had not been applied to these data before. The classification and regression trees (CART) algorithm was applied to a breast cancer database containing information on 569 patients from 2007-2010. Gini impurity, the splitting measure used for categorical target variables, was employed. The classification error, which is a function of tree size, was measured by 10-fold cross-validation experiments. The performance of the resulting model was evaluated by the criteria of accuracy, sensitivity, and specificity. The CART model produced a decision tree with 17 nodes, 9 of which were associated with a set of rules. The rules were clinically meaningful; expressed in if-then format, they showed that stage was the most important variable for predicting breast cancer survival. The scores of accuracy, sensitivity, and specificity were 80.3%, 93.5%, and 53%, respectively. The model, the first created for these data with CART, was able to extract useful hidden rules from a relatively small dataset.
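The Gini impurity that CART uses to choose splits on categorical targets can be sketched in a few lines; the class labels below are illustrative placeholders, not data from the study.

```python
from collections import Counter

def gini_impurity(labels):
    """Gini impurity of a node: 1 minus the sum of squared class proportions."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def weighted_gini(left, right):
    """Impurity of a candidate split: size-weighted impurities of the children."""
    n = len(left) + len(right)
    return (len(left) / n) * gini_impurity(left) + (len(right) / n) * gini_impurity(right)

# A pure node has zero impurity; a 50/50 node has the two-class maximum of 0.5.
print(gini_impurity(["survived"] * 4))          # 0.0
print(gini_impurity(["survived", "died"] * 2))  # 0.5
```

CART greedily picks, at each node, the split that minimizes this weighted child impurity.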

  13. Hidden symmetry in the confined hydrogen atom problem

    NASA Astrophysics Data System (ADS)

    Pupyshev, Vladimir I.; Scherbinin, Andrei V.

    2002-07-01

    The classical counterpart of the well-known quantum mechanical model of a spherically confined hydrogen atom is examined in terms of the Lenz vector, a dynamic variable featuring the conventional Kepler problem. It is shown that a conditional conservation law associated with the Lenz vector is true, in fair agreement with the corresponding quantum problem previously found to exhibit a hidden symmetry as well.

  14. Reinforcement learning state estimator.

    PubMed

    Morimoto, Jun; Doya, Kenji

    2007-03-01

In this study, we propose a novel use of reinforcement learning for estimating hidden variables and parameters of nonlinear dynamical systems. A critical issue in hidden-state estimation is that we cannot directly observe estimation errors. However, by defining errors of observable variables as a delayed penalty, we can apply a reinforcement learning framework to state estimation problems. Specifically, we derive a method to construct a nonlinear state estimator by finding an appropriate feedback input gain using the policy gradient method. We tested the proposed method on single-pendulum dynamics and show that the joint angle variable could be successfully estimated by observing only the angular velocity, and vice versa. In addition, we show that we could acquire a state estimator for the pendulum swing-up task in which a swing-up controller is also acquired by reinforcement learning simultaneously. Furthermore, we demonstrate that it is possible to estimate the dynamics of the pendulum itself while the hidden variables are estimated in the pendulum swing-up task. Application of the proposed method to a two-linked biped model is also presented.
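The paper's policy-gradient construction is not reproduced here, but its core idea (correcting a model's prediction with a feedback gain on the error of an observable variable) can be illustrated with a hand-tuned nonlinear observer for a pendulum, covering the "vice versa" case of recovering the hidden angular velocity from the observed angle. The gains k1 and k2 are illustrative assumptions, not values learned by the paper's method.

```python
import math

# Pendulum: theta' = omega, omega' = -(g/l) * sin(theta).
# Only theta is observed; the hidden omega is estimated by feeding the
# observable error back into a copy of the model (a nonlinear observer).
g_over_l = 9.81
k1, k2 = 20.0, 100.0          # illustrative feedback gains (not from the paper)
dt, steps = 0.001, 5000

theta, omega = 0.5, 0.0            # true state (omega is hidden)
theta_hat, omega_hat = 0.0, 0.0    # estimator state, deliberately wrong at start

for _ in range(steps):
    err = theta - theta_hat                       # error of the observable variable
    theta_hat += dt * (omega_hat + k1 * err)      # model prediction + correction
    omega_hat += dt * (-g_over_l * math.sin(theta_hat) + k2 * err)
    theta, omega = theta + dt * omega, omega - dt * g_over_l * math.sin(theta)

print(abs(omega - omega_hat))  # near zero: hidden velocity recovered from the angle alone
```

The paper's contribution is to *learn* such a gain with the policy gradient method, using the delayed penalty on observable errors instead of hand tuning.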

  15. Determinism, independence, and objectivity are incompatible.

    PubMed

    Ionicioiu, Radu; Mann, Robert B; Terno, Daniel R

    2015-02-13

Hidden-variable models aim to reproduce the results of quantum theory and to satisfy our classical intuition. Their refutation is usually based on deriving predictions that are different from those of quantum mechanics. Here instead we study the mutual compatibility of apparently reasonable classical assumptions. We analyze a version of the delayed-choice experiment which ostensibly combines determinism, independence of hidden variables from the conducted experiments, and wave-particle objectivity (the assertion that quantum systems are, at any moment, either particles or waves, but not both). These three ideas are incompatible with any theory, not only with quantum mechanics.

  16. General Method for Constructing Local Hidden Variable Models for Entangled Quantum States

    NASA Astrophysics Data System (ADS)

    Cavalcanti, D.; Guerini, L.; Rabelo, R.; Skrzypczyk, P.

    2016-11-01

Entanglement allows for the nonlocality of quantum theory, which is the resource behind device-independent quantum information protocols. However, not all entangled quantum states display nonlocality. A central question is to determine the precise relation between entanglement and nonlocality. Here we present the first general test to decide whether a quantum state is local, and show that the test can be implemented by semidefinite programming. This method can be applied to any given state and used to construct new examples of states with local hidden variable models for both projective and general measurements. As applications, we provide a lower-bound estimate of the fraction of two-qubit local entangled states and present new explicit examples of such states, including those that arise from physical noise models, Bell-diagonal states, and noisy Greenberger-Horne-Zeilinger and W states.

  17. Assessing the effect of quantitative and qualitative predictors on gastric cancer individuals survival using hierarchical artificial neural network models.

    PubMed

    Amiri, Zohreh; Mohammad, Kazem; Mahmoudi, Mahmood; Parsaeian, Mahbubeh; Zeraati, Hojjat

    2013-01-01

There are numerous unanswered questions in the application of artificial neural network models for analysis of survival data. In most studies, independent variables have been studied as qualitative dichotomous variables, and results of using discrete and continuous quantitative, ordinal, or multinomial categorical predictive variables in these models are not well understood in comparison to conventional models. This study was designed and conducted to examine the application of these models in order to determine the survival of gastric cancer patients, in comparison to the Cox proportional hazards model. We studied the postoperative survival of 330 gastric cancer patients who underwent surgery at a surgical unit of the Iran Cancer Institute over a five-year period. Covariates of age, gender, history of substance abuse, cancer site, type of pathology, presence of metastasis, stage, and number of complementary treatments were entered in the models, and survival probabilities were calculated at 6, 12, 18, 24, 36, 48, and 60 months using the Cox proportional hazards and neural network models. We estimated coefficients of the Cox model and the weights in the neural network (with 3, 5, and 7 nodes in the hidden layer) in the training group, and used them to derive predictions in the study group. Predictions with these two methods were compared with those of the Kaplan-Meier product limit estimator as the gold standard. Comparisons were performed with the Friedman and Kruskal-Wallis tests.
Survival probabilities at different times were determined using the Cox proportional hazards model and a neural network with three nodes in the hidden layer; the ratios of the standard errors of these two methods to that of the Kaplan-Meier method were 1.1593 and 1.0071, respectively. This revealed a significant difference between Cox and Kaplan-Meier (P < 0.05), no significant difference between Cox and the neural network or between the neural network and the Kaplan-Meier standard, and better accuracy for the neural network (with 3 nodes in the hidden layer). Probabilities of survival were also calculated using three neural network models with 3, 5, and 7 nodes in the hidden layer; none of the predictions differed significantly from the Kaplan-Meier results, and they became more comparable toward the last months (fifth year). However, we observed better accuracy using the neural network with 5 nodes in the hidden layer. Comparing the Cox proportional hazards model and a neural network with 3 nodes in the hidden layer, we found enhanced accuracy with the neural network model. Neural networks can provide more accurate predictions of survival probabilities than the Cox proportional hazards model, especially now that advances in computer science have eliminated limitations associated with complex computations. Adding too many hidden-layer nodes is not recommended, because sample-size-related effects can reduce accuracy. We recommend increasing the number of nodes as long as accuracy continues to improve (the mean standard error decreases), and stopping as soon as this trend reverses.

  18. Optimal no-go theorem on hidden-variable predictions of effect expectations

    NASA Astrophysics Data System (ADS)

    Blass, Andreas; Gurevich, Yuri

    2018-03-01

    No-go theorems prove that, under reasonable assumptions, classical hidden-variable theories cannot reproduce the predictions of quantum mechanics. Traditional no-go theorems proved that hidden-variable theories cannot predict correctly the values of observables. Recent expectation no-go theorems prove that hidden-variable theories cannot predict the expectations of observables. We prove the strongest expectation-focused no-go theorem to date. It is optimal in the sense that the natural weakenings of the assumptions and the natural strengthenings of the conclusion make the theorem fail. The literature on expectation no-go theorems strongly suggests that the expectation-focused approach is more general than the value-focused one. We establish that the expectation approach is not more general.

  19. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    PubMed

    Suykens, Johan A K

    2017-08-01

The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels, with a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  20. Modeling T-cell activation using gene expression profiling and state-space models.

    PubMed

    Rangel, Claudia; Angus, John; Ghahramani, Zoubin; Lioumi, Maria; Sotheran, Elizabeth; Gaiba, Alessia; Wild, David L; Falciani, Francesco

    2004-06-12

    We have used state-space models to reverse engineer transcriptional networks from highly replicated gene expression profiling time series data obtained from a well-established model of T-cell activation. State space models are a class of dynamic Bayesian networks that assume that the observed measurements depend on some hidden state variables that evolve according to Markovian dynamics. These hidden variables can capture effects that cannot be measured in a gene expression profiling experiment, e.g. genes that have not been included in the microarray, levels of regulatory proteins, the effects of messenger RNA and protein degradation, etc. Bootstrap confidence intervals are developed for parameters representing 'gene-gene' interactions over time. Our models represent the dynamics of T-cell activation and provide a methodology for the development of rational and experimentally testable hypotheses. Supplementary data and Matlab computer source code will be made available on the web at the URL given below. http://public.kgi.edu/~wild/LDS/index.htm
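A linear-Gaussian state-space model of the kind described (hidden state with Markovian dynamics, noisy observations) can be filtered with the classical Kalman recursion; the scalar sketch below is purely illustrative and unrelated to the T-cell data.

```python
import random

random.seed(0)

# State-space model: x_t = a*x_{t-1} + w_t,  y_t = c*x_t + v_t  (w, v Gaussian).
a, c, q, r = 0.9, 1.0, 0.1, 0.5   # illustrative parameters (not from the paper)

# Simulate a hidden trajectory and its noisy observations.
x, xs, ys = 0.0, [], []
for _ in range(200):
    x = a * x + random.gauss(0.0, q ** 0.5)
    xs.append(x)
    ys.append(c * x + random.gauss(0.0, r ** 0.5))

# Kalman filter: recursively estimate the hidden state from y alone.
m, p, estimates = 0.0, 1.0, []
for y in ys:
    m_pred, p_pred = a * m, a * a * p + q        # predict through the dynamics
    k = p_pred * c / (c * c * p_pred + r)        # Kalman gain
    m = m_pred + k * (y - c * m_pred)            # correct with the innovation
    p = (1.0 - k * c) * p_pred
    estimates.append(m)

mse_raw = sum((y - x) ** 2 for x, y in zip(xs, ys)) / len(xs)
mse_filt = sum((m - x) ** 2 for x, m in zip(xs, estimates)) / len(xs)
print(mse_filt < mse_raw)  # filtering beats reading off the raw observations
```

The models in the paper are the multivariate analogue, with the hidden dimensions capturing unmeasured regulators and degradation effects.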

  1. EPR and Bell's theorem: A critical review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stapp, H.P.

    1991-01-01

    The argument of Einstein, Podolsky, and Rosen is reviewed with attention to logical structure and character of assumptions. Bohr's reply is discussed. Bell's contribution is formulated without use of hidden variables, and efforts to equate hidden variables to realism are critically examined. An alternative derivation of nonlocality that makes no use of hidden variables, microrealism, counterfactual definiteness, or any other assumption alien to orthodox quantum thinking is described in detail, with particular attention to the quartet or broken-square question.

  2. Daily Rainfall Simulation Using Climate Variables and Nonhomogeneous Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jung, J.; Kim, H. S.; Joo, H. J.; Han, D.

    2017-12-01

The Markov chain is an easy method to handle compared with other approaches to rainfall simulation. However, it has limitations in reflecting the seasonal variability of rainfall or changes in rainfall patterns caused by climate change. This study applied a Nonhomogeneous Hidden Markov Model (NHMM) to address these problems, and compared the NHMM with a Hidden Markov Model (HMM) to evaluate goodness of fit. First, we chose the Gum river basin in Korea and collected daily rainfall data from its stations. The climate variables of geopotential height, temperature, zonal wind, and meridional wind were also collected from NCEP/NCAR reanalysis data to account for external factors affecting rainfall. We conducted a correlation analysis between rainfall and the climate variables, then developed a linear regression equation using the climate variables most highly correlated with rainfall. The monthly rainfall obtained from the regression equation became the input to the NHMM. Finally, daily rainfall was simulated with the NHMM, and we evaluated its goodness of fit and prediction capability against those of the HMM. For the HMM, the correlation coefficient was 0.2076 and the root mean square errors of daily/monthly rainfall were 10.8243/131.1304 mm. For the NHMM, the correlation coefficient was 0.6652 and the root mean square errors of daily/monthly rainfall were 10.5112/100.9865 mm. The errors in daily and monthly rainfall simulated by the NHMM were thus improved by 2.89% and 22.99%, respectively, compared with the HMM. The results of the study are therefore expected to provide more accurate data for hydrologic analysis. Acknowledgements This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (2017R1A2B3005695)
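The nonhomogeneous idea, letting a climate covariate drive the transition into the wet state, can be sketched in miniature as follows. The logistic link, the coefficients, the persistence term, and the exponential rainfall amounts are all illustrative assumptions, not the model fitted in the study.

```python
import math
import random

random.seed(1)

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# Two hidden states: 0 = dry, 1 = wet.  The probability of moving into the
# wet state depends on a daily climate covariate (hence "nonhomogeneous")
# and on the previous state (persistence).  Coefficients are illustrative.
b0, b1, b_persist = -1.0, 2.0, 1.5
days = 365
covariate = [math.sin(2 * math.pi * t / 365) for t in range(days)]  # seasonal signal

state, rainfall = 0, []
for z in covariate:
    p_wet = logistic(b0 + b1 * z + b_persist * state)  # covariate-driven transition
    state = 1 if random.random() < p_wet else 0
    # Wet days draw a rainfall amount; dry days record zero.
    rainfall.append(random.expovariate(1 / 8.0) if state else 0.0)

print(len(rainfall))  # 365
```

In the study the covariates enter through a regression on reanalysis fields rather than a fixed seasonal sine, but the mechanism is the same: the hidden chain's transition probabilities change day by day with the climate inputs.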

  3. A Hidden Markov Model for Analysis of Frontline Veterinary Data for Emerging Zoonotic Disease Surveillance

    PubMed Central

    Robertson, Colin; Sawford, Kate; Gunawardana, Walimunige S. N.; Nelson, Trisalyn A.; Nathoo, Farouk; Stephen, Craig

    2011-01-01

Surveillance systems tracking health patterns in animals have potential for early warning of infectious disease in humans, yet there are many challenges that remain before this can be realized. Specifically, there remains the challenge of detecting early warning signals for diseases that are not known or are not part of routine surveillance for named diseases. This paper reports on the development of a hidden Markov model for analysis of frontline veterinary sentinel surveillance data from Sri Lanka. Field veterinarians collected data on syndromes and diagnoses using mobile phones. A model for submission patterns accounts for both sentinel-related and disease-related variability. Models for commonly reported cattle diagnoses were estimated separately. Region-specific weekly average prevalence was estimated for each diagnosis and partitioned into normal and abnormal periods. Visualization of state probabilities was used to indicate areas and times of unusual disease prevalence. The analysis suggests that hidden Markov modelling is a useful approach for surveillance datasets from novel populations and/or with little historical baseline data. PMID:21949763
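The normal/abnormal state probabilities visualized in such a model come from the standard HMM forward recursion. A minimal two-state version with Poisson-distributed weekly case counts is sketched below; the transition matrix, rates, and counts are illustrative assumptions, not the Sri Lankan data.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Hidden states: 0 = normal, 1 = abnormal (elevated prevalence).
trans = [[0.95, 0.05],   # week-to-week transition probabilities (illustrative)
         [0.20, 0.80]]
rates = [2.0, 10.0]      # expected weekly case counts per state (illustrative)
prior = [0.9, 0.1]

def forward_filter(counts):
    """P(state_t | counts_1..t) for each week t, via the forward recursion."""
    belief, out = prior[:], []
    for y in counts:
        # Predict one step ahead, then weight by the Poisson likelihood of y.
        pred = [sum(belief[i] * trans[i][j] for i in range(2)) for j in range(2)]
        post = [pred[j] * poisson_pmf(y, rates[j]) for j in range(2)]
        norm = sum(post)
        belief = [p / norm for p in post]
        out.append(belief[:])
    return out

probs = forward_filter([1, 2, 3, 9, 12, 11, 2, 1])
print(round(probs[4][1], 3))  # high: the run of elevated counts flags the abnormal state
```

Plotting the abnormal-state probability over weeks and regions gives exactly the kind of visualization the paper uses to highlight unusual disease prevalence.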

  4. Experimental test of state-independent quantum contextuality of an indivisible quantum system

    NASA Astrophysics Data System (ADS)

    Li, Meng; Huang, Yun-Feng; Cao, Dong-Yang; Zhang, Chao; Zhang, Yong-Sheng; Liu, Bi-Heng; Li, Chuan-Feng; Guo, Guang-Can

    2014-05-01

Since its birth, quantum mechanics has been debated among scientists because of its differences from classical physics. This motivated hidden-variable theories, one class of which is the non-contextual hidden-variable theories, within which the Kochen-Specker (KS) inequalities are famous. The original KS argument requires measurements along 117 directions, so it is almost impossible to test the KS inequalities in experiment. However, Sixia Yu and C. H. Oh pointed out that for a single qutrit only 13 measurement directions are needed to test the KS inequalities, which makes an experimental test possible. We use the polarization and the path of a single photon to construct a qutrit, and use half-wave plates, beam displacers, and polarizing beam splitters to prepare the quantum state and perform the measurements. The results show that quantum mechanics is correct and non-contextual hidden-variable theories are ruled out.

  5. Estimating Density and Temperature Dependence of Juvenile Vital Rates Using a Hidden Markov Model

    PubMed Central

    McElderry, Robert M.

    2017-01-01

    Organisms in the wild have cryptic life stages that are sensitive to changing environmental conditions and can be difficult to survey. In this study, I used mark-recapture methods to repeatedly survey Anaea aidea (Nymphalidae) caterpillars in nature, then modeled caterpillar demography as a hidden Markov process to assess if temporal variability in temperature and density influence the survival and growth of A. aidea over time. Individual encounter histories result from the joint likelihood of being alive and observed in a particular stage, and I have included hidden states by separating demography and observations into parallel and independent processes. I constructed a demographic matrix containing the probabilities of all possible fates for each stage, including hidden states, e.g., eggs and pupae. I observed both dead and live caterpillars with high probability. Peak caterpillar abundance attracted multiple predators, and survival of fifth instars declined as per capita predation rate increased through spring. A time lag between predator and prey abundance was likely the cause of improved fifth instar survival estimated at high density. Growth rates showed an increase with temperature, but the preferred model did not include temperature. This work illustrates how state-space models can include unobservable stages and hidden state processes to evaluate how environmental factors influence vital rates of cryptic life stages in the wild. PMID:28505138

  6. Simultaneous escaping of explicit and hidden free energy barriers: application of the orthogonal space random walk strategy in generalized ensemble based conformational sampling.

    PubMed

    Zheng, Lianqing; Chen, Mengen; Yang, Wei

    2009-06-21

To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures, etc. As usually observed in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction, and these residual free energy barriers can greatly diminish the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimension subset of the target system; then the "Hamiltonian lagging" problem, in which necessary structural relaxation falls behind the motion of the collective variable, is likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.

  7. Fast and robust group-wise eQTL mapping using sparse graphical models.

    PubMed

    Cheng, Wei; Shi, Yu; Zhang, Xiang; Wang, Wei

    2015-01-16

    Genome-wide expression quantitative trait loci (eQTL) studies have emerged as a powerful tool to understand the genetic basis of gene expression and complex traits. The traditional eQTL methods focus on testing the associations between individual single-nucleotide polymorphisms (SNPs) and gene expression traits. A major drawback of this approach is that it cannot model the joint effect of a set of SNPs on a set of genes, which may correspond to hidden biological pathways. We introduce a new approach to identify novel group-wise associations between sets of SNPs and sets of genes. Such associations are captured by hidden variables connecting SNPs and genes. Our model is a linear-Gaussian model and uses two types of hidden variables. One captures the set associations between SNPs and genes, and the other captures confounders. We develop an efficient optimization procedure which makes this approach suitable for large scale studies. Extensive experimental evaluations on both simulated and real datasets demonstrate that the proposed methods can effectively capture both individual and group-wise signals that cannot be identified by the state-of-the-art eQTL mapping methods. Considering group-wise associations significantly improves the accuracy of eQTL mapping, and the successful multi-layer regression model opens a new approach to understand how multiple SNPs interact with each other to jointly affect the expression level of a group of genes.

  8. Optimization of artificial neural network models through genetic algorithms for surface ozone concentration forecasting.

    PubMed

    Pires, J C M; Gonçalves, B; Azevedo, F G; Carneiro, A P; Rego, N; Assembleia, A J B; Lima, J F B; Silva, P A; Alves, C; Martins, F G

    2012-09-01

This study proposes three methodologies to define artificial neural network models through genetic algorithms (GAs) to predict the next-day hourly average surface ozone (O(3)) concentrations. GAs were applied to define the activation function in the hidden layer and the number of hidden neurons. Two of the methodologies define threshold models, which assume that the behaviour of the dependent variable (O(3) concentrations) changes when it enters a different regime (two and four regimes were considered in this study). The change from one regime to another depends on a specific value (threshold value) of an explanatory variable (threshold variable), which is also defined by GAs. The predictor variables were the hourly average concentrations of carbon monoxide (CO), nitrogen oxide, nitrogen dioxide (NO(2)), and O(3) (recorded in the previous day at an urban site with traffic influence) and also meteorological data (hourly averages of temperature, solar radiation, relative humidity and wind speed). The study was performed for the period from May to August 2004. Several models were achieved and only the best model of each methodology was analysed. In the threshold models, the variables selected by GAs to define the O(3) regimes were temperature, CO and NO(2) concentrations, due to their importance in O(3) chemistry in an urban atmosphere. In the prediction of O(3) concentrations, the threshold model that considers two regimes was the one that fitted the data most efficiently.
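The GA search over network architecture (activation function plus hidden-neuron count) can be sketched with a toy genetic algorithm. The fitness function below is an illustrative surrogate standing in for training an ANN and scoring next-day O(3) predictions; it is not the study's actual objective, and the coefficients are assumptions.

```python
import random

random.seed(2)

activations = ["tanh", "logistic", "relu"]

def fitness(genome):
    """Surrogate validation error (a stand-in for training an ANN and
    scoring its next-day O3 forecasts): penalizes too few or too many
    hidden neurons, with a mild activation-dependent offset."""
    act, n_hidden = genome
    offset = {"tanh": 0.00, "logistic": 0.02, "relu": 0.01}[act]
    return (n_hidden - 12) ** 2 / 100.0 + offset

def random_genome():
    return (random.choice(activations), random.randint(1, 30))

def mutate(genome):
    act, n = genome
    if random.random() < 0.3:
        act = random.choice(activations)
    n = min(30, max(1, n + random.randint(-3, 3)))
    return (act, n)

pop = [random_genome() for _ in range(20)]
for _ in range(40):                       # generations
    pop.sort(key=fitness)
    parents = pop[:10]                    # truncation selection with elitism
    pop = parents + [mutate(random.choice(parents)) for _ in range(10)]

best = min(pop, key=fitness)
print(best)  # converges toward ('tanh', 12) under this surrogate
```

In the study, evaluating a genome means actually fitting the corresponding network to the May-August data; the GA machinery around it is the same.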

  9. Dynamic Alignment Models for Neural Coding

    PubMed Central

    Kollmorgen, Sepp; Hahnloser, Richard H. R.

    2014-01-01

    Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes. PMID:24625448

  10. Einstein-Podolsky-Rosen correlations and Bell correlations in the simplest scenario

    NASA Astrophysics Data System (ADS)

    Quan, Quan; Zhu, Huangjun; Fan, Heng; Yang, Wen-Li

    2017-06-01

    Einstein-Podolsky-Rosen (EPR) steering is an intermediate type of quantum nonlocality which sits between entanglement and Bell nonlocality. A set of correlations is Bell nonlocal if it does not admit a local hidden variable (LHV) model, while it is EPR nonlocal if it does not admit a local hidden variable-local hidden state (LHV-LHS) model. It is interesting to know what states can generate EPR-nonlocal correlations in the simplest nontrivial scenario, that is, two projective measurements for each party sharing a two-qubit state. Here we show that a two-qubit state can generate EPR-nonlocal full correlations (excluding marginal statistics) in this scenario if and only if it can generate Bell-nonlocal correlations. If full statistics (including marginal statistics) is taken into account, surprisingly, the same scenario can manifest the simplest one-way steering and the strongest hierarchy between steering and Bell nonlocality. To illustrate these intriguing phenomena in simple setups, several concrete examples are discussed in detail, which facilitates experimental demonstration. In the course of study, we introduce the concept of restricted LHS models and thereby derive a necessary and sufficient semidefinite-programming criterion to determine the steerability of any bipartite state under given measurements. Analytical criteria are further derived in several scenarios of strong theoretical and experimental interest.

  11. A novel framework for simulating non-stationary, non-linear, non-Normal hydrological time series using Markov Switching Autoregressive Models

    NASA Astrophysics Data System (ADS)

    Birkel, C.; Paroli, R.; Spezia, L.; Tetzlaff, D.; Soulsby, C.

    2012-12-01

    In this paper we present a novel model framework using the class of Markov Switching Autoregressive Models (MSARMs) to examine catchments as complex stochastic systems that exhibit non-stationary, non-linear and non-Normal rainfall-runoff and solute dynamics. MSARMs are pairs of stochastic processes, one observed and one unobserved, or hidden. We model the unobserved process as a finite state Markov chain and assume that the observed process, given the hidden Markov chain, is conditionally autoregressive, which means that the current observation depends on its recent past (system memory). The model is fully embedded in a Bayesian analysis based on Markov Chain Monte Carlo (MCMC) algorithms for model selection and uncertainty assessment, whereby the autoregressive order and the dimension of the hidden Markov chain state-space are essentially self-selected. The hidden states of the Markov chain represent unobserved levels of variability in the observed process that may result from complex interactions of hydroclimatic variability on the one hand and catchment characteristics affecting water and solute storage on the other. To deal with non-stationarity, additional meteorological and hydrological time series along with a periodic component can be included in the MSARMs as covariates. This extension allows identification of potential underlying drivers of temporal rainfall-runoff and solute dynamics. We applied the MSAR model framework to streamflow and conservative tracer (deuterium and oxygen-18) time series from an intensively monitored 2.3 km2 experimental catchment in eastern Scotland. Statistical time series analysis, in the form of MSARMs, suggested that the streamflow and isotope tracer time series are not controlled by simple linear rules. MSARMs showed that the dependence of current observations on past inputs, which transport models often capture through long-tailed travel time and residence time distributions, can be efficiently explained by non-stationarity in the system input (climatic variability), by the complexity of catchment storage characteristics, or by both. The statistical model is also capable of reproducing short-term (event), longer-term (inter-event), wet and dry dynamical "hydrological states". These reflect the non-linear transport mechanisms of flow pathways induced by transient climatic and hydrological variables and modified by catchment characteristics. We conclude that MSARMs are a powerful tool for analyzing the temporal dynamics of hydrological data, allowing for explicit integration of non-stationary, non-linear and non-Normal characteristics.
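
    The generative structure of an MSARM (a hidden Markov chain modulating an autoregressive observation process) can be sketched in a few lines. The two-state parameters below are illustrative only and are not fitted to the Scottish catchment data:

```python
import random

def simulate_msar(n, P, phi, sigma, seed=1):
    """Simulate a two-state Markov-switching AR(1) process.
    P: 2x2 state transition matrix; phi, sigma: per-state AR
    coefficient and noise standard deviation."""
    rng = random.Random(seed)
    s, y, states, ys = 0, 0.0, [], []
    for _ in range(n):
        # hidden climate/storage state evolves as a Markov chain
        s = 0 if rng.random() < P[s][0] else 1
        # observation is conditionally AR(1) given the current state
        y = phi[s] * y + rng.gauss(0.0, sigma[s])
        states.append(s)
        ys.append(y)
    return states, ys

P = [[0.95, 0.05], [0.10, 0.90]]   # persistent "dry"/"wet" regimes (toy values)
states, ys = simulate_msar(500, P, phi=[0.3, 0.8], sigma=[0.2, 1.0])
```

    Inference then reverses this: given only `ys`, the Bayesian MCMC machinery described above recovers the state sequence and the per-state dynamics.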

  12. Improving Forecasts Through Realistic Uncertainty Estimates: A Novel Data Driven Method for Model Uncertainty Quantification in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.

    2016-12-01

    Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on estimating only the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(x_t | x_{t-1}). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.

  13. EPR Steering inequalities with Communication Assistance

    PubMed Central

    Nagy, Sándor; Vértesi, Tamás

    2016-01-01

    In this paper, we investigate the communication cost of reproducing Einstein-Podolsky-Rosen (EPR) steering correlations arising from bipartite quantum systems. We characterize the set of bipartite quantum states which admits a local hidden state model augmented with c bits of classical communication from an untrusted party (Alice) to a trusted party (Bob). In the case of one bit of information (c = 1), we show that this set has a nontrivial intersection with the sets admitting a local hidden state and a local hidden variable model for projective measurements. On the other hand, we find that an infinite amount of classical communication is required from an untrusted Alice to a trusted Bob to simulate the EPR steering correlations produced by a two-qubit maximally entangled state. It is conjectured that a state-of-the-art quantum experiment would be able to falsify two bits of communication this way. PMID:26880376

  14. Inference for finite-sample trajectories in dynamic multi-state site-occupancy models using hidden Markov model smoothing

    USGS Publications Warehouse

    Fiske, Ian J.; Royle, J. Andrew; Gross, Kevin

    2014-01-01

    Ecologists and wildlife biologists increasingly use latent variable models to study patterns of species occurrence when detection is imperfect. These models have recently been generalized to accommodate both a more expansive description of state than simple presence or absence, and Markovian dynamics in the latent state over successive sampling seasons. In this paper, we write these multi-season, multi-state models as hidden Markov models to find both maximum likelihood estimates of model parameters and finite-sample estimators of the trajectory of the latent state over time. These estimators are especially useful for characterizing population trends in species of conservation concern. We also develop parametric bootstrap procedures that allow formal inference about latent trend. We examine model behavior through simulation, and we apply the model to data from the North American Amphibian Monitoring Program.
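
    The finite-sample trajectory estimators described above rest on HMM smoothing, i.e. the forward-backward recursions that yield the posterior probability of each latent state at each season given all surveys. A generic toy sketch (not the authors' occupancy model; parameters are illustrative):

```python
def smooth(pi, A, B, obs):
    """Forward-backward smoothing: P(state_t | all observations)."""
    n, T = len(pi), len(obs)
    # forward pass
    alpha = [[0.0] * n for _ in range(T)]
    alpha[0] = [pi[i] * B[i][obs[0]] for i in range(n)]
    for t in range(1, T):
        alpha[t] = [sum(alpha[t-1][i] * A[i][j] for i in range(n)) * B[j][obs[t]]
                    for j in range(n)]
    # backward pass
    beta = [[1.0] * n for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(A[i][j] * B[j][obs[t+1]] * beta[t+1][j] for j in range(n))
                   for i in range(n)]
    # normalize pointwise products to get posterior state probabilities
    gamma = []
    for t in range(T):
        w = [alpha[t][i] * beta[t][i] for i in range(n)]
        z = sum(w)
        gamma.append([x / z for x in w])
    return gamma

gamma = smooth([0.5, 0.5], [[0.9, 0.1], [0.2, 0.8]],
               [[0.8, 0.2], [0.3, 0.7]], [0, 0, 1, 1])
```

    In the occupancy setting, the smoothed posteriors at each season are what the finite-sample trajectory estimators summarize, and the parametric bootstrap wraps around this computation.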

  15. Computational analysis of cell-to-cell heterogeneity in single-cell RNA-sequencing data reveals hidden subpopulations of cells.

    PubMed

    Buettner, Florian; Natarajan, Kedar N; Casale, F Paolo; Proserpio, Valentina; Scialdone, Antonio; Theis, Fabian J; Teichmann, Sarah A; Marioni, John C; Stegle, Oliver

    2015-02-01

    Recent technical developments have enabled the transcriptomes of hundreds of cells to be assayed in an unbiased manner, opening up the possibility that new subpopulations of cells can be found. However, the effects of potential confounding factors, such as the cell cycle, on the heterogeneity of gene expression and therefore on the ability to robustly identify subpopulations remain unclear. We present and validate a computational approach that uses latent variable models to account for such hidden factors. We show that our single-cell latent variable model (scLVM) allows the identification of otherwise undetectable subpopulations of cells that correspond to different stages during the differentiation of naive T cells into T helper 2 cells. Our approach can be used not only to identify cellular subpopulations but also to tease apart different sources of gene expression heterogeneity in single-cell transcriptomes.

  16. Inference for dynamics of continuous variables: the extended Plefka expansion with hidden nodes

    NASA Astrophysics Data System (ADS)

    Bravi, B.; Sollich, P.

    2017-06-01

    We consider the problem of a subnetwork of observed nodes embedded into a larger bulk of unknown (i.e. hidden) nodes, where the aim is to infer these hidden states given information about the subnetwork dynamics. The biochemical networks underlying many cellular and metabolic processes are important realizations of such a scenario as typically one is interested in reconstructing the time evolution of unobserved chemical concentrations starting from the experimentally more accessible ones. We present an application to this problem of a novel dynamical mean field approximation, the extended Plefka expansion, which is based on a path integral description of the stochastic dynamics. As a paradigmatic model we study the stochastic linear dynamics of continuous degrees of freedom interacting via random Gaussian couplings. The resulting joint distribution is known to be Gaussian and this allows us to fully characterize the posterior statistics of the hidden nodes. In particular the equal-time hidden-to-hidden variance—conditioned on observations—gives the expected error at each node when the hidden time courses are predicted based on the observations. We assess the accuracy of the extended Plefka expansion in predicting these single node variances as well as error correlations over time, focussing on the role of the system size and the number of observed nodes.

  17. Von Neumann's impossibility proof: Mathematics in the service of rhetorics

    NASA Astrophysics Data System (ADS)

    Dieks, Dennis

    2017-11-01

    According to what has become a standard history of quantum mechanics, in 1932 von Neumann persuaded the physics community that hidden variables are impossible as a matter of principle, after which leading proponents of the Copenhagen interpretation put the situation to good use by arguing that the completeness of quantum mechanics was undeniable. This state of affairs lasted, so the story continues, until Bell in 1966 exposed von Neumann's proof as obviously wrong. The realization that von Neumann's proof was fallacious then rehabilitated hidden variables and made serious foundational research possible again. It is often added in recent accounts that von Neumann's error had been spotted almost immediately by Grete Hermann, but that her discovery was of no effect due to the dominant Copenhagen Zeitgeist. We shall attempt to tell a story that is more historically accurate and less ideologically charged. Most importantly, von Neumann never claimed to have shown the impossibility of hidden variables tout court, but argued that hidden-variable theories must possess a structure that deviates fundamentally from that of quantum mechanics. Both Hermann and Bell appear to have missed this point; moreover, both raised unjustified technical objections to the proof. Von Neumann's argument was basically that hidden-variables schemes must violate the "quantum principle" that physical quantities are to be represented by operators in a Hilbert space. As a consequence, hidden-variables schemes, though possible in principle, necessarily exhibit a certain kind of contextuality. As we shall illustrate, early reactions to Bohm's theory are in agreement with this account. Leading physicists pointed out that Bohm's theory has the strange feature that pre-existing particle properties do not generally reveal themselves in measurements, in accordance with von Neumann's result. They did not conclude that the "impossible was done" and that von Neumann had been shown wrong.

  18. Heisenberg (and Schrödinger, and Pauli) on hidden variables

    NASA Astrophysics Data System (ADS)

    Bacciagaluppi, Guido; Crull, Elise

    In this paper, we discuss various aspects of Heisenberg's thought on hidden variables in the period 1927-1935. We also compare Heisenberg's approach to others current at the time, specifically that embodied by von Neumann's impossibility proof, but also views expressed mainly in correspondence by Pauli and by Schrödinger. We shall base ourselves mostly on published and unpublished materials that are known but little-studied, among others Heisenberg's own draft response to the EPR paper. Our aim will be not only to clarify Heisenberg's thought on the hidden-variables question, but in part also to clarify how this question was understood more generally at the time.

  19. A Bayesian approach to estimating hidden variables as well as missing and wrong molecular interactions in ordinary differential equation-based mathematical models.

    PubMed

    Engelhardt, Benjamin; Kschischo, Maik; Fröhlich, Holger

    2017-06-01

    Ordinary differential equations (ODEs) are a popular approach to quantitatively model molecular networks based on biological knowledge. However, such knowledge is typically restricted. Wrongly modelled biological mechanisms, as well as relevant external influence factors that are not included in the model, are likely to manifest in major discrepancies between model predictions and experimental data. Finding the exact reasons for such observed discrepancies can be quite challenging in practice. In order to address this issue, we suggest a Bayesian approach to estimate hidden influences in ODE-based models. The method can distinguish between exogenous and endogenous hidden influences. Thus, we can detect wrongly specified as well as missed molecular interactions in the model. We demonstrate the performance of our Bayesian dynamic elastic-net with several ordinary differential equation models from the literature, such as human JAK-STAT signalling, information processing at the erythropoietin receptor, isomerization of liquid α-pinene, G protein cycling in yeast and UV-B triggered signalling in plants. Moreover, we investigate a set of commonly known network motifs and a gene-regulatory network. Altogether our method supports the modeller in an algorithmic manner to identify possible sources of errors in ODE-based models on the basis of experimental data. © 2017 The Author(s).

  20. Field Extension of Real Values of Physical Observables in Classical Theory can Help Attain Quantum Results

    NASA Astrophysics Data System (ADS)

    Wang, Hai; Kumar, Asutosh; Cho, Minhyung; Wu, Junde

    2018-04-01

    Physical quantities are assumed to take real values, which stems from the fact that an ordinary measuring instrument measuring a physical observable always yields a real number. Here we consider the question of what would happen if physical observables were allowed to assume complex values. In this paper, we show that by allowing observables in the Bell inequality to take complex values, a classical physical theory can actually attain the same upper bound of the Bell expression as quantum theory. Also, by extending the real field to the quaternionic field, we can resolve the GHZ problem using a local hidden variable model. Furthermore, we try to build a new type of hidden-variable theory of a single qubit based on this result.

  1. Interferometric Computation Beyond Quantum Theory

    NASA Astrophysics Data System (ADS)

    Garner, Andrew J. P.

    2018-03-01

    There are quantum solutions for computational problems that make use of interference at some stage in the algorithm. These stages can be mapped into the physical setting of a single particle travelling through a many-armed interferometer. There has been recent foundational interest in theories beyond quantum theory. Here, we present a generalized formulation of computation in the context of a many-armed interferometer, and explore how theories can differ from quantum theory and still perform distributed calculations in this set-up. We shall see that quaternionic quantum theory proves a suitable candidate, whereas box-world does not. We also find that a classical hidden variable model first presented by Spekkens (Phys Rev A 75(3): 32100, 2007) can also be used for this type of computation due to the epistemic restriction placed on the hidden variable.

  2. Violation of Bell's Inequality Using Continuous Variable Measurements

    NASA Astrophysics Data System (ADS)

    Thearle, Oliver; Janousek, Jiri; Armstrong, Seiji; Hosseini, Sara; Schünemann Mraz, Melanie; Assad, Syed; Symul, Thomas; James, Matthew R.; Huntington, Elanor; Ralph, Timothy C.; Lam, Ping Koy

    2018-01-01

    A Bell inequality is a fundamental test to rule out local hidden variable model descriptions of correlations between two physically separated systems. There have been a number of experiments in which a Bell inequality has been violated using discrete-variable systems. We demonstrate a violation of Bell's inequality using continuous variable quadrature measurements. By creating a four-mode entangled state with homodyne detection, we recorded a clear violation with a Bell value of B =2.31 ±0.02 . This opens new possibilities for using continuous variable states for device independent quantum protocols.
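
    The quantum prediction behind such violations can be checked in a few lines. For the two-qubit singlet state the correlator is E(a, b) = -cos(a - b), and the textbook optimal settings give the Tsirelson bound 2√2 ≈ 2.83, above both the local-hidden-variable limit of 2 and the B = 2.31 reported here (the angles below are the standard choice, not those of this continuous-variable experiment):

```python
from math import cos, pi, sqrt

def E(a, b):
    """Singlet-state correlator for measurement angles a and b."""
    return -cos(a - b)

a0, a1 = 0.0, pi / 2          # Alice's two settings
b0, b1 = pi / 4, 3 * pi / 4   # Bob's two settings
# CHSH combination of the four correlators
S = abs(E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1))
print(S)  # 2*sqrt(2) ≈ 2.828, violating the local bound of 2
```

    Any local hidden variable model is constrained to S ≤ 2, which is what the homodyne measurements reported above rule out.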

  3. Investigation of an HMM/ANN hybrid structure in pattern recognition application using cepstral analysis of dysarthric (distorted) speech signals.

    PubMed

    Polur, Prasad D; Miller, Gerald E

    2006-10-01

    Computer speech recognition of individuals with dysarthria, such as cerebral palsy patients requires a robust technique that can handle conditions of very high variability and limited training data. In this study, application of a 10 state ergodic hidden Markov model (HMM)/artificial neural network (ANN) hybrid structure for a dysarthric speech (isolated word) recognition system, intended to act as an assistive tool, was investigated. A small size vocabulary spoken by three cerebral palsy subjects was chosen. The effect of such a structure on the recognition rate of the system was investigated by comparing it with an ergodic hidden Markov model as a control tool. This was done in order to determine if this modified technique contributed to enhanced recognition of dysarthric speech. The speech was sampled at 11 kHz. Mel frequency cepstral coefficients were extracted from them using 15 ms frames and served as training input to the hybrid model setup. The subsequent results demonstrated that the hybrid model structure was quite robust in its ability to handle the large variability and non-conformity of dysarthric speech. The level of variability in input dysarthric speech patterns sometimes limits the reliability of the system. However, its application as a rehabilitation/control tool to assist dysarthric motor impaired individuals holds sufficient promise.

  4. The Misapplication of Probability Theory in Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Racicot, Ronald

    2014-03-01

    This article is a revision of two papers submitted to the APS in the past two and a half years. In these papers, arguments and proofs are summarized for the following: (1) The wrong conclusion by EPR that Quantum Mechanics is incomplete, perhaps requiring the addition of "hidden variables" for completion. Theorems that assume such "hidden variables," such as Bell's theorem, are also wrong. (2) Quantum entanglement is not a realizable physical phenomenon and is based entirely on assuming a probability superposition model for quantum spin. Such a model directly violates conservation of angular momentum. (3) Simultaneous multiple paths followed by a quantum particle traveling through space also cannot possibly exist. Besides violating Noether's theorem, the multiple-paths theory is based solely on probability calculations. Probability calculations by themselves cannot possibly represent simultaneous physically real events. None of the reviews of the submitted papers actually refuted the arguments and evidence that were presented. These analyses should therefore be carefully evaluated, since the conclusions reached have such important impact on quantum mechanics and quantum information theory.

  5. From the Kochen-Specker theorem to noncontextuality inequalities without assuming determinism.

    PubMed

    Kunjwal, Ravi; Spekkens, Robert W

    2015-09-11

    The Kochen-Specker theorem demonstrates that it is not possible to reproduce the predictions of quantum theory in terms of a hidden variable model where the hidden variables assign a value to every projector deterministically and noncontextually. A noncontextual value assignment to a projector is one that does not depend on which other projectors (the context) are measured together with it. Using a generalization of the notion of noncontextuality that applies to both measurements and preparations, we propose a scheme for deriving inequalities that test whether a given set of experimental statistics is consistent with a noncontextual model. Unlike previous inequalities inspired by the Kochen-Specker theorem, we do not assume that the value assignments are deterministic, and therefore, in the face of a violation of our inequality, the possibility of salvaging noncontextuality by abandoning determinism is no longer an option. Our approach is operational in the sense that it does not presume quantum theory: a violation of our inequality implies the impossibility of a noncontextual model for any operational theory that can account for the experimental observations, including any successor to quantum theory.

  6. Use of Machine Learning Techniques for Identification of Robust Teleconnections to East African Rainfall Variability

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Robertson, F. R.; Funk, C.

    2014-01-01

    Hidden Markov models can be used to investigate the structure of subseasonal variability. East African short-rain variability has connections to large-scale tropical variability: intraseasonal variations associated with the MJO are connected with the appearance of "wet" and "dry" states, and ENSO/IOZM SST and circulation anomalies are apparent during years of anomalous residence time in the subseasonal "wet" state. Similar results have been found in previous studies, but here they can be interpreted with respect to variations of the subseasonal wet and dry modes, revealing the underlying connections between the MJO, IOZM, and ENSO and East African rainfall.

  7. Computational study of peptide permeation through membrane: searching for hidden slow variables

    NASA Astrophysics Data System (ADS)

    Cardenas, Alfredo E.; Elber, Ron

    2013-12-01

    Atomically detailed molecular dynamics trajectories in conjunction with Milestoning are used to analyse the different contributions of coarse variables to the permeation process of a small peptide (N-acetyl-l-tryptophanamide, NATA) through a 1,2-dioleoyl-sn-glycero-3-phosphocholine membrane. The peptide reverses its overall orientation as it permeates through the biological bilayer. The large change in orientation is investigated explicitly but is shown to impact the free energy landscape and permeation time only moderately. Nevertheless, a significant difference in permeation properties of the two halves of the membrane suggests the presence of other hidden slow variables. We speculate, based on calculation of the potential of mean force, that a conformational transition of NATA makes significant contribution to these differences. Other candidates for hidden slow variables may include water permeation and collective motions of phospholipids.

  8. What is the Effect of Interannual Hydroclimatic Variability on Water Supply Reservoir Operations?

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Turner, S. W. D.

    2015-12-01

    Rather than deriving from a single distribution and uniform persistence structure, hydroclimatic data exhibit significant trends and shifts in their mean, variance, and lagged correlation through time. Consequently, observed and reconstructed streamflow records are often characterized by features of interannual variability, including long-term persistence and prolonged droughts. This study examines the effect of these features on the operating performance of water supply reservoirs. We develop a Stochastic Dynamic Programming (SDP) model that can incorporate a regime-shifting climate variable. We then compare the performance of operating policies (designed with and without the climate variable) to quantify the contribution of interannual variability to the sub-optimality of standard policies. The approach uses a discrete-time Markov chain to partition the reservoir inflow time series into a small number of 'hidden' climate states. Each state defines a distinct set of inflow transition probability matrices, which are used by the SDP model to condition the release decisions on the reservoir storage, current-period inflow and hidden climate state. The experimental analysis is carried out on 99 hypothetical water supply reservoirs fed from pristine catchments in Australia, all impacted by the Millennium drought. Results show that interannual hydroclimatic variability is a major cause of sub-optimal hedging decisions. The practical import is that conventional optimization methods may misguide operators, particularly in regions susceptible to multi-year droughts.
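
    The persistence that makes such hidden climate states matter can be read off the discrete-time Markov chain directly: its stationary distribution gives the long-run fraction of time spent in each regime. A toy sketch with an illustrative two-state transition matrix (not fitted to the Australian inflow data):

```python
def stationary(P, iters=200):
    """Long-run state occupancy: repeatedly apply the transition matrix P
    to a uniform starting distribution until it converges."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# persistent "wet" (0) and "dry" (1) climate states (toy values)
P = [[0.9, 0.1],
     [0.3, 0.7]]
print(stationary(P))  # ≈ [0.75, 0.25]: about 75% of time in the wet state
```

    Conditioning the SDP release policy on which of these regimes currently holds is what distinguishes the climate-informed policy from the standard one.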

  9. A possible loophole in the theorem of Bell.

    PubMed

    Hess, K; Philipp, W

    2001-12-04

    The celebrated inequalities of Bell are based on the assumption that local hidden parameters exist. When combined with conflicting experimental results, these inequalities appear to prove that local hidden parameters cannot exist. This contradiction suggests to many that only instantaneous action at a distance can explain the Einstein, Podolsky, and Rosen type of experiments. We show that, in addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions that contribute to his being able to obtain the desired contradiction. For instance, Bell assumes that the hidden parameters do not depend on time and are governed by a single probability measure independent of the analyzer settings. We argue that the exclusion of time has neither a physical nor a mathematical basis but is based on Bell's translation of the concept of Einstein locality into the language of probability theory. Our additional set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does not permit Bell-type proofs to go forward.

  10. A Proposal for Testing Local Realism Without Using Assumptions Related to Hidden Variable States

    NASA Technical Reports Server (NTRS)

    Ryff, Luiz Carlos

    1996-01-01

    A feasible experiment is discussed which allows us to prove a Bell's theorem for two particles without using an inequality. The experiment could be used to test local realism against quantum mechanics without the introduction of additional assumptions related to hidden variables states. Only assumptions based on direct experimental observation are needed.

  11. Clustering Multivariate Time Series Using Hidden Markov Models

    PubMed Central

    Ghassempour, Shima; Girosi, Federico; Maeder, Anthony

    2014-01-01

    In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistic knowledge, and therefore are accessible to a wide range of researchers. PMID:24662996
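
    The last stage of the pipeline described above (cluster the fitted HMMs from a pairwise distance matrix) can be sketched with standard tools. The distance values below are made up for illustration, standing in for whatever inter-HMM distance is chosen:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# hypothetical pairwise distances between four fitted HMMs:
# items 0-1 and items 2-3 are close within groups, far across groups
D = np.array([[0.0, 0.2, 5.0, 5.2],
              [0.2, 0.0, 5.1, 5.3],
              [5.0, 5.1, 0.0, 0.3],
              [5.2, 5.3, 0.3, 0.0]])

# condense the square matrix, build the hierarchy, cut into two clusters
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # two clusters: {0, 1} and {2, 3}
```

    Any distance-matrix clustering method (hierarchical, k-medoids, spectral) can be substituted at this step; the HMM-fitting and distance-definition stages are where the method-specific choices live.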

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khrennikov, Andrei

    We present fundamentals of a prequantum model with hidden variables of the classical field type. In some sense this is the comeback of classical wave mechanics. Our approach also can be considered as incorporation of quantum mechanics into classical signal theory. All quantum averages (including correlations of entangled systems) can be represented as classical signal averages and correlations.

  14. Parsing Social Network Survey Data from Hidden Populations Using Stochastic Context-Free Grammars

    PubMed Central

    Poon, Art F. Y.; Brouwer, Kimberly C.; Strathdee, Steffanie A.; Firestone-Cruz, Michelle; Lozada, Remedios M.; Kosakovsky Pond, Sergei L.; Heckathorn, Douglas D.; Frost, Simon D. W.

    2009-01-01

    Background Human populations are structured by social networks, in which individuals tend to form relationships based on shared attributes. Certain attributes that are ambiguous, stigmatized or illegal can create a 'hidden' population, so-called because its members are difficult to identify. Many hidden populations are also at an elevated risk of exposure to infectious diseases. Consequently, public health agencies are presently adopting modern survey techniques that traverse social networks in hidden populations by soliciting individuals to recruit their peers, e.g., respondent-driven sampling (RDS). The concomitant accumulation of network-based epidemiological data, however, is rapidly outpacing the development of computational methods for analysis. Moreover, current analytical models rely on unrealistic assumptions, e.g., that the traversal of social networks can be modeled by a Markov chain rather than a branching process. Methodology/Principal Findings Here, we develop a new methodology based on stochastic context-free grammars (SCFGs), which are well-suited to modeling the tree-like structure of the RDS recruitment process. We apply this methodology to an RDS case study of injection drug users (IDUs) in Tijuana, México, a hidden population at high risk of blood-borne and sexually-transmitted infections (i.e., HIV, hepatitis C virus, syphilis). Survey data were encoded as text strings that were parsed using our custom implementation of the inside-outside algorithm in a publicly-available software package (HyPhy), which uses either expectation maximization or direct optimization methods and permits constraints on model parameters for hypothesis testing.
We identified significant latent variability in the recruitment process that violates assumptions of Markov chain-based methods for RDS analysis: firstly, IDUs tended to emulate the recruitment behavior of their own recruiter; and secondly, the recruitment of like peers (homophily) was dependent on the number of recruits. Conclusions SCFGs provide a rich probabilistic language that can articulate complex latent structure in survey data derived from the traversal of social networks. Such structure that has no representation in Markov chain-based models can interfere with the estimation of the composition of hidden populations if left unaccounted for, raising critical implications for the prevention and control of infectious disease epidemics. PMID:19738904

  15. Adaptive distributed source coding.

    PubMed

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
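The Slepian-Wolf bound is easy to evaluate in the simplest binary setting: if the decoder's side information differs from the source by Bernoulli(p) noise, the source can in principle be compressed to the conditional entropy H(X|Y) = h2(p) bits per symbol. This is a generic illustration of the bound, not the paper's block-candidate model:

```python
import math

def h2(p):
    # binary entropy in bits
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Side information Y = X xor N, with N ~ Bernoulli(0.1):
# the Slepian-Wolf limit for encoding X alone is H(X|Y) = h2(0.1) bits/symbol
rate = h2(0.1)
```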

  16. State-dependent rotations of spins by weak measurements

    NASA Astrophysics Data System (ADS)

    Miller, D. J.

    2011-03-01

    It is shown that a weak measurement of a quantum system produces a new state of the quantum system which depends on the prior state, as well as the (uncontrollable) measured position of the pointer variable of the weak-measurement apparatus. The result imposes a constraint on hidden-variable theories which assign a different state to a quantum system than standard quantum mechanics. The constraint means that a crypto-nonlocal hidden-variable theory can be ruled out in a more direct way than previously done.

  17. What Is Going on Inside the Arrows? Discovering the Hidden Springs in Causal Models

    PubMed Central

    Murray-Watters, Alexander; Glymour, Clark

    2016-01-01

    Using Gebharter's (2014) representation, we consider aspects of the problem of discovering the structure of unmeasured sub-mechanisms when the variables in those sub-mechanisms have not been measured. Exploiting an early insight of Sober's (1998), we provide a correct algorithm for identifying latent, endogenous structure—sub-mechanisms—for a restricted class of structures. The algorithm can be merged with other methods for discovering causal relations among unmeasured variables, and feedback relations between measured variables and unobserved causes can sometimes be learned. PMID:27313331

  18. Hardy's argument and successive spin-s measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahanj, Ali

    2010-07-15

We consider a hidden-variable theoretic description of successive measurements of noncommuting spin observables on an input spin-s state. In this scenario, the hidden-variable theory leads to a Hardy-type argument that quantum predictions violate. We show that the maximum probability of success of Hardy's argument in quantum theory is (1/2)^{4s}, which is more than in the spatial case.

  19. Sparse covariance estimation in heterogeneous samples*

    PubMed Central

    Rodríguez, Abel; Lenkoski, Alex; Dobra, Adrian

    2015-01-01

    Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogenous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era. PMID:26925189

  20. Estimating the periodic components of a biomedical signal through inverse problem modelling and Bayesian inference with sparsity enforcing prior

    NASA Astrophysics Data System (ADS)

    Dumitru, Mircea; Djafari, Ali-Mohammad

    2015-01-01

The recent developments in chronobiology call for an analysis of the periodic components of signals expressing biological rhythms, and a precise estimation of the periodic components vector is required. The classical approaches, based on FFT methods, are inefficient considering the particularities of the data (short length). In this paper we propose a new method using sparsity prior information (a reduced number of non-zero components). The considered law is the Student-t distribution, viewed as the marginal distribution of an Infinite Gaussian Scale Mixture (IGSM) defined via a hidden variable representing the inverse variances and modelled as a Gamma distribution. The hyperparameters are modelled using conjugate priors, i.e. Inverse Gamma distributions. The expression of the joint posterior law of the unknown periodic components vector, hidden variables and hyperparameters is obtained, and the unknowns are then estimated via Joint Maximum A Posteriori (JMAP) and Posterior Mean (PM). For the PM estimator, the expression of the posterior law is approximated by a separable one via the Bayesian Variational Approximation (BVA), using the Kullback-Leibler (KL) divergence. Finally, we show results on synthetic data in cancer treatment applications.
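The IGSM construction mentioned above can be sketched directly: drawing a precision from a Gamma distribution and then a Gaussian with that precision yields Student-t draws. The degrees of freedom and sample size below are illustrative choices, not values from the paper:

```python
import random
import statistics

def sample_student_t(nu, n, seed=0):
    # Student-t via an Infinite Gaussian Scale Mixture:
    # precision lam ~ Gamma(shape=nu/2, rate=nu/2), then x | lam ~ N(0, 1/lam)
    rng = random.Random(seed)
    xs = []
    for _ in range(n):
        lam = rng.gammavariate(nu / 2, 2.0 / nu)  # gammavariate takes (shape, scale)
        xs.append(rng.gauss(0.0, (1.0 / lam) ** 0.5))
    return xs

xs = sample_student_t(nu=5, n=200_000)
heavy_tail_var = statistics.pvariance(xs)  # theory: nu/(nu-2) = 5/3 for nu = 5
```

Marginalizing the Gamma-distributed precision out of the Gaussian is exactly what makes the Student-t a sparsity-enforcing prior: it concentrates mass near zero while keeping heavy tails.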

  1. Implementation of neural network for color properties of polycarbonates

    NASA Astrophysics Data System (ADS)

    Saeed, U.; Ahmad, S.; Alsadi, J.; Ross, D.; Rizvi, G.

    2014-05-01

In the present paper, the applicability of artificial neural networks (ANN) to the color properties of plastics is investigated. The neural networks toolbox of Matlab 6.5 is used to develop and test the ANN model on a personal computer. An optimal design is completed for 10, 12, 14, 16, 18 and 20 hidden neurons on a single hidden layer with five different algorithms: batch gradient descent (GD), batch variable learning rate (GDX), resilient back-propagation (RP), scaled conjugate gradient (SCG), and Levenberg-Marquardt (LM), in the feed-forward back-propagation neural network model. The training data for the ANN are obtained from experimental measurements. There were twenty-two inputs, including resins, additives and pigments, while the three tristimulus color values L*, a* and b* formed the output layer. Statistical analysis in terms of root-mean-squared (RMS) error, absolute fraction of variance (R squared), and mean square error is used to investigate the performance of the ANN. The LM algorithm with fourteen neurons on the hidden layer in the feed-forward back-propagation ANN model showed the best results in the present study. The degree of accuracy of the ANN model in reducing errors proved acceptable in all statistical analyses. It was concluded that ANN provides a feasible method for error reduction in predicting specific color tristimulus values.
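The feed-forward part of the described architecture can be sketched as one tanh hidden layer mapping the 22 formulation inputs to the three tristimulus outputs. The weights below are random placeholders, not the trained values from the paper:

```python
import math
import random

def forward(x, W1, b1, W2, b2):
    # one tanh hidden layer, linear output layer (L*, a*, b*)
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

n_in, n_hidden, n_out = 22, 14, 3   # 14 hidden neurons performed best above
rng = random.Random(0)
W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
W2 = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [0.0] * n_out

lab = forward([0.1] * n_in, W1, b1, W2, b2)  # one hypothetical formulation
```

Training (by LM, GD, etc.) then amounts to adjusting W1, b1, W2, b2 to minimize the RMS error between predicted and measured L*, a*, b*.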

  2. Quasi-Bell inequalities from symmetrized products of noncommuting qubit observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamel, Omar E.; Fleming, Graham R.

Noncommuting observables cannot be simultaneously measured; however, under local hidden variable models, they must simultaneously hold premeasurement values, implying the existence of a joint probability distribution. We study the joint distributions of noncommuting observables on qubits, with possible criteria of positivity and the Fréchet bounds limiting the joint probabilities, concluding that the latter may be negative. We use symmetrization, justified heuristically and then more carefully via the Moyal characteristic function, to find the quantum operator corresponding to the product of noncommuting observables. This is then used to construct Quasi-Bell inequalities, Bell inequalities containing products of noncommuting observables, on two qubits. These inequalities place limits on the local hidden variable models that define joint probabilities for noncommuting observables. We also found that the Quasi-Bell inequalities have a quantum-to-classical violation as high as 3/2 on two qubits, higher than conventional Bell inequalities. Our result demonstrates the theoretical importance of noncommutativity in the nonlocality of quantum mechanics and provides an insightful generalization of Bell inequalities.

  3. Quasi-Bell inequalities from symmetrized products of noncommuting qubit observables

    DOE PAGES

    Gamel, Omar E.; Fleming, Graham R.

    2017-05-01

Noncommuting observables cannot be simultaneously measured; however, under local hidden variable models, they must simultaneously hold premeasurement values, implying the existence of a joint probability distribution. We study the joint distributions of noncommuting observables on qubits, with possible criteria of positivity and the Fréchet bounds limiting the joint probabilities, concluding that the latter may be negative. We use symmetrization, justified heuristically and then more carefully via the Moyal characteristic function, to find the quantum operator corresponding to the product of noncommuting observables. This is then used to construct Quasi-Bell inequalities, Bell inequalities containing products of noncommuting observables, on two qubits. These inequalities place limits on the local hidden variable models that define joint probabilities for noncommuting observables. We also found that the Quasi-Bell inequalities have a quantum-to-classical violation as high as 3/2 on two qubits, higher than conventional Bell inequalities. Our result demonstrates the theoretical importance of noncommutativity in the nonlocality of quantum mechanics and provides an insightful generalization of Bell inequalities.

  4. Hidden Statistics of Schroedinger Equation

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2011-01-01

Work was carried out to determine the mathematical origin of randomness in quantum mechanics and to create a hidden statistics of the Schrödinger equation; i.e., to expose the transitional stochastic process as a "bridge" to the quantum world. The governing equations of hidden statistics would preserve such properties of quantum physics as superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods.

  5. Mathematical and physical meaning of the Bell inequalities

    NASA Astrophysics Data System (ADS)

    Santos, Emilio

    2016-09-01

It is shown that the Bell inequalities are closely related to the triangle inequalities involving distance functions amongst pairs of random variables with values {0, 1}. A hidden variables model may be defined as a mapping between a set of quantum projection operators and a set of random variables. The model is noncontextual if there is a joint probability distribution. The Bell inequalities are necessary conditions for its existence. The inequalities are most relevant when measurements are performed at space-like separation, thus showing a conflict between quantum mechanics and local realism (Bell's theorem). The relations of the Bell inequalities with contextuality, the Kochen-Specker theorem, and quantum entanglement are briefly discussed.
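The triangle inequality behind this connection has a pointwise origin that can be checked exhaustively: for {0, 1}-valued variables, 1[a ≠ c] ≤ 1[a ≠ b] + 1[b ≠ c] for every joint assignment, so the distance d(X, Y) = P(X ≠ Y) obeys the triangle inequality under any joint distribution (the noncontextual hidden-variables case):

```python
from itertools import product

# Enumerate all 8 joint outcomes of three {0,1}-valued variables and look
# for a pointwise violation of 1[a != c] <= 1[a != b] + 1[b != c].
# None exists, so the inequality survives averaging over any joint
# probability distribution.
violations = [(a, b, c)
              for a, b, c in product((0, 1), repeat=3)
              if int(a != c) > int(a != b) + int(b != c)]
```

Quantum correlations violate the Bell inequalities precisely because no single joint distribution over all the projection operators exists, so this pointwise argument cannot be run.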

  6. Speakable and Unspeakable in Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Bell, J. S.; Aspect, Introduction by Alain

    2004-06-01

    List of papers on quantum philosophy by J. S. Bell; Preface; Acknowledgements; Introduction by Alain Aspect; 1. On the problem of hidden variables in quantum mechanics; 2. On the Einstein-Rosen-Podolsky paradox; 3. The moral aspects of quantum mechanics; 4. Introduction to the hidden-variable question; 5. Subject and object; 6. On wave packet reduction in the Coleman-Hepp model; 7. The theory of local beables; 8. Locality in quantum mechanics: reply to critics; 9. How to teach special relativity; 10. Einstein-Podolsky-Rosen experiments; 11. The measurement theory of Everett and de Broglie's pilot wave; 12. Free variables and local causality; 13. Atomic-cascade photons and quantum-mechanical nonlocality; 14. de Broglie-Bohm delayed choice double-slit experiments and density matrix; 15. Quantum mechanics for cosmologists; 16. Bertlmann's socks and the nature of reality; 17. On the impossible pilot wave; 18. Speakable and unspeakable in quantum mechanics; 19. Beables for quantum field theory; 20. Six possible worlds of quantum mechanics; 21. EPR correlations and EPR distributions; 22. Are there quantum jumps?; 23. Against 'measurement'; 24. La Nouvelle cuisine.

  7. Python Environment for Bayesian Learning: Inferring the Structure of Bayesian Networks from Knowledge and Data

    PubMed Central

    Shah, Abhik; Woolf, Peter

    2009-01-01

    Summary In this paper, we introduce pebl, a Python library and application for learning Bayesian network structure from data and prior knowledge that provides features unmatched by alternative software packages: the ability to use interventional data, flexible specification of structural priors, modeling with hidden variables and exploitation of parallel processing. PMID:20161541

  8. A Bell-type theorem without hidden variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stapp, Henry P.

    2003-09-12

    It is shown that no theory that satisfies certain premises can exclude faster-than-light influences. The premises include neither the existence of hidden variables nor counterfactual definiteness, nor any premise that effectively entails the general existence of outcomes of unperformed local measurements. All the premises are compatible with Copenhagen philosophy and the principles and predictions of relativistic quantum field theory. The present proof is contrasted with an earlier one with the same objective.

  9. Using Bayesian Nonparametric Hidden Semi-Markov Models to Disentangle Affect Processes during Marital Interaction

    PubMed Central

    Griffin, William A.; Li, Xun

    2016-01-01

    Sequential affect dynamics generated during the interaction of intimate dyads, such as married couples, are associated with a cascade of effects—some good and some bad—on each partner, close family members, and other social contacts. Although the effects are well documented, the probabilistic structures associated with micro-social processes connected to the varied outcomes remain enigmatic. Using extant data we developed a method of classifying and subsequently generating couple dynamics using a Hierarchical Dirichlet Process Hidden semi-Markov Model (HDP-HSMM). Our findings indicate that several key aspects of existing models of marital interaction are inadequate: affect state emissions and their durations, along with the expected variability differences between distressed and nondistressed couples are present but highly nuanced; and most surprisingly, heterogeneity among highly satisfied couples necessitate that they be divided into subgroups. We review how this unsupervised learning technique generates plausible dyadic sequences that are sensitive to relationship quality and provide a natural mechanism for computational models of behavioral and affective micro-social processes. PMID:27187319

  10. Zipf exponent of trajectory distribution in the hidden Markov model

    NASA Astrophysics Data System (ADS)

    Bochkarev, V. V.; Lerner, E. Yu

    2014-03-01

This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov chains and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.
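The object under study, the rank-ordered frequency list of observation trajectories, can be computed exactly for a toy 2-state HMM via the forward algorithm. The transition and emission probabilities below are hypothetical, chosen only to illustrate the construction:

```python
from itertools import product

# Toy 2-state HMM: hidden transition matrix A, emission matrix B over
# the alphabet {'a', 'b'}, uniform initial distribution pi.
A = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}
B = {(0, 'a'): 0.7, (0, 'b'): 0.3, (1, 'a'): 0.4, (1, 'b'): 0.6}
pi = {0: 0.5, 1: 0.5}

def traj_prob(obs):
    # forward algorithm: sum over all hidden paths emitting obs
    alpha = {s: pi[s] * B[(s, obs[0])] for s in (0, 1)}
    for o in obs[1:]:
        alpha = {s: sum(alpha[r] * A[(r, s)] for r in (0, 1)) * B[(s, o)]
                 for s in (0, 1)}
    return sum(alpha.values())

# Rank-ordered trajectory probabilities for length-8 trajectories:
# the decay rate of this list is what a Zipf exponent describes.
freq_list = sorted((traj_prob(w) for w in product('ab', repeat=8)),
                   reverse=True)
```

Plotting log-probability against log-rank of such a list is the standard way to read off a power (Zipf) exponent, when one exists.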

  11. A composite model for the 750 GeV diphoton excess

    DOE PAGES

    Harigaya, Keisuke; Nomura, Yasunori

    2016-03-14

We study a simple model in which the recently reported 750 GeV diphoton excess arises from a composite pseudo Nambu-Goldstone boson — hidden pion — produced by gluon fusion and decaying into two photons. The model only introduces an extra hidden gauge group at the TeV scale with a vectorlike quark in the bifundamental representation of the hidden and standard model gauge groups. We calculate the masses of all the hidden pions and analyze their experimental signatures and constraints. We find that two colored hidden pions must be near the current experimental limits, and hence will be probed in the near future. We study the physics of would-be stable particles — the composite states that do not decay purely by the hidden and standard model gauge dynamics — in detail, including constraints from cosmology. We discuss possible theoretical structures above the TeV scale, e.g. conformal dynamics and supersymmetry, and their phenomenological implications. We also discuss an extension of the minimal model in which there is an extra hidden quark that is singlet under the standard model and has a mass smaller than the hidden dynamical scale. This provides two standard model singlet hidden pions that can both be viewed as diphoton/diboson resonances produced by gluon fusion. We discuss several scenarios in which these (and other) resonances can be used to explain various excesses seen in the LHC data.

  12. A Top-down versus a Bottom-up Hidden-variables Description of the Stern-Gerlach Experiment

    NASA Astrophysics Data System (ADS)

    Arsenijević, M.; Jeknić-Dugić, J.; Dugić, M.

We employ the Stern-Gerlach experiment to highlight the basics of a minimalist, non-interpretational top-down approach to quantum foundations. Certain benefits of the "quantum structural studies" (QSS) highlighted here are detected and discussed. While the top-down approach can be described without making any reference to the fundamental structure of a closed system, the hidden variables (HV) theory à la Bohm proves to be more subtle than it is typically regarded.

  13. Evaluating disease management programme effectiveness: an introduction to instrumental variables.

    PubMed

    Linden, Ariel; Adams, John L

    2006-04-01

    This paper introduces the concept of instrumental variables (IVs) as a means of providing an unbiased estimate of treatment effects in evaluating disease management (DM) programme effectiveness. Model development is described using zip codes as the IV. Three diabetes DM outcomes were evaluated: annual diabetes costs, emergency department (ED) visits and hospital days. Both ordinary least squares (OLS) and IV estimates showed a significant treatment effect for diabetes costs (P = 0.011) but neither model produced a significant treatment effect for ED visits. However, the IV estimate showed a significant treatment effect for hospital days (P = 0.006) whereas the OLS model did not. These results illustrate the utility of IV estimation when the OLS model is sensitive to the confounding effect of hidden bias.
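The IV idea can be sketched with a synthetic hidden confounder (an illustrative setup, not the paper's zip-code instrument or diabetes data): OLS absorbs the confounding into its slope, while the instrument-based ratio estimator recovers the true treatment effect:

```python
import random

def cov(xs, ys):
    # covariance with population normalization (fine for estimation here)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

# u is a hidden confounder; z is a valid instrument: correlated with the
# treatment x but independent of u, and affecting y only through x.
rng = random.Random(42)
n = 100_000
z = [rng.gauss(0, 1) for _ in range(n)]
u = [rng.gauss(0, 1) for _ in range(n)]
x = [zi + ui for zi, ui in zip(z, u)]            # treatment, confounded by u
y = [2 * xi + 3 * ui for xi, ui in zip(x, u)]    # true treatment effect = 2

beta_ols = cov(x, y) / cov(x, x)  # biased: converges to 3.5 here, not 2
beta_iv = cov(z, y) / cov(z, x)   # consistent for the true effect
```

This is the single-instrument, single-regressor case; with several instruments or covariates, the same logic becomes two-stage least squares.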

  14. Phenomenology of pure-gauge hidden valleys at hadron colliders

    NASA Astrophysics Data System (ADS)

    Juknevich, Jose E.

Expectations for new physics at the LHC have been greatly influenced by the Hierarchy problem of electroweak symmetry breaking. However, there are reasons to believe that the LHC may still discover new physics that is not directly related to the resolution of the Hierarchy problem. Ensuring that such physics does not go undiscovered requires a precise understanding of how new phenomena will reveal themselves in the current and future generation of particle-physics experiments. In this thesis I argue for the plausibility that the standard model is coupled, through new massive charged or colored particles, to a hidden sector whose low-energy dynamics is controlled by a pure Yang-Mills theory, with no light matter. Such a sector would have numerous metastable "hidden glueballs" built from the hidden gluons. These states would decay to particles of the standard model. I consider the phenomenology of this scenario, and find formulas for the lifetimes and branching ratios of the most important of these states. The dominant decays are to two standard model gauge bosons or to fermion-antifermion pairs, or by radiative decays with photon or Higgs emission, leading to jet- and photon-rich signals with occasional leptons. The presence of effective operators of different mass dimensions, often competing with each other, together with a great diversity of states, leads to great variability in the lifetimes and decay modes of the hidden glueballs. I find that most of the operators considered in this work are not heavily constrained by precision electroweak physics, therefore leaving plenty of room in the parameter space to be explored by future experiments at the LHC. Finally, I discuss several issues in the phenomenology of the new massive particles as well as an outlook for experimental searches.

  15. Use of Partial Least Squares improves the efficacy of removing unwanted variability in differential expression analyses based on RNA-Seq data.

    PubMed

    Chakraborty, Sutirtha

    2018-05-26

RNA-Seq technology has revolutionized gene expression profiling by generating read count data that measure transcript abundances for each queried gene across multiple experimental subjects. On the downside, the underlying technical artefacts and hidden biological profiles of the samples generate a wide variety of latent effects that may distort the actual transcript/gene expression signals. Standard normalization techniques fail to correct for these hidden variables and lead to flawed downstream analyses. In this work I demonstrate the use of Partial Least Squares (implemented in the R package 'SVAPLSseq') to correct for traces of extraneous variability in RNA-Seq data. A thorough comparative analysis of the PLS-based method is presented along with some of the other popularly used approaches for latent variable correction in RNA-Seq. Overall, the method is found to achieve a substantially improved estimation of the hidden effect signatures in the RNA-Seq transcriptome expression landscape compared with other available techniques. Copyright © 2017. Published by Elsevier Inc.

  16. Memory and foraging theory: Chimpanzee utilization of optimality heuristics in the rank-order recovery of hidden foods

    PubMed Central

    Sayers, Ken; Menzel, Charles R.

    2012-01-01

    Many models from foraging theory and movement ecology assume that resources are encountered randomly. If food locations, types and values are retained in memory, however, search time could be significantly reduced, with concurrent effects on biological fitness. Despite this, little is known about what specific characteristics of foods, particularly those relevant to profitability, nonhuman animals can remember. Building upon previous observations, we hypothesized that chimpanzees (Pan troglodytes), after observing foods being hidden in a large wooded test area they could not enter, and after long delays, would direct (through gesture and vocalization) experimentally naïve humans to the reward locations in an order that could be predicted beforehand by the spatial and physical characteristics of those items. In the main experiment, various quantities of almonds, both in and out of shells and sealed in transparent bags, were hidden in the test area. The chimpanzees later directed searchers to those items in a nonrandom order related to quantity, shell presence/absence, and the distance they were hidden from the subject. The recovery sequences were closely related to the actual e/h profitability of the foods. Predicted recovery orders, based on the energetic value of almonds and independently-measured, individual-specific expected pursuit and processing times, were closely related to observed recovery orders. We argue that the information nonhuman animals possess regarding their environment can be extensive, and that further comparative study is vital for incorporating realistic cognitive variables into models of foraging and movement. PMID:23226837
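The e/h profitability ranking used to predict recovery orders can be sketched with hypothetical item values (energy e and expected pursuit-plus-processing time h; these numbers are illustrative, not the measured almond data):

```python
# Hypothetical hidden items: (label, energy e, expected pursuit +
# processing time h). Shells raise h, larger quantities raise e.
foods = [('large_bag_no_shell', 100, 10),
         ('small_bag_no_shell', 30, 5),
         ('small_bag_in_shell', 30, 25)]

def predicted_recovery_order(items):
    # predicted sequence: recover the highest e/h profitability first
    return [label for label, e, h in
            sorted(items, key=lambda t: t[1] / t[2], reverse=True)]

order = predicted_recovery_order(foods)
```

Under classical foraging theory this is the rate-maximizing heuristic; the study's finding is that observed recovery sequences tracked such a ranking.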

  17. Building Simple Hidden Markov Models. Classroom Notes

    ERIC Educational Resources Information Center

    Ching, Wai-Ki; Ng, Michael K.

    2004-01-01

    Hidden Markov models (HMMs) are widely used in bioinformatics, speech recognition and many other areas. This note presents HMMs via the framework of classical Markov chain models. A simple example is given to illustrate the model. An estimation method for the transition probabilities of the hidden states is also discussed.
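For a fully observed state sequence (the classical Markov chain framing the note starts from), the maximum-likelihood estimate of the transition probabilities reduces to row-normalized bigram counts, a sketch of the simplest case before hidden states require iterative methods:

```python
from collections import Counter

def transition_mle(states):
    # MLE of transition probabilities from an observed state sequence:
    # count consecutive pairs, then normalize by visits to each source state
    pairs = Counter(zip(states, states[1:]))
    totals = Counter(states[:-1])
    return {(i, j): c / totals[i] for (i, j), c in pairs.items()}

P = transition_mle(list("AABBAB"))
```

When the states are hidden, these counts are replaced by expected counts computed from the observations, which is exactly what the Baum-Welch (EM) procedure for HMMs iterates.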

  18. Modeling Protein Expression and Protein Signaling Pathways

    PubMed Central

    Telesca, Donatello; Müller, Peter; Kornblau, Steven M.; Suchard, Marc A.; Ji, Yuan

    2015-01-01

    High-throughput functional proteomic technologies provide a way to quantify the expression of proteins of interest. Statistical inference centers on identifying the activation state of proteins and their patterns of molecular interaction formalized as dependence structure. Inference on dependence structure is particularly important when proteins are selected because they are part of a common molecular pathway. In that case, inference on dependence structure reveals properties of the underlying pathway. We propose a probability model that represents molecular interactions at the level of hidden binary latent variables that can be interpreted as indicators for active versus inactive states of the proteins. The proposed approach exploits available expert knowledge about the target pathway to define an informative prior on the hidden conditional dependence structure. An important feature of this prior is that it provides an instrument to explicitly anchor the model space to a set of interactions of interest, favoring a local search approach to model determination. We apply our model to reverse-phase protein array data from a study on acute myeloid leukemia. Our inference identifies relevant subpathways in relation to the unfolding of the biological process under study. PMID:26246646

  19. Final Technical Report for "Collaborative Research: Regional climate-change projections through next-generation empirical and dynamical models"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, A.W.; Ghil, M.; Kravtsov, K.

    2011-04-08

This project was a continuation of previous work under DOE CCPP funding in which we developed a twin approach of non-homogeneous hidden Markov models (NHMMs) and coupled ocean-atmosphere (O-A) intermediate-complexity models (ICMs) to identify the potentially predictable modes of climate variability, and to investigate their impacts on the regional scale. We have developed a family of latent-variable NHMMs to simulate historical records of daily rainfall, and used them to downscale seasonal predictions. We have also developed empirical mode reduction (EMR) models for gaining insight into the underlying dynamics in observational data and general circulation model (GCM) simulations. Using coupled O-A ICMs, we have identified a new mechanism of interdecadal climate variability, involving the midlatitude oceans' mesoscale eddy field and a nonlinear, persistent atmospheric response to the oceanic anomalies. A related decadal mode is also identified, associated with the ocean's thermohaline circulation. The goal of the continuation was to build on these ICM results and NHMM/EMR model developments and software to strengthen two key pillars of support for the development and application of climate models for climate change projections on time scales of decades to centuries, namely: (a) dynamical and theoretical understanding of decadal-to-interdecadal oscillations and their predictability; and (b) an interface from climate models to applications, in order to inform societal adaptation strategies to climate change at the regional scale, including model calibration, correction, downscaling and, most importantly, assessment and interpretation of spread and uncertainties in multi-model ensembles.
Our main results from the grant consist of extensive further development of the hidden Markov models for rainfall simulation and downscaling, specifically within the non-stationary climate change context, together with the development of parallelized software; application of NHMMs to downscaling of rainfall projections over India; identification and analysis of decadal climate signals in data and models; and studies of climate variability in terms of the dynamics of atmospheric flow regimes. Each of these project components is elaborated on below, followed by a list of publications resulting from the grant.

  20. Final Technical Report for "Collaborative Research. Regional climate-change projections through next-generation empirical and dynamical models"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kravtsov, S.; Robertson, Andrew W.; Ghil, Michael

    2011-04-08

This project was a continuation of previous work under DOE CCPP funding in which we developed a twin approach of non-homogeneous hidden Markov models (NHMMs) and coupled ocean-atmosphere (O-A) intermediate-complexity models (ICMs) to identify the potentially predictable modes of climate variability, and to investigate their impacts on the regional scale. We have developed a family of latent-variable NHMMs to simulate historical records of daily rainfall, and used them to downscale seasonal predictions. We have also developed empirical mode reduction (EMR) models for gaining insight into the underlying dynamics in observational data and general circulation model (GCM) simulations. Using coupled O-A ICMs, we have identified a new mechanism of interdecadal climate variability, involving the midlatitude oceans' mesoscale eddy field and a nonlinear, persistent atmospheric response to the oceanic anomalies. A related decadal mode is also identified, associated with the ocean's thermohaline circulation. The goal of the continuation was to build on these ICM results and NHMM/EMR model developments and software to strengthen two key pillars of support for the development and application of climate models for climate change projections on time scales of decades to centuries, namely: (a) dynamical and theoretical understanding of decadal-to-interdecadal oscillations and their predictability; and (b) an interface from climate models to applications, in order to inform societal adaptation strategies to climate change at the regional scale, including model calibration, correction, downscaling and, most importantly, assessment and interpretation of spread and uncertainties in multi-model ensembles.
Our main results from the grant consist of extensive further development of the hidden Markov models for rainfall simulation and downscaling, specifically within the non-stationary climate change context, together with the development of parallelized software; application of NHMMs to downscaling of rainfall projections over India; identification and analysis of decadal climate signals in data and models; and studies of climate variability in terms of the dynamics of atmospheric flow regimes. Each of these project components is elaborated on below, followed by a list of publications resulting from the grant.

  1. A Self-Organizing Incremental Spatiotemporal Associative Memory Networks Model for Problems with Hidden State

    PubMed Central

    2016-01-01

    Identifying the hidden state is important for solving problems with hidden state. We prove that any deterministic partially observable Markov decision process (POMDP) can be represented by a minimal, looping hidden state transition model, and we propose a heuristic algorithm for constructing such a state transition model. A new spatiotemporal associative memory network (STAMN) is proposed to realize the minimal, looping hidden state transition model. The STAMN uses neuroactivity decay to realize short-term memory, connection weights between nodes to represent long-term memory, and presynaptic potentials with a synchronized activation mechanism to perform identification and recall simultaneously. Finally, we give empirical illustrations of the STAMN and compare its performance with that of other methods. PMID:27891146

  2. Multi-Observation Continuous Density Hidden Markov Models for Anomaly Detection in Full Motion Video

    DTIC Science & Technology

    2012-06-01

    [Front-matter excerpt, garbled in extraction; recoverable items: figure listings for "response profiles," "Method for measuring angular movement versus average direction of movement," "Method for calculating Angular Deviation, Θ," and "HMM produced by K-Means Learning for agent H"; defined variables include Angular Deviation (a random variable: the difference in heading, in degrees, from the overall direction of movement over the sequence) and S: Speed.]

  3. Reply to "Comment on 'Mutually unbiased bases, orthogonal Latin squares, and hidden-variable models'"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paterek, Tomasz; Dakic, Borivoje; Brukner, Caslav

    In this Reply to the preceding Comment by Hall and Rao [Phys. Rev. A 83, 036101 (2011)], we motivate the terminology of our original paper and point out that further research is needed in order to (dis)prove the claimed link between orthogonal Latin squares whose order is a power of a prime and mutually unbiased bases.

  4. Driving style recognition method using braking characteristics based on hidden Markov model

    PubMed Central

    Wu, Chaozhong; Lyu, Nengchao; Huang, Zhen

    2017-01-01

    Given the advantages of hidden Markov models in handling time-series data, and in order to identify driving style, three driving styles (aggressive, moderate, and mild) are modeled with hidden Markov models based on driver braking characteristics. First, braking impulse and the maximum braking unit area of the vacuum booster within a certain time window are collected from braking operations, and general-braking and emergency-braking characteristics are extracted to encode the braking behavior. Second, the braking-behavior observation sequences are used to set the initial parameters of the hidden Markov models, and a hidden Markov model is trained for each driving style so that observation sequences can be judged against it. Third, the maximum likelihood logarithm is computed from the observable parameters to discriminate among the styles. The recognition accuracy of the algorithm is verified through experiments and compared against two common pattern-recognition algorithms. The results show that driving style discrimination based on the hidden Markov model algorithm achieves effective discrimination of driving style. PMID:28837580
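    The recognition scheme described in this abstract, training one HMM per driving style and assigning a new braking sequence to the style whose model yields the largest log-likelihood, can be sketched with the standard forward algorithm. All parameter values below are invented for illustration; they are not the paper's trained models.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Forward algorithm in log space: log P(obs | HMM with start probs pi,
    transition matrix A, emission matrix B)."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) + np.log(B[:, o])
    return np.logaddexp.reduce(alpha)

# Invented two-state models; observation symbols: 0 = gentle brake, 1 = hard brake.
pi = np.array([0.5, 0.5])
A_aggr = np.array([[0.6, 0.4], [0.5, 0.5]])
B_aggr = np.array([[0.2, 0.8], [0.4, 0.6]])   # "aggressive" emits hard brakes often
A_mild = np.array([[0.9, 0.1], [0.7, 0.3]])
B_mild = np.array([[0.9, 0.1], [0.6, 0.4]])   # "mild" emits gentle brakes often

seq = [1, 1, 0, 1, 1]                          # mostly hard braking
scores = {"aggressive": log_likelihood(seq, pi, A_aggr, B_aggr),
          "mild": log_likelihood(seq, pi, A_mild, B_mild)}
style = max(scores, key=scores.get)            # maximum-likelihood style
```

    The same pattern extends to three or more styles: train a model per style and take the argmax of the per-model log-likelihoods.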

  5. Discriminative latent models for recognizing contextual group activities.

    PubMed

    Lan, Tian; Wang, Yang; Yang, Weilong; Robinovitch, Stephen N; Mori, Greg

    2012-08-01

    In this paper, we go beyond recognizing the actions of individuals and focus on group activities. This is motivated by the observation that human actions are rarely performed in isolation; the contextual information of what other people in the scene are doing provides a useful cue for understanding high-level activities. We propose a novel framework for recognizing group activities which jointly captures the group activity, the individual person actions, and the interactions among them. Two types of contextual information, group-person interaction and person-person interaction, are explored in a latent variable framework. In particular, we propose three different approaches to model the person-person interaction. One approach is to explore the structures of person-person interaction. Unlike most previous latent structured models, which assume a predefined structure for the hidden layer, e.g., a tree structure, we treat the structure of the hidden layer as a latent variable and implicitly infer it during learning and inference. The second approach explores person-person interaction at the feature level. We introduce a new feature representation called the action context (AC) descriptor. The AC descriptor encodes information about not only the action of an individual person in the video, but also the behavior of other people nearby. The third approach combines the above two. Our experimental results demonstrate the benefit of using contextual information for disambiguating group activities.

  6. Discriminative Latent Models for Recognizing Contextual Group Activities

    PubMed Central

    Lan, Tian; Wang, Yang; Yang, Weilong; Robinovitch, Stephen N.; Mori, Greg

    2012-01-01

    In this paper, we go beyond recognizing the actions of individuals and focus on group activities. This is motivated by the observation that human actions are rarely performed in isolation; the contextual information of what other people in the scene are doing provides a useful cue for understanding high-level activities. We propose a novel framework for recognizing group activities which jointly captures the group activity, the individual person actions, and the interactions among them. Two types of contextual information, group-person interaction and person-person interaction, are explored in a latent variable framework. In particular, we propose three different approaches to model the person-person interaction. One approach is to explore the structures of person-person interaction. Unlike most previous latent structured models, which assume a predefined structure for the hidden layer, e.g., a tree structure, we treat the structure of the hidden layer as a latent variable and implicitly infer it during learning and inference. The second approach explores person-person interaction at the feature level. We introduce a new feature representation called the action context (AC) descriptor. The AC descriptor encodes information about not only the action of an individual person in the video, but also the behavior of other people nearby. The third approach combines the above two. Our experimental results demonstrate the benefit of using contextual information for disambiguating group activities. PMID:22144516

  7. Supervised artificial neural network-based method for conversion of solar radiation data (case study: Algeria)

    NASA Astrophysics Data System (ADS)

    Laidi, Maamar; Hanini, Salah; Rezrazi, Ahmed; Yaiche, Mohamed Redha; El Hadj, Abdallah Abdallah; Chellali, Farouk

    2017-04-01

    In this study, a backpropagation artificial neural network (BP-ANN) model is used as an alternative approach to predict solar radiation on tilted surfaces (SRT) from a number of variables involved in the physical process: the latitude of the site, mean temperature and relative humidity, Linke turbidity factor and Angstrom coefficient, extraterrestrial solar radiation, solar radiation measured on horizontal surfaces (SRH), and solar zenith angle. Experimental solar radiation data from 13 stations spread all over Algeria throughout the year 2004 were used for training/validation and testing of the artificial neural networks (ANNs), and one further station was used to assess the interpolation capability of the designed ANN. The ANN model was trained, validated, and tested using 60, 20, and 20 % of all data, respectively. The configuration 8-35-1 (8 inputs, 35 hidden, and 1 output neurons) gave excellent agreement between predictions and experimental data during the test stage, with a coefficient of determination of 0.99 and a root mean squared error of 5.75 Wh/m2, using a three-layer feedforward backpropagation neural network with the Levenberg-Marquardt training algorithm and hyperbolic tangent sigmoid and linear transfer functions at the hidden and output layers, respectively. This model could be used by researchers or scientists to design high-efficiency solar devices, which are usually tilted at an optimum angle to increase the solar radiation incident on the surface.
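    The 8-35-1 configuration described above (8 inputs, 35 tanh hidden neurons, 1 linear output) corresponds to the following forward pass. The weights here are random placeholders, not the trained Levenberg-Marquardt solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# 8 inputs -> 35 tanh hidden neurons -> 1 linear output (the 8-35-1 configuration).
W1, b1 = 0.1 * rng.normal(size=(35, 8)), np.zeros(35)
W2, b2 = 0.1 * rng.normal(size=(1, 35)), np.zeros(1)

def predict(x):
    """Forward pass: hyperbolic tangent sigmoid hidden layer, linear output layer."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

x = rng.normal(size=8)   # stand-ins for latitude, temperature, humidity, etc.
y = predict(x)           # predicted tilted-surface radiation (untrained weights here)
```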

  8. From Wang-Chen System with Only One Stable Equilibrium to a New Chaotic System Without Equilibrium

    NASA Astrophysics Data System (ADS)

    Pham, Viet-Thanh; Wang, Xiong; Jafari, Sajad; Volos, Christos; Kapitaniak, Tomasz

    2017-06-01

    The Wang-Chen system, which has only one stable equilibrium yet exhibits coexisting hidden attractors, has attracted increasing interest due to its striking features. In this work, the effect of state feedback on the Wang-Chen system is investigated by introducing a further state variable, and a new chaotic system without any equilibrium is obtained. We believe the system is an interesting example illustrating the conversion of hidden attractors with one stable equilibrium to hidden attractors without equilibrium.

  9. Hidden sector dark matter and the Galactic Center gamma-ray excess: a closer look

    DOE PAGES

    Escudero, Miguel; Witte, Samuel J.; Hooper, Dan

    2017-11-24

    Stringent constraints from direct detection experiments and the Large Hadron Collider motivate us to consider models in which the dark matter does not directly couple to the Standard Model, but that instead annihilates into hidden sector particles which ultimately decay through small couplings to the Standard Model. We calculate the gamma-ray emission generated within the context of several such hidden sector models, including those in which the hidden sector couples to the Standard Model through the vector portal (kinetic mixing with Standard Model hypercharge), through the Higgs portal (mixing with the Standard Model Higgs boson), or both. In each case, we identify broad regions of parameter space in which the observed spectrum and intensity of the Galactic Center gamma-ray excess can easily be accommodated, while providing an acceptable thermal relic abundance and remaining consistent with all current constraints. Here, we also point out that cosmic-ray antiproton measurements could potentially discriminate some hidden sector models from more conventional dark matter scenarios.

  10. Hidden Sector Dark Matter and the Galactic Center Gamma-Ray Excess: A Closer Look

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Escudero, Miguel; Witte, Samuel J.; Hooper, Dan

    2017-09-20

    Stringent constraints from direct detection experiments and the Large Hadron Collider motivate us to consider models in which the dark matter does not directly couple to the Standard Model, but that instead annihilates into hidden sector particles which ultimately decay through small couplings to the Standard Model. We calculate the gamma-ray emission generated within the context of several such hidden sector models, including those in which the hidden sector couples to the Standard Model through the vector portal (kinetic mixing with Standard Model hypercharge), through the Higgs portal (mixing with the Standard Model Higgs boson), or both. In each case, we identify broad regions of parameter space in which the observed spectrum and intensity of the Galactic Center gamma-ray excess can easily be accommodated, while providing an acceptable thermal relic abundance and remaining consistent with all current constraints. We also point out that cosmic-ray antiproton measurements could potentially discriminate some hidden sector models from more conventional dark matter scenarios.

  11. Hidden sector dark matter and the Galactic Center gamma-ray excess: a closer look

    NASA Astrophysics Data System (ADS)

    Escudero, Miguel; Witte, Samuel J.; Hooper, Dan

    2017-11-01

    Stringent constraints from direct detection experiments and the Large Hadron Collider motivate us to consider models in which the dark matter does not directly couple to the Standard Model, but that instead annihilates into hidden sector particles which ultimately decay through small couplings to the Standard Model. We calculate the gamma-ray emission generated within the context of several such hidden sector models, including those in which the hidden sector couples to the Standard Model through the vector portal (kinetic mixing with Standard Model hypercharge), through the Higgs portal (mixing with the Standard Model Higgs boson), or both. In each case, we identify broad regions of parameter space in which the observed spectrum and intensity of the Galactic Center gamma-ray excess can easily be accommodated, while providing an acceptable thermal relic abundance and remaining consistent with all current constraints. We also point out that cosmic-ray antiproton measurements could potentially discriminate some hidden sector models from more conventional dark matter scenarios.

  12. Hidden sector dark matter and the Galactic Center gamma-ray excess: a closer look

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Escudero, Miguel; Witte, Samuel J.; Hooper, Dan

    Stringent constraints from direct detection experiments and the Large Hadron Collider motivate us to consider models in which the dark matter does not directly couple to the Standard Model, but that instead annihilates into hidden sector particles which ultimately decay through small couplings to the Standard Model. We calculate the gamma-ray emission generated within the context of several such hidden sector models, including those in which the hidden sector couples to the Standard Model through the vector portal (kinetic mixing with Standard Model hypercharge), through the Higgs portal (mixing with the Standard Model Higgs boson), or both. In each case, we identify broad regions of parameter space in which the observed spectrum and intensity of the Galactic Center gamma-ray excess can easily be accommodated, while providing an acceptable thermal relic abundance and remaining consistent with all current constraints. Here, we also point out that cosmic-ray antiproton measurements could potentially discriminate some hidden sector models from more conventional dark matter scenarios.

  13. Steering, or maybe why Einstein did not go all the way to Bellʼs argument

    NASA Astrophysics Data System (ADS)

    Werner, R. F.

    2014-10-01

    It is shown that a main source of conflict between Einstein and the mainstream quantum physicists was his insistence that wave functions, like classical probability distributions, do not refer to individual particles and, in particular, do not describe individual systems completely. The EPR paper was written to argue for this position. By aiming at showing that wave functions are unsuitable as local hidden variables, the authors failed to see that a slight extension could have ruled out such local hidden variables in general. As background for this analysis of the EPR argument the notion of steering is described, and a version of the Bell argument is proved which emphasizes non-local signalling aspects. Finally, some background is given concerning a well-known paper by the present author, which is celebrating 25 years this year, and in which the first non-steering models were constructed. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘50 years of Bell’s theorem’.

  14. A hidden variable in shear transformation zone volume versus Poisson's ratio relation in metallic glasses

    NASA Astrophysics Data System (ADS)

    Kim, S. Y.; Oh, H. S.; Park, E. S.

    2017-10-01

    Herein, we elucidate a hidden variable in the shear transformation zone (STZ) volume (Ω) versus Poisson's ratio (ν) relation and clarify the correlation between STZ characteristics and the plasticity of metallic glasses (MGs). On the basis of the cooperative shear model and atomic stress theories, we carefully formulate Ω as a function of molar volume (Vm) and ν. The twofold trend in Ω versus ν is attributed to a relatively large variation of Vm compared to that of ν, as well as an inverse relation between Vm and ν. Indeed, the derived equation reveals that the number of atoms in an STZ, rather than Ω itself, is the microstructural characteristic most closely related to plasticity, since it reflects the preference between cooperative shearing and the generation of volume strain fluctuation under stress. The results deepen our understanding of the correlation between microscopic behaviors (STZ activation) and macroscopic properties (plasticity) in MGs and enable a quantitative approach to associating various STZ-related macroscopic behaviors with intrinsic properties of MGs.

  15. High pressure air compressor valve fault diagnosis using feedforward neural networks

    NASA Astrophysics Data System (ADS)

    James Li, C.; Yu, Xueli

    1995-09-01

    Feedforward neural networks (FNNs) are developed and implemented to classify a four-stage high pressure air compressor into one of the following conditions: baseline, suction or exhaust valve faults. These FNNs are used for the compressor's automatic condition monitoring and fault diagnosis. Measurements of 39 variables are obtained under different baseline conditions and third-stage suction and exhaust valve faults. These variables include pressures and temperatures at all stages, voltage between phase a and phase b, voltage between phase b and phase c, total three-phase real power, cooling water flow rate, etc. To reduce the number of variables, the amount of their discriminatory information is quantified by scattering matrices to identify statistically significant ones. Measurements of the selected variables are then used by a fully automatic structural and weight learning algorithm to construct three-layer FNNs to classify the compressor's condition. This learning algorithm requires neither guesses of initial weight values nor the number of neurons in the hidden layer of an FNN. It takes an incremental approach in which a hidden neuron is trained by exemplars and then augmented to the existing network. These exemplars are then made orthogonal to the newly identified hidden neuron and are subsequently used for the training of the next hidden neuron. This refinement continues until a desired accuracy is reached. After the neural networks are established, novel measurements from various conditions that haven't been previously seen by the FNNs are then used to evaluate their ability in fault diagnosis. The trained neural networks provide very accurate diagnosis for suction and discharge valve defects.
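    The orthogonalization step in the incremental algorithm described above, making the exemplars orthogonal to each newly identified hidden neuron before training the next, can be sketched as a simple projection. The vector v standing in for a hidden neuron's response direction is a made-up illustration:

```python
import numpy as np

def orthogonalize(X, v):
    """Project each exemplar (row of X) onto the complement of direction v."""
    u = v / np.linalg.norm(v)
    return X - np.outer(X @ u, u)

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 5))       # exemplars
v = rng.normal(size=5)             # response direction of the new hidden neuron
X2 = orthogonalize(X, v)

# After projection the exemplars carry no component along v,
# so the next hidden neuron is trained on the residual information only.
residual = np.abs(X2 @ (v / np.linalg.norm(v))).max()
```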

  16. Variables in psychology: a critique of quantitative psychology.

    PubMed

    Toomela, Aaro

    2008-09-01

    Mind is hidden from direct observation; it can be studied only by observing behavior. Variables encode information about behaviors. There is no one-to-one correspondence between behaviors and mental events underlying the behaviors, however. In order to understand mind it would be necessary to understand exactly what information is represented in variables. This aim cannot be reached after variables are already encoded. Therefore, statistical data analysis can be very misleading in studies aimed at understanding mind that underlies behavior. In this article different kinds of information that can be represented in variables are described. It is shown how informational ambiguity of variables leads to problems of theoretically meaningful interpretation of the results of statistical data analysis procedures in terms of hidden mental processes. Reasons are provided why presence of dependence between variables does not imply causal relationship between events represented by variables and absence of dependence between variables cannot rule out the causal dependence of events represented by variables. It is concluded that variable-psychology has a very limited range of application for the development of a theory of mind-psychology.

  17. Data-Driven Model Uncertainty Estimation in Hydrologic Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S.; Moradkhani, H.; Marshall, L.; Sharma, A.; Geenens, G.

    2018-02-01

    The increasing availability of earth observations necessitates mathematical methods to optimally combine such data with hydrologic models. Several algorithms exist for such purposes, under the umbrella of data assimilation (DA). However, DA methods are often applied in a suboptimal fashion for complex real-world problems, due largely to several practical implementation issues. One such issue is error characterization, which is known to be critical for a successful assimilation. Mischaracterized errors lead to suboptimal forecasts, and in the worst case, to degraded estimates even compared to the no assimilation case. Model uncertainty characterization has received little attention relative to other aspects of DA science. Traditional methods rely on subjective, ad hoc tuning factors or parametric distribution assumptions that may not always be applicable. We propose a novel data-driven approach (named SDMU) to model uncertainty characterization for DA studies where (1) the system states are partially observed and (2) minimal prior knowledge of the model error processes is available, except that the errors display state dependence. It includes an approach for estimating the uncertainty in hidden model states, with the end goal of improving predictions of observed variables. The SDMU is therefore suited to DA studies where the observed variables are of primary interest. Its efficacy is demonstrated through a synthetic case study with low-dimensional chaotic dynamics and a real hydrologic experiment for one-day-ahead streamflow forecasting. In both experiments, the proposed method leads to substantial improvements in the hidden states and observed system outputs over a standard method involving perturbation with Gaussian noise.

  18. Subtleties of Hidden Quantifiers in Implication

    ERIC Educational Resources Information Center

    Shipman, Barbara A.

    2016-01-01

    Mathematical conjectures and theorems are most often of the form P(x) ⇒ Q(x), meaning ∀x, P(x) ⇒ Q(x). The hidden quantifier ∀x is crucial in understanding the implication as a statement with a truth value. Here P(x) and Q(x) alone are only predicates, without truth values, since they contain unquantified variables. But standard textbook…
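    The role of the hidden quantifier can be made concrete by checking P(x) ⇒ Q(x) over an explicit finite domain; the predicates below are illustrative, not from the article:

```python
def implies(p, q):
    """Material implication: p => q."""
    return (not p) or q

# Illustrative predicates: P(x) = "x is divisible by 4", Q(x) = "x is even".
P = lambda x: x % 4 == 0
Q = lambda x: x % 2 == 0

domain = range(100)
# The hidden quantifier made explicit: forall x in domain, P(x) => Q(x).
claim = all(implies(P(x), Q(x)) for x in domain)      # every multiple of 4 is even
converse = all(implies(Q(x), P(x)) for x in domain)   # fails, e.g. at x = 2
```

    Without the quantifier, P(x) ⇒ Q(x) has no truth value at all; with it, the claim is true while its converse is falsified by a single counterexample.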

  19. Geophysical Investigations at Hidden Dam, Raymond, California Flow Simulations

    USGS Publications Warehouse

    Minsley, Burke J.; Ikard, Scott

    2010-01-01

    Numerical flow modeling and analysis of observation-well data at Hidden Dam are carried out to supplement recent geophysical field investigations at the site (Minsley and others, 2010). This work also is complementary to earlier seepage-related studies at Hidden Dam documented by Cedergren (1980a, b). Known seepage areas on the northwest right abutment area of the downstream side of the dam were documented by Cedergren (1980a, b). Subsequent to the 1980 seepage study, a drainage blanket with a sub-drain system was installed to mitigate downstream seepage. Flow net analysis provided by Cedergren (1980a, b) suggests that the primary seepage mechanism involves flow through the dam foundation due to normal reservoir pool elevations, which results in upflow that intersects the ground surface in several areas on the downstream side of the dam. In addition to the reservoir pool elevations and downstream surface topography, flow is also controlled by the existing foundation geology as well as the presence or absence of a horizontal drain in the downstream portion of the dam. The current modeling study is aimed at quantifying how variability in dam and foundation hydrologic properties influences seepage as a function of reservoir stage. Flow modeling is implemented using the COMSOL Multiphysics software package, which solves the partially saturated flow equations in a two-dimensional (2D) cross-section of Hidden Dam that also incorporates true downstream topography. Use of the COMSOL software package provides a more quantitative approach than the flow net analysis by Cedergren (1980a, b), and allows for rapid evaluation of the influence of various parameters such as reservoir level, dam structure and geometry, and hydrogeologic properties of the dam and foundation materials. Historical observation-well data are used to help validate the flow simulations by comparing observed and predicted water levels for a range of reservoir elevations.
The flow models are guided by, and discussed in the context of, the geophysical work (Minsley and others, 2010) where appropriate.

  20. Algorithmic information theory and the hidden variable question

    NASA Technical Reports Server (NTRS)

    Fuchs, Christopher

    1992-01-01

    The admissibility of certain nonlocal hidden-variable theories is explained via information theory. Consider a pair of Stern-Gerlach devices with fixed nonparallel orientations that periodically perform spin measurements on identically prepared pairs of electrons in the singlet spin state. Suppose the outcomes are recorded as binary strings l and r (with l_n and r_n denoting their n-length prefixes). The hidden-variable theories considered here require that there exists a recursive function which may be used to transform l_n into r_n for any n. This note demonstrates that such a theory cannot reproduce all the statistical predictions of quantum mechanics. Specifically, consider an ensemble of outcome pairs (l, r). From the associated probability measure, the Shannon entropies H_n and H̄_n for strings l_n and pairs (l_n, r_n) may be formed. It is shown that such a theory requires |H̄_n - H_n| to remain bounded, contrasting the quantum mechanical prediction that it grows with n.
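    The entropies H_n and H̄_n in this abstract are ordinary Shannon entropies of the distributions over n-length prefixes and prefix pairs. A minimal empirical sketch (toy outcome strings, not actual measurement data):

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Empirical Shannon entropy (in bits) of a list of observed values."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy ensemble of (l, r) outcome-string pairs, prefixes of length n = 2.
pairs = [("01", "10"), ("01", "10"), ("00", "11"), ("10", "01")]
H_n = shannon_entropy([l for l, r in pairs])   # entropy of the l-prefixes
H_bar_n = shannon_entropy(pairs)               # joint entropy of (l, r) pairs
# Here r is fully determined by l, so the joint entropy equals H_n and
# |H_bar_n - H_n| stays at zero -- the bounded behavior that, per the note,
# quantum statistics do not obey as n grows.
```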

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoban, Matty J.; Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford OX1 3QD; Wallman, Joel J.

    We consider general settings of Bell inequality experiments with many parties, where each party chooses from a finite number of measurement settings, each with a finite number of outcomes. We investigate the constraints that Bell inequalities place upon the correlations possible in local hidden variable theories using a geometrical picture of correlations. We show that local hidden variable theories can be characterized in terms of limited computational expressiveness, which allows us to characterize families of Bell inequalities. The limited computational expressiveness for many settings (each with many outcomes) generalizes previous results about the many-party situation in which each party chooses between two possible measurements (each with two outcomes). Using this computational picture we present generalizations of the Popescu-Rohrlich nonlocal box for many parties and nonbinary inputs and outputs at each site. Finally, we comment on the effect of preprocessing of measurement data in our generalized setting and show that it becomes problematic outside of the binary setting, in that it allows local hidden variable theories to simulate maximally nonlocal correlations such as those of these generalized Popescu-Rohrlich nonlocal boxes.
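    The Popescu-Rohrlich box mentioned above is defined by the correlation a ⊕ b = x ∧ y, which wins the CHSH game on every input pair, whereas the best deterministic local (hidden-variable) strategy wins on at most 3 of 4. A small check (the pr_box function below merely generates the correlated outputs; it is not a local realization):

```python
from itertools import product

def chsh_win_rate(strategy):
    """Fraction of input pairs (x, y) with a XOR b == x AND y."""
    wins = sum((a ^ b) == (x & y)
               for x, y in product([0, 1], repeat=2)
               for a, b in [strategy(x, y)])
    return wins / 4

# PR-box correlation: outputs always satisfy a XOR b = x AND y.
# (Producing outputs this way needs both inputs at once; no local strategy can.)
pr_box = lambda x, y: (0, x & y)

# A best deterministic local strategy: both parties always output 0.
local = lambda x, y: (0, 0)

pr_rate = chsh_win_rate(pr_box)       # wins on all 4 input pairs
local_rate = chsh_win_rate(local)     # wins on 3 of 4, the classical maximum
```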

  2. Hidden Markov Model-Based CNV Detection Algorithms for Illumina Genotyping Microarrays.

    PubMed

    Seiser, Eric L; Innocenti, Federico

    2014-01-01

    Somatic alterations in DNA copy number have been well studied in numerous malignancies, yet the role of germline DNA copy number variation in cancer is still emerging. Genotyping microarrays generate allele-specific signal intensities to determine genotype, but may also be used to infer DNA copy number using additional computational approaches. Numerous tools have been developed to analyze Illumina genotype microarray data for copy number variant (CNV) discovery, although commonly utilized algorithms freely available to the public employ approaches based upon the use of hidden Markov models (HMMs). QuantiSNP, PennCNV, and GenoCN utilize HMMs with six copy number states but vary in how transition and emission probabilities are calculated. Performance of these CNV detection algorithms has been shown to be variable between both genotyping platforms and data sets, although HMM approaches generally outperform other current methods. Low sensitivity is prevalent with HMM-based algorithms, suggesting the need for continued improvement in CNV detection methodologies.
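    The HMM-based callers above (QuantiSNP, PennCNV, GenoCN) decode a sequence of copy-number states from signal intensities. A toy six-state Viterbi decoder in that spirit, with invented Gaussian emissions and transition probabilities rather than any tool's actual model:

```python
import numpy as np

# Toy six-state CNV HMM: hidden state = copy number 0..5.
n_states = 6
pi = np.full(n_states, 1.0 / n_states)
A = np.full((n_states, n_states), 0.02)
np.fill_diagonal(A, 0.90)            # copy number is locally stable along the genome
A /= A.sum(axis=1, keepdims=True)

def emission_logpdf(obs, states, sigma=0.3):
    """Gaussian log-density of an intensity-like signal centered on the copy number."""
    return -0.5 * ((obs - states) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def viterbi(observations):
    """Most likely copy-number path for a sequence of intensity observations."""
    obs = np.asarray(observations, dtype=float)
    states = np.arange(n_states)
    delta = np.log(pi) + emission_logpdf(obs[0], states)
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + np.log(A)     # scores[i, j]: from state i to j
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + emission_logpdf(o, states)
    path = [int(delta.argmax())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return path[::-1]

# Signal drifts from a diploid level (~2) down to a one-copy deletion (~1).
path = viterbi([2.1, 1.9, 2.0, 1.1, 0.9, 1.0])
```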

  3. Reciprocal Markov Modeling of Feedback Mechanisms Between Emotion and Dietary Choice Using Experience-Sampling Data.

    PubMed

    Lu, Ji; Pan, Junhao; Zhang, Qiang; Dubé, Laurette; Ip, Edward H

    2015-01-01

    With intensively collected longitudinal data, recent advances in the experience-sampling method (ESM) benefit social science empirical research but also pose important methodological challenges. As traditional statistical models are not generally well equipped to analyze a system of variables that contains feedback loops, this paper proposes an extended hidden Markov model for the reciprocal relationship between momentary emotion and eating behavior. The paper revisits an ESM data set (Lu, Huet, & Dube, 2011) that observed 160 participants' food consumption and momentary emotions 6 times per day over 10 days. Focusing on the feedback loop between mood and meal-healthiness decisions, the proposed reciprocal Markov model (RMM) can accommodate both hidden states ("general" emotional states: positive vs. negative) and observed states (meal: healthier, the same as, or less healthy than usual) without presuming independence between observations or smooth trajectories of mood or behavior changes. The results of the RMM analyses illustrate the reciprocal chains of meal consumption and mood, as well as the effect of contextual factors that moderate the interrelationship between eating and emotion. A simulation experiment that generated data consistent with the empirical study further demonstrated that the procedure is promising in terms of recovering the parameters.

  4. Recovering hidden diagonal structures via non-negative matrix factorization with multiple constraints.

    PubMed

    Yang, Xi; Han, Guoqiang; Cai, Hongmin; Song, Yan

    2017-03-31

    Revealing data with intrinsically diagonal block structures is particularly useful for analyzing groups of highly correlated variables. Earlier research based on non-negative matrix factorization (NMF) has shown it to be effective in representing such data by decomposing the observed data into two factors, where, from a linear algebra perspective, one factor is considered the feature and the other the expansion loading. If the data are sampled from multiple independent subspaces, the loading factor possesses a diagonal structure under an ideal matrix decomposition. However, the standard NMF method and its variants have not been reported to exploit this type of data via direct estimation. To address this issue, a non-negative matrix factorization model with multiple constraints is proposed in this paper. The constraints include a sparsity norm on the feature matrix and a total variation norm on each column of the loading matrix. The proposed model is shown to be capable of efficiently recovering diagonal block structures hidden in observed samples. An efficient numerical algorithm based on the alternating direction method of multipliers is proposed for optimizing the new model. Compared with several benchmark models, the proposed method performs robustly and effectively on simulated and real biological data.

  5. Hidden charged dark matter and chiral dark radiation

    NASA Astrophysics Data System (ADS)

    Ko, P.; Nagata, Natsumi; Tang, Yong

    2017-10-01

    In the light of recent possible tensions in the Hubble constant H0 and the structure growth rate σ8 between the Planck and other measurements, we investigate a hidden-charged dark matter (DM) model where DM interacts with hidden chiral fermions, which are charged under the hidden SU(N) and U(1) gauge interactions. The symmetries of this model ensure that these fermions are massless. The DM in this model, which is a Dirac fermion and singlet under the hidden SU(N), is also assumed to be charged under the U(1) gauge symmetry, through which it can interact with the chiral fermions. Below the confinement scale of SU(N), the hidden quark condensate spontaneously breaks the U(1) gauge symmetry such that there remains a discrete symmetry, which accounts for the stability of DM. This condensate also breaks a flavor symmetry in this model, and Nambu-Goldstone bosons associated with this flavor symmetry appear below the confinement scale. The hidden U(1) gauge boson and hidden quarks/Nambu-Goldstone bosons are components of dark radiation (DR) above/below the confinement scale. These light fields increase the effective number of neutrinos by δNeff ≃ 0.59 above the confinement scale for N = 2, resolving the tension in the measurements of the Hubble constant by Planck and Hubble Space Telescope if the confinement scale is ≲1 eV. DM and DR continuously scatter with each other via the hidden U(1) gauge interaction, which suppresses the matter power spectrum and results in a smaller structure growth rate. The DM sector couples to the Standard Model sector through the exchange of a real singlet scalar mixing with the Higgs boson, which makes it possible to probe our model in DM direct detection experiments. Variants of this model are also discussed, which may offer alternative ways to investigate this scenario.

  6. Bayesian state space models for dynamic genetic network construction across multiple tissues.

    PubMed

    Liang, Yulan; Kelemen, Arpad

    2016-08-01

    Construction of gene-gene interaction networks and potential pathways is a challenging and important problem in genomic research for complex diseases, and estimating the dynamic changes of the temporal correlations and the non-stationarity is key in this process. In this paper, we develop dynamic state space models with hierarchical Bayesian settings to tackle this challenge, inferring the dynamic profiles and genetic networks associated with disease treatments. We treat both the stochastic transition matrix and the observation matrix as time-variant and include temporal correlation structures in the covariance matrix estimation in the multivariate Bayesian state space models. The unevenly spaced short time courses with unobserved time points are treated as hidden state variables. Hierarchical Bayesian approaches with various prior and hyper-prior models, using Markov chain Monte Carlo and Gibbs sampling algorithms, are employed to estimate the model parameters and the hidden state variables. We apply the proposed hierarchical Bayesian state space models to time course Affymetrix data sets from multiple tissues (liver, skeletal muscle, and kidney) following corticosteroid (CS) drug administration. Both simulation and real data analysis results show that the genomic changes over time and the gene-gene interactions in response to CS treatment are well captured by the proposed models. The proposed dynamic hierarchical Bayesian state space modeling approach could be expanded and applied to other large-scale genomic data, such as next generation sequencing (NGS) data combined with real-time and time-varying electronic health record (EHR) data, for more comprehensive and robust systematic and network-based analysis, in order to transform big biomedical data into predictions and diagnostics for precision medicine and personalized healthcare with better decision making and patient outcomes.

  7. Probabilistic hazard assessment for skin sensitization potency by dose–response modeling using feature elimination instead of quantitative structure–activity relationships

    PubMed Central

    McKim, James M.; Hartung, Thomas; Kleensang, Andre; Sá-Rocha, Vanessa

    2016-01-01

    Supervised learning methods promise to improve integrated testing strategies (ITS), but must be adjusted to handle high dimensionality and dose–response data. ITS approaches are currently fueled by the increasing mechanistic understanding of adverse outcome pathways (AOP) and the development of tests reflecting these mechanisms. Simple approaches to combining skin sensitization data sets, such as weight of evidence, fail due to problems of information redundancy and high dimensionality. The problem is further amplified when potency (dose/response) information for hazards is to be estimated. Skin sensitization currently serves as the foster child for AOP and ITS development, as legislative pressures combined with a very good mechanistic understanding of contact dermatitis have led to test development and relatively large high-quality data sets. We curated such a data set and applied a recursive variable selection algorithm to evaluate the information available through in silico, in chemico and in vitro assays. Chemical similarity alone could not cluster chemicals’ potency, and in vitro models consistently ranked high in recursive feature elimination, which allows reducing the number of tests included in an ITS. Next, we applied a hidden Markov model that takes advantage of an intrinsic inter-relationship among the local lymph node assay classes, i.e. the monotonic connection between local lymph node assay class and dose. The dose-informed random forest/hidden Markov model was superior to the dose-naive random forest model on all data sets. Although the balanced accuracy improvement may seem small, this obscures the actual reduction in misclassifications, as the dose-informed hidden Markov model strongly reduced "false-negatives" (i.e. extreme sensitizers classified as non-sensitizers) on all data sets. PMID:26046447

  8. Probabilistic hazard assessment for skin sensitization potency by dose-response modeling using feature elimination instead of quantitative structure-activity relationships.

    PubMed

    Luechtefeld, Thomas; Maertens, Alexandra; McKim, James M; Hartung, Thomas; Kleensang, Andre; Sá-Rocha, Vanessa

    2015-11-01

    Supervised learning methods promise to improve integrated testing strategies (ITS), but must be adjusted to handle high dimensionality and dose-response data. ITS approaches are currently fueled by the increasing mechanistic understanding of adverse outcome pathways (AOP) and the development of tests reflecting these mechanisms. Simple approaches to combining skin sensitization data sets, such as weight of evidence, fail due to problems of information redundancy and high dimensionality. The problem is further amplified when potency (dose/response) information for hazards is to be estimated. Skin sensitization currently serves as the foster child for AOP and ITS development, as legislative pressures combined with a very good mechanistic understanding of contact dermatitis have led to test development and relatively large high-quality data sets. We curated such a data set and applied a recursive variable selection algorithm to evaluate the information available through in silico, in chemico and in vitro assays. Chemical similarity alone could not cluster chemicals' potency, and in vitro models consistently ranked high in recursive feature elimination, which allows reducing the number of tests included in an ITS. Next, we applied a hidden Markov model that takes advantage of an intrinsic inter-relationship among the local lymph node assay classes, i.e. the monotonic connection between local lymph node assay class and dose. The dose-informed random forest/hidden Markov model was superior to the dose-naive random forest model on all data sets. Although the balanced accuracy improvement may seem small, this obscures the actual reduction in misclassifications, as the dose-informed hidden Markov model strongly reduced "false-negatives" (i.e. extreme sensitizers classified as non-sensitizers) on all data sets. Copyright © 2015 John Wiley & Sons, Ltd.

  9. Analysis of complex neural circuits with nonlinear multidimensional hidden state models

    PubMed Central

    Friedman, Alexander; Slocum, Joshua F.; Tyulmankov, Danil; Gibb, Leif G.; Altshuler, Alex; Ruangwises, Suthee; Shi, Qinru; Toro Arana, Sebastian E.; Beck, Dirk W.; Sholes, Jacquelyn E. C.; Graybiel, Ann M.

    2016-01-01

    A universal need in understanding complex networks is the identification of individual information channels and their mutual interactions under different conditions. In neuroscience, our premier example, networks made up of billions of nodes dynamically interact to bring about thought and action. Granger causality is a powerful tool for identifying linear interactions, but handling nonlinear interactions remains an unmet challenge. We present a nonlinear multidimensional hidden state (NMHS) approach that achieves interaction strength analysis and decoding of networks with nonlinear interactions by including latent state variables for each node in the network. We compare NMHS to Granger causality in analyzing neural circuit recordings and simulations, improvised music, and sociodemographic data. We conclude that NMHS significantly extends the scope of analyses of multidimensional, nonlinear networks, notably in coping with the complexity of the brain. PMID:27222584

  10. A constraint-based evolutionary learning approach to the expectation maximization for optimal estimation of the hidden Markov model for speech signal modeling.

    PubMed

    Huda, Shamsul; Yearwood, John; Togneri, Roberto

    2009-02-01

    This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm, the CEL-EM, for estimation of the HMM in automatic speech recognition (ASR) that combines a constraint-based evolutionary algorithm (EA) with EM. The novelty of the CEL-EM is that it applies to constraint-based models, such as the HMM, that carry many constraints and large numbers of parameters and are conventionally estimated with EM. Two constraint-based versions of the CEL-EM with different fusion strategies are proposed for better estimation of the HMM in ASR. The first uses a traditional constraint-handling mechanism from EA; the other transforms the constrained optimization problem into an unconstrained one using Lagrange multipliers. The fusion strategies follow a staged approach in which EM is invoked periodically, after the EA has run for a specified period, so that the hybrid algorithm retains the global sampling capability of the EA. A variable initialization approach (VIA) based on variable segmentation is also proposed to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that the CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).

  11. Modelling proteins' hidden conformations to predict antibiotic resistance

    NASA Astrophysics Data System (ADS)

    Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.

    2016-10-01

    TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM's specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models' prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design.

  12. A fast hidden line algorithm for plotting finite element models

    NASA Technical Reports Server (NTRS)

    Jones, G. K.

    1982-01-01

    Effective plotting of finite element models requires the use of fast hidden line plot techniques that provide interactive response. A high speed hidden line technique was developed to facilitate the plotting of NASTRAN finite element models. Based on testing using 14 different models, the new hidden line algorithm (JONES-D) appears to be very fast: its speed equals that for normal (all lines visible) plotting and when compared to other existing methods it appears to be substantially faster. It also appears to be very reliable: no plot errors were observed using the new method to plot NASTRAN models. The new algorithm was made part of the NPLOT NASTRAN plot package and was used by structural analysts for normal production tasks.

  13. A TWO-STATE MIXED HIDDEN MARKOV MODEL FOR RISKY TEENAGE DRIVING BEHAVIOR

    PubMed Central

    Jackson, John C.; Albert, Paul S.; Zhang, Zhiwei

    2016-01-01

    This paper proposes a joint model for longitudinal binary and count outcomes. We apply the model to a unique longitudinal study of teen driving where risky driving behavior and the occurrence of crashes or near crashes are measured prospectively over the first 18 months of licensure. Of scientific interest is relating the two processes and predicting crash and near crash outcomes. We propose a two-state mixed hidden Markov model whereby the hidden state characterizes the mean for the joint longitudinal crash/near crash outcomes and elevated g-force events which are a proxy for risky driving. Heterogeneity is introduced in both the conditional model for the count outcomes and the hidden process using a shared random effect. An estimation procedure is presented using the forward–backward algorithm along with adaptive Gaussian quadrature to perform numerical integration. The estimation procedure readily yields hidden state probabilities as well as providing for a broad class of predictors. PMID:27766124
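As a concrete illustration of the quadrature step used to integrate a shared random effect out of the likelihood, the following sketch applies plain (non-adaptive) five-point Gauss-Hermite quadrature to a case with a closed-form answer. It illustrates the numerical tool only, not the authors' estimation code:

```python
import math

# Five-point Gauss-Hermite nodes and weights (physicists' convention,
# for integrals of the form  int f(x) exp(-x^2) dx).
nodes = [-2.020182870456086, -0.958572464613819, 0.0,
         0.958572464613819, 2.020182870456086]
weights = [0.019953242059046, 0.393619323152241, 0.945308720482942,
           0.393619323152241, 0.019953242059046]

def gauss_hermite_expectation(f, sigma):
    """Approximate E[f(b)] for b ~ N(0, sigma^2) via the substitution
    b = sqrt(2) * sigma * x, which maps the Gaussian density onto exp(-x^2)."""
    return sum(w * f(math.sqrt(2.0) * sigma * x)
               for x, w in zip(nodes, weights)) / math.sqrt(math.pi)

# Known answer: for b ~ N(0, 1), E[exp(b)] = exp(1/2).
approx = gauss_hermite_expectation(math.exp, 1.0)
exact = math.exp(0.5)
```

Even five nodes reproduce this expectation to roughly four decimal places; adaptive variants recentre and rescale the nodes around the integrand's mode for better accuracy per node.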

  14. Multitask TSK fuzzy system modeling by mining intertask common hidden structure.

    PubMed

    Jiang, Yizhang; Chung, Fu-Lai; Ishibuchi, Hisao; Deng, Zhaohong; Wang, Shitong

    2015-03-01

    The classical fuzzy system modeling methods implicitly assume that data are generated from a single task, which is not in accordance with many practical scenarios where data are acquired from the perspective of multiple tasks. Although one can build an individual fuzzy system model for each task, this individual modeling approach yields poor generalization because it ignores the hidden intertask correlation. To circumvent this shortcoming, we consider a general framework that preserves the independent information of each task while mining the hidden correlation information shared by all tasks in multitask fuzzy modeling. In this framework, a low-dimensional subspace (structure) is assumed to be shared among all tasks, and this subspace constitutes the hidden correlation information. Under this framework, a multitask Takagi-Sugeno-Kang (TSK) fuzzy system model called MTCS-TSK-FS (TSK-FS for multiple tasks with common hidden structure), based on the classical L2-norm TSK fuzzy system, is proposed in this paper. The proposed model not only takes advantage of the independent sample information of each task in the original space, but also effectively uses the intertask common hidden structure to enhance the generalization performance of the built fuzzy systems. Experiments on synthetic and real-world datasets demonstrate the applicability and distinctive performance of the proposed multitask fuzzy system model in multitask regression learning scenarios.

  15. Image segmentation using hidden Markov Gauss mixture models.

    PubMed

    Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M

    2007-07-01

    Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.

  16. Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2013-01-01

    Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.

  17. Adaptive quantification and longitudinal analysis of pulmonary emphysema with a hidden Markov measure field model.

    PubMed

    Hame, Yrjo; Angelini, Elsa D; Hoffman, Eric A; Barr, R Graham; Laine, Andrew F

    2014-07-01

    The extent of pulmonary emphysema is commonly estimated from CT scans by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols, and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the presented model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was applied on a longitudinal data set with 87 subjects and a total of 365 scans acquired with varying imaging protocols. The resulting emphysema estimates had very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. The generated emphysema delineations promise advantages for regional analysis of emphysema extent and progression.

  18. How transfer flights shape the structure of the airline network.

    PubMed

    Ryczkowski, Tomasz; Fronczak, Agata; Fronczak, Piotr

    2017-07-17

    In this paper, we analyse the gravity model in the global passenger air-transport network. We show that in the standard form, the model is inadequate for correctly describing the relationship between passenger flows and typical geo-economic variables that characterize connected countries. We propose a model for transfer flights that allows exploitation of these discrepancies in order to discover hidden subflows in the network. We illustrate its usefulness by retrieving the distance coefficient in the gravity model, which is one of the determinants of the globalization process. Finally, we discuss the correctness of the presented approach by comparing the distance coefficient to several well-known economic events.
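To make the distance-coefficient retrieval concrete, the sketch below fits the coefficient beta of a gravity model F_ij = G * M_i * M_j / d_ij^beta from synthetic flows. All masses, distances, and the values of G and beta are hypothetical; real estimation would include noise terms and mass exponents:

```python
import math

# Hypothetical country "masses" (e.g. GDP) and pairwise distances.
pairs = [(5.0, 3.0, 400.0), (5.0, 2.0, 1500.0), (3.0, 2.0, 800.0),
         (7.0, 5.0, 6000.0), (7.0, 3.0, 2500.0), (7.0, 2.0, 9000.0)]
G, beta_true = 0.01, 1.5
flows = [G * mi * mj / d ** beta_true for mi, mj, d in pairs]

# Dividing out the mass term makes log-flow linear in log-distance:
#   log(F / (M_i * M_j)) = log G - beta * log d,
# so ordinary least squares on one predictor recovers beta as minus the slope.
xs = [math.log(d) for _, _, d in pairs]
ys = [math.log(f / (mi * mj)) for (mi, mj, _), f in zip(pairs, flows)]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
beta_hat, log_G = -slope, ybar - slope * xbar
```

Because the synthetic flows are noise-free, the regression recovers beta and G exactly; deviations of real flows from this fit are the "discrepancies" the paper exploits to uncover hidden transfer subflows.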

  19. Learning and inference in a nonequilibrium Ising model with hidden nodes.

    PubMed

    Dunn, Benjamin; Roudi, Yasser

    2013-02-01

    We study inference and reconstruction of couplings in a partially observed kinetic Ising model. With hidden spins, calculating the likelihood of a sequence of observed spin configurations requires performing a trace over the configurations of the hidden ones. This, as we show, can be represented as a path integral. Using this representation, we demonstrate that systematic approximate inference and learning rules can be derived using dynamical mean-field theory. Although naive mean-field theory leads to an unstable learning rule, taking into account Gaussian corrections allows learning the couplings involving hidden nodes. It also improves learning of the couplings between the observed nodes compared to when hidden nodes are ignored.
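The sketch below shows coupling inference for the *fully observed* special case, where the exact log-likelihood gradient is available; the paper's contribution is the much harder partially observed setting handled via path integrals and mean-field theory. The couplings and simulation settings are hypothetical:

```python
import math
import random

random.seed(7)
N, T = 3, 1500
J_true = [[0.0, 0.5, -0.3],     # hypothetical true couplings
          [0.4, 0.0, 0.2],
          [-0.2, 0.3, 0.0]]

# Parallel Glauber dynamics: P(s_i(t+1) = +1) = 1 / (1 + exp(-2 * theta_i)),
# with local field theta_i(t) = sum_j J_ij s_j(t).
s = [random.choice((-1, 1)) for _ in range(N)]
traj = [s]
for _ in range(T):
    theta = [sum(J_true[i][j] * s[j] for j in range(N)) for i in range(N)]
    s = [1 if random.random() < 1.0 / (1.0 + math.exp(-2.0 * th)) else -1
         for th in theta]
    traj.append(s)

# Gradient ascent on the exact log-likelihood of the observed trajectory:
#   dL/dJ_ij = sum_t (s_i(t+1) - tanh(theta_i(t))) * s_j(t).
J = [[0.0] * N for _ in range(N)]
lr = 0.15 / T
for _ in range(150):
    grad = [[0.0] * N for _ in range(N)]
    for t in range(T):
        prev, nxt = traj[t], traj[t + 1]
        for i in range(N):
            th = sum(J[i][j] * prev[j] for j in range(N))
            d = nxt[i] - math.tanh(th)
            for j in range(N):
                grad[i][j] += d * prev[j]
    for i in range(N):
        for j in range(N):
            J[i][j] += lr * grad[i][j]

err = max(abs(J[i][j] - J_true[i][j]) for i in range(N) for j in range(N))
```

With hidden spins this gradient is no longer computable directly, which is exactly where the paper's mean-field approximations with Gaussian corrections come in.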

  20. Estimation of Hidden State Variables of the Intracranial System Using Constrained Nonlinear Kalman Filters

    PubMed Central

    Nenov, Valeriy; Bergsneider, Marvin; Glenn, Thomas C.; Vespa, Paul; Martin, Neil

    2007-01-01

    Impeded by the rigid skull, assessment of physiological variables of the intracranial system is difficult. A hidden state estimation approach is used in the present work to facilitate the estimation of unobserved variables from available clinical measurements including intracranial pressure (ICP) and cerebral blood flow velocity (CBFV). The estimation algorithm is based on a modified nonlinear intracranial mathematical model, whose parameters are first identified in an offline stage using a nonlinear optimization paradigm. Following the offline stage, an online filtering process is performed using a nonlinear Kalman filter (KF)-like state estimator that is equipped with a new way of deriving the Kalman gain satisfying the physiological constraints on the state variables. The proposed method is then validated by comparing different state estimation methods and input/output (I/O) configurations using simulated data. It is also applied to a set of CBFV, ICP and arterial blood pressure (ABP) signal segments from brain injury patients. The results indicated that the proposed constrained nonlinear KF achieved the best performance among the evaluated state estimators and that the state estimator combined with the I/O configuration that has ICP as the measured output can potentially be used to estimate CBFV continuously. Finally, the state estimator combined with the I/O configuration that has both ICP and CBFV as outputs can potentially estimate the lumped cerebral arterial radii, which are not measurable in a typical clinical environment. PMID:17281533
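A toy sketch of the constrained-filtering idea: a nonnegative hidden quantity is tracked from noisy measurements, and the physical constraint x >= 0 is enforced by projecting the updated estimate. Note the paper derives a constraint-satisfying Kalman gain rather than a post-hoc projection, and all numbers here are hypothetical:

```python
import random

random.seed(1)
true_x = 2.0                 # hidden constant to be estimated
q, r = 1e-4, 0.25            # process / measurement noise variances
x_est, p = -1.0, 1.0         # deliberately infeasible initial guess
history = []
for _ in range(200):
    y = true_x + random.gauss(0.0, 0.5)     # noisy measurement
    p += q                                  # predict (random-walk state model)
    k_gain = p / (p + r)                    # standard Kalman gain
    x_est += k_gain * (y - x_est)           # measurement update
    p *= 1.0 - k_gain
    x_est = max(x_est, 0.0)                 # project onto constraint x >= 0
    history.append(x_est)
```

Every filtered estimate respects the constraint, and the estimate converges toward the true value as measurements accumulate.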

  1. Experimental non-classicality of an indivisible quantum system.

    PubMed

    Lapkiewicz, Radek; Li, Peizhe; Schaeff, Christoph; Langford, Nathan K; Ramelow, Sven; Wieśniak, Marcin; Zeilinger, Anton

    2011-06-22

    In contrast to classical physics, quantum theory demands that not all properties can be simultaneously well defined; the Heisenberg uncertainty principle is a manifestation of this fact. Alternatives have been explored--notably theories relying on joint probability distributions or non-contextual hidden-variable models, in which the properties of a system are defined independently of their own measurement and any other measurements that are made. Various deep theoretical results imply that such theories are in conflict with quantum mechanics. Simpler cases demonstrating this conflict have been found and tested experimentally with pairs of quantum bits (qubits). Recently, an inequality satisfied by non-contextual hidden-variable models and violated by quantum mechanics for all states of two qubits was introduced and tested experimentally. A single three-state system (a qutrit) is the simplest system in which such a contradiction is possible; moreover, the contradiction cannot result from entanglement between subsystems, because such a three-state system is indivisible. Here we report an experiment with single photonic qutrits which provides evidence that no joint probability distribution describing the outcomes of all possible measurements--and, therefore, no non-contextual theory--can exist. Specifically, we observe a violation of the Bell-type inequality found by Klyachko, Can, Binicioğlu and Shumovsky. Our results illustrate a deep incompatibility between quantum mechanics and classical physics that cannot in any way result from entanglement.
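The Klyachko-Can-Binicioğlu-Shumovsky (KCBS) bound referenced above can be checked numerically. For five observables A_j = 2|v_j⟩⟨v_j| − I with cyclically adjacent vectors orthogonal, any non-contextual hidden-variable model obeys Σ_j ⟨A_j A_{j+1}⟩ ≥ −3, while a qutrit prepared in |ψ⟩ = (0, 0, 1) attains the quantum value 5 − 4√5 ≈ −3.944. This uses the standard pentagram construction, not the experimental data of the paper:

```python
import math

# KCBS pentagram vectors: five unit vectors on a cone around the z-axis with
# azimuthal spacing 4*pi/5; the opening angle is chosen so that cyclically
# adjacent vectors are orthogonal.
c2 = math.cos(math.pi / 5) / (1 + math.cos(math.pi / 5))   # cos^2(theta)
ct, st = math.sqrt(c2), math.sqrt(1 - c2)
vs = [(st * math.cos(4 * math.pi * j / 5),
       st * math.sin(4 * math.pi * j / 5),
       ct) for j in range(5)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Adjacent projectors satisfy P_j P_{j+1} = 0, so each correlator reduces to
#   <A_j A_{j+1}> = 1 - 2|<psi|v_j>|^2 - 2|<psi|v_{j+1}>|^2.
psi = (0.0, 0.0, 1.0)
total = sum(1 - 2 * dot(psi, vs[j]) ** 2 - 2 * dot(psi, vs[(j + 1) % 5]) ** 2
            for j in range(5))
```

The computed sum lands below −3, i.e. outside the range any non-contextual hidden-variable assignment can produce.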

  2. Modeling the contributions of global air temperature, synoptic-scale phenomena and soil moisture to near-surface static energy variability using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Pryor, Sara C.; Sullivan, Ryan C.; Schoof, Justin T.

    2017-12-01

    The static energy content of the atmosphere is increasing on a global scale, but exhibits important subglobal and subregional scales of variability and is a useful parameter for integrating the net effect of changes in the partitioning of energy at the surface and for improving understanding of the causes of so-called warming holes (i.e., locations with decreasing daily maximum air temperatures (T) or increasing trends of lower magnitude than the global mean). Further, measures of the static energy content (herein the equivalent potential temperature, θe) are more strongly linked to excess human mortality and morbidity than air temperature alone, and have great relevance in understanding causes of past heat-related excess mortality and making projections of possible future events that are likely to be associated with negative human health and economic consequences. New nonlinear statistical models for summertime daily maximum and minimum θe are developed and used to advance understanding of drivers of historical change and variability over the eastern USA. The predictor variables are an index of the daily global mean temperature, daily indices of the synoptic-scale meteorology derived from T and specific humidity (Q) at 850 and 500 hPa geopotential heights (Z), and spatiotemporally averaged soil moisture (SM). SM is particularly important in determining the magnitude of θe over regions that have previously been identified as exhibiting warming holes, confirming the key importance of SM in dictating the partitioning of net radiation into sensible and latent heat and dictating trends in near-surface T and θe. Consistent with our a priori expectations, models built using artificial neural networks (ANNs) outperform linear models that do not permit interaction of the predictor variables (global T, synoptic-scale meteorological conditions and SM). This is particularly marked in regions with high variability in minimum and maximum θe, where more complex models built using ANN with multiple hidden layers are better able to capture the day-to-day variability in θe and the occurrence of extreme maximum θe. Over the entire domain, the ANN with three hidden layers exhibits high accuracy in predicting maximum θe > 347 K. The median hit rate for maximum θe > 347 K is > 0.60, while the median false alarm rate is ≈ 0.08.

  3. Modelling proteins’ hidden conformations to predict antibiotic resistance

    PubMed Central

    Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.

    2016-01-01

    TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM’s specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models’ prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design. PMID:27708258

  4. Free energy and hidden barriers of the β-sheet structure of prion protein.

    PubMed

    Paz, S Alexis; Abrams, Cameron F

    2015-10-13

    On-the-fly free-energy parametrization is a new collective variable biasing approach akin to metadynamics with one important distinction: rather than acquiring an accelerated distribution via a history-dependent bias potential, sampling on this distribution is achieved from the beginning of the simulation using temperature-accelerated molecular dynamics. In the present work, we compare the performance of both approaches to compute the free-energy profile along a scalar collective variable measuring the H-bond registry of the β-sheet structure of the mouse Prion protein. Both methods agree on the location of the free-energy minimum, but free-energy profiles from well-tempered metadynamics are subject to a much higher degree of statistical noise due to hidden barriers. The sensitivity of metadynamics to hidden barriers is shown to be a consequence of the history dependence of the bias potential, and we detail the nature of these barriers for the prion β-sheet. In contrast, on-the-fly parametrization is much less sensitive to these barriers and thus displays improved convergence behavior relative to that of metadynamics. While hidden barriers are a frequent and central issue in free-energy methods, on-the-fly free-energy parametrization appears to be a robust and preferable method to confront this issue.

  5. Reputation and Competition in a Hidden Action Model

    PubMed Central

    Fedele, Alessandro; Tedeschi, Piero

    2014-01-01

    The economics models of reputation and quality in markets can be classified in three categories. (i) Pure hidden action, where only one type of seller is present who can provide goods of different quality. (ii) Pure hidden information, where sellers of different types have no control over product quality. (iii) Mixed frameworks, which include both hidden action and hidden information. In this paper we develop a pure hidden action model of reputation and Bertrand competition, where consumers and firms interact repeatedly in a market with free entry. The price of the good produced by the firms is contractible, whilst the quality is noncontractible, hence it is promised by the firms when a contract is signed. Consumers infer future quality from all available information, i.e., both from what they know about past quality and from current prices. According to early contributions, competition should make reputation unable to induce the production of high-quality goods. We provide a simple solution to this problem by showing that high quality levels are sustained as an outcome of a stationary symmetric equilibrium. PMID:25329387

  7. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    PubMed

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.

  8. Reciprocal Markov modeling of feedback mechanisms between emotion and dietary choice using experience sampling data

    PubMed Central

    Lu, Ji; Pan, Junhao; Zhang, Qiang; Dubé, Laurette; Ip, Edward H.

    2015-01-01

    With intensively collected longitudinal data, recent advances in the Experience Sampling Method (ESM) benefit social science empirical research, but also pose important methodological challenges. As traditional statistical models are not generally well-equipped to analyze a system of variables that contains feedback loops, this paper proposes an extended hidden Markov model for the reciprocal relationship between momentary emotion and eating behavior. This paper revisits an ESM data set (Lu, Huet & Dube, 2011) that observed 160 participants’ food consumption and momentary emotions six times per day over 10 days. Focusing on the feedback loop between mood and meal healthiness decisions, the proposed Reciprocal Markov Model (RMM) can accommodate both hidden states (“general” emotional states: positive vs. negative) and observed states (meal: healthier, same or less healthy than usual) without presuming independence between observations or smooth trajectories of mood or behavior changes. The results of the RMM analyses illustrate the reciprocal chains of meal consumption and mood, as well as the effect of contextual factors that moderate the interrelationship between eating and emotion. A simulation experiment that generated data consistent with the empirical study further demonstrated that the procedure is promising in terms of recovering the parameters. PMID:26717120

  9. Complex Sequencing Rules of Birdsong Can be Explained by Simple Hidden Markov Processes

    PubMed Central

    Katahira, Kentaro; Suzuki, Kenta; Okanoya, Kazuo; Okada, Masato

    2011-01-01

    Complex sequencing rules observed in birdsongs provide an opportunity to investigate the neural mechanism for generating complex sequential behaviors. To relate the findings from studying birdsongs to other sequential behaviors such as human speech and musical performance, it is crucial to characterize the statistical properties of the sequencing rules in birdsongs. However, these properties have not yet been fully addressed. In this study, we investigate the statistical properties of the complex birdsong of the Bengalese finch (Lonchura striata var. domestica). Based on manually annotated syllable labels, we first show that there are significant higher-order context dependencies in Bengalese finch songs, that is, which syllable appears next depends on more than one previous syllable. We then analyze acoustic features of the song and show that higher-order context dependencies can be explained using first-order hidden state transition dynamics with redundant hidden states. This model corresponds to hidden Markov models (HMMs), well-known statistical models with a wide range of applications in time series modeling. Song annotation with these first-order hidden-state models agreed well with manual annotation; the score was comparable to that of a second-order HMM and surpassed the zeroth-order model (the Gaussian mixture model, GMM), which does not use context information. Our results imply that a hierarchical representation with hidden state dynamics may underlie the neural implementation for generating complex behavioral sequences with higher-order dependencies. PMID:21915345
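
    The central claim, that redundant hidden states with purely first-order dynamics can reproduce higher-order context dependence in the emitted syllables, can be checked with a small numerical sketch (hypothetical states and transition probabilities, not the authors' model of finch song):

```python
import random

random.seed(0)

# Two redundant hidden states, a1 and a2, both emit the SAME syllable 'a'
# but carry different histories via different transition probabilities.
emit = {'a1': 'a', 'a2': 'a', 'b': 'b', 'c': 'c'}
trans = {
    'b':  [('a1', 1.0)],
    'c':  [('a2', 1.0)],
    'a1': [('b', 0.9), ('c', 0.1)],   # after "b a", 'b' tends to follow
    'a2': [('c', 0.9), ('b', 0.1)],   # after "c a", 'c' tends to follow
}

def step(s):
    r, acc = random.random(), 0.0
    for nxt, p in trans[s]:
        acc += p
        if r < acc:
            return nxt
    return trans[s][-1][0]

state, seq = 'b', []
for _ in range(100000):
    seq.append(emit[state])
    state = step(state)

# In the EMITTED sequence, P(next | 'a') depends on the syllable before the
# 'a' -- a second-order dependence arising from first-order hidden dynamics.
def p_b_given(prev2):
    hits = [seq[i + 2] for i in range(len(seq) - 2)
            if seq[i] == prev2 and seq[i + 1] == 'a']
    return hits.count('b') / len(hits)

p_b_after_ba = p_b_given('b')   # close to 0.9
p_b_after_ca = p_b_given('c')   # close to 0.1
```

    The two conditional probabilities differ sharply even though no hidden state ever looks more than one step back.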

  10. Photoacoustic imaging of hidden dental caries by using a fiber-based probing system

    NASA Astrophysics Data System (ADS)

    Koyama, Takuya; Kakino, Satoko; Matsuura, Yuji

    2017-04-01

    A photoacoustic method to detect hidden dental caries is proposed. It was found that high-frequency ultrasonic waves are generated from the hidden carious part when laser light is radiated onto the occlusal surface of a model tooth. By mapping the intensity of these high-frequency components, photoacoustic images of hidden caries were successfully obtained. A photoacoustic imaging system using a bundle of hollow optical fibers was fabricated for clinical application, and a clear photoacoustic image of hidden caries was also obtained with this system.

  11. Prognosis of Electrical Faults in Permanent Magnet AC Machines using the Hidden Markov Model

    DTIC Science & Technology

    2010-11-10

    Slide excerpts: the Wigner-Ville distribution, defined as W(t, ω) = ∫ s(t + τ/2) s*(t − τ/2) e^{−jωτ} dτ, offers variable time-frequency tiling with high time resolution and high frequency resolution. The Choi-Williams distribution is a smoothed version of the Wigner distribution; the amount of smoothing is controlled by σ, at the cost of reduced resolution. … For the Wigner and Choi-Williams distributions, the probabilities are close for the early fault.

  12. Extracting volatility signal using maximum a posteriori estimation

    NASA Astrophysics Data System (ADS)

    Neto, David

    2016-11-01

    This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations, and hence heavy-tailed marginal distributions of log-returns. We consider two routes to choose the regularization, and we compare our MAP estimate to the realized volatility measure for three exchange rates.
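
    A minimal sketch of this style of estimator (not the paper's implementation; all parameter values below are hypothetical): discretize the log-volatility onto a grid, penalize state changes with a double-exponential (Laplace) log-prior, and recover the MAP state path with the Viterbi recursion:

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 200, 41
# Synthetic log-volatility with one sharp jump, observed in noise.
h_true = np.where(np.arange(T) < 100, -1.0, 0.5)
y = h_true + rng.normal(0, 0.8, T)

grid = np.linspace(-3, 3, K)           # discretized log-volatility states
lam, sigma = 2.0, 0.8
log_emit = -0.5 * ((y[:, None] - grid[None, :]) / sigma) ** 2    # Gaussian emission
log_trans = -lam * np.abs(grid[:, None] - grid[None, :])         # Laplace prior on jumps

# Viterbi recursion for the MAP path through the grid.
delta = log_emit[0].copy()
back = np.zeros((T, K), dtype=int)
for t in range(1, T):
    scores = delta[:, None] + log_trans   # scores[i, j]: best arriving at j from i
    back[t] = scores.argmax(axis=0)
    delta = scores.max(axis=0) + log_emit[t]
path = np.zeros(T, dtype=int)
path[-1] = delta.argmax()
for t in range(T - 2, -1, -1):
    path[t] = back[t + 1, path[t + 1]]
h_map = grid[path]

# Compare smoothness with the per-step maximum-likelihood (no prior) path.
naive = grid[log_emit.argmax(axis=1)]
changes_map = int((np.diff(h_map) != 0).sum())
changes_naive = int((np.diff(naive) != 0).sum())
```

    The Laplace penalty keeps the MAP path piecewise-flat while still allowing the sharp jump, whereas the unregularized path tracks the noise.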

  13. Hidden Semi-Markov Models and Their Application

    NASA Astrophysics Data System (ADS)

    Beyreuther, M.; Wassermann, J.

    2008-12-01

    In the framework of detection and classification of seismic signals there are several different approaches. Our choice for a more robust detection and classification algorithm is to adopt Hidden Markov Models (HMM), a technique with major success in speech recognition. HMM provide a powerful tool to describe highly variable time series based on a doubly stochastic model and therefore allow for a broader class description than e.g. template-based pattern matching techniques. Being a fully probabilistic model, HMM directly provide a confidence measure of an estimated classification. Furthermore, in contrast to classic artificial neural networks or support vector machines, HMM incorporate the time dependence explicitly in the models, thus providing an adequate representation of the seismic signal. Like the majority of detection algorithms, HMM are based not on the time- and amplitude-dependent seismogram itself but on features estimated from the seismogram that characterize the different classes. Features, or in other words characteristic functions, are e.g. the sonogram bands, instantaneous frequency, instantaneous bandwidth or centroid time. In this study we apply continuous Hidden Semi-Markov Models (HSMM), an extension of continuous HMM. The duration probability of a HMM is an exponentially decaying function of time, which is not a realistic representation of the duration of an earthquake. In contrast, HSMM use Gaussians as duration probabilities, which results in a more adequate model. The HSMM detection and classification system is running online as an EARTHWORM module at the Bavarian Earthquake Service. Here the signals that are to be classified differ only in epicentral distance. This makes it easy to decide whether a classification is correct or wrong and thus allows a better evaluation of the advantages and disadvantages of the proposed algorithm. The evaluation is based on several months of continuous data, and the results are additionally compared to the previously published discrete HMM, continuous HMM and a classic STA/LTA. The intermediate evaluation results are very promising.
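
    The duration-model distinction can be made concrete with a short sketch (illustrative values only): a state with self-transition probability p in an ordinary HMM implies a geometric duration distribution P(d) = p^(d−1)(1 − p), which always peaks at d = 1, whereas an HSMM can assign a Gaussian duration peaked at a typical event length:

```python
import math

p = 0.8                     # HMM self-transition probability (hypothetical)
# Implicit HMM duration model: geometric, strictly decaying from d = 1.
geom = [p ** (d - 1) * (1 - p) for d in range(1, 31)]

# HSMM alternative: explicit Gaussian duration (hypothetical mean and spread),
# normalized over the same support d = 1..30.
mu, sd = 10.0, 3.0
gauss = [math.exp(-0.5 * ((d - mu) / sd) ** 2) for d in range(1, 31)]
z = sum(gauss)
gauss = [g / z for g in gauss]
```

    The geometric model puts its mode at the shortest possible duration, while the Gaussian model peaks near the typical duration, which is the "more adequate" behavior the abstract refers to.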

  14. Discovering Hidden Controlling Parameters using Data Analytics and Dimensional Analysis

    NASA Astrophysics Data System (ADS)

    Del Rosario, Zachary; Lee, Minyong; Iaccarino, Gianluca

    2017-11-01

    Dimensional Analysis is a powerful tool, one which takes a priori information and produces important simplifications. However, if this a priori information - the list of relevant parameters - is missing a relevant quantity, then the conclusions from Dimensional Analysis will be incorrect. In this work, we present novel conclusions in Dimensional Analysis, which provide a means to detect this failure mode of missing or hidden parameters. These results are based on a restated form of the Buckingham Pi theorem that reveals a ridge function structure underlying all dimensionless physical laws. We leverage this structure by constructing a hypothesis test based on sufficient dimension reduction, allowing for an experimental data-driven detection of hidden parameters. Both theory and examples will be presented, using classical turbulent pipe flow as the working example. Keywords: experimental techniques, dimensional analysis, lurking variables, hidden parameters, buckingham pi, data analysis. First author supported by the NSF GRFP under Grant Number DGE-114747.
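
    A toy illustration of the failure mode (synthetic data and a made-up power law, not the paper's turbulent pipe flow example): when every relevant parameter is included, the data collapse onto a ridge function of the assumed dimensionless group; a hidden parameter destroys the collapse, which a residual-spread check can detect:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
v = rng.uniform(1, 10, n)            # velocity-like quantity
L = rng.uniform(0.1, 1.0, n)         # length-like quantity
nu = rng.uniform(1e-6, 1e-5, n)      # viscosity-like quantity
Re = v * L / nu                      # the assumed dimensionless group

hidden = rng.uniform(0.5, 2.0, n)    # a relevant parameter omitted from the analysis
y_ok = Re ** -0.25                   # law depends on Re only: collapse holds
y_bad = hidden * Re ** -0.25         # hidden parameter breaks the collapse

def scatter_about_fit(x, y):
    # Spread of log y about a linear fit in log x; a perfect one-group
    # power law leaves essentially zero residual spread.
    lx, ly = np.log(x), np.log(y)
    coef = np.polyfit(lx, ly, 1)
    return float(np.std(ly - np.polyval(coef, lx)))

s_ok = scatter_about_fit(Re, y_ok)
s_bad = scatter_about_fit(Re, y_bad)
```

    A large residual spread flags that the listed parameters cannot explain the data, i.e. a lurking variable is present.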

  15. A two particle hidden sector and the oscillations with photons

    NASA Astrophysics Data System (ADS)

    Alvarez, Pedro D.; Arias, Paola; Maldonado, Carlos

    2018-01-01

    We present a detailed study of the oscillations and optical properties of the vacuum in a model for the dark sector that contains axion-like particles and hidden photons. We provide bounds for the couplings versus the mass, using current results from ALPS-I and PVLAS. We also discuss the challenges for the detection of models with more than one hidden particle in light-shining-through-wall experiments.

  16. Synchronization behaviors of coupled systems composed of hidden attractors

    NASA Astrophysics Data System (ADS)

    Zhang, Ge; Wu, Fuqiang; Wang, Chunni; Ma, Jun

    2017-10-01

    Based on a class of chaotic systems composed of hidden attractors, in which the equilibrium points are described by a circular function, complete synchronization between two identical systems, pattern formation, and network synchronization are investigated. A statistical factor of synchronization is defined and calculated using mean-field theory, and the dependence of synchronization on bifurcation parameters is discussed numerically. Synchronization is then investigated on a chain network whose local kinetics are described by hidden attractors. It is found that synchronization and pattern formation depend on the coupling intensity and also on the selection of coupling variables. In the end, open problems are proposed to guide readers’ further investigation.

  17. Birefringence and hidden photons

    NASA Astrophysics Data System (ADS)

    Arza, Ariel; Gamboa, J.

    2018-05-01

    We study a model where photons interact with hidden photons and millicharged particles through a kinetic mixing term. Particularly, we focus on vacuum birefringence effects and we find a bound for the millicharged parameter assuming that hidden photons are a piece of the local dark matter density.

  18. Tracking Skill Acquisition with Cognitive Diagnosis Models: A Higher-Order, Hidden Markov Model with Covariates

    ERIC Educational Resources Information Center

    Wang, Shiyu; Yang, Yan; Culpepper, Steven Andrew; Douglas, Jeffrey A.

    2018-01-01

    A family of learning models that integrates a cognitive diagnostic model and a higher-order, hidden Markov model in one framework is proposed. This new framework includes covariates to model skill transition in the learning environment. A Bayesian formulation is adopted to estimate parameters from a learning model. The developed methods are…

  19. Hidden Sector Dark Matter Models for the Galactic Center Gamma-Ray Excess

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlin, Asher; Gratia, Pierre; Hooper, Dan

    2014-07-24

    The gamma-ray excess observed from the Galactic Center can be interpreted as dark matter particles annihilating into Standard Model fermions with a cross section near that expected for a thermal relic. Although many particle physics models have been shown to be able to account for this signal, the fact that this particle has not yet been observed in direct detection experiments somewhat restricts the nature of its interactions. One way to suppress the dark matter's elastic scattering cross section with nuclei is to consider models in which the dark matter is part of a hidden sector. In such models, the dark matter can annihilate into other hidden sector particles, which then decay into Standard Model fermions through a small degree of mixing with the photon, Z, or Higgs bosons. After discussing the gamma-ray signal from hidden sector dark matter in general terms, we consider two concrete realizations: a hidden photon model in which the dark matter annihilates into a pair of vector gauge bosons that decay through kinetic mixing with the photon, and a scenario within the generalized NMSSM in which the dark matter is a singlino-like neutralino that annihilates into a pair of singlet Higgs bosons, which decay through their mixing with the Higgs bosons of the MSSM.

  20. Differential expression analysis for RNAseq using Poisson mixed models

    PubMed Central

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny

    2017-01-01

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632
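
    The latent-variable construction behind such Poisson mixed models can be sketched in a few lines (illustrative parameters, not the MACAU model itself): adding a normal random effect to the log-rate yields a Poisson-lognormal mixture whose variance exceeds its mean, i.e. over-dispersion:

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, tau = 100000, 1.0, 0.8

# Pure Poisson counts: variance equals the mean.
y_pois = rng.poisson(np.exp(mu), size=n)

# Latent-variable representation: a normal random effect b on the log-rate
# gives a Poisson-lognormal mixture, which is over-dispersed.
b = rng.normal(0.0, tau, size=n)
y_mix = rng.poisson(np.exp(mu + b))

ratio_pois = y_pois.var() / y_pois.mean()   # close to 1
ratio_mix = y_mix.var() / y_mix.mean()      # well above 1
```

    Conditioning on the latent effect restores a plain Poisson, which is what makes sampling-based inference over the latent variables tractable.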

  1. Singlet scalar top partners from accidental supersymmetry

    NASA Astrophysics Data System (ADS)

    Cheng, Hsin-Chia; Li, Lingfeng; Salvioni, Ennio; Verhaaren, Christopher B.

    2018-05-01

    We present a model wherein the Higgs mass is protected from the quadratic one-loop top quark corrections by scalar particles that are complete singlets under the Standard Model (SM) gauge group. While bearing some similarity to Folded Supersymmetry, the construction is purely four dimensional and enjoys more parametric freedom, allowing electroweak symmetry breaking to occur easily. The cancelation of the top loop quadratic divergence is ensured by a Z 3 symmetry that relates the SM top sector and two hidden top sectors, each charged under its own hidden color group. In addition to the singlet scalars, the hidden sectors contain electroweak-charged supermultiplets below the TeV scale, which provide the main access to this model at colliders. The phenomenology presents both differences and similarities with respect to other realizations of neutral naturalness. Generally, the glueballs of hidden color have longer decay lengths. The production of hidden sector particles results in quirk or squirk bound states, which later annihilate. We survey the possible signatures and corresponding experimental constraints.

  2. A Bayesian model for estimating population means using a link-tracing sampling design.

    PubMed

    St Clair, Katherine; O'Connell, Daniel

    2012-03-01

    Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied. © 2011, The International Biometric Society.

  3. Temporal competition between differentiation programs determines cell fate choice

    NASA Astrophysics Data System (ADS)

    Kuchina, Anna; Espinar, Lorena; Cagatay, Tolga; Balbin, Alejandro; Alvarado, Alma; Garcia-Ojalvo, Jordi; Suel, Gurol

    2011-03-01

    During pluripotent differentiation, cells adopt one of several distinct fates. The dynamics of this decision-making process are poorly understood, since cell fate choice may be governed by interactions between differentiation programs that are active at the same time. We studied the dynamics of decision-making in the model organism Bacillus subtilis by simultaneously measuring the activities of competing differentiation programs (sporulation and competence) in single cells. We discovered a precise switch-like point of cell fate choice previously hidden by cell-cell variability. Engineered artificial crosslinks between competence and sporulation circuits revealed that the precision of this choice is generated by temporal competition between the key players of two differentiation programs. Modeling suggests that variable progression towards a switch-like decision might represent a general strategy to maximize adaptability and robustness of cellular decision-making.

  4. Photoacoustic imaging of hidden dental caries by using a bundle of hollow optical fibers

    NASA Astrophysics Data System (ADS)

    Koyama, Takuya; Kakino, Satoko; Matsuura, Yuji

    2018-02-01

    A photoacoustic imaging system using a bundle of hollow optical fibers to detect hidden dental caries is proposed. First, we fabricated a hidden-caries model with a brown pigment simulating the common color of a carious lesion. It was found that high-frequency ultrasonic waves are generated from the hidden carious part when Nd:YAG laser light with a 532 nm wavelength is radiated onto the occlusal surface of the model tooth. Fourier analysis showed that the waveform from the carious part contains frequency components from approximately 0.5 to 1.2 MHz. A photoacoustic imaging system using a bundle of hollow optical fibers was then fabricated for clinical applications. From the intensity map of frequency components in the 0.5-1.2 MHz band, photoacoustic images of hidden caries in the simulated samples were successfully obtained.
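
    A sketch of the band-intensity computation (hypothetical waveforms and sampling rate, not the authors' processing chain): Fourier-transform a recorded signal and sum the spectral power that falls inside the 0.5-1.2 MHz band:

```python
import numpy as np

fs = 20e6                        # sampling rate, 20 MHz (hypothetical)
t = np.arange(4096) / fs

def band_power(sig, f_lo=0.5e6, f_hi=1.2e6):
    # Sum of spectral power between f_lo and f_hi.
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=1 / fs)
    return float(spec[(freqs >= f_lo) & (freqs <= f_hi)].sum())

carious = np.sin(2 * np.pi * 0.8e6 * t)   # tone inside the 0.5-1.2 MHz band
sound = np.sin(2 * np.pi * 3.0e6 * t)     # tone outside the band
```

    Mapping `band_power` over signals recorded at each scan position yields the kind of intensity map from which the photoacoustic image is built.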

  5. On the LHC sensitivity for non-thermalised hidden sectors

    NASA Astrophysics Data System (ADS)

    Kahlhoefer, Felix

    2018-04-01

    We show under rather general assumptions that hidden sectors that never reach thermal equilibrium in the early Universe are also inaccessible for the LHC. In other words, any particle that can be produced at the LHC must either have been in thermal equilibrium with the Standard Model at some point or must be produced via the decays of another hidden sector particle that has been in thermal equilibrium. To reach this conclusion, we parametrise the cross section connecting the Standard Model to the hidden sector in a very general way and use methods from linear programming to calculate the largest possible number of LHC events compatible with the requirement of non-thermalisation. We find that even the HL-LHC cannot possibly produce more than a few events with energy above 10 GeV involving states from a non-thermalised hidden sector.

  6. A novel tree-based procedure for deciphering the genomic spectrum of clinical disease entities.

    PubMed

    Mbogning, Cyprien; Perdry, Hervé; Toussile, Wilson; Broët, Philippe

    2014-01-01

    Dissecting the genomic spectrum of clinical disease entities is a challenging task. Recursive partitioning (or classification tree) methods provide powerful tools for exploring complex interplay among genomic factors, with respect to a main factor, that can reveal hidden genomic patterns. To take confounding variables into account, the partially linear tree-based regression (PLTR) model has recently been published. It combines regression models and tree-based methodology. It is, however, computationally burdensome and not well suited to situations in which a large number of explanatory variables is expected. We developed a novel procedure that represents an alternative to the original PLTR procedure, and considered different selection criteria. A simulation study with different scenarios was performed to compare the performance of the proposed procedure to the original PLTR strategy. The proposed procedure with a Bayesian Information Criterion (BIC) achieved good performance in detecting the hidden structure, as compared to the original procedure. The novel procedure was used for analyzing patterns of copy-number alterations in lung adenocarcinomas with respect to Kirsten Rat Sarcoma Viral Oncogene Homolog gene (KRAS) mutation status, while controlling for a cohort effect. Results highlight two subgroups of pure or nearly pure wild-type KRAS tumors with particular copy-number alteration patterns. The proposed procedure with a BIC criterion represents a powerful and practical alternative to the original procedure. Our procedure performs well in a general framework and is simple to implement.

  7. Extracting Leading Nonlinear Modes of Changing Climate From Global SST Time Series

    NASA Astrophysics Data System (ADS)

    Mukhin, D.; Gavrilov, A.; Loskutov, E. M.; Feigin, A. M.; Kurths, J.

    2017-12-01

    Data-driven modeling of climate requires adequate principal variables extracted from observed high-dimensional data. Constructing such variables requires finding spatial-temporal patterns that explain a substantial part of the variability and comprise all dynamically related time series in the data. The difficulties of this task arise from the nonlinearity and non-stationarity of the climate dynamical system. The nonlinearity makes linear methods of data decomposition insufficient for separating different processes entangled in the observed time series. On the other hand, various forcings, both anthropogenic and natural, make the dynamics non-stationary, and we should be able to describe the response of the system to such forcings in order to separate the modes explaining the internal variability. The method we present aims to overcome both these problems. It is based on the Nonlinear Dynamical Mode (NDM) decomposition [1,2], but takes into account external forcing signals. Each mode depends on hidden time series, unknown a priori, which, together with the external forcing time series, are mapped onto the data space. Finding both the hidden signals and the mapping allows us to study the evolution of the modes' structure under changing external conditions and to compare the roles of internal variability and forcing in the observed behavior. The method is used to extract the principal modes of SST variability on inter-annual and multidecadal time scales, accounting for external forcings such as CO2, variations of solar activity, and volcanic activity. The structure of the revealed teleconnection patterns as well as their forecast under different CO2 emission scenarios are discussed. [1] Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. [2] Gavrilov, A., Mukhin, D., Loskutov, E., Volodin, E., Feigin, A., & Kurths, J. (2016). Method for reconstructing nonlinear modes with adaptive structure from multidimensional data. Chaos: An Interdisciplinary Journal of Nonlinear Science, 26(12), 123101.

  8. Prediction of municipal solid waste generation using nonlinear autoregressive network.

    PubMed

    Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; Maulud, K N A

    2015-12-01

    Most developing countries have solid waste management problems. Solid waste strategic planning requires accurate prediction of the quality and quantity of the generated waste. In developing countries, such as Malaysia, the solid waste generation rate is increasing rapidly, due to population growth and the new consumption trends that characterize society. This paper proposes an artificial neural network (ANN) approach using a feedforward nonlinear autoregressive network with exogenous inputs (NARX) to predict annual solid waste generation in relation to demographic and economic variables such as population, gross domestic product, electricity demand per capita, and employment and unemployment numbers. In addition, a variable selection procedure is developed to select significant explanatory variables. Model evaluation was performed using the coefficient of determination (R(2)) and the mean square error (MSE). The optimum model, which produced the lowest testing MSE (2.46) and the highest R(2) (0.97), had three inputs (gross domestic product, population and employment), eight neurons and one lag in the hidden layer, and used Fletcher-Powell's conjugate gradient as the training algorithm.
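
    A hedged sketch of the NARX idea (synthetic data and made-up variables, far smaller than the study's model, trained with plain gradient descent rather than Fletcher-Powell conjugate gradient): a one-hidden-layer network fed with exogenous inputs plus one lag of the output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic yearly series: waste as a nonlinear function of population and GDP.
T = 60
pop = np.linspace(1.0, 2.0, T) + 0.02 * rng.normal(size=T)
gdp = np.linspace(0.5, 1.5, T) + 0.02 * rng.normal(size=T)
waste = np.tanh(pop * gdp) + 0.01 * rng.normal(size=T)

# NARX-style design matrix: exogenous inputs plus one lag of the output.
X = np.column_stack([pop[1:], gdp[1:], waste[:-1]])
y = waste[1:]

H = 8                                              # hidden neurons
W1 = 0.5 * rng.normal(size=(X.shape[1], H)); b1 = np.zeros(H)
W2 = 0.5 * rng.normal(size=H); b2 = 0.0
lr = 0.05

def forward(X):
    Z = np.tanh(X @ W1 + b1)                       # hidden layer
    return Z, Z @ W2 + b2                          # linear output

_, pred = forward(X)
mse0 = np.mean((pred - y) ** 2)                    # error before training
for _ in range(500):
    Z, pred = forward(X)
    err = pred - y
    gW2 = Z.T @ err / len(y); gb2 = err.mean()     # output-layer gradients
    dZ = np.outer(err, W2) * (1 - Z ** 2)          # backprop through tanh
    gW1 = X.T @ dZ / len(y); gb1 = dZ.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, pred = forward(X)
mse1 = np.mean((pred - y) ** 2)                    # error after training
```

    This is the series-parallel (one-step-ahead) training mode; the real study additionally performs variable selection over the candidate inputs.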

  9. Unsupervised deep learning reveals prognostically relevant subtypes of glioblastoma.

    PubMed

    Young, Jonathan D; Cai, Chunhui; Lu, Xinghua

    2017-10-03

    One approach to improving the personalized treatment of cancer is to understand the cellular signaling transduction pathways that cause cancer at the level of the individual patient. In this study, we used unsupervised deep learning to learn the hierarchical structure within cancer gene expression data. Deep learning is a group of machine learning algorithms that use multiple layers of hidden units to capture hierarchically related, alternative representations of the input data. We hypothesize that this hierarchical structure learned by deep learning will be related to the cellular signaling system. Robust deep learning model selection identified a network architecture that is biologically plausible. Our model selection results indicated that the 1st hidden layer of our deep learning model should contain about 1300 hidden units to most effectively capture the covariance structure of the input data. This agrees with the estimated number of human transcription factors, which is approximately 1400. This result lends support to our hypothesis that the 1st hidden layer of a deep learning model trained on gene expression data may represent signals related to transcription factor activation. Using the 3rd hidden layer representation of each tumor as learned by our unsupervised deep learning model, we performed consensus clustering on all tumor samples, leading to the discovery of clusters of glioblastoma multiforme with differential survival. One of these clusters contained all of the glioblastoma samples with G-CIMP, a known methylation phenotype driven by the IDH1 mutation and associated with favorable prognosis, suggesting that the hidden units in the 3rd hidden layer representations captured a methylation signal without explicitly using methylation data as input. We also found differentially expressed genes and well-known mutations (NF1, IDH1, EGFR) that were uniquely correlated with each of these clusters. 
Exploring these unique genes and mutations will allow us to further investigate the disease mechanisms underlying each of these clusters. In summary, we show that a deep learning model can be trained to represent biologically and clinically meaningful abstractions of cancer gene expression data. Understanding what additional relationships these hidden layer abstractions have with the cancer cellular signaling system could have a significant impact on the understanding and treatment of cancer.

  10. A Geometrical Approach to Bell's Theorem

    NASA Technical Reports Server (NTRS)

    Rubincam, David Parry

    2000-01-01

    Bell's theorem can be proved through simple geometrical reasoning, without the need for the Psi function, probability distributions, or calculus. The proof is based on N. David Mermin's explication of the Einstein-Podolsky-Rosen-Bohm experiment, which involves Stern-Gerlach detectors that flash red or green lights when detecting spin-up or spin-down. The statistics of local hidden variable theories for this experiment can be arranged in colored strips from which simple inequalities can be deduced. These inequalities lead to a demonstration of Bell's theorem. Moreover, all local hidden variable theories can be graphed in such a way as to enclose their statistics in a pyramid, with the quantum-mechanical result lying a finite distance beneath the base of the pyramid.

  11. STDP Installs in Winner-Take-All Circuits an Online Approximation to Hidden Markov Model Learning

    PubMed Central

    Kappel, David; Nessler, Bernhard; Maass, Wolfgang

    2014-01-01

    In order to cross a street without being run over, we need to be able to extract, very quickly, the hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges automatically in the presence of noise through effects of STDP on connections between pyramidal cells in Winner-Take-All circuits with lateral excitation. In fact, one can show that these motifs endow cortical microcircuits with functional properties of a hidden Markov model, a generic model for solving such tasks through probabilistic inference. Whereas in engineering applications this model is adapted to specific tasks through offline learning, we show here that a major portion of the functionality of hidden Markov models arises already from online applications of STDP, without any supervision or rewards. We demonstrate the emergent computing capabilities of the model through several computer simulations. The full power of hidden Markov model learning can be attained through reward-gated STDP. This is due to the fact that these mechanisms enable a rejection sampling approximation to theoretically optimal learning. We investigate the possible performance gain that can be achieved with this more accurate learning method for an artificial grammar task. PMID:24675787

  12. Hidden symmetries of the extended Kitaev-Heisenberg model: Implications for the honeycomb-lattice iridates A2IrO3

    NASA Astrophysics Data System (ADS)

    Chaloupka, Jiří; Khaliullin, Giniyat

    2015-07-01

    We have explored the hidden symmetries of a generic four-parameter nearest-neighbor spin model, allowed in honeycomb-lattice compounds under trigonal compression. Our method utilizes a systematic algorithm to identify all dual transformations of the model that map the Hamiltonian on itself, changing the parameters and providing exact links between different points in its parameter space. We have found the complete set of points of hidden SU(2) symmetry at which a seemingly highly anisotropic model can be mapped back on the Heisenberg model and inherits therefore its properties such as the presence of gapless Goldstone modes. The procedure used to search for the hidden symmetries is quite general and may be extended to other bond-anisotropic spin models and other lattices, such as the triangular, kagome, hyperhoneycomb, or harmonic-honeycomb lattices. We apply our findings to the honeycomb-lattice iridates Na2IrO3 and Li2IrO3 , and illustrate how they help to identify plausible values of the model parameters that are compatible with the available experimental data.

  13. Hidden Markov models and other machine learning approaches in computational molecular biology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldi, P.

    1995-12-31

    This tutorial was one of eight tutorials selected to be presented at the Third International Conference on Intelligent Systems for Molecular Biology, which was held in the United Kingdom from July 16 to 19, 1995. Computational tools are increasingly needed to process the massive amounts of data, to organize and classify sequences, to detect weak similarities, to separate coding from non-coding regions, and to reconstruct the underlying evolutionary history. The fundamental problem in machine learning is the same as in scientific reasoning in general, as well as statistical modeling: to come up with a good model for the data. In this tutorial four classes of models are reviewed: Hidden Markov models; artificial Neural Networks; Belief Networks; and Stochastic Grammars. When dealing with DNA and protein primary sequences, Hidden Markov models are among the most flexible and powerful tools for alignments and database searches. In this tutorial, attention is focused on the theory of Hidden Markov Models and how to apply them to problems in molecular biology.

  14. Intelligent data analysis to model and understand live cell time-lapse sequences.

    PubMed

    Paterson, Allan; Ashtari, M; Ribé, D; Stenbeck, G; Tucker, A

    2012-01-01

    One important aspect of cellular function, which is at the basis of tissue homeostasis, is the delivery of proteins to their correct destinations. Significant advances in live cell microscopy have allowed tracking of these pathways by following the dynamics of fluorescently labelled proteins in living cells. This paper explores intelligent data analysis techniques to model the dynamic behavior of proteins in living cells as well as to classify different experimental conditions. We use a combination of decision tree classification and hidden Markov models. In particular, we introduce a novel approach to "align" hidden Markov models so that hidden states from different models can be cross-compared. Our models capture the dynamics of two experimental conditions accurately with a stable hidden state for control data and multiple (less stable) states for the experimental data recapitulating the behaviour of particle trajectories within live cell time-lapse data. In addition to having successfully developed an automated framework for the classification of protein transport dynamics from live cell time-lapse data our model allows us to understand the dynamics of a complex trafficking pathway in living cells in culture.

  15. Evaluation of cracks with different hidden depths and shapes using surface magnetic field measurements based on semi-analytical modelling

    NASA Astrophysics Data System (ADS)

    Jiang, Feng; Liu, Shulin

    2018-03-01

    In this paper, we present a feasibility study for detecting cracks with different hidden depths and shapes using information contained in the magnetic field excited by a rectangular coil with a rectangular cross section. First, we solve for the eigenvalues and the unknown coefficients of the magnetic vector potential by imposing artificial and natural boundary conditions. Thus, a semi-analytical solution for the magnetic field distribution around the surface of a conducting plate that contains a long hidden crack is formulated. Next, based on the proposed model, the influence of cracks at different hidden depths on the surface magnetic field is analysed. The results show that the horizontal and vertical components of the magnetic field near the crack become weaker and that the phase information of the magnetic field can be used to qualitatively determine the hidden depth of the crack. In addition, the model is optimised to improve its accuracy in classifying crack types. The relationship between signal features and crack shapes is subsequently established. The modified model is validated using finite element simulations, which visually indicate the change in the magnetic field near the crack.

  16. A Finite Element Analysis of a Class of Problems in Elasto-Plasticity with Hidden Variables.

    DTIC Science & Technology

    1985-09-01

    [Scanned DTIC report cover; only fragments are recoverable.] Final report, Texas Institute for Computational Mechanics, Austin; J. T. Oden. Keywords: elastoplasticity; finite deformations; non-convex analysis; finite element methods; metal forming.

  17. Deriving Einstein-Podolsky-Rosen steering inequalities from the few-body Abner Shimony inequalities

    NASA Astrophysics Data System (ADS)

    Zhou, Jie; Meng, Hui-Xian; Jiang, Shu-Han; Xu, Zhen-Peng; Ren, Changliang; Su, Hong-Yi; Chen, Jing-Ling

    2018-04-01

    For the Abner Shimony (AS) inequalities, the simplest unified forms of directions attaining the maximum quantum violation are investigated. Based on these directions, a family of Einstein-Podolsky-Rosen (EPR) steering inequalities is derived from the AS inequalities in a systematic manner. For these inequalities, the local hidden state (LHS) bounds are strictly less than the local hidden variable (LHV) bounds. This means that the EPR steering is a form of quantum nonlocality strictly weaker than Bell nonlocality.

  18. Algorithms for Hidden Markov Models Restricted to Occurrences of Regular Expressions

    PubMed Central

    Tataru, Paula; Sand, Andreas; Hobolth, Asger; Mailund, Thomas; Pedersen, Christian N. S.

    2013-01-01

    Hidden Markov Models (HMMs) are widely used probabilistic models, particularly for annotating sequential data with an underlying hidden structure. Patterns in the annotation are often more relevant to study than the hidden structure itself. A typical HMM analysis consists of annotating the observed data using a decoding algorithm and analyzing the annotation to study patterns of interest. For example, given an HMM modeling genes in DNA sequences, the focus is on occurrences of genes in the annotation. In this paper, we define a pattern through a regular expression and present a restriction of three classical algorithms to take the number of occurrences of the pattern in the hidden sequence into account. We present a new algorithm to compute the distribution of the number of pattern occurrences, and we extend the two most widely used existing decoding algorithms to employ information from this distribution. We show experimentally that the expectation of the distribution of the number of pattern occurrences gives a highly accurate estimate, while the typical procedure can be biased in the sense that the identified number of pattern occurrences does not correspond to the true number. We furthermore show that using this distribution in the decoding algorithms improves the predictive power of the model. PMID:24833225
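
    A typical HMM analysis, as the abstract notes, first annotates the observed data with a decoding algorithm. As a point of reference for the unrestricted decoding step, here is a minimal Viterbi sketch; the two-state gene/non-gene model and every probability in it are invented for illustration and are not the paper's restricted algorithms:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence (log-space DP)."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # pick the best predecessor state for s at time t
            prev, score = max(((p, V[t - 1][p] + math.log(trans_p[p][s]))
                               for p in states), key=lambda pair: pair[1])
            V[t][s] = score + math.log(emit_p[s][obs[t]])
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Toy gene-annotation HMM: 'G' = gene, 'N' = non-gene (all numbers invented)
states = ('G', 'N')
start_p = {'G': 0.5, 'N': 0.5}
trans_p = {'G': {'G': 0.8, 'N': 0.2}, 'N': {'G': 0.2, 'N': 0.8}}
emit_p = {'G': {'A': 0.15, 'C': 0.35, 'G': 0.35, 'T': 0.15},
          'N': {'A': 0.30, 'C': 0.20, 'G': 0.20, 'T': 0.30}}
print(''.join(viterbi('ATGCGCGCAT', states, start_p, trans_p, emit_p)))
```

    Under these toy emissions a run of C/G symbols decodes to the 'G' state; the paper's contribution is constraining such decodings by the number of occurrences of a regular-expression pattern in the hidden sequence.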

  19. Relations between Representational Consistency, Conceptual Understanding of the Force Concept, and Scientific Reasoning

    ERIC Educational Resources Information Center

    Nieminen, Pasi; Savinainen, Antti; Viiri, Jouni

    2012-01-01

    Previous physics education research has raised the question of "hidden variables" behind students' success in learning certain concepts. In the context of the force concept, it has been suggested that students' reasoning ability is one such variable. Strong positive correlations between students' preinstruction scores for reasoning…

  20. QRS complex detection based on continuous density hidden Markov models using univariate observations

    NASA Astrophysics Data System (ADS)

    Sotelo, S.; Arenas, W.; Altuve, M.

    2018-04-01

    In the electrocardiogram (ECG), the detection of QRS complexes is a fundamental step in the ECG signal processing chain, since it allows the determination of other characteristic waves of the ECG and provides information about heart rate variability. In this work, an automatic QRS complex detector based on continuous density hidden Markov models (HMMs) is proposed. HMMs were trained using univariate observation sequences taken either from QRS complexes or their derivatives. The detection approach is based on comparing the log-likelihood of the observation sequence with a fixed threshold. A sliding window was used to obtain the observation sequence to be evaluated by the model. The threshold was optimized using receiver operating characteristic curves. Sensitivity (Sen), specificity (Spc) and F1 score were used to evaluate the detection performance. The approach was validated using ECG recordings from the MIT-BIH Arrhythmia database. A 6-fold cross-validation shows that the best detection performance was achieved with two-state HMMs trained on QRS complex sequences (Sen = 0.668, Spc = 0.360 and F1 = 0.309). We conclude that these univariate sequences provide enough information to characterize the QRS complex dynamics with HMMs. Future work is directed toward the use of multivariate observations to increase the detection performance.
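
    The sliding-window log-likelihood test described above can be sketched with a hand-built two-state Gaussian HMM. All parameters, the threshold, and the toy "signal" below are invented placeholders; in the paper the models were trained on real QRS observation sequences and the threshold was tuned on ROC curves.

```python
import math

def logsumexp(vals):
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def gauss_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def forward_loglik(window, pi, A, mus, sigmas):
    """Log-likelihood of an observation window under a Gaussian HMM (forward algorithm)."""
    n = len(pi)
    alpha = [math.log(pi[i]) + gauss_logpdf(window[0], mus[i], sigmas[i]) for i in range(n)]
    for x in window[1:]:
        alpha = [logsumexp([alpha[j] + math.log(A[j][i]) for j in range(n)])
                 + gauss_logpdf(x, mus[i], sigmas[i]) for i in range(n)]
    return logsumexp(alpha)

# Two states: broad baseline around 0, sharp peak around 1 (invented parameters).
pi = [0.5, 0.5]
A = [[0.7, 0.3], [0.3, 0.7]]
mus, sigmas = [0.0, 1.0], [0.5, 0.05]

signal = [0.0, 0.0, 0.0, 0.1, 1.0, 0.1, 0.0, 0.0]
width, threshold = 3, -1.8
detected = [i for i in range(len(signal) - width + 1)
            if forward_loglik(signal[i:i + width], pi, A, mus, sigmas) > threshold]
print(detected)
```

    Windows overlapping the spike score above the threshold while flat windows fall below it, which is the essence of the detection rule.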

  1. Equivalence between contextuality and negativity of the Wigner function for qudits

    NASA Astrophysics Data System (ADS)

    Delfosse, Nicolas; Okay, Cihan; Bermejo-Vega, Juan; Browne, Dan E.; Raussendorf, Robert

    2017-12-01

    Understanding what distinguishes quantum mechanics from classical mechanics is crucial for quantum information processing applications. In this work, we consider two notions of non-classicality for quantum systems, negativity of the Wigner function and contextuality for Pauli measurements. We prove that these two notions are equivalent for multi-qudit systems with odd local dimension. For a single qudit, the equivalence breaks down. We show that there exist single qudit states that admit a non-contextual hidden variable model description and whose Wigner functions are negative.

  2. A hidden markov model derived structural alphabet for proteins.

    PubMed

    Camproux, A C; Gautier, R; Tufféry, P

    2004-06-04

    Understanding and predicting protein structures depends on the complexity and the accuracy of the models used to represent them. We have set up a hidden Markov model that discretizes protein backbone conformation as a series of overlapping four-residue fragments (states). This approach learns simultaneously the geometry of the states and their connections. Using a statistical criterion, we obtain an optimal systematic decomposition of the conformational variability of the protein peptidic chain into 27 states with strong connection logic. This result is stable over different protein sets. Our model fits well with previous knowledge of protein architecture organisation and seems able to capture some subtle details of protein organisation, such as helix sub-level organisation schemes. Taking into account the dependence between the states results in a description of local protein structure of low complexity. On average, the model uses only 8.3 states among 27 to describe each position of a protein structure. Although we use short fragments, the learning process on entire protein conformations captures the logic of the assembly on a larger scale. Using such a model, the structure of proteins can be reconstructed with an average accuracy close to 1.1 Å root-mean-square deviation and for a complexity of only 3. Finally, we also observe that sequence specificity increases with the number of states of the structural alphabet. Such models can constitute a very relevant approach to the analysis of protein architecture, in particular for protein structure prediction.

  3. Automated recognition of bird song elements from continuous recordings using dynamic time warping and hidden Markov models: a comparative study.

    PubMed

    Kogan, J A; Margoliash, D

    1998-04-01

    The performance of two techniques is compared for automated recognition of bird song units from continuous recordings. The advantages and limitations of dynamic time warping (DTW) and hidden Markov models (HMMs) are evaluated on a large database of male songs of zebra finches (Taeniopygia guttata) and indigo buntings (Passerina cyanea), which have different types of vocalizations and have been recorded under different laboratory conditions. Depending on the quality of recordings and complexity of song, the DTW-based technique gives excellent to satisfactory performance. Under challenging conditions such as noisy recordings or presence of confusing short-duration calls, good performance of the DTW-based technique requires careful selection of templates that may demand expert knowledge. Because HMMs are trained, equivalent or even better performance of HMMs can be achieved based only on segmentation and labeling of constituent vocalizations, albeit with many more training examples than DTW templates. One weakness in HMM performance is the misclassification of short-duration vocalizations or song units with more variable structure (e.g., some calls, and syllables of plastic songs). To address these and other limitations, new approaches for analyzing bird vocalizations are discussed.
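
    The template-matching side of the comparison rests on the textbook dynamic time warping recurrence, sketched below. The "song" sequences are invented numeric stand-ins for the acoustic feature vectors actually used in such studies.

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    INF = float('inf')
    n, m = len(a), len(b)
    # D[i][j] = cost of best alignment of a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

template = [0, 1, 3, 1, 0]           # hypothetical song-unit template
query = [0, 0, 1, 3, 3, 1, 0]        # same shape, stretched in time
noise = [3, 0, 3, 0, 3, 0, 3]        # dissimilar sequence
print(dtw_distance(template, query), dtw_distance(template, noise))
```

    Because DTW warps the time axis, the stretched query matches the template at zero cost, which is why a well-chosen template can absorb natural tempo variation in song units.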

  4. Machine learning in sentiment reconstruction of the simulated stock market

    NASA Astrophysics Data System (ADS)

    Goykhman, Mikhail; Teimouri, Ali

    2018-02-01

    In this paper we continue the study of the simulated stock market framework defined by the driving sentiment processes. We focus on the market environment driven by the buy/sell trading sentiment process of the Markov chain type. We apply the methodology of the Hidden Markov Models and the Recurrent Neural Networks to reconstruct the transition probabilities matrix of the Markov sentiment process and recover the underlying sentiment states from the observed stock price behavior. We demonstrate that the Hidden Markov Model can successfully recover the transition probabilities matrix for the hidden sentiment process of the Markov Chain type. We also demonstrate that the Recurrent Neural Network can successfully recover the hidden sentiment states from the observed simulated stock price time series.
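
    When the sentiment states are directly observed, the transition probabilities matrix follows from simple maximum-likelihood transition counts; the paper's harder task is recovering it when the states are hidden behind prices (e.g. via Baum-Welch training of the HMM). A sketch of the observed-state baseline, with an invented buy/sell sequence:

```python
from collections import Counter

def estimate_transition_matrix(states, seq):
    """Maximum-likelihood transition probabilities from an observed state sequence."""
    counts = Counter(zip(seq, seq[1:]))   # (from, to) transition counts
    totals = Counter(seq[:-1])            # times each state was left
    return {s: {t: counts[(s, t)] / totals[s] if totals[s] else 0.0
                for t in states}
            for s in states}

seq = ['buy', 'buy', 'sell', 'buy', 'buy', 'buy', 'sell', 'sell', 'buy']
P = estimate_transition_matrix(['buy', 'sell'], seq)
print(P)
```

    Each row of the resulting matrix sums to one, so it is directly comparable to the transition matrix the HMM recovers from prices alone.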

  5. Convective heat transfer and pressure drop of aqua based TiO2 nanofluids at different diameters of nanoparticles: Data analysis and modeling with artificial neural network

    NASA Astrophysics Data System (ADS)

    Hemmat Esfe, Mohammad; Nadooshan, Afshin Ahmadi; Arshi, Ali; Alirezaie, Ali

    2018-03-01

    In this study, experimental data for the Nusselt number and pressure drop of aqueous Titania nanofluids are modeled and estimated using an artificial neural network (ANN) with 2 hidden layers and 8 neurons in each layer. The effect of several influential variables on the Nusselt number and pressure drop is also surveyed. This study indicates that the neural network is able to model the experimental data with great accuracy: the regression coefficients for the Nusselt number and relative pressure drop data are 99.94% and 99.97%, respectively. The results also show that increasing the Reynolds number and concentration increases the Nusselt number and pressure drop of the aqueous nanofluid.
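
    For readers unfamiliar with the architecture, a 2-input network with two hidden layers of 8 neurons each can be sketched as a plain feed-forward pass. The random weights, tanh activations, and input choice below are illustrative assumptions, not the trained model from the study.

```python
import math
import random

def mlp_forward(x, weights, biases):
    """Forward pass of a feed-forward network: tanh hidden layers, linear output."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):  # hidden layers
        a = [math.tanh(sum(w * v for w, v in zip(row, a)) + bi)
             for row, bi in zip(W, b)]
    W, b = weights[-1], biases[-1]               # linear output layer
    return [sum(w * v for w, v in zip(row, a)) + bi for row, bi in zip(W, b)]

random.seed(0)
sizes = [2, 8, 8, 1]  # e.g. (Reynolds number, concentration) -> Nusselt estimate
weights = [[[random.uniform(-1, 1) for _ in range(nin)] for _ in range(nout)]
           for nin, nout in zip(sizes, sizes[1:])]
biases = [[0.0] * nout for nout in sizes[1:]]
y = mlp_forward([0.5, 0.1], weights, biases)
```

    In practice the inputs would be normalized and the weights fitted by backpropagation against the experimental data.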

  6. Meta-modeling of the pesticide fate model MACRO for groundwater exposure assessments using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Stenemo, Fredrik; Lindahl, Anna M. L.; Gärdenäs, Annemieke; Jarvis, Nicholas

    2007-08-01

    Several simple index methods that use easily accessible data have been developed and included in decision-support systems to estimate pesticide leaching across larger areas. However, these methods often lack important process descriptions (e.g. macropore flow), which brings into question their reliability. Descriptions of macropore flow have been included in simulation models, but these are too complex and demanding for spatial applications. To resolve this dilemma, a neural network simulation meta-model of the dual-permeability macropore flow model MACRO was created for pesticide groundwater exposure assessment. The model was parameterized using pedotransfer functions that require as input the clay and sand content of the topsoil and subsoil, and the topsoil organic carbon content. The meta-model also requires the topsoil pesticide half-life and the soil organic carbon sorption coefficient as input. A fully connected feed-forward multilayer perceptron classification network with two hidden layers, linked to fully connected feed-forward multilayer perceptron neural networks with one hidden layer, trained on sub-sets of the target variable, was shown to be a suitable meta-model for the intended purpose. A Fourier amplitude sensitivity test showed that the model output (the 80th percentile average yearly pesticide concentration at 1 m depth for a 20 year simulation period) was sensitive to all input parameters. The two input parameters related to pesticide characteristics (i.e. soil organic carbon sorption coefficient and topsoil pesticide half-life) were the most influential, but texture in the topsoil was also quite important since it was assumed to control the mass exchange coefficient that regulates the strength of macropore flow. This is in contrast to models based on the advection-dispersion equation where soil texture is relatively unimportant. 
The use of the meta-model is exemplified with a case-study where the spatial variability of pesticide leaching is mapped for a small field. It was shown that the area of the field that contributes most to leaching depends on the properties of the compound in question. It is concluded that the simulation meta-model of MACRO should prove useful for mapping relative pesticide leaching risks at large scales.

  7. Construction of state-independent proofs for quantum contextuality

    NASA Astrophysics Data System (ADS)

    Tang, Weidong; Yu, Sixia

    2017-12-01

    Since the enlightening proofs of quantum contextuality first established by Kochen and Specker, and also by Bell, various simplified proofs have been constructed to exclude the noncontextual hidden variable theory of our nature at the microscopic scale. The conflict between the noncontextual hidden variable theory and quantum mechanics is commonly revealed by Kochen-Specker sets of yes-no tests, represented by projectors (or rays), via either logical contradictions or noncontextuality inequalities in a state-(in)dependent manner. Here we propose a systematic and programmable construction of a state-independent proof from a given set of nonspecific rays in C3 according to their Gram matrix. This approach offers greater convenience in experimental arrangements. Moreover, our proofs in C3 can also be generalized to any higher-dimensional system by a recursive method.

  8. Quantum mechanics and hidden superconformal symmetry

    NASA Astrophysics Data System (ADS)

    Bonezzi, R.; Corradini, O.; Latini, E.; Waldron, A.

    2017-12-01

    Solvability of the ubiquitous quantum harmonic oscillator relies on a spectrum generating osp (1 |2 ) superconformal symmetry. We study the problem of constructing all quantum mechanical models with a hidden osp (1 |2 ) symmetry on a given space of states. This problem stems from interacting higher spin models coupled to gravity. In one dimension, we show that the solution to this problem is the Vasiliev-Plyushchay family of quantum mechanical models with hidden superconformal symmetry obtained by viewing the harmonic oscillator as a one dimensional Dirac system, so that Grassmann parity equals wave function parity. These models—both oscillator and particlelike—realize all possible unitary irreducible representations of osp (1 |2 ).

  9. Differential expression analysis for RNAseq using Poisson mixed models.

    PubMed

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang

    2017-06-20

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. Detection of time delays and directional interactions based on time series from complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Ma, Huanfei; Leng, Siyang; Tao, Chenyang; Ying, Xiong; Kurths, Jürgen; Lai, Ying-Cheng; Lin, Wei

    2017-07-01

    Data-based and model-free accurate identification of intrinsic time delays and directional interactions is an extremely challenging problem in complex dynamical systems and their networks reconstruction. A model-free method with new scores is proposed to be generally capable of detecting single, multiple, and distributed time delays. The method is applicable not only to mutually interacting dynamical variables but also to self-interacting variables in a time-delayed feedback loop. Validation of the method is carried out using physical, biological, and ecological models and real data sets. Especially, applying the method to air pollution data and hospital admission records of cardiovascular diseases in Hong Kong reveals the major air pollutants as a cause of the diseases and, more importantly, it uncovers a hidden time delay (about 30-40 days) in the causal influence that previous studies failed to detect. The proposed method is expected to be universally applicable to ascertaining and quantifying subtle interactions (e.g., causation) in complex systems arising from a broad range of disciplines.

  11. An alternative approach for neural network evolution with a genetic algorithm: crossover by combinatorial optimization.

    PubMed

    García-Pedrajas, Nicolás; Ortiz-Boyer, Domingo; Hervás-Martínez, César

    2006-05-01

    In this work we present a new approach to the crossover operator in the genetic evolution of neural networks. The most widely used evolutionary computation paradigm for neural network evolution is evolutionary programming. This paradigm is usually preferred due to the problems caused by the application of crossover to neural network evolution. However, crossover is the most innovative operator within the field of evolutionary computation. One of the most notorious problems with the application of crossover to neural networks is known as the permutation problem. This problem occurs because the same network can be represented in a genetic coding by many different codifications. Our approach modifies the standard crossover operator taking into account the special features of the individuals to be mated. We present a new model for mating individuals that considers the structure of the hidden layer and redefines the crossover operator. As each hidden node represents a non-linear projection of the input variables, we approach the crossover as a combinatorial optimization problem: extracting a subset of near-optimal projections to create the hidden layer of the new network. This new approach is compared to a classical crossover on 25 real-world problems with excellent performance. Moreover, the networks obtained are much smaller than those obtained with the classical crossover operator.
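
    The subset-extraction view of crossover can be sketched as follows. This is a loose illustration, not the authors' algorithm: the magnitude-based usefulness score and the greedy selection are invented stand-ins for the paper's combinatorial optimization over near-optimal projections.

```python
import random

def subset_crossover(parent_a, parent_b, n_hidden, score):
    """Greedy sketch: build a child's hidden layer from the highest-scoring
    projections found in either parent (each 'node' is its input-weight vector)."""
    pool = parent_a + parent_b
    ranked = sorted(pool, key=score, reverse=True)
    return ranked[:n_hidden]

random.seed(1)
# each hidden node = its vector of input weights (hypothetical 3-input networks)
parent_a = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
parent_b = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
# hypothetical usefulness score: weight magnitude as a stand-in for node relevance
child = subset_crossover(parent_a, parent_b, 4,
                         lambda node: sum(abs(v) for v in node))
```

    Because the child is assembled from whole hidden nodes rather than spliced weight strings, this style of mating sidesteps the permutation problem the abstract describes.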

  12. An hourly variation in zoo visitor interest: measurement and significance for animal welfare research.

    PubMed

    Davey, Gareth

    2006-01-01

    A methodological difficulty facing welfare research on nonhuman animals in the zoo is the large number of uncontrolled variables due to variation within and between study sites. Zoo visitors act as uncontrolled variables, with number, density, size, and behavior constantly changing. This is worrisome because previous research linked visitor variables to animal behavioral changes indicative of stress. There are implications for research design: Studies not accounting for visitors' effect on animal welfare risk confounding (visitor) variables distorting their findings. Zoos need methods to measure and minimize effects of visitor behavior and to ensure that there are no hidden variables in research models. This article identifies a previously unreported variable--hourly variation (decrease) in visitor interest--that may impinge on animal welfare and validates a methodology for measuring it. That visitor interest wanes across the course of the day has important implications for animal welfare management; visitor effects on animal welfare are likely to occur, or intensify, during the morning or in earlier visits when visitor interest is greatest. This article discusses this issue and possible solutions to reduce visitor effects on animal well-being.

  13. NPLOT: an Interactive Plotting Program for NASTRAN Finite Element Models

    NASA Technical Reports Server (NTRS)

    Jones, G. K.; Mcentire, K. J.

    1985-01-01

    The NPLOT (NASTRAN Plot) is an interactive computer graphics program for plotting undeformed and deformed NASTRAN finite element models. Developed at NASA's Goddard Space Flight Center, the program provides flexible element selection and grid point, ASET and SPC degree of freedom labelling. It is easy to use and provides a combination menu and command driven user interface. NPLOT also provides very fast hidden line and haloed line algorithms. The hidden line algorithm in NPLOT proved to be both very accurate and several times faster than other existing hidden line algorithms. A fast spatial bucket sort and horizon edge computation are used to achieve this high level of performance. The hidden line and the haloed line algorithms are the primary features that make NPLOT different from other plotting programs.

  14. HIPPI: highly accurate protein family classification with ensembles of HMMs.

    PubMed

    Nguyen, Nam-Phuong; Nute, Michael; Mirarab, Siavash; Warnow, Tandy

    2016-11-11

    Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile Hidden Markov models can better represent multiple sequence alignments than a single profile Hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile Hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp .

  15. Hybrid approach combining chemometrics and likelihood ratio framework for reporting the evidential value of spectra.

    PubMed

    Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema

    2016-08-10

    Many chemometric tools are invaluable and have proven effective in data mining and in the substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data given the rapid development of advanced analytical techniques that deliver much information in a single measurement run. This concerns especially spectra, which are frequently used as the subject of comparative analysis in e.g. forensic sciences. In the present study, microtraces collected from the scenarios of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry and car paints were analysed using Raman spectroscopy. In the forensic context, analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences, using the likelihood ratio (LR) approach. However, for proper construction of LR models for highly multivariate data, such as spectra, chemometric tools must be employed for substantial data compression. Conversion from the classical feature representation to a distance representation was proposed for revealing hidden data peculiarities, and linear discriminant analysis was further applied to minimise the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem for highly multivariate and correlated data after proper extraction of the most relevant features and variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    PubMed Central

    Jie, Shao

    2014-01-01

    A model based on an improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with memory effects. In this model the hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions. The error curves of the sum of squared error (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA), with two-tone and broadband signals as input, show that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance. PMID:25054172
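
    The Chebyshev activations mentioned above follow the standard three-term recurrence, which is cheap to evaluate. A minimal sketch (generic, not the paper's implementation; `hidden_layer` is a hypothetical name):

```python
def chebyshev(n, x):
    """First-kind Chebyshev polynomial T_n(x) via the three-term recurrence
    T_0(x) = 1, T_1(x) = x, T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x)."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

def hidden_layer(x, n_neurons):
    """Hidden-layer activations: one Chebyshev basis function per neuron,
    replacing the usual sigmoid activations."""
    return [chebyshev(k, x) for k in range(n_neurons)]
```

    For example, `hidden_layer(0.5, 3)` evaluates T_0, T_1, T_2 at x = 0.5, giving [1.0, 0.5, -0.5].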

  17. Hidden Process Models

    DTIC Science & Technology

    2009-12-18

    … cannot be detected with univariate techniques, but require multivariate analysis instead (Kamitani and Tong [2005]). … machine learning for time series analysis. The historical record of DBNs can be traced back to Dean and Kanazawa [1988] and Dean and Wellman [1991] … Keywords: Hidden Process Models, probabilistic time series modeling, functional Magnetic Resonance Imaging

  18. Regime Behavior in Paleo-Reconstructed Streamflow: Attributions to Atmospheric Dynamics, Synoptic Circulation and Large-Scale Climate Teleconnection Patterns

    NASA Astrophysics Data System (ADS)

    Ravindranath, A.; Devineni, N.

    2017-12-01

    Studies have shown that streamflow behavior and dynamics have a significant link with climate and climate variability. Patterns of persistent regime behavior from extended streamflow records in many watersheds justify investigating large-scale climate mechanisms as potential drivers of hydrologic regime behavior and streamflow variability. Understanding such streamflow-climate relationships is crucial to forecasting/simulation systems and the planning and management of water resources. In this study, hidden Markov models are used with reconstructed streamflow to detect regime-like behaviors - the hidden states - and state transition phenomena. Individual extreme events and their spatial variability across the basin are then verified with the identified states. Wavelet analysis is performed to examine the signals over time in the streamflow records. Joint analyses of the climatic data in the 20th century and the identified states are undertaken to better understand the hydroclimatic connections within the basin as well as important teleconnections that influence water supply. Compositing techniques are used to identify atmospheric circulation patterns associated with identified states of streamflow. The grouping of such synoptic patterns and their frequency are then examined. Sliding time-window correlation analysis and cross-wavelet spectral analysis are performed to establish the synchronicity of basin flows to the identified synoptic and teleconnection patterns. The Missouri River Basin (MRB) is examined in this study, both as a means of better understanding the synoptic climate controls in this important watershed and as a case study for the techniques developed here. Initial wavelet analyses of reconstructed streamflow at major gauges in the MRB show multidecadal cycles in regime behavior.

  19. zipHMMlib: a highly optimised HMM library exploiting repetitions in the input to speed up the forward algorithm.

    PubMed

    Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas

    2013-11-22

    Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time-efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a preprocessing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models, so one preprocessing can be used to run a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
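
    The recursion the library accelerates is the textbook forward algorithm; a minimal sketch is below (this is the plain O(T·K²) version, not zipHMM's repetition-exploiting one, and it omits the log/scaling tricks needed for genome-length inputs).

```python
def forward_likelihood(init, trans, emit, obs):
    """Likelihood of an observation sequence under an HMM via the forward
    algorithm: O(T * K^2) time for T observations and K states.

    init: initial state probabilities, trans[i][j]: transition i -> j,
    emit[s]: dict mapping each symbol to its emission probability in state s.
    """
    k = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(k)]
    for o in obs[1:]:
        # New alpha[j] sums over all predecessor states i of the old alpha.
        alpha = [emit[j][o] * sum(alpha[i] * trans[i][j] for i in range(k))
                 for j in range(k)]
    return sum(alpha)
```

    zipHMM's speedup comes from caching the per-substring matrix products this loop recomputes whenever the same substring of `obs` reappears.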

  20. Bounds on the number of hidden neurons in three-layer binary neural networks.

    PubMed

    Zhang, Zhaozhi; Ma, Xiaomin; Yang, Yixian

    2003-09-01

    This paper investigates an important problem concerning the complexity of three-layer binary neural networks (BNNs) with one hidden layer. The neuron in the studied BNNs employs a hard limiter activation function with only integer weights and an integer threshold. The studies are focused on implementations of arbitrary Boolean functions which map from {0, 1}^n into {0, 1}. A deterministic algorithm called set covering algorithm (SCA) is proposed for the construction of a three-layer BNN to implement an arbitrary Boolean function. The SCA is based on a unit sphere covering (USC) of the Hamming space (HS) which is chosen in advance. It is proved that for the implementation of an arbitrary Boolean function of n variables (n ≥ 3) by using SCA, [3L/2] hidden neurons are necessary and sufficient, where L is the number of unit spheres contained in the chosen USC of the n-dimensional HS. It is shown that by using SCA, the number of hidden neurons required is much less than that by using a two-parallel hyperplane method. In order to indicate the potential ability of three-layer BNNs, a lower bound on the required number of hidden neurons which is derived by using the method of estimating the Vapnik-Chervonenkis (VC) dimension is also given.
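
    The hard-limiter neuron with integer weights and an integer threshold is easy to state directly; the sketch below shows one such unit and, as a small example, a 3-input majority gate realised by a single threshold unit. This is an illustration of the neuron model only, not the paper's SCA construction, and the names are hypothetical.

```python
def bnn_neuron(weights, threshold, x):
    """Hard-limiter binary neuron: integer weights, integer threshold.
    Fires (outputs 1) iff the weighted sum of the binary inputs reaches
    the threshold; otherwise outputs 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= threshold else 0

def majority3(x):
    """3-input majority as a single threshold unit: weights (1, 1, 1),
    threshold 2, so it fires iff at least two inputs are 1."""
    return bnn_neuron((1, 1, 1), 2, x)
```
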

  1. FIMP dark matter freeze-in gauge mediation and hidden sector

    NASA Astrophysics Data System (ADS)

    Tsao, Kuo-Hsing

    2018-07-01

    We explore the dark matter freeze-in mechanism within the gauge mediation framework, which involves a hidden feebly interacting massive particle (FIMP) coupling feebly with the messenger fields while the messengers are still in the thermal bath. The FIMP is the fermionic component of the pseudo-moduli in a generic metastable supersymmetry (SUSY) breaking model and resides in the hidden sector. The relic abundance and the mass of the FIMP are determined by the SUSY breaking scale and the feeble coupling. The gravitino, which is the canonical dark matter candidate in the gauge mediation framework, contributes to the dark matter relic abundance along with the freeze-in of the FIMP. The hidden sector thus becomes two-component, with both the FIMP and gravitino lodging in the SUSY breaking hidden sector. We point out that the ratio between the FIMP and the gravitino is determined by how SUSY breaking is communicated to the messengers. In particular, when the FIMP dominates the hidden sector, the gravitino becomes the minor contributor in the hidden sector. Meanwhile, the neutralino is assumed to be both the weakly interacting massive particle dark matter candidate in the freeze-out mechanism and the lightest observable SUSY particle. We further find that the neutralino has the sub-leading contribution to the current dark matter relic density in the parameter space of our freeze-in gauge mediation model. Our result links the SUSY breaking scale in the gauge mediation framework with the FIMP freeze-in production rate, leading to a natural and predictive scenario for studies of dark matter in the hidden sector.

  2. Behavioral and Temporal Pattern Detection Within Financial Data With Hidden Information

    DTIC Science & Technology

    2012-02-01

    … probabilistic pattern detector to monitor the pattern. Subject terms: Runtime verification, Hidden data, Hidden Markov models, Formal specifications. … sequences in many other fields besides financial systems [L, TV, LC, LZ]. Rather, the technique suggested in this paper is positioned as a hybrid … operation of the pattern detector. Section 7 describes the operation of the probabilistic pattern-matching monitor, and section 8 describes three …

  3. 2015 Cataloging Hidden Special Collections and Archives Unconference and Symposium: Innovation, Collaboration, and Models. Proceedings of the CLIR Cataloging Hidden Special Collections and Archives Symposium (Philadelphia, Pennsylvania, March 12-13, 2015)

    ERIC Educational Resources Information Center

    Oestreicher, Cheryl, Ed.

    2015-01-01

    The 2015 CLIR Unconference & Symposium was the capstone event to seven years of grant funding through CLIR's Cataloging Hidden Special Collections and Archives program. These proceedings group presentations by theme. Collaborations provides examples of multi-institutional projects, including one international collaboration; Student and Faculty…

  4. Hidden negative linear compressibility in lithium l-tartrate.

    PubMed

    Yeung, Hamish H-M; Kilmurray, Rebecca; Hobday, Claire L; McKellar, Scott C; Cheetham, Anthony K; Allan, David R; Moggach, Stephen A

    2017-02-01

    By decoupling the mechanical behaviour of building units for the first time in a wine-rack framework containing two different strut types, we show that lithium l-tartrate exhibits negative linear compressibility (NLC) with a maximum value, K_max = -21 TPa^-1, and an overall NLC capacity, χ_NLC = 5.1%, that are comparable to the most exceptional materials to date. Furthermore, the contributions from molecular strut compression and angle opening interplay to give rise to so-called "hidden" negative linear compressibility, in which NLC is absent at ambient pressure, switched on at 2 GPa and sustained up to the limit of our experiment, 5.5 GPa. Analysis of the changes in crystal structure using variable-pressure synchrotron X-ray diffraction reveals new chemical and geometrical design rules to assist the discovery of other materials with exciting hidden anomalous mechanical properties.

  5. Kochen-Specker theorem studied with neutron interferometer.

    PubMed

    Hasegawa, Yuji; Durstberger-Rennhofer, Katharina; Sponar, Stephan; Rauch, Helmut

    2011-04-01

    The Kochen-Specker theorem shows the incompatibility of noncontextual hidden variable theories with quantum mechanics. Quantum contextuality is a more general concept than quantum non-locality which is quite well tested in experiments using Bell inequalities. Within neutron interferometry we performed an experimental test of the Kochen-Specker theorem with an inequality, which identifies quantum contextuality, by using spin-path entanglement of single neutrons. Here entanglement is achieved not between different particles, but between degrees of freedom of a single neutron, i.e., between spin and path degree of freedom. Appropriate combinations of the spin analysis and the position of the phase shifter allow an experimental verification of the violation of an inequality derived from the Kochen-Specker theorem. The observed violation 2.291±0.008≰1 clearly shows that quantum mechanical predictions cannot be reproduced by noncontextual hidden variable theories.

  6. Artificial neural network modeling of dissolved oxygen in the Heihe River, Northwestern China.

    PubMed

    Wen, Xiaohu; Fang, Jing; Diao, Meina; Zhang, Chuanqi

    2013-05-01

    Identification and quantification of dissolved oxygen (DO) profiles of a river is one of the primary concerns for water resources managers. In this research, an artificial neural network (ANN) was developed to simulate DO concentrations in the Heihe River, Northwestern China. A three-layer back-propagation ANN was used with the Bayesian regularization training algorithm. The input variables of the neural network were pH, electrical conductivity, chloride (Cl(-)), calcium (Ca(2+)), total alkalinity, total hardness, nitrate nitrogen (NO3-N), and ammonical nitrogen (NH4-N). The ANN structure with 14 hidden neurons performed best. Comparing the ANN results with the measured data on the basis of the correlation coefficient (r) and the root mean square error (RMSE) showed a good fit to DO values, indicating the effectiveness of the neural network model. The r values for the training, validation, and test sets were 0.9654, 0.9841, and 0.9680, and the corresponding RMSE values were 0.4272, 0.3667, and 0.4570, respectively. Sensitivity analysis was used to determine the influence of input variables on the dependent variable. The most effective inputs were pH, NO3-N, NH4-N, and Ca(2+), while Cl(-) was found to be the least effective variable in the proposed model. The identified ANN model can be used to simulate the water quality parameters.
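
    The two fit statistics reported above, r and RMSE, can be computed from paired predicted and observed values as follows. This is a generic sketch of the standard formulas, not the authors' code.

```python
import math

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def pearson_r(pred, obs):
    """Pearson correlation coefficient between predictions and observations."""
    n = len(obs)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (sp * so)
```

    Note that a constant offset inflates RMSE but leaves r at 1, which is why the study reports both.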

  7. Detecting hidden particles with MATHUSLA

    NASA Astrophysics Data System (ADS)

    Evans, Jared A.

    2018-03-01

    A hidden sector containing light long-lived particles provides a well-motivated place to find new physics. The recently proposed MATHUSLA experiment has the potential to be extremely sensitive to light particles originating from rare meson decays in the very long lifetime region. In this work, we illustrate this strength with the specific example of a light scalar mixed with the standard model-like Higgs boson, a model where MATHUSLA can further probe unexplored parameter space from exotic Higgs decays. Design augmentations should be considered in order to maximize the ability of MATHUSLA to discover very light hidden sector particles.

  8. How Genes Modulate Patterns of Aging-Related Changes on the Way to 100: Biodemographic Models and Methods in Genetic Analyses of Longitudinal Data

    PubMed Central

    Yashin, Anatoliy I.; Arbeev, Konstantin G.; Wu, Deqing; Arbeeva, Liubov; Kulminski, Alexander; Kulminskaya, Irina; Akushevich, Igor; Ukraintseva, Svetlana V.

    2016-01-01

    Background and Objective To clarify mechanisms of genetic regulation of human aging and longevity traits, a number of genome-wide association studies (GWAS) of these traits have been performed. However, the results of these analyses did not meet the expectations of the researchers. Most detected genetic associations have not reached a genome-wide level of statistical significance, and suffered from the lack of replication in studies of independent populations. The reasons for slow progress in this research area include: low efficiency of the statistical methods used in data analyses; genetic heterogeneity of aging and longevity related traits; the possibility of pleiotropic (e.g., age dependent) effects of genetic variants on such traits; underestimation of the effects of (i) mortality selection in genetically heterogeneous cohorts and (ii) external factors and differences in the genetic backgrounds of individuals in the populations under study; and the weakness of a conceptual biological framework that does not fully account for the above-mentioned factors. One more limitation of the conducted studies is that they did not fully realize the potential of longitudinal data, which allow for evaluating how genetic influences on life span are mediated by physiological variables and other biomarkers during the life course. The objective of this paper is to address these issues. Data and Methods We performed GWAS of human life span using different subsets of data from the original Framingham Heart Study cohort corresponding to different quality control (QC) procedures and used one subset of selected genetic variants for further analyses. We used a simulation study to show that this approach to combining data improves the quality of GWAS. We used FHS longitudinal data to compare average age trajectories of physiological variables in carriers and non-carriers of selected genetic variants. We used a stochastic process model of human mortality and aging to investigate genetic influence on hidden biomarkers of aging and on the dynamic interaction between aging and longevity. We investigated properties of genes related to selected variants and their roles in signaling and metabolic pathways. Results We showed that the use of different QC procedures results in different sets of genetic variants associated with life span. We selected 24 genetic variants negatively associated with life span. We showed that joint analyses of genetic data at the time of bio-specimen collection and follow-up data substantially improved the significance of the associations of the selected 24 SNPs with life span. We also showed that aging related changes in physiological variables and in hidden biomarkers of aging differ for the groups of carriers and non-carriers of selected variants. Conclusions The results of these analyses demonstrated the benefits of using biodemographic models and methods in genetic association studies of these traits. Our findings showed that the absence of a large number of genetic variants with deleterious effects may make a substantial contribution to exceptional longevity. These effects are dynamically mediated by a number of physiological variables and hidden biomarkers of aging. The results of this research demonstrated the benefits of using integrative statistical models of mortality risks in genetic studies of human aging and longevity. PMID:27773987

  9. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    PubMed

    Xiao, WenBo; Nazario, Gina; Wu, HuaMing; Zhang, HuaMing; Cheng, Feng

    2017-01-01

    In this article, we introduce an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline. The prediction results are very close to the experimental data and are also influenced by the number of hidden neurons. Ordered from least to most influenced by external conditions, the solar power outputs rank: multi-, mono-, and amorphous crystalline silicon cells. In addition, the dependence of power prediction on the number of hidden neurons was studied. For the multi- and amorphous crystalline cells, three or four hidden layer units resulted in high correlation coefficients and low MSEs. For the mono-crystalline cell, the best results were achieved with eight hidden layer units.

  10. Multiple Detector Optimization for Hidden Radiation Source Detection

    DTIC Science & Technology

    2015-03-26

    … important in achieving operationally useful methods for optimizing detector emplacement, the 2-D attenuation model approach promises to speed up the process of hidden source detection significantly. The model focused on detection of the full energy peak of a radiation source. Methods to optimize … radioisotope identification is possible without using a computationally intensive stochastic model such as the Monte Carlo n-Particle (MCNP) code.

  11. Modeling Dyadic Processes Using Hidden Markov Models: A Time Series Approach to Mother-Infant Interactions during Infant Immunization

    ERIC Educational Resources Information Center

    Stifter, Cynthia A.; Rovine, Michael

    2015-01-01

    The present longitudinal study examined mother-infant interaction during the administration of immunizations at 2 and 6 months of age using hidden Markov modelling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…

  12. Low-lying 1/2- hidden strange pentaquark states in the constituent quark model

    NASA Astrophysics Data System (ADS)

    Li, Hui; Wu, Zong-Xiu; An, Chun-Sheng; Chen, Hong

    2017-12-01

    We investigate the spectrum of the low-lying 1/2- hidden strange pentaquark states, employing the constituent quark model and looking at two ways within that model of mediating the hyperfine interaction between quarks: Goldstone boson exchange and one gluon exchange. Numerical results show that the lowest 1/2- hidden strange pentaquark state in the Goldstone boson exchange model lies at ~1570 MeV, so this pentaquark configuration may form a notable component in S11(1535) if the Goldstone boson exchange model is applied. This is consistent with the prediction that S11(1535) couples very strongly to strangeness channels. Supported by National Natural Science Foundation of China (11675131, 11645002), Chongqing Natural Science Foundation (cstc2015jcyjA00032) and Fundamental Research Funds for the Central Universities (SWU115020)

  13. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

    It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving the diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of the dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network, and a Hidden Markov Model. First, after the dynamic observation vector in measuring space is processed by DTW, an error vector including the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aeroengines, and makes it feasible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault pattern classifier is efficient and convenient for detecting and diagnosing new faults.
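
    The DTW stage of such a classifier can be sketched with the standard dynamic-programming recurrence; the sketch below aligns two numeric sequences and returns the minimal cumulative alignment cost. This is the textbook algorithm, not the paper's implementation.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric sequences.
    d[i][j] holds the minimal cumulative cost of aligning a[:i] with b[:j];
    each cell extends the cheapest of the three neighbouring alignments.
    Time and space are O(len(a) * len(b))."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # a[i-1] repeats
                                 d[i][j - 1],      # b[j-1] repeats
                                 d[i - 1][j - 1])  # one-to-one match
    return d[n][m]
```

    Because DTW allows elements to repeat during alignment, two sequences that differ only in timing (e.g. one value held for an extra step) can still align at zero cost.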

  14. Hidden flows and waste processing--an analysis of illustrative futures.

    PubMed

    Schiller, F; Raffield, T; Angus, A; Herben, M; Young, P J; Longhurst, P J; Pollard, S J T

    2010-12-14

    An existing materials flow model is adapted (using Excel and AMBER model platforms) to account for waste and hidden material flows within a domestic environment. Supported by national waste data, the implications of legislative change, domestic resource depletion and waste technology advances are explored. The revised methodology offers additional functionality for economic parameters that influence waste generation and disposal. We explore this accounting system under hypothetical future waste and resource management scenarios, illustrating the utility of the model. A sensitivity analysis confirms that imports, domestic extraction and their associated hidden flows impact mostly on waste generation. The model offers enhanced utility for policy and decision makers with regard to economic mass balance and strategic waste flows, and may promote further discussion about waste technology choice in the context of reducing carbon budgets.

  15. Bounding the Set of Classical Correlations of a Many-Body System

    NASA Astrophysics Data System (ADS)

    Fadel, Matteo; Tura, Jordi

    2017-12-01

    We present a method to certify the presence of Bell correlations in experimentally observed statistics, and to obtain new Bell inequalities. Our approach is based on relaxing the conditions defining the set of correlations obeying a local hidden variable model, yielding a convergent hierarchy of semidefinite programs (SDPs). Because the size of these SDPs is independent of the number of parties involved, this technique allows us to characterize correlations in many-body systems. As an example, we illustrate our method with the experimental data presented in Science 352, 441 (2016), 10.1126/science.aad8665.
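
    For the two-party CHSH case, the local bound that such hierarchies relax can be checked directly by enumerating the finitely many deterministic local hidden-variable strategies (mixtures cannot exceed these extreme points). This brute-force check is a sketch for orientation, not the paper's SDP method.

```python
from itertools import product

def lhv_chsh_bound():
    """Maximum of the CHSH combination
        E(a,b) + E(a,b') + E(a',b) - E(a',b')
    over deterministic local hidden-variable strategies: each of Alice's
    settings (a, a') and Bob's settings (b, b') is assigned a fixed
    outcome of +1 or -1, and correlators factorise as products."""
    best = float("-inf")
    for a0, a1, b0, b1 in product((-1, 1), repeat=4):
        s = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
        best = max(best, s)
    return best
```

    The enumeration returns 2, the classical CHSH bound, whereas quantum mechanics reaches 2√2; many-body Bell inequalities make such exhaustive enumeration infeasible, which is what motivates the SDP relaxation.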

  16. Tight Bell Inequalities and Nonlocality in Weak Measurement

    NASA Astrophysics Data System (ADS)

    Waegell, Mordecai

    A general class of Bell inequalities is derived based on strict adherence to probabilistic entanglement correlations observed in nature. This derivation gives significantly tighter bounds on local hidden variable theories for the well-known Clauser-Horne-Shimony-Holt (CHSH) inequality, and also leads to new proofs of the Greenberger-Horne-Zeilinger (GHZ) theorem. This method is applied to weak measurements and reveals nonlocal correlations between the weak value and the post-selection, which rules out various classical models of weak measurement. Implications of these results are discussed. Fetzer-Franklin Fund of the John E. Fetzer Memorial Trust.

  17. Two-dimensional hidden semantic information model for target saliency detection and eyetracking identification

    NASA Astrophysics Data System (ADS)

    Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao

    2018-01-01

    Saliency detection has been applied to the target acquisition case. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradient descriptors is used to extract features. After encoding the image by a learned dictionary, the 2D-Viterbi algorithm is applied to infer the saliency map. This model can predict fixation of the targets and further creates robust and effective depictions of the targets' change in posture and viewpoint. To validate the model against the human visual search mechanism, two eye-tracking experiments are employed to train our model directly from eye movement data. The results show that our model achieves better performance than visual attention. Moreover, it indicates the plausibility of utilizing visual tracking data to identify targets.
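
    The 2D-Viterbi inference used above generalises the standard one-dimensional Viterbi recursion to a grid of states. For orientation, here is a minimal 1-D version (illustrative names; the 2-D extension additionally conditions each cell on its left and upper neighbours):

```python
def viterbi(init, trans, emit, obs):
    """Most likely hidden-state path for obs under an HMM (1-D Viterbi).
    init: initial state probabilities; trans[i][j]: transition i -> j;
    emit[s]: dict of emission probabilities for state s."""
    k = len(init)
    v = [init[s] * emit[s][obs[0]] for s in range(k)]
    back = []
    for o in obs[1:]:
        ptr, nv = [], []
        for j in range(k):
            i_best = max(range(k), key=lambda i: v[i] * trans[i][j])
            ptr.append(i_best)
            nv.append(v[i_best] * trans[i_best][j] * emit[j][o])
        back.append(ptr)
        v = nv
    # Backtrack from the best final state through the stored pointers.
    path = [max(range(k), key=lambda s: v[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```
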

  18. The hidden and informal curriculum across the continuum of training: A cross-sectional qualitative study.

    PubMed

    Doja, Asif; Bould, M Dylan; Clarkin, Chantalle; Eady, Kaylee; Sutherland, Stephanie; Writer, Hilary

    2016-04-01

    The hidden and informal curricula refer to learning in response to unarticulated processes and constraints, falling outside the formal medical curriculum. The hidden curriculum has been identified as requiring attention across all levels of learning. We sought to assess the knowledge and perceptions of the hidden and informal curricula across the continuum of learning at a single institution. Focus groups were held with undergraduate and postgraduate learners and faculty to explore knowledge and perceptions relating to the hidden and informal curricula. Thematic analysis was conducted both inductively by research team members and deductively using questions structured by the existing literature. Participants highlighted several themes related to the presence of the hidden and informal curricula in medical training and practice, including: the privileging of some specialties over others; the reinforcement of hierarchies within medicine; and a culture of tolerance towards unprofessional behaviors. Participants acknowledged the importance of role modeling in the development of professional identities and discussed the deterioration in idealism that occurs. Common issues pertaining to the hidden curriculum exist across all levels of learners, including faculty. Increased awareness of these issues could allow for the further development of methods to address learning within the hidden curriculum.

  20. Hidden complexity of free energy surfaces for peptide (protein) folding.

    PubMed

    Krivov, Sergei V; Karplus, Martin

    2004-10-12

    An understanding of the thermodynamics and kinetics of protein folding requires a knowledge of the free energy surface governing the motion of the polypeptide chain. Because of the many degrees of freedom involved, surfaces projected on only one or two progress variables are generally used in descriptions of the folding reaction. Such projections result in relatively smooth surfaces, but they could mask the complexity of the unprojected surface. Here we introduce an approach to determine the actual (unprojected) free energy surface and apply it to the second beta-hairpin of protein G, which has been used as a model system for protein folding. The surface is represented by a disconnectivity graph calculated from a long equilibrium folding-unfolding trajectory. The denatured state is found to have multiple low free energy basins. Nevertheless, the peptide shows exponential kinetics in folding to the native basin. Projected surfaces obtained from the present analysis have a simple form in agreement with other studies of the beta-hairpin. The hidden complexity found for the beta-hairpin surface suggests that the standard funnel picture of protein folding should be revisited.

  1. Tracking Problem Solving by Multivariate Pattern Analysis and Hidden Markov Model Algorithms

    ERIC Educational Resources Information Center

    Anderson, John R.

    2012-01-01

    Multivariate pattern analysis can be combined with Hidden Markov Model algorithms to track the second-by-second thinking as people solve complex problems. Two applications of this methodology are illustrated with a data set taken from children as they interacted with an intelligent tutoring system for algebra. The first "mind reading" application…

  2. Ascertainment-adjusted parameter estimation approach to improve robustness against misspecification of health monitoring methods

    NASA Astrophysics Data System (ADS)

    Juesas, P.; Ramasso, E.

    2016-12-01

    Condition monitoring aims at ensuring system safety, which is a fundamental requirement for industrial applications and has become an inescapable social demand. This objective is attained by instrumenting the system and developing data analytics methods, such as statistical models, able to turn data into relevant knowledge. One difficulty is to correctly estimate the parameters of those methods based on time-series data. This paper suggests the use of the Weighted Distribution Theory together with the Expectation-Maximization algorithm to improve parameter estimation in statistical models with latent variables, with an application to health monitoring under uncertainty. The improvement of estimates is made possible by incorporating uncertain and possibly noisy prior knowledge on latent variables in a sound manner. The latent variables are exploited to build a degradation model of a dynamical system represented as a sequence of discrete states. Examples on Gaussian Mixture Models and Hidden Markov Models (HMM) with discrete and continuous outputs are presented on both simulated data and benchmarks using the turbofan engine datasets. A focus on the application of a discrete HMM to health monitoring under uncertainty emphasizes the interest of the proposed approach in the presence of different operating conditions and fault modes. It is shown that the proposed model exhibits high robustness in the presence of noisy and uncertain priors.
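
    The reweighting idea above can be sketched for a simple mixture model: in the E-step of EM, the usual responsibilities are multiplied by uncertain prior beliefs about each sample's latent state and then renormalised. Everything below (the synthetic data, the 0.8/0.2 belief levels, the two-component Gaussian mixture) is an illustrative assumption, not the paper's turbofan model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian "health regimes" (nominal vs degraded), hypothetical values.
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(4.0, 1.0, 200)])

# Uncertain prior knowledge on the latent state of each sample (rows sum to 1).
# Here we pretend an expert is 80% sure about the true regime of each sample.
prior = np.vstack([np.tile([0.8, 0.2], (200, 1)), np.tile([0.2, 0.8], (200, 1))])

mu, sigma, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

def gauss(x, m, s):
    return np.exp(-0.5 * ((x[:, None] - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: responsibilities, reweighted by the (possibly noisy) prior beliefs.
    r = pi * gauss(x, mu, sigma) * prior
    r /= r.sum(axis=1, keepdims=True)
    # M-step: standard weighted updates.
    n = r.sum(axis=0)
    pi = n / n.sum()
    mu = (r * x[:, None]).sum(axis=0) / n
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)

print(np.round(mu, 1))  # component means should land near 0 and 4
```

    With uninformative priors (all rows 0.5/0.5) this reduces to ordinary EM; the informative rows steer each component toward its intended regime.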

  3. Measurements of entanglement over a kilometric distance to test superluminal models of Quantum Mechanics: preliminary results.

    NASA Astrophysics Data System (ADS)

    Cocciaro, B.; Faetti, S.; Fronzoni, L.

    2017-08-01

    As shown in the EPR paper (Einstein, Podolsky and Rosen, 1935), Quantum Mechanics is a non-local theory. The Bell theorem and the subsequent experiments ruled out the possibility of explaining quantum correlations using only local hidden-variable models. Some authors suggested that quantum correlations could be due to superluminal communications that propagate isotropically with velocity vt > c in a preferred reference frame. For finite values of vt and in some special cases, Quantum Mechanics and superluminal models lead to different predictions. So far, no deviations from the predictions of Quantum Mechanics have been detected and only lower bounds for the superluminal velocities vt have been established. Here we describe a new experiment that increases the maximum detectable superluminal velocities and we give some preliminary results.

  4. Three Dimensional Object Recognition Using a Complex Autoregressive Model

    DTIC Science & Technology

    1993-12-01

    [Extracted table-of-contents fragment] The report covers a template matching algorithm, k-nearest-neighbor (KNN) techniques, and hidden Markov model (HMM) methods, with test results for single-look and multiple-look 1-NN classification and for the HMM.

  5. Decoding and modelling of time series count data using Poisson hidden Markov model and Markov ordinal logistic regression models.

    PubMed

    Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I

    2018-01-01

    Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to the data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained, and hence the transition probability matrix. The mean passage time between the states was estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled as 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
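
    The decoding step described above can be sketched in a few lines: Viterbi over a 3-state Poisson HMM, using the paper's reported state means (1.4, 6.6, 20.2) but an invented transition matrix and toy monthly counts (the real matrix was estimated from the Vellore data):

```python
import math
import numpy as np

means = np.array([1.4, 6.6, 20.2])          # 'Low', 'Moderate', 'High' (from the paper)
# Transition matrix and initial distribution below are illustrative assumptions.
A = np.log(np.array([[0.70, 0.25, 0.05],
                     [0.20, 0.60, 0.20],
                     [0.05, 0.25, 0.70]]))
start = np.log(np.array([1 / 3, 1 / 3, 1 / 3]))

def pois_logpmf(k, lam):
    return k * np.log(lam) - lam - math.lgamma(k + 1)

def viterbi(counts):
    """Most likely hidden state sequence for observed monthly counts."""
    emit = np.array([[pois_logpmf(c, m) for m in means] for c in counts])
    delta = start + emit[0]
    back = []
    for e in emit[1:]:
        scores = delta[:, None] + A          # scores[i, j]: transition i -> j
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + e
    path = [int(delta.argmax())]
    for b in reversed(back):                 # trace back the best predecessors
        path.append(int(b[path[-1]]))
    return path[::-1]

print(viterbi([1, 0, 2, 7, 5, 22, 19, 6, 1]))  # 0=Low, 1=Moderate, 2=High
```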

  6. A Gradient-Field Pulsed Eddy Current Probe for Evaluation of Hidden Material Degradation in Conductive Structures Based on Lift-Off Invariance

    PubMed Central

    Li, Yong; Jing, Haoqing; Zainal Abidin, Ilham Mukriz; Yan, Bei

    2017-01-01

    Coated conductive structures are widely adopted in such engineering fields as aerospace and nuclear energy. The hostile and corrosive environment leaves in-service coated conductive structures vulnerable to Hidden Material Degradation (HMD) occurring under the protective coating. Non-intrusive assessment of HMD using non-destructive evaluation techniques is highly desirable. In light of the advantages of the Gradient-field Pulsed Eddy Current technique (GPEC) over other non-destructive evaluation methods in corrosion evaluation, in this paper the GPEC probe for quantitative evaluation of HMD is intensively investigated. Closed-form expressions of GPEC responses to HMD are formulated via analytical modeling. The Lift-off Invariance (LOI) in GPEC signals, which makes the HMD evaluation immune to variation in the thickness of the protective coating, is introduced and analyzed through simulations involving HMD with variable depths and conductivities. A fast inverse method employing the magnitude and time of the LOI point in GPEC signals for simultaneously evaluating the conductivity and thickness of the HMD region is proposed, and subsequently verified by finite element modeling and experiments. The results indicate that, along with the proposed inverse method, the GPEC probe is applicable to evaluation of HMD in coated conductive structures without much loss in accuracy. PMID:28441328

  7. A Gradient-Field Pulsed Eddy Current Probe for Evaluation of Hidden Material Degradation in Conductive Structures Based on Lift-Off Invariance.

    PubMed

    Li, Yong; Jing, Haoqing; Zainal Abidin, Ilham Mukriz; Yan, Bei

    2017-04-25

    Coated conductive structures are widely adopted in such engineering fields as aerospace and nuclear energy. The hostile and corrosive environment leaves in-service coated conductive structures vulnerable to Hidden Material Degradation (HMD) occurring under the protective coating. Non-intrusive assessment of HMD using non-destructive evaluation techniques is highly desirable. In light of the advantages of the Gradient-field Pulsed Eddy Current technique (GPEC) over other non-destructive evaluation methods in corrosion evaluation, in this paper the GPEC probe for quantitative evaluation of HMD is intensively investigated. Closed-form expressions of GPEC responses to HMD are formulated via analytical modeling. The Lift-off Invariance (LOI) in GPEC signals, which makes the HMD evaluation immune to variation in the thickness of the protective coating, is introduced and analyzed through simulations involving HMD with variable depths and conductivities. A fast inverse method employing the magnitude and time of the LOI point in GPEC signals for simultaneously evaluating the conductivity and thickness of the HMD region is proposed, and subsequently verified by finite element modeling and experiments. The results indicate that, along with the proposed inverse method, the GPEC probe is applicable to evaluation of HMD in coated conductive structures without much loss in accuracy.

  8. Several foundational and information theoretic implications of Bell’s theorem

    NASA Astrophysics Data System (ADS)

    Kar, Guruprasad; Banik, Manik

    2016-08-01

    In 1935, Albert Einstein and two colleagues, Boris Podolsky and Nathan Rosen (EPR), developed a thought experiment to demonstrate what they felt was a lack of completeness in quantum mechanics (QM). EPR also postulated the existence of a more fundamental theory in which the physical reality of any system would be completely described by the variables/states of that theory. Such a variable is commonly called a hidden variable, and the theory is called a hidden variable theory (HVT). In 1964, John Bell proposed an empirically verifiable criterion to test for the existence of these HVTs. He derived an inequality which must be satisfied by any theory that fulfills the conditions of locality and reality. He also showed that QM, as it violates this inequality, is incompatible with any local-realistic theory. It has since been shown that Bell's inequality (BI) can be derived from different sets of assumptions, and that it finds applications in useful information-theoretic protocols. In this review, we discuss various foundational as well as information-theoretic implications of BI. We also discuss a restricted feature of quantum nonlocality and elaborate on the roles of the uncertainty principle and the complementarity principle in explaining this feature.
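
    The violation Bell's argument turns on can be checked numerically: for the singlet state, the quantum correlation of spin measurements at analyser angles a and b is −cos(a − b), and a conventional choice of angles pushes the CHSH combination to 2√2, beyond the local-realistic bound of 2. A small sketch (the angle choices are the standard textbook ones, not from this review):

```python
import math

def E(a, b):
    """Singlet-state correlation of spin measurements along angles a and b."""
    return -math.cos(a - b)

# Standard CHSH angle settings: a, a' for one side, b, b' for the other.
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2.828... = 2*sqrt(2), exceeding the local-realistic bound of 2
```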

  9. Hidden Markov models for evolution and comparative genomics analysis.

    PubMed

    Bykova, Nadezda A; Favorov, Alexander V; Mironov, Andrey A

    2013-01-01

    The problem of reconstruction of ancestral states given a phylogeny and data from extant species arises in a wide range of biological studies. The continuous-time Markov model for the evolution of discrete states is generally used for the reconstruction of ancestral states. We modify this model to account for the case when the states of the extant species are uncertain. This situation appears, for example, if the states for extant species are predicted by some program and thus are known only with some level of reliability; this is common in the bioinformatics field. The main idea is formulation of the problem as a hidden Markov model on a tree (tree HMM, tHMM), where the basic continuous-time Markov model is expanded with the introduction of emission probabilities of observed data (e.g. prediction scores) for each underlying discrete state. Our tHMM decoding algorithm allows us to predict states at the ancestral nodes as well as to refine states at the leaves on the basis of quantitative comparative genomics. The test on simulated data shows that the tHMM approach applied to the continuous variable reflecting the probabilities of the states (i.e. the prediction score) is more accurate than reconstruction from the discrete state assignment defined by the best score threshold. We provide examples of applying our model to the evolutionary analysis of N-terminal signal peptides and transcription factor binding sites in bacteria. The program is freely available at http://bioinf.fbb.msu.ru/~nadya/tHMM and via web-service at http://bioinf.fbb.msu.ru/treehmmweb.

  10. Adaptive Quantification and Longitudinal Analysis of Pulmonary Emphysema with a Hidden Markov Measure Field Model

    PubMed Central

    Häme, Yrjö; Angelini, Elsa D.; Hoffman, Eric A.; Barr, R. Graham; Laine, Andrew F.

    2014-01-01

    The extent of pulmonary emphysema is commonly estimated from CT images by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions in the lung and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the present model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was used to quantify emphysema on a cohort of 87 subjects, with repeated CT scans acquired over a time period of 8 years using different imaging protocols. The scans were acquired approximately annually, and the data set included a total of 365 scans. The results show that the emphysema estimates produced by the proposed method have very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. In addition, the generated emphysema delineations promise great advantages for regional analysis of emphysema extent and progression, possibly advancing disease subtyping. PMID:24759984

  11. Enhanced axion-photon coupling in GUT with hidden photon

    NASA Astrophysics Data System (ADS)

    Daido, Ryuji; Takahashi, Fuminobu; Yokozaki, Norimi

    2018-05-01

    We show that the axion coupling to photons can be enhanced in simple models with a single Peccei-Quinn field, if the gauge coupling unification is realized by a large kinetic mixing χ = O (0.1) between hypercharge and unbroken hidden U(1)H. The key observation is that the U(1)H gauge coupling should be rather strong to induce such large kinetic mixing, leading to enhanced contributions of hidden matter fields to the electromagnetic anomaly. We find that the axion-photon coupling is enhanced by about a factor of 10-100 with respect to the GUT-axion models with E / N = 8 / 3.

  12. Hidden order and unconventional superconductivity in URu2Si2

    NASA Astrophysics Data System (ADS)

    Rau, Jeffrey; Kee, Hae-Young

    2012-02-01

    The nature of the so-called hidden order in URu2Si2 and the subsequent superconducting phase have remained a puzzle for over two decades. Motivated by evidence for rotational symmetry breaking seen in recent magnetic torque measurements [Okazaki et al. Science 331, 439 (2011)], we derive a simple tight-binding model consistent with experimental Fermi surface probes and ab-initio calculations. From this model we use mean-field theory to examine the variety of hidden orders allowed by existing experimental results, including the torque measurements. We then construct a phase diagram in temperature and pressure and discuss relevant experimental consequences.

  13. Artificial neural network with backpropagation learning to predict mean monthly total ozone in Arosa, Switzerland

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Surajit; Bandyopadhyay, Goutami

    2007-01-01

    The present study deals with the mean monthly total ozone time series over Arosa, Switzerland, for the period 1932-1971. The total ozone time series was first identified as a complex system, and artificial neural network models in the form of multilayer perceptrons with backpropagation learning were then developed. The models are a single-hidden-layer and a two-hidden-layer perceptron with sigmoid activation functions. After sequential learning with a learning rate of 0.9, the two neural network models were used to predict mean monthly total ozone concentrations for the peak total ozone period (February-May). After training and validation, both models were found to be skillful, but the two-hidden-layer perceptron was more adept at predicting mean monthly total ozone concentrations over this period.
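
    A toy version of such a two-hidden-layer perceptron with sigmoid activations and backpropagation, using the study's learning rate of 0.9. The XOR data, layer sizes and iteration count are stand-ins for illustration; the actual model was trained on monthly total ozone values:

```python
import numpy as np

rng = np.random.default_rng(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # toy stand-in inputs (XOR)
y = np.array([[0], [1], [1], [0]], float)

# Two hidden layers with sigmoid activations; the layer widths are illustrative.
W1, W2, W3 = rng.normal(0, 1, (2, 8)), rng.normal(0, 1, (8, 8)), rng.normal(0, 1, (8, 1))
b1, b2, b3 = np.zeros(8), np.zeros(8), np.zeros(1)
lr = 0.9   # learning rate reported in the study

for _ in range(10000):
    h1 = sig(X @ W1 + b1)                    # forward pass through both hidden layers
    h2 = sig(h1 @ W2 + b2)
    out = sig(h2 @ W3 + b3)
    d3 = (out - y) * out * (1 - out)         # backpropagated deltas (squared-error loss)
    d2 = (d3 @ W3.T) * h2 * (1 - h2)
    d1 = (d2 @ W2.T) * h1 * (1 - h1)
    W3 -= lr * h2.T @ d3; b3 -= lr * d3.sum(0)
    W2 -= lr * h1.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(0)

print(np.round(out.ravel(), 2))  # predictions should approach [0, 1, 1, 0]
```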

  14. Epistemic View of Quantum States and Communication Complexity of Quantum Channels

    NASA Astrophysics Data System (ADS)

    Montina, Alberto

    2012-09-01

    The communication complexity of a quantum channel is the minimal amount of classical communication required for classically simulating a process of state preparation, transmission through the channel and subsequent measurement. It establishes a limit on the power of quantum communication in terms of classical resources. We show that classical simulations employing a finite amount of communication can be derived from a special class of hidden variable theories where quantum states represent statistical knowledge about the classical state and not an element of reality. This special class has attracted strong interest very recently. The communication cost of each derived simulation is given by the mutual information between the quantum state and the classical state of the parent hidden variable theory. Finally, we find that the communication complexity for single qubits is smaller than 1.28 bits. The previously known upper bound was 1.85 bits.

  15. The missing history of Bohm's hidden variables theory: The Ninth Symposium of the Colston Research Society, Bristol, 1957

    NASA Astrophysics Data System (ADS)

    Kožnjak, Boris

    2018-05-01

    In this paper, I analyze the historical context, scientific and philosophical content, and implications of the thus far largely neglected Ninth Symposium of the Colston Research Society, held in Bristol at the beginning of April 1957 as the first major international event after World War II gathering eminent physicists and philosophers to discuss the foundational questions of quantum mechanics. I examine it with respect to the early reception of the causal quantum theory program mapped and defended by David Bohm during the five years preceding the Symposium. As will be demonstrated, contrary to the almost unanimously negative and even hostile reception of Bohm's ideas on hidden variables in the early 1950s, in the close aftermath of the 1957 Colston Research Symposium Bohm's ideas received a more open-minded and ideologically relaxed critical rehabilitation, in which the Symposium itself played a vital and essential part.

  16. Comparison of neurofuzzy logic and decision trees in discovering knowledge from experimental data of an immediate release tablet formulation.

    PubMed

    Shao, Q; Rowe, R C; York, P

    2007-06-01

    Understanding of the cause-effect relationships between formulation ingredients, process conditions and product properties is essential for developing a quality product. However, the formulation knowledge is often hidden in experimental data and not easily interpretable. This study compares neurofuzzy logic and decision tree approaches in discovering hidden knowledge from an immediate release tablet formulation database relating formulation ingredients (silica aerogel, magnesium stearate, microcrystalline cellulose and sodium carboxymethylcellulose) and process variables (dwell time and compression force) to tablet properties (tensile strength, disintegration time, friability, capping and drug dissolution at various time intervals). Both approaches successfully generated useful knowledge in the form of either "if-then" rules or decision trees. Although different strategies are employed by the two approaches in generating rules/trees, similar knowledge was discovered in most cases. However, as decision trees are not able to deal with continuous dependent variables, data discretisation procedures are generally required.

  17. Is wave-particle objectivity compatible with determinism and locality?

    PubMed

    Ionicioiu, Radu; Jennewein, Thomas; Mann, Robert B; Terno, Daniel R

    2014-09-26

    Wave-particle duality, superposition and entanglement are among the most counterintuitive features of quantum theory. Their clash with our classical expectations motivated hidden-variable (HV) theories. With the emergence of quantum technologies, we can test experimentally the predictions of quantum theory versus HV theories and put strong restrictions on their key assumptions. Here, we study an entanglement-assisted version of the quantum delayed-choice experiment and show that the extension of HV to the controlling devices only exacerbates the contradiction. We compare HV theories that satisfy the conditions of objectivity (a property of photons being either particles or waves, but not both), determinism and local independence of hidden variables with quantum mechanics. Any two of the above conditions are compatible with it. The conflict becomes manifest when all three conditions are imposed and persists for any non-zero value of entanglement. We propose an experiment to test our conclusions.

  18. Is wave–particle objectivity compatible with determinism and locality?

    PubMed Central

    Ionicioiu, Radu; Jennewein, Thomas; Mann, Robert B.; Terno, Daniel R.

    2014-01-01

    Wave–particle duality, superposition and entanglement are among the most counterintuitive features of quantum theory. Their clash with our classical expectations motivated hidden-variable (HV) theories. With the emergence of quantum technologies, we can test experimentally the predictions of quantum theory versus HV theories and put strong restrictions on their key assumptions. Here, we study an entanglement-assisted version of the quantum delayed-choice experiment and show that the extension of HV to the controlling devices only exacerbates the contradiction. We compare HV theories that satisfy the conditions of objectivity (a property of photons being either particles or waves, but not both), determinism and local independence of hidden variables with quantum mechanics. Any two of the above conditions are compatible with it. The conflict becomes manifest when all three conditions are imposed and persists for any non-zero value of entanglement. We propose an experiment to test our conclusions. PMID:25256419

  19. Model-independent indirect detection constraints on hidden sector dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elor, Gilly; Rodd, Nicholas L.; Slatyer, Tracy R.

    2016-06-10

    If dark matter inhabits an expanded "hidden sector", annihilations may proceed through sequential decays or multi-body final states. We map out the potential signals and current constraints on such a framework in indirect searches, using a model-independent setup based on multi-step hierarchical cascade decays. While remaining agnostic to the details of the hidden sector model, our framework captures the generic broadening of the spectrum of secondary particles (photons, neutrinos, e+e− and p̄p) relative to the case of direct annihilation to Standard Model particles. We explore how indirect constraints on dark matter annihilation limit the parameter space for such cascade/multi-particle decays. We investigate limits from the cosmic microwave background by Planck, the Fermi measurement of photons from the dwarf galaxies, and positron data from AMS-02. The presence of a hidden sector can change the constraints on the dark matter by up to an order of magnitude in either direction (although the effect can be much smaller). We find that generally the bound from the Fermi dwarfs is most constraining for annihilations to photon-rich final states, while AMS-02 is most constraining for electron and muon final states; however in certain instances the CMB bounds overtake both, due to their approximate independence on the details of the hidden sector cascade. We provide the full set of cascade spectra considered here as publicly available code with examples at http://web.mit.edu/lns/research/CascadeSpectra.html.

  20. Model-independent indirect detection constraints on hidden sector dark matter

    DOE PAGES

    Elor, Gilly; Rodd, Nicholas L.; Slatyer, Tracy R.; ...

    2016-06-10

    If dark matter inhabits an expanded "hidden sector", annihilations may proceed through sequential decays or multi-body final states. We map out the potential signals and current constraints on such a framework in indirect searches, using a model-independent setup based on multi-step hierarchical cascade decays. While remaining agnostic to the details of the hidden sector model, our framework captures the generic broadening of the spectrum of secondary particles (photons, neutrinos, e+e− and p̄p) relative to the case of direct annihilation to Standard Model particles. We explore how indirect constraints on dark matter annihilation limit the parameter space for such cascade/multi-particle decays. We investigate limits from the cosmic microwave background by Planck, the Fermi measurement of photons from the dwarf galaxies, and positron data from AMS-02. The presence of a hidden sector can change the constraints on the dark matter by up to an order of magnitude in either direction (although the effect can be much smaller). We find that generally the bound from the Fermi dwarfs is most constraining for annihilations to photon-rich final states, while AMS-02 is most constraining for electron and muon final states; however in certain instances the CMB bounds overtake both, due to their approximate independence on the details of the hidden sector cascade. We provide the full set of cascade spectra considered here as publicly available code with examples at http://web.mit.edu/lns/research/CascadeSpectra.html.

  1. Preparation of name and address data for record linkage using hidden Markov models

    PubMed Central

    Churches, Tim; Christen, Peter; Lim, Kim; Zhu, Justin Xi

    2002-01-01

    Background Record linkage refers to the process of joining records that relate to the same entity or event in one or more data collections. In the absence of a shared, unique key, record linkage involves the comparison of ensembles of partially-identifying, non-unique data items between pairs of records. Data items with variable formats, such as names and addresses, need to be transformed and normalised in order to validly carry out these comparisons. Traditionally, deterministic rule-based data processing systems have been used to carry out this pre-processing, which is commonly referred to as "standardisation". This paper describes an alternative approach to standardisation, using a combination of lexicon-based tokenisation and probabilistic hidden Markov models (HMMs). Methods HMMs were trained to standardise typical Australian name and address data drawn from a range of health data collections. The accuracy of the results was compared to that produced by rule-based systems. Results Training of HMMs was found to be quick and did not require any specialised skills. For addresses, HMMs produced equal or better standardisation accuracy than a widely-used rule-based system. However, accuracy was worse when used with simpler name data. Possible reasons for this poorer performance are discussed. Conclusion Lexicon-based tokenisation and HMMs provide a viable and effort-effective alternative to rule-based systems for pre-processing more complex variably formatted data such as addresses. Further work is required to improve the performance of this approach with simpler data such as names. Software which implements the methods described in this paper is freely available under an open source license for other researchers to use and improve. PMID:12482326
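
    The standardisation idea (lexicon-based tokenisation of raw tokens into observation classes, followed by HMM decoding of address components) can be sketched as follows. The states, lexicon and all probabilities below are invented for illustration and are not the trained Australian-address model:

```python
# Hidden states: the address components we want to recover (illustrative set).
states = ["house_no", "street_name", "street_type", "locality"]
start = {"house_no": 0.8, "street_name": 0.15, "street_type": 0.0, "locality": 0.05}
trans = {
    "house_no":    {"house_no": 0.0, "street_name": 0.9,  "street_type": 0.05, "locality": 0.05},
    "street_name": {"house_no": 0.0, "street_name": 0.3,  "street_type": 0.6,  "locality": 0.1},
    "street_type": {"house_no": 0.0, "street_name": 0.05, "street_type": 0.0,  "locality": 0.95},
    "locality":    {"house_no": 0.0, "street_name": 0.0,  "street_type": 0.0,  "locality": 1.0},
}
emit = {
    "house_no":    {"NUMBER": 0.95, "WORD": 0.04, "TYPE_WORD": 0.01},
    "street_name": {"NUMBER": 0.05, "WORD": 0.85, "TYPE_WORD": 0.1},
    "street_type": {"NUMBER": 0.0,  "WORD": 0.05, "TYPE_WORD": 0.95},
    "locality":    {"NUMBER": 0.05, "WORD": 0.9,  "TYPE_WORD": 0.05},
}

def obs_class(tok):
    """Lexicon-based tokenisation: map a raw token to an observation class."""
    if tok.isdigit():
        return "NUMBER"
    if tok in {"st", "street", "rd", "road", "ave"}:
        return "TYPE_WORD"
    return "WORD"

def standardise(address):
    """Viterbi decoding of the most likely component for each token."""
    obs = [obs_class(t) for t in address.lower().split()]
    v = [{s: (start[s] * emit[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        v.append({s: max(((p * trans[prev][s] * emit[s][o], path + [s])
                          for prev, (p, path) in v[-1].items()),
                         key=lambda t: t[0])
                  for s in states})
    return max(v[-1].values(), key=lambda t: t[0])[1]

print(standardise("42 Main St Sydney"))
```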

  2. A dynamic programming-based particle swarm optimization algorithm for an inventory management problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Zeng, Ziqiang; Han, Bernard; Lei, Xiao

    2013-07-01

    This article presents a dynamic programming-based particle swarm optimization (DP-based PSO) algorithm for solving an inventory management problem for large-scale construction projects under a fuzzy random environment. By taking into account the purchasing behaviour and strategy under rules of international bidding, a multi-objective fuzzy random dynamic programming model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform fuzzy random parameters into fuzzy variables that are subsequently defuzzified by using an expected value operator with optimistic-pessimistic index. The iterative nature of the authors' model motivates them to develop a DP-based PSO algorithm. More specifically, their approach treats the state variables as hidden parameters. This in turn eliminates many redundant feasibility checks during initialization and particle updates at each iteration. Results and sensitivity analysis are presented to highlight the performance of the authors' optimization method, which is very effective as compared to the standard PSO algorithm.
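
    A minimal particle swarm optimization kernel of the kind the DP-based algorithm builds on, minimising a toy sphere function rather than the authors' fuzzy random inventory objective; all parameter values here are illustrative defaults:

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    """Toy objective (sphere); stands in for the defuzzified inventory cost."""
    return (x ** 2).sum(axis=1)

n, dim = 30, 2                       # swarm size and problem dimension (illustrative)
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), f(pos)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Velocity update: inertia + pull toward personal best + pull toward global best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = f(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[pbest_val.argmin()].copy()

print(np.round(gbest, 3))   # best position found, near the optimum [0, 0]
```

    The DP layer in the paper wraps updates like these around a stage-wise state recursion; the kernel above only shows the swarm step itself.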

  3. Hidden Markov models for character recognition.

    PubMed

    Vlontzos, J A; Kung, S Y

    1992-01-01

    A hierarchical system for character recognition with hidden Markov model knowledge sources which solve both the context sensitivity problem and the character instantiation problem is presented. The system achieves 97-99% accuracy using a two-level architecture and has been implemented using a systolic array, thus permitting real-time (1 ms per character) multifont and multisize printed character recognition as well as handwriting recognition.

  4. Population decoding of motor cortical activity using a generalized linear model with hidden states.

    PubMed

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas; Paninski, Liam

    2010-06-15

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (reducing the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. Copyright (c) 2010 Elsevier B.V. All rights reserved.
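
    The generative structure described above can be sketched by simulation: spike counts are Poisson with a log rate that is linear in the observed hand state plus a hidden state, here taken as a one-dimensional AR(1) process. The hand trajectory and all coefficients are invented for illustration; the paper additionally includes spike-history terms and EM-based identification, omitted here:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500

# Synthetic observed hand state (position, velocity, acceleration).
pos = np.sin(np.linspace(0, 10, T))
vel = np.gradient(pos)
acc = np.gradient(vel)
X = np.column_stack([np.ones(T), pos, vel, acc])   # design matrix with intercept

beta = np.array([1.0, 0.8, 5.0, -2.0])            # illustrative tuning coefficients

# One-dimensional hidden state evolving as an AR(1) process.
h = np.zeros(T)
for t in range(1, T):
    h[t] = 0.95 * h[t - 1] + rng.normal(0, 0.1)

rate = np.exp(X @ beta + h)    # conditional intensity: log-linear in hand + hidden state
spikes = rng.poisson(rate)     # simulated spike counts per time bin
print(spikes[:10])
```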

  5. Population Decoding of Motor Cortical Activity using a Generalized Linear Model with Hidden States

    PubMed Central

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas G.; Paninski, Liam

    2010-01-01

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations of the relationship between neural activity and motor behavior. However, they lack a description of movement-related terms that are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework, where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method on two datasets in which spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. This method significantly improves the model fit over the classical GLM for hidden dimensions varying from 1 to 4. It also provides more accurate decoding of hand state (lowering the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements in representation and decoding over the classical GLM suggest that this new approach could be a useful tool for motor cortical decoding and prosthetic applications. PMID:20359500

  6. Network architectures and circuit function: testing alternative hypotheses in multifunctional networks.

    PubMed

    Leonard, J L

    2000-05-01

    Understanding how species-typical movement patterns are organized in the nervous system is a central question in neurobiology. The current explanations involve 'alphabet' models in which an individual neuron may participate in the circuit for several behaviors but each behavior is specified by a dedicated neural circuit. However, not all of the well-studied model systems fit the 'alphabet' model. The 'equation' model provides an alternative possibility, whereby a system of parallel motor neurons, each with a unique (but overlapping) field of innervation, can account for the production of stereotyped behavior patterns by variable circuits. That is, it is possible for such patterns to arise as emergent properties of a generalized neural network in the absence of feedback, a simple version of a 'self-organizing' behavioral system. Comparison of systems of identified neurons suggests that the 'alphabet' model may account for most observations where CPGs act to organize motor patterns. Other well-known model systems, involving architectures corresponding to feed-forward neural networks with a hidden layer, may organize patterned behavior in a manner consistent with the 'equation' model. Such architectures are found in the Mauthner and reticulospinal circuits, 'escape' locomotion in cockroaches, and CNS control of the Aplysia gill, and may also be important in the coordination of sensory information and motor systems in insect mushroom bodies and the vertebrate hippocampus. The hidden layer of such networks may serve as an 'internal representation' of the behavioral state and/or body position of the animal, allowing the animal to fine-tune oriented, or particularly context-sensitive, movements to the prevalent conditions. Experiments designed to distinguish between the two models in cases where they make mutually exclusive predictions provide an opportunity to elucidate the neural mechanisms by which behavior is organized in vivo and in vitro. Copyright 2000 S. Karger AG, Basel

  7. Statistical ecology comes of age.

    PubMed

    Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-12-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data.

  8. Statistical ecology comes of age

    PubMed Central

    Gimenez, Olivier; Buckland, Stephen T.; Morgan, Byron J. T.; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M.; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M.; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-01-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1–4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data. PMID:25540151

  9. Rare Z boson decays to a hidden sector

    DOE PAGES

    Blinov, Nikita; Izaguirre, Eder; Shuve, Brian

    2018-01-18

    We demonstrate that rare decays of the Standard Model Z boson can be used to discover and characterize the nature of new hidden-sector particles. We propose new searches for these particles in soft, high-multiplicity leptonic final states at the Large Hadron Collider. The proposed searches are sensitive to low-mass particles produced in Z decays, and we argue that these striking signatures can shed light on the hidden-sector couplings and mechanism for mass generation.

  10. Rare Z boson decays to a hidden sector

    DOE PAGES

    Blinov, Nikita; Izaguirre, Eder; Shuve, Brian

    2018-01-01

    We demonstrate that rare decays of the Standard Model Z boson can be used to discover and characterize the nature of new hidden-sector particles. We propose new searches for these particles in soft, high-multiplicity leptonic final states at the Large Hadron Collider. The proposed searches are sensitive to low-mass particles produced in Z decays, and we argue that these striking signatures can shed light on the hidden-sector couplings and mechanism for mass generation.

  11. Rare Z boson decays to a hidden sector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blinov, Nikita; Izaguirre, Eder; Shuve, Brian

    We demonstrate that rare decays of the Standard Model Z boson can be used to discover and characterize the nature of new hidden-sector particles. We propose new searches for these particles in soft, high-multiplicity leptonic final states at the Large Hadron Collider. The proposed searches are sensitive to low-mass particles produced in Z decays, and we argue that these striking signatures can shed light on the hidden-sector couplings and mechanism for mass generation.

  12. Optimization of Artificial Neural Network using Evolutionary Programming for Prediction of Cascading Collapse Occurrence due to the Hidden Failure Effect

    NASA Astrophysics Data System (ADS)

    Idris, N. H.; Salim, N. A.; Othman, M. M.; Yasin, Z. M.

    2018-03-01

    This paper presents an Evolutionary Programming (EP) approach to optimizing the training parameters of an Artificial Neural Network (ANN) for predicting cascading collapse occurrence due to the effect of protection system hidden failure. The data were collected from simulations of a hidden-failure probability model driven by historical data. The training parameters of a multilayer feedforward network with backpropagation were optimized with the objective of minimizing the Mean Square Error (MSE). The optimal training parameters, consisting of the momentum rate, the learning rate, and the numbers of neurons in the first and second hidden layers, are selected by the EP-ANN. The IEEE 14-bus system was used as a case study to validate the proposed technique. The results show reliable prediction performance, validated through the MSE and the correlation coefficient (R).
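
The EP-over-ANN idea can be sketched as follows. This is a minimal, self-contained illustration, not the paper's implementation: a toy regression task stands in for the hidden-failure data, the network has a single hidden layer rather than two, and all mutation operators and parameter ranges are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task standing in for the hidden-failure simulation data.
X = np.linspace(-1, 1, 80).reshape(-1, 1)
y = np.sin(3 * X)

def train_mse(lr, momentum, n_hidden, epochs=200):
    """Train a one-hidden-layer net with backpropagation + momentum,
    returning the final Mean Square Error (the EP fitness)."""
    r = np.random.default_rng(2)
    W1 = r.normal(scale=0.5, size=(1, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = r.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)
    vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
    vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                 # forward pass
        err = H @ W2 + b2 - y
        gW2, gb2 = H.T @ err / len(X), err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)         # backprop through tanh
        gW1, gb1 = X.T @ dH / len(X), dH.mean(0)
        for v, g, p in ((vW1, gW1, W1), (vb1, gb1, b1),
                        (vW2, gW2, W2), (vb2, gb2, b2)):
            v *= momentum
            v -= lr * g
            p += v
    mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
    return mse if np.isfinite(mse) else 1e9      # guard against divergence

# EP loop: mutate (learning rate, momentum, hidden neurons), keep the best.
pop = [(10 ** rng.uniform(-2, 0), rng.uniform(0, 0.9), int(rng.integers(2, 16)))
       for _ in range(6)]
best_history = []
for gen in range(5):
    scored = sorted((train_mse(lr, m, n), (lr, m, n)) for lr, m, n in pop)
    best_history.append(scored[0][0])
    parents = [p for _, p in scored[:3]]          # elitist selection
    children = [(max(1e-3, lr * 10 ** rng.normal(0, 0.2)),
                 float(np.clip(m + rng.normal(0, 0.1), 0.0, 0.95)),
                 int(np.clip(n + rng.integers(-2, 3), 2, 32)))
                for lr, m, n in parents]
    pop = parents + children
```

Because the best parents always survive into the next generation (elitism) and the fitness is deterministic here, the best MSE can only improve or stay flat across generations.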

  13. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.

    PubMed

    Ranganayaki, V; Deepa, S N

    2016-01-01

    Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecast values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back-propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems; this paper aims to avoid both. The number of hidden neurons is selected here using 102 criteria, which are verified against various computed error values. The proposed criteria for fixing the number of hidden neurons are validated using a convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The simulation results substantiate that the proposed ensemble model reduces the error to a minimum and enhances accuracy. The computed results demonstrate the effectiveness of the proposed ensemble neural network (ENN) model, with respect to the considered error factors, in comparison with earlier models in the literature.
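
The ensemble-averaging step can be sketched as follows, with simple polynomial forecasters standing in for the MLP/Madaline/BPN/PNN members (the data and models are synthetic, not those of the paper). One property motivates the design: for squared error, the averaged forecast is never worse than the average of the members' errors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "wind speed" series standing in for the real measurement data.
t = np.arange(200)
wind = 8 + 2 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.5, size=t.size)

train, test = slice(0, 150), slice(150, 200)

# Four simple forecasters standing in for the ensemble members:
# polynomial trend fits of different degree, each predicting the held-out window.
preds = []
for degree in (3, 5, 7, 9):
    coef = np.polyfit(t[train], wind[train], degree)
    preds.append(np.polyval(coef, t[test]))
preds = np.array(preds)

ensemble = preds.mean(axis=0)                        # average the member forecasts
member_mse = ((preds - wind[test]) ** 2).mean(axis=1)
ensemble_mse = ((ensemble - wind[test]) ** 2).mean()
```

By Jensen's inequality, `ensemble_mse <= member_mse.mean()` holds for any set of members, which is why averaging is a safe aggregation rule even when individual members are poor.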

  14. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems

    PubMed Central

    Ranganayaki, V.; Deepa, S. N.

    2016-01-01

    Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecast values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back-propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems; this paper aims to avoid both. The number of hidden neurons is selected here using 102 criteria, which are verified against various computed error values. The proposed criteria for fixing the number of hidden neurons are validated using a convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The simulation results substantiate that the proposed ensemble model reduces the error to a minimum and enhances accuracy. The computed results demonstrate the effectiveness of the proposed ensemble neural network (ENN) model, with respect to the considered error factors, in comparison with earlier models in the literature. PMID:27034973

  15. Optimizing Experimental Designs: Finding Hidden Treasure.

    USDA-ARS?s Scientific Manuscript database

    Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...

  16. Boudinage in nature and experiment

    NASA Astrophysics Data System (ADS)

    Marques, Fernando O.; Fonseca, Pedro D.; Lechmann, Sarah; Burg, Jean-Pierre; Marques, Ana S.; Andrade, Alexandre J. M.; Alves, Carlos

    2012-03-01

    Deformation of rocks produces structures at all scales that are in many cases periodic (folding or boudinage), with variable amplitude and wavelength. Here we focus on boudinage, a process of primordial importance for tectonics. In the present study, we carried out measurements of natural boudins and experimentally tested the effects of two variables on boudinage: layer thickness and compression rate. The models were made of a competent layer (mostly brittle, as in nature) of either elastic (soft paper) or viscoelastoplastic (clay) material embedded in a ductile matrix of linear viscous silicone putty. The competent layer lay with its greatest surface normal to the principal shortening axis and greatest length parallel to the principal stretching axis. The model was then subjected to pure shear at constant piston velocity and variable competent layer thickness (Model 1), or at different piston velocity and constant layer thickness (Model 2). The results of Model 1 show an exponential dependence of boudin width on competent layer thickness, in disagreement with data from the studied natural occurrence. This indicates that variables other than competent layer thickness are hidden in the linear relationship obtained for the natural boudinage. The results of Model 2 show that the higher the velocity the smaller the boudin width, following a power-law with exponent very similar to that of analytical predictions. The studied natural boudinage occasionally occurs in two orthogonal directions. This chocolate tablet boudinage can be the result of two successive stages of deformation: buckling followed by stretching of competent sandstone layers, or buckling followed by rotation of reverse limbs into the extensional field of simple shear.
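
Power-law relationships of the kind reported for Model 2 are typically estimated by linear regression in log-log space, since w = a·v^(−k) implies log w = log a − k·log v. The data below are synthetic, chosen only to illustrate the fit, not measurements from the study.

```python
import numpy as np

# Synthetic boudin-width "measurements" following w = a * v**(-k), standing in
# for the Model 2 trend (higher piston velocity -> smaller boudin width).
a_true, k_true = 12.0, 0.6
velocity = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # arbitrary units
width = a_true * velocity ** (-k_true)

# A power-law is linear in log-log space, so ordinary least squares on the
# logs recovers the exponent (the slope) and prefactor (the intercept).
slope, intercept = np.polyfit(np.log(velocity), np.log(width), 1)
k_est, a_est = -slope, np.exp(intercept)
```

With noisy field or laboratory data the same fit applies; the recovered exponent is then compared against the analytical prediction, as the abstract describes.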

  17. A coupled hidden Markov model for disease interactions

    PubMed Central

    Sherlock, Chris; Xifara, Tatiana; Telfer, Sandra; Begon, Mike

    2013-01-01

    To investigate interactions between parasite species in a host, a population of field voles was studied longitudinally, with presence or absence of six different parasites measured repeatedly. Although trapping sessions were regular, a different set of voles was caught at each session, leading to incomplete profiles for all subjects. We use a discrete time hidden Markov model for each disease with transition probabilities dependent on covariates via a set of logistic regressions. For each disease the hidden states for each of the other diseases at a given time point form part of the covariate set for the Markov transition probabilities from that time point. This allows us to gauge the influence of each parasite species on the transition probabilities for each of the other parasite species. Inference is performed via a Gibbs sampler, which cycles through each of the diseases, first using an adaptive Metropolis–Hastings step to sample from the conditional posterior of the covariate parameters for that particular disease given the hidden states for all other diseases and then sampling from the hidden states for that disease given the parameters. We find evidence for interactions between several pairs of parasites and of an acquired immune response for two of the parasites. PMID:24223436
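
The covariate-dependent transition structure can be sketched with a logistic link in which the other parasite's hidden state enters as a covariate. The coefficients below are invented for illustration; they are not the posterior estimates from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative regression coefficients (invented): the log-odds that
# parasite A is acquired depend on a covariate (e.g. season) and on the
# hidden presence/absence state of parasite B in the same host.
beta = {"intercept": -2.0, "season": 0.8, "B_present": 1.5}

def p_acquire_A(season, b_state):
    """P(A: absent -> present) via the logistic link of the coupled HMM."""
    z = beta["intercept"] + beta["season"] * season + beta["B_present"] * b_state
    return sigmoid(z)

p_without_B = p_acquire_A(season=1.0, b_state=0)   # B absent
p_with_B = p_acquire_A(season=1.0, b_state=1)      # B present
```

A positive `B_present` coefficient, as assumed here, means infection with B raises the acquisition probability for A; the Gibbs sampler in the paper infers such coefficients jointly with the hidden disease states.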

  18. Implicit emotion regulation in adolescent girls: An exploratory investigation of Hidden Markov Modeling and its neural correlates.

    PubMed

    Steele, James S; Bush, Keith; Stowe, Zachary N; James, George A; Smitherman, Sonet; Kilts, Clint D; Cisler, Josh

    2018-01-01

    Numerous data demonstrate that distracting emotional stimuli cause behavioral slowing (i.e. emotional conflict) and that behavior dynamically adapts to such distractors. However, the cognitive and neural mechanisms that mediate these behavioral findings are poorly understood. Several theoretical models have been developed that attempt to explain these phenomena, but these models have not been directly tested on human behavior nor compared. A potential tool to overcome this limitation is Hidden Markov Modeling (HMM), which is a computational approach to modeling indirectly observed systems. Here, we administered an emotional Stroop task to a sample of healthy adolescent girls (N = 24) during fMRI and used HMM to implement theoretical behavioral models. We then compared the model fits and tested for neural representations of the hidden states of the most supported model. We found that a modified variant of the model posited by Mathews et al. (1998) was most concordant with observed behavior and that brain activity was related to the model-based hidden states. Particularly, while the valences of the stimuli themselves were encoded primarily in the ventral visual cortex, the model-based detection of threatening targets was associated with increased activity in the bilateral anterior insula, while task effort (i.e. adaptation) was associated with reduction in the activity of these areas. These findings suggest that emotional target detection and adaptation are accomplished partly through increases and decreases, respectively, in the perceived immediate relevance of threatening cues and also demonstrate the efficacy of using HMM to apply theoretical models to human behavior.
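
Comparing candidate HMMs of behavior, as done above, rests on computing each model's likelihood for the observed sequence. A minimal sketch using the scaled forward algorithm, with invented two-state models rather than the paper's Mathews-style variants:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs) under a discrete HMM
    (pi: initial probs, A: transitions, B[state, symbol]: emissions)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    alpha = alpha / c
    loglik = np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()        # rescale each step to avoid underflow
        alpha = alpha / c
        loglik += np.log(c)
    return loglik

# Two hypothetical 2-state candidate models for a binarized behavioral
# measure (e.g. slow vs fast responses); all parameters are illustrative.
pi = np.array([0.5, 0.5])
B = np.array([[0.9, 0.1],     # state 0 mostly emits symbol 0
              [0.2, 0.8]])    # state 1 mostly emits symbol 1
A_sticky = np.array([[0.95, 0.05], [0.05, 0.95]])   # persistent states
A_mixing = np.array([[0.50, 0.50], [0.50, 0.50]])   # memoryless states

rng = np.random.default_rng(7)
state, obs = 0, []
for _ in range(300):
    obs.append(rng.choice(2, p=B[state]))
    state = rng.choice(2, p=A_sticky[state])
obs = np.array(obs)

ll_sticky = forward_loglik(obs, pi, A_sticky, B)
ll_mixing = forward_loglik(obs, pi, A_mixing, B)
# The candidate with the higher log-likelihood is the better-supported model.
```

Model comparison in practice would also penalize complexity (e.g. AIC/BIC) when candidates differ in parameter count.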

  19. Implicit emotion regulation in adolescent girls: An exploratory investigation of Hidden Markov Modeling and its neural correlates

    PubMed Central

    Bush, Keith; Stowe, Zachary N.; James, George A.; Smitherman, Sonet; Kilts, Clint D.; Cisler, Josh

    2018-01-01

    Numerous data demonstrate that distracting emotional stimuli cause behavioral slowing (i.e. emotional conflict) and that behavior dynamically adapts to such distractors. However, the cognitive and neural mechanisms that mediate these behavioral findings are poorly understood. Several theoretical models have been developed that attempt to explain these phenomena, but these models have not been directly tested on human behavior nor compared. A potential tool to overcome this limitation is Hidden Markov Modeling (HMM), which is a computational approach to modeling indirectly observed systems. Here, we administered an emotional Stroop task to a sample of healthy adolescent girls (N = 24) during fMRI and used HMM to implement theoretical behavioral models. We then compared the model fits and tested for neural representations of the hidden states of the most supported model. We found that a modified variant of the model posited by Mathews et al. (1998) was most concordant with observed behavior and that brain activity was related to the model-based hidden states. Particularly, while the valences of the stimuli themselves were encoded primarily in the ventral visual cortex, the model-based detection of threatening targets was associated with increased activity in the bilateral anterior insula, while task effort (i.e. adaptation) was associated with reduction in the activity of these areas. These findings suggest that emotional target detection and adaptation are accomplished partly through increases and decreases, respectively, in the perceived immediate relevance of threatening cues and also demonstrate the efficacy of using HMM to apply theoretical models to human behavior. PMID:29489856

  20. Galactic center γ-ray excess in hidden sector DM models with dark gauge symmetries: local Z{sub 3} symmetry as an example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, P.; Tang, Yong

    We show that hidden sector dark matter (DM) models with local dark gauge symmetries make a natural playground for the possible γ-ray excess from the galactic center (GC). We first discuss in detail the GC γ-ray excess in a scalar dark matter model with local Z{sub 3} symmetry which was recently proposed by the present authors. Within this model, scalar DM with mass 30–70 GeV is allowed due to the newly-opened (semi-)annihilation channels of a DM pair into dark Higgs ϕ and/or dark photon Z′ pairs, and the γ-ray spectrum from the GC can be fit within this model. Then we argue that the GC γ-ray excess can be easily accommodated within hidden sector dark matter models where DM is stabilized by local gauge symmetries, due to the presence of a dark Higgs (and also a dark photon for Abelian dark gauge symmetry).

  1. Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression

    PubMed Central

    Liu, Yu-Ying; Li, Shuang; Li, Fuxin; Song, Le; Rehg, James M.

    2016-01-01

    The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter learning algorithm for CT-HMM restricts its use to very small models or requires unrealistic constraints on the state transitions. In this paper, we present the first complete characterization of efficient EM-based learning methods for CT-HMM models. We demonstrate that the learning problem consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by adapting three approaches from the continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer’s disease dataset. PMID:27019571
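
The continuous-time ingredient of the CT-HMM can be illustrated directly: given a rate matrix Q, the state-transition probabilities over an arbitrary inter-visit gap dt are given by the matrix exponential expm(Q·dt), which is what accommodates irregularly timed observations. The 3-state Q below is invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 3-state disease-progression rate matrix Q (rows sum to zero):
# states = (mild, moderate, severe); off-diagonals are transition rates.
Q = np.array([[-0.20,  0.15, 0.05],
              [ 0.00, -0.10, 0.10],
              [ 0.00,  0.00, 0.00]])   # 'severe' is absorbing here

def transition_matrix(Q, dt):
    """P(dt) = expm(Q * dt): transition probabilities over a gap of length dt."""
    return expm(Q * dt)

P_short = transition_matrix(Q, 0.5)   # short gap between visits
P_long = transition_matrix(Q, 5.0)    # long gap between visits
```

Each row of P(dt) is a valid probability distribution, and the probability of having progressed to the absorbing severe state grows with the length of the observation gap.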

  2. Life imitating art: depictions of the hidden curriculum in medical television programs.

    PubMed

    Stanek, Agatha; Clarkin, Chantalle; Bould, M Dylan; Writer, Hilary; Doja, Asif

    2015-09-26

    The hidden curriculum represents influences occurring within the culture of medicine that indirectly alter medical professionals' interactions, beliefs and clinical practices throughout their training. One approach to increasing medical student awareness of the hidden curriculum is to provide them with readily available examples of how it is enacted in medicine; as such, the purpose of this study was to examine depictions of the hidden curriculum in popular medical television programs. One full season of each of ER, Grey's Anatomy and Scrubs was selected for review. A summative content analysis was performed to ascertain the presence of depictions of the hidden curriculum, as well as to record the type, frequency and quality of examples. A second reviewer also viewed a random selection of episodes from each series to establish coding reliability. The most prevalent themes across all television programs were: the hierarchical nature of medicine; challenges during transitional stages in medicine; the importance of role modeling; patient dehumanization; faking or overstating one's capabilities; unprofessionalism; the loss of idealism; and difficulties with work-life balance. The hidden curriculum is frequently depicted in popular medical television shows. These examples of the hidden curriculum could serve as a valuable teaching resource in undergraduate medical programs.

  3. Gauge mediation scenario with hidden sector renormalization in MSSM

    NASA Astrophysics Data System (ADS)

    Arai, Masato; Kawai, Shinsuke; Okada, Nobuchika

    2010-02-01

    We study the hidden sector effects on the mass renormalization of the simplest gauge-mediated supersymmetry breaking scenario. We point out that possible hidden sector contributions render the soft scalar masses smaller, resulting in drastically different sparticle mass spectrum at low energy. In particular, in the 5+5¯ minimal gauge-mediated supersymmetry breaking with high messenger scale (that is favored by the gravitino cold dark matter scenario), we show that a stau can be the next lightest superparticle for moderate values of hidden sector self-coupling. This provides a very simple theoretical model of long-lived charged next lightest superparticles, which imply distinctive signals in ongoing and upcoming collider experiments.

  4. Variability in the combustion-derived fraction of urban humidity in Salt Lake City winter estimated from stable water vapor isotopes and its relationship to atmospheric stability and inversion structure

    NASA Astrophysics Data System (ADS)

    Fiorella, R.; Bares, R.; Lin, J. C.; Strong, C.; Bowen, G. J.

    2017-12-01

    Water released from the combustion of fossil fuels, while a negligible part of the global hydrological cycle, may be a significant contributor to urban humidity as fossil fuel emissions are strongly concentrated in space and time. The fraction of urban humidity comprised of combustion-derived vapor (CDV) cannot be observed through humidity measurements alone. However, the distinct stable isotopic composition of CDV, which arises from the reaction of 18O-enriched atmospheric O2 with 2H-depleted organic molecules, represents a promising method to apportion observed humidity between CDV and advected vapor. We apply stable water vapor isotopes to investigate variability in CDV amount and its relationship to atmospheric conditions in Salt Lake City, Utah. The Salt Lake Valley (SLV) experiences several periods of atmospheric stratification during winter known as cold air pools, during which concentrations of CDV and pollutants can be markedly elevated due to reduced atmospheric mixing. Therefore, the SLV during winter is an ideal place to investigate variability in CDV fraction across a spectrum of boundary layer conditions, ranging from well-mixed to very stable. We present water vapor isotope data from four winters (2013-2017) from the top of a 30 m building on the University of Utah (U of U) Campus. Additionally, we present water vapor isotope data from the summit of Hidden Peak from the 2016-2017 winter, 25 km SE and 2000 m above the U of U site. The Hidden Peak site is consistently above the cold air pool emplaced in the SLV during stable events. We find the expression of the CDV signal in the valley is related to the atmospheric structure of the cold air pools in the SLV, and that the fraction of CDV inferred in the valley is likely related to the mixing height within the cold air pool. 
Furthermore, we find that patterns between the Hidden Peak and U of U sites during inversion events may record the large-scale atmospheric dynamics promoting emplacement of the cold air pool in the SLV. Further refinements of CDV estimation through stable isotope methods will bring improved mechanistic understanding of the role of CDV in the urban hydrological cycle and improve model simulations of urban environments.
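
The apportionment step is, at heart, a two-endmember isotope mass balance: observed vapor is treated as a mixture of background (advected) vapor and CDV, and the CDV fraction follows from the observed δ value. All δ2H endmember values below are invented for illustration; the study's calibrated endmembers would differ.

```python
# Two-endmember mixing model for combustion-derived vapor (CDV).
# Endmember values are illustrative only (per mil, VSMOW-style notation).
d2H_background = -120.0   # advected (background) vapor endmember
d2H_cdv = -230.0          # strongly 2H-depleted combustion vapor endmember

def cdv_fraction(d2H_observed):
    """Fraction of humidity attributable to CDV from a delta-2H mass balance."""
    return (d2H_observed - d2H_background) / (d2H_cdv - d2H_background)

f_clean = cdv_fraction(-121.0)     # near-background air
f_polluted = cdv_fraction(-135.0)  # stable cold-air-pool conditions
```

The same balance can be written for δ18O; using both isotopes together helps separate the combustion signal from natural humidity variability.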

  5. Anomalous neural circuit function in schizophrenia during a virtual Morris water task.

    PubMed

    Folley, Bradley S; Astur, Robert; Jagannathan, Kanchana; Calhoun, Vince D; Pearlson, Godfrey D

    2010-02-15

    Previous studies have reported learning and navigation impairments in schizophrenia patients during virtual reality allocentric learning tasks. The neural bases of these deficits have not been explored using functional MRI despite well-explored anatomic characterization of these paradigms in non-human animals. Our objective was to characterize the differential distributed neural circuits involved in virtual Morris water task performance using independent component analysis (ICA) in schizophrenia patients and controls. Additionally, we present behavioral data in order to derive relationships between brain function and performance, and we have included a general linear model-based analysis in order to exemplify the incremental and differential results afforded by ICA. Thirty-four individuals with schizophrenia and twenty-eight healthy controls underwent fMRI scanning during a block design virtual Morris water task using hidden and visible platform conditions. Independent components analysis was used to deconstruct neural contributions to hidden and visible platform conditions for patients and controls. We also examined performance variables, voxel-based morphometry and hippocampal subparcellation, and regional BOLD signal variation. Independent component analysis identified five neural circuits. Mesial temporal lobe regions, including the hippocampus, were consistently task-related across conditions and groups. Frontal, striatal, and parietal circuits were recruited preferentially during the visible condition for patients, while frontal and temporal lobe regions were more saliently recruited by controls during the hidden platform condition. Gray matter concentrations and BOLD signal in hippocampal subregions were associated with task performance in controls but not patients. Patients exhibited impaired performance on the hidden and visible conditions of the task, related to negative symptom severity. 
While controls showed coupling between neural circuits, regional neuroanatomy, and behavior, patients activated different task-related neural circuits, not associated with appropriate regional neuroanatomy. GLM analysis elucidated several comparable regions, with the exception of the hippocampus. Inefficient allocentric learning and memory in patients may be related to an inability to recruit appropriate task-dependent neural circuits. Copyright 2009 Elsevier Inc. All rights reserved.

  6. Reconstructing Mammalian Sleep Dynamics with Data Assimilation

    PubMed Central

    Sedigh-Sarvestani, Madineh; Schiff, Steven J.; Gluckman, Bruce J.

    2012-01-01

    Data assimilation is a valuable tool in the study of any complex system, where measurements are incomplete, uncertain, or both. It enables the user to take advantage of all available information including experimental measurements and short-term model forecasts of a system. Although data assimilation has been used to study other biological systems, the study of the sleep-wake regulatory network has yet to benefit from this toolset. We present a data assimilation framework based on the unscented Kalman filter (UKF) for combining sparse measurements together with a relatively high-dimensional nonlinear computational model to estimate the state of a model of the sleep-wake regulatory system. We demonstrate with simulation studies that a few noisy variables can be used to accurately reconstruct the remaining hidden variables. We introduce a metric for ranking relative partial observability of computational models, within the UKF framework, that allows us to choose the optimal variables for measurement and also provides a methodology for optimizing framework parameters such as UKF covariance inflation. In addition, we demonstrate a parameter estimation method that allows us to track non-stationary model parameters and accommodate slow dynamics not included in the UKF filter model. Finally, we show that we can even use observed discretized sleep-state, which is not one of the model variables, to reconstruct model state and estimate unknown parameters. Sleep is implicated in many neurological disorders from epilepsy to schizophrenia, but simultaneous observation of the many brain components that regulate this behavior is difficult. We anticipate that this data assimilation framework will enable better understanding of the detailed interactions governing sleep and wake behavior and provide for better, more targeted, therapies. PMID:23209396
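
The filtering idea can be sketched with a scalar linear Kalman filter, a simplified stand-in for the UKF used in the paper (the UKF generalizes the same predict/update cycle to nonlinear models via sigma points). All noise settings here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simplified stand-in for the UKF framework: a scalar linear Kalman filter
# tracking a slowly drifting hidden quantity from noisy measurements.
Q, R = 0.01, 1.0          # process and measurement noise variances (invented)
x_true, x_est, P = 0.0, 0.0, 1.0
P_history = []
for _ in range(200):
    x_true += rng.normal(0, np.sqrt(Q))       # hidden state evolves
    z = x_true + rng.normal(0, np.sqrt(R))    # noisy measurement
    # predict step: propagate the estimate's uncertainty
    P_pred = P + Q
    # update step: blend prediction and measurement by the Kalman gain
    K = P_pred / (P_pred + R)
    x_est = x_est + K * (z - x_est)
    P = (1 - K) * P_pred
    P_history.append(P)
```

The posterior variance P settles well below the measurement variance R, which is the formal sense in which assimilation "reconstructs" a hidden variable more precisely than any single measurement allows.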

  7. Dopamine reward prediction errors reflect hidden state inference across time

    PubMed Central

    Starkweather, Clara Kwon; Babayan, Benedicte M.; Uchida, Naoshige; Gershman, Samuel J.

    2017-01-01

    Midbrain dopamine neurons signal reward prediction error (RPE), or actual minus expected reward. The temporal difference (TD) learning model has been a cornerstone in understanding how dopamine RPEs could drive associative learning. Classically, TD learning imparts value to features that serially track elapsed time relative to observable stimuli. In the real world, however, sensory stimuli provide ambiguous information about the hidden state of the environment, leading to the proposal that TD learning might instead compute a value signal based on an inferred distribution of hidden states (a ‘belief state’). In this work, we asked whether dopaminergic signaling supports a TD learning framework that operates over hidden states. We found that dopamine signaling exhibited a striking difference between two tasks that differed only with respect to whether reward was delivered deterministically. Our results favor an associative learning rule that combines cached values with hidden state inference. PMID:28263301

  8. Dopamine reward prediction errors reflect hidden-state inference across time.

    PubMed

    Starkweather, Clara Kwon; Babayan, Benedicte M; Uchida, Naoshige; Gershman, Samuel J

    2017-04-01

    Midbrain dopamine neurons signal reward prediction error (RPE), or actual minus expected reward. The temporal difference (TD) learning model has been a cornerstone in understanding how dopamine RPEs could drive associative learning. Classically, TD learning imparts value to features that serially track elapsed time relative to observable stimuli. In the real world, however, sensory stimuli provide ambiguous information about the hidden state of the environment, leading to the proposal that TD learning might instead compute a value signal based on an inferred distribution of hidden states (a 'belief state'). Here we asked whether dopaminergic signaling supports a TD learning framework that operates over hidden states. We found that dopamine signaling showed a notable difference between two tasks that differed only with respect to whether reward was delivered in a deterministic manner. Our results favor an associative learning rule that combines cached values with hidden-state inference.

  9. An Integrated Hydro-Economic Model for Economy-Wide Climate Change Impact Assessment for Zambia

    NASA Astrophysics Data System (ADS)

    Zhu, T.; Thurlow, J.; Diao, X.

    2008-12-01

    Zambia is a landlocked country in Southern Africa, with a total population of about 11 million and a total area of about 752 thousand square kilometers. Agriculture in the country depends heavily on rainfall, as the majority of cultivated land is rain-fed. Significant rainfall variability has been a major challenge for the country in sustaining agricultural growth, an important condition for meeting the United Nations Millennium Development Goals. The situation is expected to become even more complex as climate change imposes additional impacts on rainwater availability and crop water requirements, among other changes. To understand the impacts of climate variability and change on agricultural production and the national economy, a soil hydrology model and a crop water production model are developed to simulate actual crop water use and yield losses under water stress, which provide annual shocks for a recursive dynamic computable general equilibrium (CGE) model developed for Zambia. Observed meteorological data of the past three decades are used in the integrated hydro-economic model for climate variability impact analysis, and as baseline climatology for climate change impact assessment together with several GCM-based climate change scenarios that cover a broad range of climate projections. We found that climate variability can explain a significant portion of the past annual variations in Zambia's agricultural production and GDP. Hidden beneath climate variability, climate change is found to have modest impacts on the agriculture and national economy of Zambia around 2025, but the impacts would be more pronounced in the far future if appropriate adaptations are not implemented. Policy recommendations are provided based on scenario analysis.

  10. Hidden asymmetry and forward-backward correlations

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Zalewski, K.

    2010-09-01

    A model-independent method of studying the forward-backward correlations in symmetric high-energy processes is developed. The method allows a systematic study of the properties of various particle sources and allows one to uncover asymmetric structures hidden in symmetric hadron-hadron and nucleus-nucleus inelastic reactions.

  11. Searching for confining hidden valleys at LHCb, ATLAS, and CMS

    NASA Astrophysics Data System (ADS)

    Pierce, Aaron; Shakya, Bibhushan; Tsai, Yuhsin; Zhao, Yue

    2018-05-01

    We explore strategies for probing hidden valley scenarios exhibiting confinement. Such scenarios lead to a moderate multiplicity of light hidden hadrons for generic showering and hadronization similar to QCD. Their decays are typically soft and displaced, making them challenging to probe with traditional LHC searches. We show that the low trigger requirements and excellent track and vertex reconstruction at LHCb provide a favorable environment to search for such signals. We propose novel search strategies in both muonic and hadronic channels. We also study existing ATLAS and CMS searches and compare them with our proposals at LHCb. We find that the reach at LHCb is generically better in the parameter space we consider here, even with optimistic background estimations for ATLAS and CMS searches. We discuss potential modifications at ATLAS and CMS that might make these experiments competitive with the LHCb reach. Our proposed searches can be applied to general hidden valley models as well as exotic Higgs boson decays, such as in twin Higgs models.

  12. Data-driven Climate Modeling and Prediction

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.; Chekroun, M.

    2016-12-01

    Global climate models aim to simulate a broad range of spatio-temporal scales of climate variability with a state vector having many millions of degrees of freedom. On the other hand, while detailed weather prediction out to a few days requires high numerical resolution, it is fairly clear that a major fraction of large-scale climate variability can be predicted in a much lower-dimensional phase space. Low-dimensional models can simulate and predict this fraction of climate variability, provided they are able to account for linear and nonlinear interactions between the modes representing large scales of climate dynamics, as well as their interactions with a much larger number of modes representing fast and small scales. This presentation will highlight several new applications of the Multilayered Stochastic Modeling (MSM) framework [Kondrashov, Chekroun and Ghil, 2015], which has amply proven its efficiency in the modeling and real-time forecasting of various climate phenomena. MSM is a data-driven inverse modeling technique that aims to obtain a low-order nonlinear system of prognostic equations driven by stochastic forcing, and estimates both the dynamical operator and the properties of the driving noise from multivariate time series of observations or a high-end model's simulation. MSM leads to a system of stochastic differential equations (SDEs) involving hidden (auxiliary) variables of fast, small scales ranked by layers, which interact with the macroscopic (observed) variables of large, slow scales to model the dynamics of the latter, and thus convey memory effects. New MSM climate applications focus on the development of computationally efficient low-order models using data-adaptive decomposition methods that convey memory effects via time-embedding techniques, such as Multichannel Singular Spectrum Analysis (M-SSA) [Ghil et al. 2002] and the recently developed Data-Adaptive Harmonic (DAH) decomposition method [Chekroun and Kondrashov, 2016].
In particular, new results by DAH-MSM modeling and prediction of Arctic Sea Ice, as well as decadal predictions of near-surface Earth temperatures will be presented.

  13. You've got to know the rules to play the game: how medical students negotiate the hidden curriculum of surgical careers.

    PubMed

    Hill, Elspeth; Bowman, Katherine; Stalmeijer, Renée; Hart, Jo

    2014-09-01

    The hidden curriculum may be framed as the culture, beliefs and behaviours of a community that are passed to students outside formal course offerings. Medical careers involve diverse specialties, each with a different culture, yet how medical students negotiate these cultures has not been fully explored. Using surgery as a case study, we aimed to establish, first, whether a specialty-specific hidden curriculum existed for students, and second, how students encountered and negotiated surgical career options. Using a constructivist grounded theory approach, we explored students' thoughts, beliefs and experiences regarding career decisions and surgery. An exploratory questionnaire informed the discussion schedule for semi-structured individual interviews. Medical students were purposively sampled by year group, gender and career intentions in surgery. Data collection and analysis were iterative: analysis followed each interview and guided the adaptation of our discussion schedule to further our evolving model. Students held a clear sense of a hidden curriculum in surgery. To successfully negotiate a surgical career, students perceived that they must first build networks because careers information flows through relationships. They subsequently enacted what they learned by accruing the accolades ('ticking the boxes') and appropriating the dispositions ('walking the talk') of 'future surgeons'. This allowed them to identify themselves and to be identified by others as 'future surgeons' and to gain access to participation in the surgical world. Participation then enabled further network building and access to careers information in a positive feedback loop. For some, negotiating the hidden curriculum was more difficult, which, for them, rendered a surgical career unattractive or unattainable. Students perceive a clear surgery-specific hidden curriculum. 
Using a constructivist grounded theory approach, we have developed a model of how students encounter, uncover and enact this hidden curriculum to succeed. Drawing on concepts of Bourdieu, we discuss unequal access to the hidden curriculum, which was found to exclude many from the possibility of a surgical career. © 2014 John Wiley & Sons Ltd.

  14. Screening of the aerodynamic and biophysical properties of barley malt

    NASA Astrophysics Data System (ADS)

    Ghodsvali, Alireza; Farzaneh, Vahid; Bakhshabadi, Hamid; Zare, Zahra; Karami, Zahra; Mokhtarian, Mohsen; Carvalho, Isabel. S.

    2016-10-01

    An understanding of the aerodynamic and biophysical properties of barley malt is necessary for the appropriate design of equipment for the handling, shipping, dehydration, grading, sorting and warehousing of this strategic crop. Malting is a complex biotechnological process that includes steeping, germination and, finally, the dehydration of cereal grains under controlled temperature and humidity conditions. In this investigation, the biophysical properties of barley malt were predicted using two models: artificial neural networks and response surface methodology. Steeping time and germination time were selected as the independent variables, and 1000-kernel weight, kernel density and terminal velocity were selected as the dependent variables (responses). The obtained outcomes showed that the artificial neural network model, with a logarithmic sigmoid activation function, presents more precise results than the response surface model in predicting the aerodynamic and biophysical properties of the produced barley malt. The best results were obtained with 8 nodes in the hidden layer, with significant correlation coefficients of 0.783, 0.767 and 0.991 for the responses 1000-kernel weight, kernel density and terminal velocity, respectively. The outcomes indicated that this novel technique could be successfully applied in quantitative and qualitative monitoring within the malting process.

  15. Analysing the hidden curriculum: use of a cultural web

    PubMed Central

    Mossop, Liz; Dennick, Reg; Hammond, Richard; Robbé, Iain

    2013-01-01

    CONTEXT Major influences on learning about medical professionalism come from the hidden curriculum. These influences can contribute positively or negatively towards the professional enculturation of clinical students. The fact that there is no validated method for identifying the components of the hidden curriculum poses problems for educators considering professionalism. The aim of this study was to analyse whether a cultural web, adapted from a business context, might assist in the identification of elements of the hidden curriculum at a UK veterinary school. METHODS A qualitative approach was used. Seven focus groups consisting of three staff groups and four student groups were organised. Questioning was framed using the cultural web, which is a model used by business owners to assess their environment and consider how it affects their employees and customers. The focus group discussions were recorded, transcribed and analysed thematically using a combination of a priori and emergent themes. RESULTS The cultural web identified elements of the hidden curriculum for both students and staff. These included: core assumptions; routines; rituals; control systems; organisational factors; power structures, and symbols. Discussions occurred about how and where these issues may affect students’ professional identity development. CONCLUSIONS The cultural web framework functioned well to help participants identify elements of the hidden curriculum. These aspects aligned broadly with previously described factors such as role models and institutional slang. The influence of these issues on a student’s development of a professional identity requires discussion amongst faculty staff, and could be used to develop learning opportunities for students. The framework is promising for the analysis of the hidden curriculum and could be developed as an instrument for implementation in other clinical teaching environments. PMID:23323652

  16. [The Identification of the Origin of Chinese Wolfberry Based on Infrared Spectral Technology and the Artificial Neural Network].

    PubMed

    Li, Zhong; Liu, Ming-de; Ji, Shou-xiang

    2016-03-01

    Fourier transform infrared spectroscopy (FTIR) was established to identify the geographic origins of Chinese wolfberry quickly. In this paper, 45 samples of Chinese wolfberry from different places in Qinghai Province were surveyed by FTIR. The original FTIR data matrix was pretreated with common preprocessing and the wavelet transform. Compared with common window-shifting smoothing, standard normal variate correction and multiplicative scatter correction, the wavelet transform is an effective spectral preprocessing method. Before establishing the model with artificial neural networks, the spectral variables were compressed by the wavelet transform to speed up network training, and the related parameters of the artificial neural network model are also discussed in detail. The survey shows that even if the infrared spectroscopy data are compressed to 1/8 of the original, the spectral information and analytical accuracy do not deteriorate. The compressed spectral variables were used as modeling inputs of the back-propagation artificial neural network (BP-ANN) model, and the geographic origins of Chinese wolfberry were used as outputs. A three-layer neural network model was built to predict the 10 unknown samples using the MATLAB neural network toolbox error back-propagation network. The number of hidden-layer neurons is 5 and the number of output-layer neurons is 1. The transfer function of the hidden layer is tansig, while the transfer function of the output layer is purelin. The network training function is trainl and the learning function of weights and thresholds is learngdm, with net.trainParam.epochs = 1000 and net.trainParam.goal = 0.001. A recognition rate of 100% was achieved. It can be concluded that the method is quite suitable for quick discrimination of the producing areas of Chinese wolfberry. The infrared spectral analysis technology combined with artificial neural networks proved to be a reliable new method for identifying the place of origin of traditional Chinese medicine.
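    The described architecture, a tansig (hyperbolic tangent) hidden layer feeding a purelin (linear) output layer, can be sketched as a plain forward pass. The weights below are illustrative placeholders, not the trained MATLAB network.

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a 3-layer network: tanh ('tansig') hidden layer,
    linear ('purelin') output layer."""
    hidden = [math.tanh(sum(wij * xj for wij, xj in zip(row, x)) + bi)
              for row, bi in zip(W1, b1)]
    out = [sum(wij * hj for wij, hj in zip(row, hidden)) + bi
           for row, bi in zip(W2, b2)]
    return out

# Toy network: 2 inputs, 5 hidden neurons (as in the paper), 1 output
W1 = [[0.1 * (i + j) for j in range(2)] for i in range(5)]
b1 = [0.0] * 5
W2 = [[0.2] * 5]
b2 = [0.1]
y = mlp_forward([1.0, -1.0], W1, b1, W2, b2)
```

    The tanh/linear pairing lets the hidden layer model nonlinearities in the spectra while the output remains an unbounded regression or score value.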

  17. Phasic Triplet Markov Chains.

    PubMed

    El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar

    2014-11-01

    Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site and carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The phasic triplet Markov chain, proposed in this paper, overcomes this difficulty by means of an auxiliary underlying process, in accordance with triplet Markov chain theory. Related Bayesian restoration techniques and parameter estimation procedures for the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data.

  18. Adaptive partially hidden Markov models with application to bilevel image coding.

    PubMed

    Forchhammer, S; Rasmussen, T S

    1999-01-01

    Partially hidden Markov models (PHMMs) have previously been introduced. The transition and emission/output probabilities from hidden states, as known from the HMMs, are conditioned on the past. This way, the HMM may be applied to images introducing the dependencies of the second dimension by conditioning. In this paper, the PHMM is extended to multiple sequences with a multiple token version and adaptive versions of PHMM coding are presented. The different versions of the PHMM are applied to lossless bilevel image coding. To reduce and optimize the model cost and size, the contexts are organized in trees and effective quantization of the parameters is introduced. The new coding methods achieve results that are better than the JBIG standard on selected test images, although at the cost of increased complexity. By the minimum description length principle, the methods presented for optimizing the code length may apply as guidance for training (P)HMMs for, e.g., segmentation or recognition purposes. Thereby, the PHMM models provide a new approach to image modeling.

  19. Application of a hybrid association rules/decision tree model for drought monitoring

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Molajou, Amir

    2017-12-01

    Previous research has shown that incorporating oceanic-atmospheric climate phenomena such as Sea Surface Temperature (SST) into hydro-climatic models can provide important predictive information about hydro-climatic variability. In this paper, the hybrid application of two data mining techniques (decision trees and association rules) is proposed to discover associations between drought at the Tabriz and Kermanshah synoptic stations (located in Iran) and the de-trended SSTs of the Black, Mediterranean and Red Seas. The two major steps of the proposed model were the classification of de-trended SST data with selection of the most effective groups, and the extraction of hidden information involved in the data. Decision tree techniques, which can identify informative attributes in a data set for classification, were used for classifying and selecting the most effective groups, and association rules were employed to extract hidden predictive information from the large observed data set. To examine the accuracy of the rules, confidence and Heidke Skill Score (HSS) measures were calculated and compared for different lag times. The computed measures confirm the reliable performance of the proposed hybrid data mining method for drought forecasting, and the results show a relative correlation between the Mediterranean, Black and Red Sea de-trended SSTs and drought at the Tabriz and Kermanshah synoptic stations, such that the confidence between the monthly Standardized Precipitation Index (SPI) values and the de-trended SSTs is higher than 70 and 80%, respectively, for the Tabriz and Kermanshah stations.
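    The confidence measure used to validate the rules is the conditional frequency of the consequent given the antecedent. A minimal sketch, with toy item-sets that are purely illustrative (not the study's SST/SPI classes):

```python
def confidence(transactions, antecedent, consequent):
    """Confidence of the association rule antecedent -> consequent:
    fraction of transactions containing the antecedent that also
    contain the consequent."""
    a = frozenset(antecedent)
    ab = a | frozenset(consequent)
    n_a = sum(1 for t in transactions if a <= t)
    n_ab = sum(1 for t in transactions if ab <= t)
    return n_ab / n_a if n_a else 0.0

# Toy data: does a hypothetical 'warm_sst' class co-occur with drought months?
months = [frozenset(s) for s in
          [{'warm_sst', 'drought'}, {'warm_sst', 'drought'},
           {'warm_sst'}, {'cool_sst'}, {'cool_sst', 'drought'}]]
c = confidence(months, {'warm_sst'}, {'drought'})
```

    In the paper's setting, a confidence above 0.7-0.8 between an SST class and an SPI drought class is what qualifies a rule as predictive.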

  20. Hidden long evolutionary memory in a model biochemical network

    NASA Astrophysics Data System (ADS)

    Ali, Md. Zulfikar; Wingreen, Ned S.; Mukhopadhyay, Ranjan

    2018-04-01

    We introduce a minimal model for the evolution of functional protein-interaction networks using a sequence-based mutational algorithm, and apply the model to study neutral drift in networks that yield oscillatory dynamics. Starting with a functional core module, random evolutionary drift increases network complexity even in the absence of specific selective pressures. Surprisingly, we uncover a hidden order in sequence space that gives rise to long-term evolutionary memory, implying strong constraints on network evolution due to the topology of accessible sequence space.

  1. (abstract) Modeling Protein Families and Human Genes: Hidden Markov Models and a Little Beyond

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre

    1994-01-01

    We will first give a brief overview of Hidden Markov Models (HMMs) and their use in Computational Molecular Biology. In particular, we will describe a detailed application of HMMs to the G-Protein-Coupled-Receptor Superfamily. We will also describe a number of analytical results on HMMs that can be used in discrimination tests and database mining. We will then discuss the limitations of HMMs and some new directions of research. We will conclude with some recent results on the application of HMMs to human gene modeling and parsing.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vongehr, Sascha, E-mail: vongehr@usc.edu

    There are increasing suggestions for computer simulations of quantum statistics which try to violate Bell type inequalities via classical, common cause correlations. The Clauser–Horne–Shimony–Holt (CHSH) inequality is very robust. However, we argue that with the Einstein–Podolsky–Rosen setup, the CHSH is inferior to the Bell inequality, although and because the latter must assume anti-correlation of entangled photon singlet states. We simulate how often quantum behavior violates both inequalities, depending on the number of photons. Violating Bell 99% of the time is argued to be an ideal benchmark. We present hidden variables that violate the Bell and CHSH inequalities with 50% probability, and ones which violate Bell 85% of the time when missing 13% anti-correlation. We discuss how to present the quantum correlations to a wide audience and conclude that, when defending against claims of hidden classicality, one should demand numerical simulations and insist on anti-correlation and the full amount of Bell violation. -- Highlights: •The widely assumed superiority of the CHSH fails in the EPR problem. •We simulate Bell type inequalities behavior depending on the number of photons. •The core of Bell’s theorem in the EPR setup is introduced in a simple way understandable to a wide audience. •We present hidden variables that violate both inequalities with 50% probability. •Algorithms have been supplied in form of Mathematica programs.

  3. A method of hidden Markov model optimization for use with geophysical data sets

    NASA Technical Reports Server (NTRS)

    Granat, R. A.

    2003-01-01

    Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.
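    The abstract does not detail the optimization itself, but the standard decoding step of any HMM analysis, recovering the most likely hidden-state sequence from observations, is the Viterbi dynamic program. The two-state "quiet"/"event" model and all probabilities below are illustrative assumptions, not the paper's geophysical model.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence
    under a discrete HMM (standard Viterbi dynamic program)."""
    # V[t][s] = (best path probability ending in s at time t, that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            p, path = max(
                (V[-2][ps][0] * trans_p[ps][s] * emit_p[s][o], V[-2][ps][1])
                for ps in states)
            V[-1][s] = (p, path + [s])
    return max(V[-1].values())[1]

# Toy two-state model: 'quiet' vs 'event' regimes emitting 'low'/'high' signals
states = ('quiet', 'event')
start = {'quiet': 0.8, 'event': 0.2}
trans = {'quiet': {'quiet': 0.9, 'event': 0.1},
         'event': {'quiet': 0.3, 'event': 0.7}}
emit = {'quiet': {'low': 0.9, 'high': 0.1},
        'event': {'low': 0.2, 'high': 0.8}}
path = viterbi(['low', 'high', 'high'], states, start, trans, emit)
```

    For long geophysical records, the products would be replaced by sums of log-probabilities to avoid underflow.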

  4. Sub-seasonal-to-seasonal Reservoir Inflow Forecast using Bayesian Hierarchical Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, S.; Arumugam, S.

    2017-12-01

    Sub-seasonal-to-seasonal (S2S) (15-90 days) streamflow forecasting is an emerging area of research that provides seamless information for reservoir operation from weather time scales to seasonal time scales. From an operational perspective, sub-seasonal inflow forecasts are highly valuable as they enable water managers to decide short-term releases (15-30 days), while holding water for seasonal needs (e.g., irrigation and municipal supply) and meeting the end-of-season target storage at a desired level. We propose a Bayesian Hierarchical Hidden Markov Model (BHHMM) to develop S2S inflow forecasts for the Tennessee Valley Area (TVA) reservoir system. Here, the hidden states are predicted by relevant indices that influence the inflows at the S2S time scale. The hidden Markov model also captures both the spatial and temporal hierarchy in predictors that operate at the S2S time scale, with model parameters estimated as a posterior distribution using a Bayesian framework. We present our work in two steps, namely a single-site model and a multi-site model. For proof of concept, we consider inflows to Douglas Dam, Tennessee, in the single-site model. For the multi-site model, we consider reservoirs in the upper Tennessee valley. Streamflow forecasts are issued and updated continuously every day at the S2S time scale. We considered precipitation forecasts obtained from the NOAA Climate Forecast System (CFSv2) GCM as predictors for developing S2S streamflow forecasts, along with relevant indices for predicting hidden states. Spatial dependence of the inflow series of the reservoirs is also preserved in the multi-site model. To circumvent the non-normality of the data, we consider the HMM in a Generalized Linear Model setting. Skill of the proposed approach is tested using split-sample validation against a traditional multi-site canonical correlation model developed using the same set of predictors.
From the posterior distribution of the inflow forecasts, we also highlight different system behavior under varied global and local-scale climatic influences from the developed BHHMM.

  5. A Regularized Linear Dynamical System Framework for Multivariate Time Series Analysis.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2015-01-01

    Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning Multivariate Time Series (MTS). However, in general, it is difficult to set the dimension of an LDS's hidden state space. A small number of hidden states may not be able to model the complexities of an MTS, while a large number of hidden states can lead to overfitting. In this paper, we study learning methods that impose various regularization penalties on the transition matrix of the LDS model and propose a regularized LDS learning framework (rLDS) which aims to (1) automatically shut down LDSs' spurious and unnecessary dimensions, and consequently, address the problem of choosing the optimal number of hidden states; (2) prevent the overfitting problem given a small amount of MTS data; and (3) support accurate MTS forecasting. To learn the regularized LDS from data, we incorporate a second-order cone program and a generalized gradient descent method into the Maximum a Posteriori framework and use Expectation Maximization to obtain a low-rank transition matrix of the LDS model. We propose two priors for modeling the matrix, which lead to two instances of our rLDS. We show that our rLDS recovers the intrinsic dimensionality of the time-series dynamics well and improves predictive performance compared to baselines on both synthetic and real-world MTS datasets.
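    For background, inference in a plain (unregularized) LDS is the Kalman filter; a scalar sketch is below. The parameters are illustrative, and the authors' regularized learning procedure is not reproduced here.

```python
def kalman_1d(ys, a, c, q, r, x0=0.0, p0=1.0):
    """Filtered hidden-state means for a scalar linear dynamical system
    x_t = a*x_{t-1} + w,  y_t = c*x_t + v,  with w ~ N(0, q), v ~ N(0, r)."""
    x, p, means = x0, p0, []
    for y in ys:
        # predict step: propagate mean and variance through the dynamics
        x, p = a * x, a * a * p + q
        # update step: correct with the observation y
        k = p * c / (c * c * p + r)            # Kalman gain
        x, p = x + k * (y - c * x), (1 - k * c) * p
        means.append(x)
    return means

# Constant hidden state observed in unit noise: estimates approach 1.0
ms = kalman_1d([1.0, 1.0, 1.0], a=1.0, c=1.0, q=0.0, r=1.0, x0=0.0, p0=1.0)
```

    In the multivariate case the scalars a and c become the transition and observation matrices; the rLDS penalties act on that transition matrix to suppress spurious hidden dimensions.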

  6. Efficient implementation of a real-time estimation system for thalamocortical hidden Parkinsonian properties

    NASA Astrophysics Data System (ADS)

    Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.

    2017-01-01

    Real-time estimation of dynamical characteristics of thalamocortical cells, such as the dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field-programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, an FPGA-based unscented Kalman filter is implemented for a conductance-based thalamocortical (TC) neuron model. Since the complexity of the TC neuron model constrains its hardware implementation in a parallel structure, a cost-efficient model is proposed to reduce the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. While applied here to estimate the hidden properties of the thalamus and explore the mechanism of the Parkinsonian state, the proposed method can also be useful in the dynamic clamp technique of electrophysiological experiments, neural control engineering and brain-machine interface studies.

  7. hs-CRP is strongly associated with coronary heart disease (CHD): A data mining approach using decision tree algorithm.

    PubMed

    Tayefi, Maryam; Tajfard, Mohammad; Saffar, Sara; Hanachi, Parichehr; Amirabadizadeh, Ali Reza; Esmaeily, Habibollah; Taghipour, Ali; Ferns, Gordon A; Moohebati, Mohsen; Ghayour-Mobarhan, Majid

    2017-04-01

    Coronary heart disease (CHD) is an important public health problem globally. Algorithms incorporating the assessment of clinical biomarkers together with several established traditional risk factors can help clinicians to predict CHD and support clinical decision making with respect to interventions. The decision tree (DT) is a data mining model for extracting hidden knowledge from large databases. We aimed to establish a predictive model for coronary heart disease using a decision tree algorithm. Here we used a dataset of 2346 individuals, including 1159 healthy participants and 1187 participants who had undergone coronary angiography (405 participants with negative angiography and 782 participants with positive angiography). We entered 10 variables of a total of 12 into the DT algorithm (including age, sex, FBG, TG, hs-CRP, TC, HDL, LDL, SBP and DBP). Our model could identify the associated risk factors of CHD with sensitivity, specificity and accuracy of 96%, 87% and 94%, respectively. Serum hs-CRP level was at the top of the tree in our model, followed by FBG, gender and age. Our model appears to be an accurate, specific and sensitive model for identifying the presence of CHD, but will require validation in prospective studies. Copyright © 2017 Elsevier B.V. All rights reserved.
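    The core step that puts hs-CRP at the top of such a tree, choosing the single feature and threshold that best separate the classes, can be sketched as a Gini-impurity split search. The hs-CRP values and outcomes below are fabricated for illustration and are not data from the study.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_threshold(values, labels):
    """Pick the split threshold on one numeric feature that minimizes
    the weighted Gini impurity of the two child nodes."""
    best = (float('inf'), None)
    n = len(labels)
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        score = len(left) / n * gini(left) + len(right) / n * gini(right)
        best = min(best, (score, t))
    return best[1]

# Toy hs-CRP values (mg/L, illustrative) vs CHD outcome (0 = no, 1 = yes)
hscrp = [0.5, 1.0, 1.2, 3.5, 4.0, 6.0]
chd = [0, 0, 0, 1, 1, 1]
t = best_threshold(hscrp, chd)
```

    A real DT algorithm repeats this search over all candidate features at every node, which is how the ranking of hs-CRP above FBG, gender and age emerges.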

  8. A class-based link prediction using Distance Dependent Chinese Restaurant Process

    NASA Astrophysics Data System (ADS)

    Andalib, Azam; Babamir, Seyed Morteza

    2016-08-01

    One of the important tasks in relational data analysis is link prediction, which has been successfully applied in many domains such as bioinformatics and information retrieval. Link prediction is defined as predicting the existence or absence of edges between nodes of a network. In this paper, we propose a novel method for link prediction based on the Distance Dependent Chinese Restaurant Process (DDCRP) model, which enables us to exploit information about the topological structure of the network, such as shortest paths and the connectivity of nodes. We also propose a new Gibbs sampling algorithm for computing the posterior distribution of the hidden variables based on the training data. Experimental results on three real-world datasets show the superiority of the proposed method over other probabilistic models for the link prediction problem.

  9. Deformed supersymmetric quantum mechanics with spin variables

    NASA Astrophysics Data System (ADS)

    Fedoruk, Sergey; Ivanov, Evgeny; Sidorov, Stepan

    2018-01-01

    We quantize the one-particle model of the SU(2|1) supersymmetric multiparticle mechanics with additional semi-dynamical spin degrees of freedom. We find the relevant energy spectrum and the full set of physical states as functions of the mass-dimension deformation parameter m and the SU(2) spin q ∈ (Z_{>0}, 1/2 + Z_{≥0}). It is found that the states at a fixed energy level form irreducible multiplets of the supergroup SU(2|1). Also, the hidden superconformal symmetry OSp(4|2) of the model is revealed in the classical and quantum cases. We calculate the OSp(4|2) Casimir operators and demonstrate that the full set of physical states belonging to different energy levels at fixed q is unified into an irreducible OSp(4|2) multiplet.

  10. Detecting seismic waves using a binary hidden Markov model classifier

    NASA Astrophysics Data System (ADS)

    Ray, J.; Lefantzi, S.; Brogan, R. A.; Forrest, R.; Hansen, C. W.; Young, C. J.

    2016-12-01

    We explore the use of Hidden Markov Models (HMMs) to detect the arrival of seismic waves using data captured by a seismogram. The HMM defines the state of a station as a binary variable based on whether the station is receiving a signal or not. HMMs are simple and fast, allowing them to monitor multiple datastreams arising from a large distributed network of seismographs. In this study we examine the efficacy of HMM-based detectors with respect to their false positive and false negative rates, as well as the accuracy of the signal onset time compared to the value determined by an expert analyst. The study uses 3-component International Monitoring System (IMS) data from a carefully analyzed 2-week period in May 2010, for which our analyst tried to identify every signal. Part of this interval is used for training the HMM to recognize the transition from the noise state to the signal state, while the rest is used for evaluating the effectiveness of our new detection algorithm. We compare our results with the STA/LTA detection processing applied by the IDC to assess potential for operational use. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
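
    A binary-state HMM detector of this kind can be decoded with the standard Viterbi recursion. The sketch below is illustrative only: the transition and emission probabilities and the quantized "low"/"high" observations are assumptions, not the study's trained values.

```python
import math

# Minimal sketch of binary-state HMM decoding (noise vs. signal).
# All probabilities here are illustrative assumptions, not fitted values.

STATES = ("noise", "signal")
# Transitions favour staying in the current state (signals persist).
TRANS = {"noise": {"noise": 0.95, "signal": 0.05},
         "signal": {"noise": 0.10, "signal": 0.90}}
# Emissions: observations are quantized amplitudes "low"/"high".
EMIT = {"noise": {"low": 0.9, "high": 0.1},
        "signal": {"low": 0.2, "high": 0.8}}
START = {"noise": 0.99, "signal": 0.01}

def viterbi(obs):
    """Most probable state sequence for a list of observations."""
    V = [{s: math.log(START[s]) + math.log(EMIT[s][obs[0]]) for s in STATES}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: V[-1][p] + math.log(TRANS[p][s]))
            ptr[s] = prev
            col[s] = V[-1][prev] + math.log(TRANS[prev][s]) + math.log(EMIT[s][o])
        V.append(col)
        back.append(ptr)
    state = max(STATES, key=lambda s: V[-1][s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

print(viterbi(["low", "low", "high", "high", "high", "low", "low"]))
# → ['noise', 'noise', 'signal', 'signal', 'signal', 'noise', 'noise']
```

    The persistence built into the transition matrix is what lets an HMM bridge brief amplitude dips inside a signal, one advantage over a sample-by-sample threshold detector such as STA/LTA.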

  11. Body size affects the evolution of hidden colour signals in moths.

    PubMed

    Kang, Changku; Zahiri, Reza; Sherratt, Thomas N

    2017-08-30

    Many cryptic prey have also evolved hidden contrasting colour signals which are displayed to would-be predators. Given that these hidden contrasting signals may confer additional survival benefits to the prey by startling or intimidating predators, it is unclear why they have evolved in some species but not in others. Here, we have conducted a comparative phylogenetic analysis of the evolution of colour traits in the family Erebidae (Lepidoptera), and found that hidden contrasting colour signals are more likely to be found in larger species. To understand why this relationship occurs, we present a general mathematical model demonstrating that selection for a secondary defence such as deimatic display will be stronger in large species when (i) the primary defence (crypsis) is likely to fail as body size increases and/or (ii) the secondary defence is more effective in large prey. To test the model assumptions, we conducted behavioural experiments using a robotic moth, which revealed that survivorship advantages against wild birds were higher when the moth had contrasting hindwings and a large size. Collectively, our results suggest that the evolutionary association between large size and hidden contrasting signals has been driven by a combination of the need for a back-up defence and its efficacy. © 2017 The Author(s).

  12. Prediction of Narrow N* and Λ* Resonances with Hidden Charm above 4 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu Jiajun; Departamento de Fisica Teorica and IFIC, Centro Mixto Universidad de Valencia-CSIC, Institutos de Investigacion de Paterna, Apartado 22085, 46071 Valencia; Molina, R.

    2010-12-03

    The interaction between various charmed mesons and charmed baryons is studied within the framework of the coupled-channel unitary approach with the local hidden gauge formalism. Several meson-baryon dynamically generated narrow N* and Λ* resonances with hidden charm are predicted with mass above 4 GeV and width smaller than 100 MeV. The predicted new resonances definitely cannot be accommodated by quark models with three constituent quarks and can be looked for in the forthcoming PANDA/FAIR experiments.

  13. Desktop computer graphics for RMS/payload handling flight design

    NASA Technical Reports Server (NTRS)

    Homan, D. J.

    1984-01-01

    A computer program, the Multi-Adaptive Drawings, Renderings and Similitudes (MADRAS) program, is discussed. The modeling program, developed for a desktop computer system (the Hewlett-Packard 9845/C), is written in BASIC and uses modular construction of objects while generating both wire-frame and hidden-line drawings from any viewpoint. The dimensions and placement of objects are user-definable. Once the hidden-line calculations are made for a particular viewpoint, the viewpoint may be rotated in pan, tilt, and roll without further hidden-line calculations. The use and results of this program are discussed.

  14. A New Chaotic Flow with Hidden Attractor: The First Hyperjerk System with No Equilibrium

    NASA Astrophysics Data System (ADS)

    Ren, Shuili; Panahi, Shirin; Rajagopal, Karthikeyan; Akgul, Akif; Pham, Viet-Thanh; Jafari, Sajad

    2018-02-01

    Discovering unknown aspects of non-equilibrium systems with hidden strange attractors is an attractive research topic. A novel quadratic hyperjerk system is introduced in this paper. It is noteworthy that this non-equilibrium system can generate hidden chaotic attractors. The essential properties of such systems are investigated by means of equilibrium points, phase portrait, bifurcation diagram, and Lyapunov exponents. In addition, a fractional-order differential equation of this new system is presented. Moreover, an electronic circuit is also designed and implemented to verify the feasibility of the theoretical model.

  15. Gravitational lensing of photons coupled to massive particles

    NASA Astrophysics Data System (ADS)

    Glicenstein, J.-F.

    2018-04-01

    The gravitational deflection of massless and massive particles, both with and without spin, has been extensively studied. This paper discusses the lensing of a particle which oscillates between two interaction eigenstates. The deflection angle, lens equation and time delay between images are derived in a model of photon to hidden-photon oscillations. In the case of coherent oscillations, the coupled photon behaves as a massive particle with a mass equal to the product of the coupling constant and hidden-photon mass. The conditions for observing coherent photon-hidden photon lensing are discussed.

  16. Matrix Determination of Reflectance of Hidden Object via Indirect Photography

    DTIC Science & Technology

    2012-03-01

    the hidden object. This thesis provides an alternative method of processing the camera images by modeling the system as a set of transport and...Distribution Function (BRDF). Figure 1. Indirect photography with camera field of view dictated by point of illumination. 1.3 Research Focus In an...would need to be modeled using radiometric principles. A large amount of the improvement in this process was due to the use of a blind

  17. Quantile regression models of animal habitat relationships

    USGS Publications Warehouse

    Cade, Brian S.

    2003-01-01

    Typically, all factors that limit an organism are not measured and included in statistical models used to investigate relationships with their environment. If important unmeasured variables interact multiplicatively with the measured variables, the statistical models often will have heterogeneous response distributions with unequal variances. Quantile regression is an approach for estimating the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Chapter 1 introduces quantile regression and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of estimates for homogeneous and heterogeneous regression models. Chapter 2 evaluates the performance of quantile rankscore tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). A permutation F test maintained better Type I errors than the Chi-square T test for models with smaller n, a greater number of parameters p, and more extreme quantiles τ. Both versions of the test required weighting to maintain correct Type I errors when there was heterogeneity under the alternative model. An example application related trout densities to stream channel width:depth. Chapter 3 evaluates a drop-in-dispersion, F-ratio-like permutation test for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). Chapter 4 simulates from a large (N = 10,000) finite population representing grid areas on a landscape to demonstrate various forms of hidden bias that might occur when the effect of a measured habitat variable on some animal is confounded with the effect of another unmeasured variable (spatially and not spatially structured). Depending on whether interactions of the measured habitat and unmeasured variable were negative (interference interactions) or positive (facilitation interactions), either upper (τ > 0.5) or lower (τ < 0.5) quantile regression parameters were less biased than mean rate parameters. Sampling (n = 20-300) simulations demonstrated that confidence intervals constructed by inverting rankscore tests provided valid coverage of these biased parameters. Quantile regression was used to estimate effects of physical habitat resources on a bivalve mussel (Macomona liliana) in a New Zealand harbor by modeling the spatial trend surface as a cubic polynomial of location coordinates.
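
    Quantile regression estimates minimize the asymmetric "check" (pinball) loss. A minimal sketch with hypothetical data shows that the τ-th sample quantile is the minimizer of that loss, which is why upper and lower quantile estimates respond differently to extreme values than the mean does.

```python
# Minimal sketch: the tau-th sample quantile minimizes the asymmetric
# "check" (pinball) loss that underlies quantile regression.
# The response values below are hypothetical.

def check_loss(residual, tau):
    """rho_tau(u) = u * (tau - I(u < 0))."""
    return residual * (tau - (1 if residual < 0 else 0))

def quantile_estimate(y, tau):
    """Brute-force minimizer of the total check loss; the minimizer of the
    pinball loss is always attainable at one of the data points."""
    return min(y, key=lambda q: sum(check_loss(v - q, tau) for v in y))

y = [1.0, 2.0, 3.0, 4.0, 100.0]  # hypothetical responses with one extreme value
print(quantile_estimate(y, 0.5))  # the median, insensitive to the extreme value
print(quantile_estimate(y, 0.9))  # an upper quantile tracks the extreme value
```

    In the regression setting the constant q is replaced by a linear predictor and the same loss is minimized over its coefficients, giving a separate fitted line for each τ.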

  18. An information hidden model holding cover distributions

    NASA Astrophysics Data System (ADS)

    Fu, Min; Cai, Chao; Dai, Zuxu

    2018-03-01

    The goal of steganography is to embed secret data into a cover so that no one apart from the sender and intended recipients can find the secret data. Usually, the way the cover is changed is decided by a hiding function, and no existing model could be used to find an optimal function that greatly reduces the distortion suffered by the cover. This paper considers the cover carrying the secret message as a random Markov chain, takes advantage of the deterministic relation between the initial distribution and the transition matrix of the Markov chain, and uses the transition matrix as a constraint to decrease the statistical distortion the cover suffers in the process of information hiding. Furthermore, a hiding function is designed, and the transition matrix from the original cover to the stego cover is also presented. Experimental results show that the new model preserves consistent statistical characterizations of the original and stego covers.

  19. Statistical Inference in Hidden Markov Models Using k-Segment Constraints

    PubMed Central

    Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher

    2016-01-01

    Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint on the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674

  20. Single photon and nonlocality

    NASA Astrophysics Data System (ADS)

    Drezet, Aurelien

    2007-03-01

    In a paper by Home and Agarwal [1], it is claimed that quantum nonlocality can be revealed in a simple interferometry experiment using only single particles. A critical analysis of the concept of hidden variable used by the authors of [1] shows that the reasoning is not correct.

  1. Modeling dyadic processes using Hidden Markov Models: A time series approach to mother-infant interactions during infant immunization.

    PubMed

    Stifter, Cynthia A; Rovine, Michael

    2015-01-01

    The present longitudinal study examined mother-infant interaction during the administration of immunizations at two and six months of age using hidden Markov modeling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a 4-state model for the dyadic responses to a two-month inoculation, whereas a 6-state model best described the dyadic process at six months. Two of the states at two months and three of the states at six months suggested a progression from high-intensity crying to no crying with parents using vestibular and auditory soothing methods. The use of feeding and/or pacifying to soothe the infant characterized one two-month state and two six-month states. These data indicate that with maturation and experience, the mother-infant dyad becomes more organized around the soothing interaction. The use of hidden Markov modeling to describe individual differences, as well as normative processes, is also presented and discussed.

  2. Modeling dyadic processes using Hidden Markov Models: A time series approach to mother-infant interactions during infant immunization

    PubMed Central

    Stifter, Cynthia A.; Rovine, Michael

    2016-01-01

    The present longitudinal study examined mother-infant interaction during the administration of immunizations at two and six months of age using hidden Markov modeling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a 4-state model for the dyadic responses to a two-month inoculation, whereas a 6-state model best described the dyadic process at six months. Two of the states at two months and three of the states at six months suggested a progression from high-intensity crying to no crying with parents using vestibular and auditory soothing methods. The use of feeding and/or pacifying to soothe the infant characterized one two-month state and two six-month states. These data indicate that with maturation and experience, the mother-infant dyad becomes more organized around the soothing interaction. The use of hidden Markov modeling to describe individual differences, as well as normative processes, is also presented and discussed. PMID:27284272

  3. Hidden gauged U(1) model: Unifying scotogenic neutrino and flavor dark matter

    NASA Astrophysics Data System (ADS)

    Yu, Jiang-Hao

    2016-06-01

    In both scotogenic neutrino and flavor dark matter models, the dark sector communicates with the standard model fermions via Yukawa portal couplings. We propose an economic scenario where the scotogenic neutrino and a flavored mediator share the same inert Higgs doublet and all are charged under a hidden gauged U(1) symmetry. The dark Z2 symmetry in the dark sector is regarded as the remnant of this hidden U(1) symmetry breaking. In particular, we investigate a dark U(1)_D [and also U(1)_{B-L}] model which unifies the scotogenic neutrino and a top-flavored mediator. Thus dark tops and dark neutrinos are the standard model fermion partners, and the dark matter could be the inert Higgs or the lightest dark neutrino. We note that this model has rich collider signatures involving dark tops, the inert Higgs and the Z' gauge boson. Moreover, the scalar associated with the U(1)_D [and also U(1)_{B-L}] symmetry breaking could explain the 750 GeV diphoton excess recently reported by ATLAS and CMS.

  4. Violation of a Bell-like inequality in single-neutron interferometry.

    PubMed

    Hasegawa, Yuji; Loidl, Rudolf; Badurek, Gerald; Baron, Matthias; Rauch, Helmut

    2003-09-04

    Non-local correlations between spatially separated systems have been extensively discussed in the context of the Einstein, Podolsky and Rosen (EPR) paradox and Bell's inequalities. Many proposals and experiments designed to test hidden variable theories and the violation of Bell's inequalities have been reported; usually, these involve correlated photons, although recently an experiment was performed with ⁹Be⁺ ions. Nevertheless, it is of considerable interest to show that such correlations (arising from quantum mechanical entanglement) are not simply a peculiarity of photons. Here we measure correlations between two degrees of freedom (comprising spatial and spin components) of single neutrons; this removes the need for a source of entangled neutron pairs, which would present a considerable technical challenge. A Bell-like inequality is introduced to clarify the correlations that can arise between observables of otherwise independent degrees of freedom. We demonstrate the violation of this Bell-like inequality: our measured value is 2.051 ± 0.019, clearly above the value of 2 predicted by classical hidden variable theories.

  5. Best Hiding Capacity Scheme for Variable Length Messages Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bajaj, Ruchika; Bedi, Punam; Pal, S. K.

    Steganography is the art of hiding information in a way that prevents the detection of hidden messages. Besides the security of the data, the quantity of data that can be hidden in a single cover medium is also very important. We present a secure data hiding scheme with high embedding capacity for messages of variable length based on Particle Swarm Optimization. This technique finds the best pixel positions in the cover image, which can be used to hide the secret data. In the proposed scheme, k bits of the secret message are substituted into the k least significant bits of an image pixel, where k varies from 1 to 4 depending on the message length. The proposed scheme is tested and the results are compared with simple LSB substitution and uniform 4-bit LSB hiding (with PSO) for the test images Nature, Baboon, Lena and Kitty. The experimental study confirms that the proposed method achieves high data hiding capacity, maintains imperceptibility and minimizes the distortion between the cover image and the obtained stego image.
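
    The k-bit substitution step described above can be sketched in a few lines (the PSO-based pixel selection is omitted; the pixel value and bit pattern are illustrative).

```python
# Minimal sketch of k-bit LSB substitution: replace the k least significant
# bits of an 8-bit pixel with k message bits. Values below are illustrative.

def embed(pixel, bits, k):
    """Replace the k least significant bits of an 8-bit pixel with `bits`."""
    assert 1 <= k <= 4 and 0 <= bits < (1 << k)
    return (pixel & ~((1 << k) - 1)) & 0xFF | bits

def extract(pixel, k):
    """Read back the k least significant bits."""
    return pixel & ((1 << k) - 1)

pixel = 0b10110110            # 182
stego = embed(pixel, 0b101, 3)
print(stego)                  # 0b10110101 = 181
print(extract(stego, 3))      # 5
```

    Because only the low-order bits change, the pixel value moves by at most 2^k - 1 levels, which is what keeps the stego image perceptually close to the cover for small k.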

  6. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows different polynomial regression models to be activated smoothly or abruptly. The model parameters are estimated by the maximum likelihood method, performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real-world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a hidden Markov regression model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing condition measurements acquired during switch operations.
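
    The core idea of a hidden logistic process activating different polynomial regimes can be sketched as a logistic gate that mixes two regimes over time. The parameters below are illustrative assumptions, not values estimated by the EM algorithm described in the paper.

```python
import math

# Minimal sketch: a logistic gate mixes two polynomial regression regimes,
# switching smoothly (small steepness) or abruptly (large steepness).
# All parameters are illustrative, not fitted.

def gate(t, center, steepness):
    """Logistic weight of the second regime at time t."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - center)))

def model(t):
    w = gate(t, center=5.0, steepness=4.0)
    regime1 = 1.0 + 0.2 * t      # early polynomial regime
    regime2 = 10.0 - 0.5 * t     # late polynomial regime
    return (1 - w) * regime1 + w * regime2

print(round(model(0.0), 3))   # ≈ regime1 at the start
print(round(model(10.0), 3))  # ≈ regime2 at the end
```

    In the paper's full model the gate weights are multinomial logistic functions of time, one per regime, and both the gates and the polynomial coefficients are estimated jointly by EM.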

  7. Hidden markov model for the prediction of transmembrane proteins using MATLAB.

    PubMed

    Chaturvedi, Navaneet; Shanker, Sudhanshu; Singh, Vinay Kumar; Sinha, Dhiraj; Pandey, Paras Nath

    2011-01-01

    Since membrane proteins play a key role in drug targeting, transmembrane protein prediction is an active and challenging area of the biological sciences. Location-based prediction of transmembrane proteins is significant for the functional annotation of protein sequences. Hidden Markov model based methods have been widely applied for transmembrane topology prediction. Here we have presented a revised and better understood model than an existing one for transmembrane protein prediction. MATLAB scripts were built and compiled for parameter estimation of the model, and the model was applied to amino acid sequences to identify transmembrane segments and their adjacent locations. The estimated model of transmembrane topology was based on the TMHMM model architecture. Only 7 super states are defined in the given dataset, which were converted to 96 states on the basis of their length in the sequence. The observed prediction accuracy of the model was about 74%, which is good in the area of transmembrane topology prediction. Therefore we have concluded that the hidden Markov model plays a crucial role in transmembrane helix prediction on the MATLAB platform and could also be useful for drug discovery strategies. The database is available for free at bioinfonavneet@gmail.com, vinaysingh@bhu.ac.in.

  8. Temporal framing and the hidden-zero effect: rate-dependent outcomes on delay discounting.

    PubMed

    Naudé, Gideon P; Kaplan, Brent A; Reed, Derek D; Henley, Amy J; DiGennaro Reed, Florence D

    2018-05-01

    Recent research suggests that presenting time intervals as units (e.g., days) or as specific dates can modulate the degree to which humans discount delayed outcomes. Another framing effect involves explicitly stating that choosing a smaller-sooner reward is mutually exclusive with receiving a larger-later reward, thus presenting choices as an extended sequence. In Experiment 1, participants (N = 201) recruited from Amazon Mechanical Turk completed the Monetary Choice Questionnaire in a 2 (delay framing) by 2 (zero framing) design. Regression suggested a main effect of delay framing, but not zero framing, after accounting for other demographic variables and manipulations. We observed a rate-dependent effect for the date-framing group, such that those with initially steep discounting exhibited greater sensitivity to the manipulation than those with initially shallow discounting. Subsequent analyses suggest these effects cannot be explained by regression to the mean. Experiment 2 addressed the possibility that the null effect of zero framing was due to within-subject exposure to the hidden- and explicit-zero conditions. A new Amazon Mechanical Turk sample completed the Monetary Choice Questionnaire in either hidden- or explicit-zero formats. Analyses revealed a main effect of reward magnitude, but not zero framing, suggesting potential limitations to the generality of the hidden-zero effect. © 2018 Society for the Experimental Analysis of Behavior.
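
    Delay discounting analyses of the Monetary Choice Questionnaire typically fit a hyperbolic model, V = A / (1 + kD). A minimal sketch (with illustrative k values, not the study's estimates) shows how the discounting rate k determines which reward in a smaller-sooner/larger-later pair is preferred.

```python
# Minimal sketch of hyperbolic delay discounting: the subjective value of an
# amount A delayed by D time units is V = A / (1 + k * D).
# The k values and the $50-now vs. $80-in-30-days choice are illustrative.

def discounted_value(amount, delay, k):
    """Hyperbolically discounted value of a delayed reward."""
    return amount / (1.0 + k * delay)

# A steep discounter (large k) prefers $50 now over $80 in 30 days...
assert discounted_value(50, 0, 0.1) > discounted_value(80, 30, 0.1)
# ...while a shallow discounter (small k) waits for the larger reward.
assert discounted_value(50, 0, 0.005) < discounted_value(80, 30, 0.005)
print("both preferences hold")
```

    Fitting k to a participant's pattern of such choices is what yields the "initially steep" versus "initially shallow" discounters whose differential sensitivity to framing the study reports.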

  9. Accurate permittivity measurements for microwave imaging via ultra-wideband removal of spurious reflectors.

    PubMed

    Pelletier, Mathew G; Viera, Joseph A; Wanjura, John; Holt, Greg

    2010-01-01

    The use of microwave imaging is becoming more prevalent for the detection of hidden interior defects in manufactured and packaged materials. In applications for the detection of hidden moisture, microwave tomography can be used to image the material and then perform an inverse calculation to derive an estimate of the variability of the hidden material, such as internal moisture, thereby alerting personnel to damaging levels of moisture before material degradation occurs. One impediment to this type of imaging occurs when nearby objects create strong reflections that produce destructive and constructive interference at the receiver as the material is conveyed past the imaging antenna array. In an effort to remove the influence of such reflectors, for example metal bale ties, research was conducted to develop an algorithm for the removal of the influence of local proximity reflectors from the microwave images. This research effort produced a technique, based upon the use of ultra-wideband signals, for the removal of spurious reflections created by local proximity reflectors. This improvement enables accurate microwave measurements of moisture in such products as cotton bales, as well as of other physical properties such as density or material composition. The proposed algorithm was shown to reduce errors by a 4:1 ratio and is an enabling technology for imaging applications in the presence of metal bale ties.

  10. Monitoring volcano activity through Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Cassisi, C.; Montalto, P.; Prestifilippo, M.; Aliotta, M.; Cannata, A.; Patanè, D.

    2013-12-01

    During 2011-2013, Mt. Etna was mainly characterized by cyclic occurrences of lava fountains, totaling 38 episodes. During this time interval, Etna's volcano states (QUIET, PRE-FOUNTAIN, FOUNTAIN, POST-FOUNTAIN), whose automatic recognition is very useful for monitoring purposes, turned out to be strongly related to the trend of the RMS (Root Mean Square) of the seismic signal recorded by stations close to the summit area. Since the RMS time series behavior is considered to be stochastic, we can try to model the system generating its values, assuming it to be a Markov process, by using Hidden Markov Models (HMMs). HMMs are a powerful tool for modeling any time-varying series. HMM analysis seeks to recover the sequence of hidden states from the observed emissions. In our framework, the observed emissions are characters generated by the SAX (Symbolic Aggregate approXimation) technique, which maps RMS time series values to discrete literal emissions. The experiments show how it is possible to infer volcano states by means of HMMs and SAX.
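
    The SAX discretization step can be sketched as: z-normalize the RMS series, average over fixed windows (piecewise aggregate approximation), then map each segment mean to a letter via Gaussian breakpoints. The toy series and 3-letter alphabet below are illustrative, not the study's configuration.

```python
import statistics

# Minimal sketch of SAX: z-normalize, average over fixed windows (PAA),
# then map each segment mean to a letter using Gaussian breakpoints.
# The toy RMS values and 3-symbol alphabet are illustrative.

BREAKPOINTS = [-0.43, 0.43]  # terciles of N(0,1) for a 3-letter alphabet

def sax(series, window):
    mu = statistics.mean(series)
    sd = statistics.pstdev(series)
    z = [(x - mu) / sd for x in series]
    symbols = []
    for i in range(0, len(z) - window + 1, window):
        avg = sum(z[i:i + window]) / window
        # letter index = number of breakpoints below the segment mean
        idx = sum(1 for b in BREAKPOINTS if avg > b)
        symbols.append("abc"[idx])
    return "".join(symbols)

# Low RMS (quiet) rising to high RMS (fountain-like), as a toy series.
rms = [1, 1, 1, 2, 5, 8, 9, 9]
print(sax(rms, 2))  # → "aacc"
```

    The resulting symbol string is exactly the kind of discrete emission sequence an HMM can be trained on, with the volcano states as the hidden layer.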

  11. Development of a brain MRI-based hidden Markov model for dementia recognition.

    PubMed

    Chen, Ying; Pham, Tuan D

    2013-01-01

    Dementia is an age-related cognitive decline which is indicated by early degeneration of cortical and sub-cortical structures. Characterizing those morphological changes can help us to understand the disease development and contribute to early disease prediction and prevention. But building a model that best captures brain structural variability and is valid for both disease classification and interpretation is extremely challenging. The current study aimed to establish a computational approach for modeling the magnetic resonance imaging (MRI)-based structural complexity of the brain using the framework of hidden Markov models (HMMs) for dementia recognition. Regularity dimension and semi-variogram were used to extract structural features of the brains, and a vector quantization (VQ) method was applied to convert the extracted feature vectors to prototype vectors. The output VQ indices were then utilized to estimate parameters for the HMMs. To validate its accuracy and robustness, experiments were carried out on individuals who were characterized as non-demented or as having mild Alzheimer's disease. Four HMMs were constructed based on the cohorts of non-demented young, middle-aged and elderly subjects and demented elderly subjects, respectively. Classification was carried out using a data set including both non-demented and demented individuals with a wide age range. The proposed HMMs succeeded in recognizing individuals who have mild Alzheimer's disease and achieved a better classification accuracy compared to other related works using different classifiers. Results have shown the ability of the proposed modeling to recognize early dementia. The findings from this research will allow individual classification to support the early diagnosis and prediction of dementia. With the brain MRI-based HMMs developed in this research, validating imaging biomarkers for early prediction of dementia will be more efficient and robust, and the approach can be easily used by clinicians as a computer-aided tool.

  12. DYNA3D, INGRID, and TAURUS: an integrated, interactive software system for crashworthiness engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benson, D.J.; Hallquist, J.O.; Stillman, D.W.

    1985-04-01

    Crashworthiness engineering has always been a high priority at Lawrence Livermore National Laboratory because of its role in the safe transport of radioactive material for the nuclear power industry and military. As a result, the authors have developed an integrated, interactive set of finite element programs for crashworthiness analysis. The heart of the system is DYNA3D, an explicit, fully vectorized, large deformation structural dynamics code. DYNA3D has the following four capabilities that are critical for the efficient and accurate analysis of crashes: (1) fully nonlinear solid, shell, and beam elements for representing a structure, (2) a broad range of constitutive models for representing the materials, (3) sophisticated contact algorithms for the impact interactions, and (4) a rigid body capability to represent the bodies away from the impact zones at a greatly reduced cost without sacrificing any accuracy in the momentum calculations. To generate the large and complex data files for DYNA3D, INGRID, a general purpose mesh generator, is used. It runs on everything from IBM PCs to CRAYs, and can generate 1000 nodes/minute on a PC. With its efficient hidden line algorithms and many options for specifying geometry, INGRID also doubles as a geometric modeller. TAURUS, an interactive post processor, is used to display DYNA3D output. In addition to the standard monochrome hidden line display, time history plotting, and contouring, TAURUS generates interactive color displays on 8 color video screens by plotting color bands superimposed on the mesh which indicate the value of the state variables. For higher quality color output, graphic output files may be sent to the DICOMED film recorders. We have found that color is every bit as important as hidden line removal in aiding the analyst in understanding his results. In this paper the basic methodologies of the programs are presented along with several crashworthiness calculations.

  13. NUCLEAR X-RAY PROPERTIES OF THE PECULIAR RADIO-LOUD HIDDEN AGN 4C+29.30

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobolewska, M. A.; Siemiginowska, Aneta; Migliori, G.

    2012-10-20

We present results from a study of nuclear emission from a nearby radio galaxy, 4C+29.30, over a broad 0.5-200 keV X-ray band. This study used new XMM-Newton (~17 ks) and Chandra (~300 ks) data, and archival Swift/BAT data from the 58-month catalog. The hard (>2 keV) X-ray spectrum of 4C+29.30 can be decomposed into an intrinsic hard power law (Γ ≈ 1.56) modified by a cold absorber with an intrinsic column density N_H,z ≈ 5 × 10²³ cm⁻², and its reflection (|Ω/2π| ≈ 0.3) from neutral matter, including a narrow iron Kα emission line at a rest-frame energy of ~6.4 keV. The reflected component is less absorbed than the intrinsic one, with an upper limit on the absorbing column of N_H,z(refl) < 2.5 × 10²² cm⁻². The X-ray spectrum varied between the XMM-Newton and Chandra observations. We show that a scenario invoking variations of the normalization of the power law is favored over a model with variable intrinsic column density. X-rays in the 0.5-2 keV band are dominated by diffuse emission modeled with a thermal bremsstrahlung component with temperature ~0.7 keV, and contain only a marginal contribution from the scattered power-law component. We hypothesize that 4C+29.30 belongs to a class of 'hidden' active galactic nuclei containing a geometrically thick torus. However, unlike the majority of hidden AGNs, 4C+29.30 is radio-loud. Correlations between the scattering fraction and Eddington luminosity ratio, and between black hole mass and stellar velocity dispersion, imply that 4C+29.30 hosts a black hole of mass ~10⁸ M_Sun.

  14. Network structure exploration in networks with node attributes

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Wang, Xiaolong; Bu, Junzhao; Tang, Buzhou; Xiang, Xin

    2016-05-01

Complex networks provide a powerful way to represent complex systems and have been widely studied during the past several years. One of the most important tasks of network analysis is to detect structures (also called structural regularities) embedded in networks by determining group number and group partition. Most network structure exploration models consider only network links. However, in real-world networks, nodes may have attributes that are useful for network structure exploration. In this paper, we propose a novel Bayesian nonparametric (BNP) model to explore structural regularities in networks with node attributes, called the Bayesian nonparametric attribute (BNPA) model. This model not only takes full advantage of both the links between nodes and the node attributes for group partition via shared hidden variables, but also determines the group number automatically via Bayesian nonparametric theory. Experiments conducted on a number of real and synthetic networks show that our BNPA model is able to automatically explore structural regularities in networks with node attributes and is competitive with other state-of-the-art models.

  15. A cascaded neuro-computational model for spoken word recognition

    NASA Astrophysics Data System (ADS)

    Hoya, Tetsuya; van Leeuwen, Cees

    2010-03-01

    In human speech recognition, words are analysed at both pre-lexical (i.e., sub-word) and lexical (word) levels. The aim of this paper is to propose a constructive neuro-computational model that incorporates both these levels as cascaded layers of pre-lexical and lexical units. The layered structure enables the system to handle the variability of real speech input. Within the model, receptive fields of the pre-lexical layer consist of radial basis functions; the lexical layer is composed of units that perform pattern matching between their internal template and a series of labels, corresponding to the winning receptive fields in the pre-lexical layer. The model adapts through self-tuning of all units, in combination with the formation of a connectivity structure through unsupervised (first layer) and supervised (higher layers) network growth. Simulation studies show that the model can achieve a level of performance in spoken word recognition similar to that of a benchmark approach using hidden Markov models, while enabling parallel access to word candidates in lexical decision making.

  16. Tracking the visual focus of attention for a varying number of wandering people.

    PubMed

    Smith, Kevin; Ba, Sileye O; Odobez, Jean-Marc; Gatica-Perez, Daniel

    2008-07-01

We define and address the problem of finding the visual focus of attention for a varying number of wandering people (VFOA-W), in settings where people's movement is unconstrained. VFOA-W estimation is a new and important problem with implications for behavior understanding and cognitive science, as well as real-world applications. One such application, which we present in this article, monitors the attention passers-by pay to an outdoor advertisement. Our approach to the VFOA-W problem proposes a multi-person tracking solution based on a dynamic Bayesian network that simultaneously infers the (variable) number of people in a scene, their body locations, their head locations, and their head pose. For efficient inference in the resulting large variable-dimensional state-space we propose a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampling scheme, as well as a novel global observation model which determines the number of people in the scene and localizes them. We propose a Gaussian Mixture Model (GMM)- and Hidden Markov Model (HMM)-based VFOA-W model which uses head pose and location information to determine people's focus state. Our models are evaluated for tracking performance and for the ability to recognize people looking at an outdoor advertisement, with results indicating good performance on sequences where a moderate number of people pass in front of an advertisement.

  17. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    PubMed

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed, based on concepts and principles from quantum theory. By combining the quantum mechanism with the feed-forward neural network, we define quantum hidden neurons and connected quantum weights, and use them as the fundamental information processing units in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover search algorithm iteratively selects the optimal parameter setting, making very efficient neural network learning possible. The quantum neurons and weights, together with Grover-search-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training, and prospective future applications. Simulations are carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Studying Climate Response to Forcing by the Nonlinear Dynamical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Mukhin, Dmitry; Gavrilov, Andrey; Loskutov, Evgeny; Feigin, Alexander

    2017-04-01

An analysis of the global climate response to external forcing, both anthropogenic (mainly CO2 and aerosol) and natural (solar and volcanic), is needed for adequate predictions of global climate change. Being a complex dynamical system, the climate reacts to external perturbations by exciting feedbacks (both positive and negative), making the response non-trivial and poorly predictable. Thus the extraction of the internal modes of the climate system, the investigation of their interaction with external forcings, and the modeling and forecasting of their dynamics are all problems underlying the success of climate modeling. In this report a new method for principal mode extraction from climate data is presented. The method is based on the Nonlinear Dynamical Mode (NDM) expansion [1,2], but takes into account a number of external forcings applied to the system. Each NDM is represented by a hidden time series governing the observed variability, which, together with the external forcing time series, is mapped onto the data space. While the forcing time series are considered known, the hidden unknown signals underlying the internal climate dynamics are extracted from observed data by the suggested method. In particular, this gives us an opportunity to study the evolution of the principal mode structure of the system under changing external conditions and to separate the internal climate variability from trends forced by external perturbations. Furthermore, the modes so obtained can be extrapolated beyond the observational time series, and a long-term prognosis of the modes' structure, including characteristics of interconnections and responses to external perturbations, can be carried out. In this work the method is used for reconstructing and studying the principal modes of climate variability on inter-annual and decadal time scales, accounting for external forcings such as anthropogenic emissions and variations of solar and volcanic activity.
The structure of the obtained modes, as well as their response to external factors, e.g., the forecast of their change in the 21st century under different CO2 emission scenarios, are discussed. [1] Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510 [2] Gavrilov, A., Mukhin, D., Loskutov, E., Volodin, E., Feigin, A., & Kurths, J. (2016). Method for reconstructing nonlinear modes with adaptive structure from multidimensional data. Chaos: An Interdisciplinary Journal of Nonlinear Science, 26(12), 123101. http://doi.org/10.1063/1.4968852

  19. Sparse distributed memory and related models

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1992-01-01

Described here is sparse distributed memory (SDM) as a neural-net associative memory. It is characterized by two weight matrices and by a large internal dimension - the number of hidden units is much larger than the number of input or output units. The first matrix, A, is fixed and possibly random, and the second matrix, C, is modifiable. The SDM is compared and contrasted to (1) computer memory, (2) correlation-matrix memory, (3) feed-forward artificial neural network, (4) cortex of the cerebellum, (5) Marr and Albus models of the cerebellum, and (6) Albus' cerebellar model arithmetic computer (CMAC). Several variations of the basic SDM design are discussed: the selected-coordinate and hyperplane designs of Jaeckel, the pseudorandom associative neural memory of Hassoun, and SDM with real-valued input variables by Prager and Fallside. SDM research conducted mainly at the Research Institute for Advanced Computer Science (RIACS) in 1986-1991 is highlighted.
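The two-matrix design described above can be sketched in a few lines. This is a minimal toy SDM, assuming Hamming-distance address decoding and counter-based storage; the dimensions and activation radius are illustrative choices, not Kanerva's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

n_address = 256    # input (address) dimension, illustrative
n_hidden = 10_000  # hidden units >> inputs, as in SDM
n_data = 256       # output (data) dimension
radius = 120       # Hamming activation radius, illustrative

# A: fixed, random address matrix; C: modifiable content (counter) matrix.
A = rng.integers(0, 2, size=(n_hidden, n_address))
C = np.zeros((n_hidden, n_data), dtype=int)

def active(addr):
    """Hidden units whose fixed address lies within `radius` Hamming bits."""
    return np.flatnonzero((A != addr).sum(axis=1) <= radius)

def write(addr, data):
    # Add +1/-1 counters for each data bit to every active row of C.
    C[active(addr)] += 2 * data - 1

def read(addr):
    # Sum counters over the active rows, then threshold at zero.
    return (C[active(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=n_data)
addr = rng.integers(0, 2, size=n_address)
write(addr, pattern)
recalled = read(addr)
```

With a single stored pattern the read-out is exact: every active counter carries the same ±1 per bit, so thresholding the column sums recovers the pattern; capacity and noise tolerance then depend on how many patterns share active rows.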

  20. The Use of Gestalt Interventions in the Treatment of the Resistant Alcohol-Dependent Client.

    ERIC Educational Resources Information Center

    Ramey, Luellen

    1998-01-01

    Reviews ethical and practical dilemmas associated with clients who have hidden alcohol dependencies, and proposes an approach rooted in Gestalt counseling theory which confronts these issues and is compatible with a current emerging alcohol-treatment model. Suggests specific activities for addressing client resistance to revealing a hidden alcohol…

  1. Quantile regression reveals hidden bias and uncertainty in habitat models

    Treesearch

    Brian S. Cade; Barry R. Noon; Curtis H. Flather

    2005-01-01

We simulated the effects of missing information on statistical distributions of animal response that covaried with measured predictors of habitat to evaluate the utility and performance of quantile regression for providing more useful intervals of uncertainty in habitat relationships. These procedures were evaluated for conditions in which heterogeneity and hidden bias...

  2. The hidden consequences of fire suppression

    Treesearch

    Carol Miller

    2012-01-01

    Wilderness managers need a way to quantify and monitor the effects of suppressing lightning-caused wildfires, which can alter natural fire regimes, vegetation, and habitat. Using computerized models of fire spread, weather, and fuels, it is now possible to quantify many of the hidden consequences of fire suppression. Case study watersheds in Yosemite and Sequoia-Kings...

  3. PVWatts Version 1 Technical Reference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2013-10-01

    The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.

  4. Hidden Markov models and neural networks for fault detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic

    1994-01-01

    Neural networks plus hidden Markov models (HMM) can provide excellent detection and false alarm rate performance in fault detection applications, as shown in this viewgraph presentation. Modified models allow for novelty detection. Key contributions of neural network models are: (1) excellent nonparametric discrimination capability; (2) a good estimator of posterior state probabilities, even in high dimensions, and thus can be embedded within overall probabilistic model (HMM); and (3) simple to implement compared to other nonparametric models. Neural network/HMM monitoring model is currently being integrated with the new Deep Space Network (DSN) antenna controller software and will be on-line monitoring a new DSN 34-m antenna (DSS-24) by July, 1994.

  5. Hidden hyperchaos and electronic circuit application in a 5D self-exciting homopolar disc dynamo

    NASA Astrophysics Data System (ADS)

    Wei, Zhouchao; Moroz, Irene; Sprott, J. C.; Akgul, Akif; Zhang, Wei

    2017-03-01

We report on the finding of hidden hyperchaos in a 5D extension to a known 3D self-exciting homopolar disc dynamo. The hidden hyperchaos is identified through three positive Lyapunov exponents under the condition that the proposed model has just two stable equilibrium states in certain regions of parameter space. The new 5D hyperchaotic self-exciting homopolar disc dynamo has multiple attractors including point attractors, limit cycles, quasi-periodic dynamics, hidden chaos or hyperchaos, as well as coexisting attractors. We use numerical integrations to create the phase plane trajectories, produce bifurcation diagrams, and compute Lyapunov exponents to verify the hidden attractors. Because no unstable equilibria exist in two parameter regions, the system exhibits multistability and six kinds of complex dynamic behaviors. To the best of our knowledge, this feature has not been previously reported in any other high-dimensional system. Moreover, the 5D hyperchaotic system has been simulated using a specially designed electronic circuit and viewed on an oscilloscope, thereby confirming the results of the numerical integrations. Both Matlab and the oscilloscope outputs produce similar phase portraits. Such implementations in real time represent a new type of hidden attractor with important consequences for engineering applications.

  6. Hidden hyperchaos and electronic circuit application in a 5D self-exciting homopolar disc dynamo.

    PubMed

    Wei, Zhouchao; Moroz, Irene; Sprott, J C; Akgul, Akif; Zhang, Wei

    2017-03-01

We report on the finding of hidden hyperchaos in a 5D extension to a known 3D self-exciting homopolar disc dynamo. The hidden hyperchaos is identified through three positive Lyapunov exponents under the condition that the proposed model has just two stable equilibrium states in certain regions of parameter space. The new 5D hyperchaotic self-exciting homopolar disc dynamo has multiple attractors including point attractors, limit cycles, quasi-periodic dynamics, hidden chaos or hyperchaos, as well as coexisting attractors. We use numerical integrations to create the phase plane trajectories, produce bifurcation diagrams, and compute Lyapunov exponents to verify the hidden attractors. Because no unstable equilibria exist in two parameter regions, the system exhibits multistability and six kinds of complex dynamic behaviors. To the best of our knowledge, this feature has not been previously reported in any other high-dimensional system. Moreover, the 5D hyperchaotic system has been simulated using a specially designed electronic circuit and viewed on an oscilloscope, thereby confirming the results of the numerical integrations. Both Matlab and the oscilloscope outputs produce similar phase portraits. Such implementations in real time represent a new type of hidden attractor with important consequences for engineering applications.

  7. Hidden Markov models of biological primary sequence information.

    PubMed Central

    Baldi, P; Chauvin, Y; Hunkapiller, T; McClure, M A

    1994-01-01

Hidden Markov model (HMM) techniques are used to model families of biological sequences. A smooth and convergent algorithm is introduced to iteratively adapt the transition and emission parameters of the models from the examples in a given family. The HMM approach is applied to three protein families: globins, immunoglobulins, and kinases. In all cases, the models derived capture the important statistical characteristics of the family and can be used for a number of tasks, including multiple alignments, motif detection, and classification. For K sequences of average length N, this approach yields an effective multiple-alignment algorithm which requires O(KN²) operations, linear in the number of sequences. PMID:8302831

  8. Design of double fuzzy clustering-driven context neural networks.

    PubMed

    Kim, Eun-Hu; Oh, Sung-Kwun; Pedrycz, Witold

    2018-08-01

In this study, we introduce a novel category of double fuzzy clustering-driven context neural networks (DFCCNNs). The study is focused on the development of advanced design methodologies for redesigning the structure of conventional fuzzy clustering-based neural networks. Conventional fuzzy clustering-based neural networks typically focus on dividing the input space into several local spaces (implied by clusters). In contrast, the proposed DFCCNNs take into account two distinct local spaces, called context and cluster spaces. The cluster space refers to a local space positioned in the input space, whereas the context space concerns a local space formed in the output space. By partitioning the output space into several local spaces, each context space is used as the desired (target) local output to construct local models. To accomplish this, the proposed network includes a new context layer for reasoning about the context space in the output space. Fuzzy C-Means (FCM) clustering is used to form local spaces in both the input and output spaces: the first instance forms clusters and trains the weights positioned between the input and hidden layers, whereas the second is applied to the output space to form context spaces. The key features of the proposed DFCCNNs can be enumerated as follows: (i) the parameters between the input layer and hidden layer are built through FCM clustering. The connections (weights) are specified as constant terms, being in fact the centers of the clusters. The membership functions (represented through the partition matrix) produced by the FCM are used as activation functions located at the hidden layer of "conventional" neural networks. (ii) Following the hidden layer, a context layer is formed to approximate the context space of the output variable, and each node in the context layer represents an individual local model.
The outputs of the context layer are specified as a combination of weights, formed as a linear function, and the outputs of the hidden layer. The weights are updated using a least squares estimation (LSE)-based method. (iii) At the output layer, the outputs of the context layer are decoded to produce the corresponding numeric output. Here the weighted average is used, and these weights are also adjusted with the use of the LSE scheme. From the viewpoint of performance improvement, the proposed design methodologies are discussed and evaluated with the aid of benchmark machine learning datasets. The experiments show that the generalization abilities of the proposed DFCCNNs are better than those of the conventional FCNNs reported in the literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
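The FCM computations the abstract leans on (a partition matrix reused as hidden-layer activations, cluster centers reused as connection weights) reduce to two standard formulas. This is a generic sketch of those formulas, not the authors' DFCCNN code; the fuzzifier m = 2 is the common default:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """FCM partition matrix U of shape (n_clusters, n_samples):
    u[i, k] = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
    In a DFCCNN-style network these memberships would serve as
    hidden-layer activations."""
    # Distances between every sample and every cluster center.
    d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
    d = np.fmax(d, 1e-12)  # guard against a sample sitting on a center
    ratio = (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)

def fcm_centers(X, U, m=2.0):
    """Membership-weighted means: the cluster centers that would act as
    constant connection weights between input and hidden layer."""
    W = U ** m
    return (W @ X) / W.sum(axis=1, keepdims=True)
```

Alternating these two updates is the standard FCM iteration; each column of U sums to 1, which is what lets the memberships behave like a soft partition of the space.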

  9. Modeling of local sea level rise and its future projection under climate change using regional information through EOF analysis

    NASA Astrophysics Data System (ADS)

    Naren, A.; Maity, Rajib

    2017-12-01

Sea level rise is one of the manifestations of climate change and may pose a threat to coastal regions. Estimates from global circulation models (GCMs) are either not available at coastal locations due to their coarse spatial resolution or not reliable, since the mismatch between (interpolated) GCM estimates at coastal locations and actual observations over the historical period is significant. We propose a semi-empirical framework to model local sea level rise (SLR) using the possibly existing relationship between local SLR and regional atmospheric/oceanic variables. The selection of the set of input variables, based mostly on the literature, includes both atmospheric and oceanic variables that may have an effect on SLR. The proposed approach offers a method to extract the combined information hidden in the regional fields of atmospheric/oceanic variables for a specific target coastal location. The generality of the approach allows the inclusion of more variables in the set of inputs depending on the geographical location of any coastal station. For demonstration, 14 coastal locations along the Indian coast and islands are considered, together with a set of regional atmospheric and oceanic variables. After development and validation of the model at each coastal location with the historical data, the model is further used for future projection of local SLR up to the year 2100 for three different future emission scenarios represented by representative concentration pathways (RCPs)—RCP2.6, RCP4.5, and RCP8.5. The maximum projected SLR is found to vary from 260.65 to 393.16 mm (RCP8.5) by the end of 2100 among the locations considered. The outcome of the proposed approach is expected to be useful in regional coastal management and in developing mitigation strategies in a changing climate.

  10. Adaptive sampling in behavioral surveys.

    PubMed

    Thompson, S K

    1997-01-01

    Studies of populations such as drug users encounter difficulties because the members of the populations are rare, hidden, or hard to reach. Conventionally designed large-scale surveys detect relatively few members of the populations so that estimates of population characteristics have high uncertainty. Ethnographic studies, on the other hand, reach suitable numbers of individuals only through the use of link-tracing, chain referral, or snowball sampling procedures that often leave the investigators unable to make inferences from their sample to the hidden population as a whole. In adaptive sampling, the procedure for selecting people or other units to be in the sample depends on variables of interest observed during the survey, so the design adapts to the population as encountered. For example, when self-reported drug use is found among members of the sample, sampling effort may be increased in nearby areas. Types of adaptive sampling designs include ordinary sequential sampling, adaptive allocation in stratified sampling, adaptive cluster sampling, and optimal model-based designs. Graph sampling refers to situations with nodes (for example, people) connected by edges (such as social links or geographic proximity). An initial sample of nodes or edges is selected and edges are subsequently followed to bring other nodes into the sample. Graph sampling designs include network sampling, snowball sampling, link-tracing, chain referral, and adaptive cluster sampling. A graph sampling design is adaptive if the decision to include linked nodes depends on variables of interest observed on nodes already in the sample. Adjustment methods for nonsampling errors such as imperfect detection of drug users in the sample apply to adaptive as well as conventional designs.
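The adaptive expansion rule described above (increase sampling effort near units where the condition of interest is observed) can be sketched for adaptive cluster sampling on a grid of counts. The grid values, threshold, and 4-neighbour expansion rule here are hypothetical illustration, not a specific survey design:

```python
def adaptive_cluster_sample(grid, initial, threshold=0):
    """Adaptive cluster sampling on a square grid of counts: whenever an
    observed count exceeds `threshold`, the 4 neighbouring cells are added
    to the sample, so effort grows around detected cases."""
    n = len(grid)
    sampled, frontier = set(), list(initial)
    while frontier:
        i, j = frontier.pop()
        if (i, j) in sampled or not (0 <= i < n and 0 <= j < n):
            continue
        sampled.add((i, j))
        if grid[i][j] > threshold:  # condition of interest met: expand
            frontier += [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return sampled

# Hypothetical 5x5 grid of counts (e.g., self-reported cases per area unit)
# with one spatial cluster of cases.
grid = [
    [0, 0, 0, 0, 0],
    [0, 3, 2, 0, 0],
    [0, 1, 4, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
cells = adaptive_cluster_sample(grid, initial=[(2, 2), (4, 4)])
```

The initial unit (4, 4) stays a lone sampled cell, while (2, 2) grows into the whole positive cluster plus its zero-count edge units; estimation then has to account for the fact that inclusion probabilities depend on the observed values.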

  11. Detecting cell division of Pseudomonas aeruginosa bacteria from bright-field microscopy images with hidden conditional random fields.

    PubMed

    Ong, Lee-Ling S; Xinghua Zhang; Kundukad, Binu; Dauwels, Justin; Doyle, Patrick; Asada, H Harry

    2016-08-01

An approach to automatically detect bacteria division with temporal models is presented. To understand how bacteria migrate and proliferate to form complex multicellular behaviours such as biofilms, it is desirable to track individual bacteria and detect cell division events. Unlike eukaryotic cells, prokaryotic cells such as bacteria lack distinctive features, making bacterial division difficult to detect in a single image frame. Furthermore, bacteria may detach, migrate close to other bacteria, and orientate themselves at an angle to the horizontal plane. Our system trains a hidden conditional random field (HCRF) model from tracked and aligned bacteria division sequences. The HCRF model classifies a set of image frames as division or otherwise. The performance of our HCRF model is compared with a hidden Markov model (HMM). The results show that the HCRF classifier outperforms the HMM classifier. From 2D bright-field microscopy data, it is a challenge to separate individual bacteria and associate observations to tracks; automatic detection of sequences containing bacterial division will improve tracking accuracy.

  12. Using hidden Markov models to align multiple sequences.

    PubMed

    Mount, David W

    2009-07-01

    A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
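The multiply-along-a-path computation described above can be written out directly. The following is a toy fragment of a profile-HMM with two match states; all state names, symbols, and probabilities are illustrative, not taken from any real alignment:

```python
# Toy profile-HMM fragment: two match states emitting amino-acid symbols.
# Transition and emission probabilities are hypothetical examples.
transition = {
    ("begin", "M1"): 0.9, ("begin", "D1"): 0.1,
    ("M1", "M2"): 0.8, ("M1", "D2"): 0.2,
    ("M2", "end"): 1.0,
}
emission = {
    "M1": {"A": 0.6, "G": 0.4},
    "M2": {"L": 0.7, "V": 0.3},
}

def path_probability(path, sequence):
    """Multiply transition and emission probabilities along one state path,
    exactly as described: each step contributes the probability of entering
    the state and the probability of the symbol it emits."""
    p = 1.0
    symbols = iter(sequence)
    prev = "begin"
    for state in path:
        p *= transition[(prev, state)]
        if state in emission:  # match/insert states emit a symbol
            p *= emission[state][next(symbols)]
        prev = state
    p *= transition[(prev, "end")]
    return p

prob = path_probability(["M1", "M2"], "AL")
# 0.9 * 0.6 * 0.8 * 0.7 * 1.0 = 0.3024
```

In practice one wants either the single best path (Viterbi, maximizing this product) or the total probability of the sequence (forward algorithm, summing this product over all paths); the per-path product above is the common building block of both.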

  13. The Contextuality Loophole is Fatal for the Derivation of Bell Inequalities: Reply to a Comment by I. Schmelzer

    NASA Astrophysics Data System (ADS)

    Nieuwenhuizen, Theodorus M.; Kupczynski, Marian

    2017-02-01

Ilya Schmelzer recently wrote: "Nieuwenhuizen argued that there exists some 'contextuality loophole' in Bell's theorem. This claim is unjustified." It is made clear that this arose from attaching a meaning to the title and the content of the paper different from the one intended by Nieuwenhuizen. "Contextuality loophole" means only that if the supplementary parameters describing measuring instruments are correctly introduced, Bell and Bell-type inequalities may not be proven. It is also stressed that a hidden-variable model suffers from a "contextuality loophole" if it tries to describe different sets of incompatible experiments using a unique probability space and a unique joint probability distribution.

  14. Generalised filtering and stochastic DCM for fMRI.

    PubMed

    Li, Baojuan; Daunizeau, Jean; Stephan, Klaas E; Penny, Will; Hu, Dewen; Friston, Karl

    2011-09-15

    This paper is about the fitting or inversion of dynamic causal models (DCMs) of fMRI time series. It tries to establish the validity of stochastic DCMs that accommodate random fluctuations in hidden neuronal and physiological states. We compare and contrast deterministic and stochastic DCMs, which do and do not ignore random fluctuations or noise on hidden states. We then compare stochastic DCMs, which do and do not ignore conditional dependence between hidden states and model parameters (generalised filtering and dynamic expectation maximisation, respectively). We first characterise state-noise by comparing the log evidence of models with different a priori assumptions about its amplitude, form and smoothness. Face validity of the inversion scheme is then established using data simulated with and without state-noise to ensure that DCM can identify the parameters and model that generated the data. Finally, we address construct validity using real data from an fMRI study of internet addiction. Our analyses suggest the following. (i) The inversion of stochastic causal models is feasible, given typical fMRI data. (ii) State-noise has nontrivial amplitude and smoothness. (iii) Stochastic DCM has face validity, in the sense that Bayesian model comparison can distinguish between data that have been generated with high and low levels of physiological noise and model inversion provides veridical estimates of effective connectivity. (iv) Relaxing conditional independence assumptions can have greater construct validity, in terms of revealing group differences not disclosed by variational schemes. Finally, we note that the ability to model endogenous or random fluctuations on hidden neuronal (and physiological) states provides a new and possibly more plausible perspective on how regionally specific signals in fMRI are generated. Copyright © 2011. Published by Elsevier Inc.

  15. Investigating Dueling Scenarios in NGC 7582 with Broadband X-ray Spectroscopy

    NASA Astrophysics Data System (ADS)

    Rivers, E.

    2015-09-01

    NGC 7582 is a well-studied X-ray bright Seyfert 2 with moderately heavy (NH = 10^{23} - 10^{24} cm^{-2}), highly variable absorption and unusually strong reflection spectral features. The spectral shape changed around the year 2000, dropping in observed flux and becoming much more highly absorbed. Two scenarios have been put forth to explain this spectral change: 1) the source "shut off" around this time, decreasing in intrinsic luminosity, with a delayed decrease in reflection features due to the light crossing time of the Compton-thick material or 2) the source is a "hidden nucleus" which has recently become more heavily obscured, with only a portion of the power law continuum leaking through. NuSTAR observed NGC 7582 twice in 2012 two weeks apart in order to quantify the reflection using high-quality data above 10 keV. We analyze both NuSTAR observations placing them in the context of historical X-ray, infrared and optical observations, including re-analysis of RXTE data from 2003-2005. We find that the most plausible scenario is that NGC 7582 has a hidden nucleus which has recently become more heavily absorbed by a patchy torus with a covering fraction of 80-90% and a column density of 3.6 x 10^{24} cm^{-2}. We find the need for an additional highly variable full-covering absorber with NH= 4-6 x 10^{23} cm^{-2}, possibly associated with a hidden broad line region or a dust lane in the host galaxy.

  16. A fast hidden line algorithm with contour option. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Thue, R. E.

    1984-01-01

    The JonesD algorithm was modified to allow the processing of N-sided elements and implemented in conjunction with a 3-D contour generation algorithm. The total hidden line and contour subsystem is implemented in the MOVIE.BYU Display package and is compared to the subsystems already existing in the MOVIE.BYU package. The comparison reveals that the modified JonesD hidden line and contour subsystem yields substantial processing-time savings when processing moderate-sized models of 1000 elements or fewer. There are, however, some limitations to the modified JonesD subsystem.

  17. Student portfolios and the hidden curriculum on gender: mapping exclusion.

    PubMed

    Phillips, Christine B

    2009-09-01

    The hidden curriculum - the norms, values and practices that are transmitted to students through modelling by preceptors and teachers, and decisions about curricular exclusions and inclusions - can be profoundly important in the socialising of trainee doctors. However, tracking the hidden curriculum as it evolves can be challenging for medical schools. This study aimed to explore the content of student e-portfolios on gender issues, a key perspective often taught through a hidden curriculum. Online posts for a gender and medicine e-portfolio task completed by two cohorts of students in Year 3 of a 4-year medical course (n = 167, 66% female) were analysed using a grounded theory approach. A process of gendered 'othering' was applied to both men and women in the medical school using different pedagogical strategies. Curricular emphases on women's health and lack of support for male students to acquire gynaecological examination skills were seen as explicit ways of excluding males. For female medical students, exclusion tended to be implicit, operating through modelling and aphoristic comments about so-called 'female-friendly' career choices and the negative impact of motherhood on career. E-portfolios can be a useful way of tracking the hidden curriculum as it evolves. Responses to gendered exclusion may be developed more readily for the explicit processes impacting on male students than for the implicit processes impacting on female students, which often reflect structural issues related to training and employment.

  18. Multiscale hidden Markov models for photon-limited imaging

    NASA Astrophysics Data System (ADS)

    Nowak, Robert D.

    1999-06-01

    Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.

  19. Object Boundaries Influence Toddlers' Performance in a Search Task

    ERIC Educational Resources Information Center

    Shutts, Kristin; Keen, Rachel; Spelke, Elizabeth S.

    2006-01-01

    Previous research has shown that young children have difficulty searching for a hidden object whose location depends on the position of a partly visible physical barrier. Across four experiments, we tested whether children's search errors are affected by two variables that influence adults' object-directed attention: object boundaries and…

  20. The effect of newly induced mutations on the fitness of genotypes and populations of yeast (Saccharomyces cerevisiae).

    PubMed

    Orthen, E; Lange, P; Wöhrmann, K

    1984-12-01

    This paper analyses the fate of artificially induced mutations and their importance to the fitness of populations of the yeast, Saccharomyces cerevisiae, an increasingly important model organism in population genetics. Diploid strains, treated with UV and EMS, were cultured asexually for approximately 540 generations and under conditions where the asexual growth was interrupted by a sexual phase. Growth rates of 100 randomly sampled diploid clones were estimated at the beginning and at the end of the experiment. After the induction of sporulation the growth rates of 100 randomly sampled spores were measured. UV and EMS treatment decreases the average growth rate of the clones significantly but increases the variability in comparison to the untreated control. After selection over approximately 540 generations, variability in growth rates was reduced to that of the untreated control. No increase in mean population fitness was observed. However, the results show that after selection there still exists a large amount of hidden genetic variability in the populations which is revealed when the clones are cultivated in environments other than those in which selection took place. A sexual phase increased the reduction of the induced variability.

  1. Parametric inference for biological sequence analysis.

    PubMed

    Pachter, Lior; Sturmfels, Bernd

    2004-11-16

    One of the major successes in computational biology has been the unification, by using the graphical model formalism, of a multitude of algorithms for annotating and comparing biological sequences. Graphical models that have been applied to these problems include hidden Markov models for annotation, tree models for phylogenetics, and pair hidden Markov models for alignment. A single algorithm, the sum-product algorithm, solves many of the inference problems that are associated with different statistical models. This article introduces the polytope propagation algorithm for computing the Newton polytope of an observation from a graphical model. This algorithm is a geometric version of the sum-product algorithm and is used to analyze the parametric behavior of maximum a posteriori inference calculations for graphical models.

  2. Hidden Markov model analysis of force/torque information in telemanipulation

    NASA Technical Reports Server (NTRS)

    Hannaford, Blake; Lee, Paul

    1991-01-01

    A model for the prediction and analysis of sensor information recorded during robotic performance of telemanipulation tasks is presented. The model uses the hidden Markov model to describe the task structure, the operator's or intelligent controller's goal structure, and the sensor signals. A methodology for constructing the model parameters based on engineering knowledge of the task is described. It is concluded that the model and its optimal state estimation algorithm, the Viterbi algorithm, are very successful at the task of segmenting the data record into phases corresponding to subgoals of the task. The model provides a rich modeling structure within a statistical framework, which enables it to represent complex systems and be robust to real-world sensory signals.
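
    The phase-segmentation idea above is essentially Viterbi decoding of an HMM. A minimal sketch follows; the two task phases ("reach", "grasp"), the force observations, and all probabilities below are invented for illustration and are not taken from the record above.

```python
# Minimal Viterbi decoder: segment an observation sequence into task phases.
# States, observations, and probabilities are illustrative only.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for `obs`."""
    # best[t][s]: probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        best.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (best[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            best[t][s] = prob
            back[t][s] = prev
    # Trace back from the most probable final state.
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

states = ("reach", "grasp")
start = {"reach": 0.9, "grasp": 0.1}
trans = {"reach": {"reach": 0.7, "grasp": 0.3},
         "grasp": {"reach": 0.2, "grasp": 0.8}}
emit = {"reach": {"low_force": 0.8, "high_force": 0.2},
        "grasp": {"low_force": 0.3, "high_force": 0.7}}

segmentation = viterbi(
    ["low_force", "low_force", "high_force", "high_force"],
    states, start, trans, emit)
```

    Here the decoder assigns the early low-force samples to the "reach" phase and the later high-force samples to "grasp", which is the kind of subgoal segmentation the record describes.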

  3. Hydrogeophysical investigations at Hidden Dam, Raymond, California

    USGS Publications Warehouse

    Minsley, Burke J.; Burton, Bethany L.; Ikard, Scott; Powers, Michael H.

    2011-01-01

    Self-potential and direct current resistivity surveys are carried out at the Hidden Dam site in Raymond, California to assess present-day seepage patterns and better understand the hydrogeologic mechanisms that likely influence seepage. Numerical modeling is utilized in conjunction with the geophysical measurements to predict variably-saturated flow through typical two-dimensional dam cross-sections as a function of reservoir elevation. Several different flow scenarios are investigated based on the known hydrogeology, as well as information about typical subsurface structures gained from the resistivity survey. The flow models are also used to simulate the bulk electrical resistivity in the subsurface under varying saturation conditions, as well as the self-potential response using petrophysical relationships and electrokinetic coupling equations. The self-potential survey consists of 512 measurements on the downstream area of the dam, and corroborates known seepage areas on the northwest side of the dam. Two direct-current resistivity profiles, each approximately 2,500 ft (762 m) long, indicate a broad sediment channel under the northwest side of the dam, which may be a significant seepage pathway through the foundation. A focusing of seepage in low-topography areas downstream of the dam is confirmed from the numerical flow simulations, which is also consistent with past observations. Little evidence of seepage is identified from the self-potential data on the southeast side of the dam, also consistent with historical records, though one possible area of focused seepage is identified near the outlet works. Integration of the geophysical surveys, numerical modeling, and observation well data provides a framework for better understanding seepage at the site through a combined hydrogeophysical approach.

  4. A High-Performance Neural Prosthesis Incorporating Discrete State Selection With Hidden Markov Models.

    PubMed

    Kao, Jonathan C; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V

    2017-04-01

    Communication neural prostheses aim to restore efficient communication to people with motor neurological injury or disease by decoding neural activity into control signals. These control signals are both analog (e.g., the velocity of a computer mouse) and discrete (e.g., clicking an icon with a computer mouse) in nature. Effective, high-performing, and intuitive-to-use communication prostheses should be capable of decoding both analog and discrete state variables seamlessly. However, to date, the highest-performing autonomous communication prostheses rely on precise analog decoding and typically do not incorporate high-performance discrete decoding. In this report, we incorporated a hidden Markov model (HMM) into an intracortical communication prosthesis to enable accurate and fast discrete state decoding in parallel with analog decoding. In closed-loop experiments with nonhuman primates implanted with multielectrode arrays, we demonstrate that incorporating an HMM into a neural prosthesis can increase state-of-the-art achieved bitrate by 13.9% and 4.2% in two monkeys ( ). We found that the transition model of the HMM is critical to achieving this performance increase. Further, we found that using an HMM resulted in the highest achieved peak performance we have ever observed for these monkeys, achieving peak bitrates of 6.5, 5.7, and 4.7 bps in Monkeys J, R, and L, respectively. Finally, we found that this neural prosthesis was robustly controllable for the duration of entire experimental sessions. These results demonstrate that high-performance discrete decoding can be beneficially combined with analog decoding to achieve new state-of-the-art levels of performance.

  5. Revealing the Hidden Water Budget of an Alpine Volcanic Watershed Using a Bayesian Mixing Model

    NASA Astrophysics Data System (ADS)

    Markovich, K. H.; Arumi, J. L.; Dahlke, H. E.; Fogg, G. E.

    2017-12-01

    Climate change is altering alpine water budgets in observable ways, such as snow melting sooner or falling as rain, but also in hidden ways, such as shifting recharge timing and increased evapotranspiration demand leading to diminished summer low flows. The combination of complex hydrogeology and sparse availability of data makes it difficult to predict the direction or magnitude of shifts in alpine water budgets, and thus difficult to inform decision-making. We present a data-sparse watershed in the Andes Mountains of central Chile in which complex geology, interbasin flows, and surface water-groundwater interactions impede our ability to fully describe the water budget. We collected water samples for stable isotopes and major anions and cations over the course of water year 2016-17 to characterize the spatial and temporal variability in endmember signatures (snow, rain, and groundwater). We use a Bayesian Hierarchical Model (BHM) to explicitly incorporate uncertainty and prior information into a mixing model, and predict the proportional contribution of snow, rain, and groundwater to streamflow throughout the year for the full catchment as well as its two sub-catchments. Preliminary results suggest that streamflow is likely more rainfall-dominated than previously thought, which not only alters our projections of climate change impacts, but also makes this watershed a potential example for other watersheds undergoing a snow-to-rain transition. Understanding how these proportions vary in space and time will help us elucidate key information on stores, fluxes, and timescales of water flow for improved current and future water resource management.
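
    The record above fits a Bayesian hierarchical mixing model; a much simpler deterministic sketch of the underlying mass-balance idea recovers three endmember fractions from two conservative tracers plus the sum-to-one constraint. All tracer values below are invented for illustration and are not the study's data.

```python
# Three-endmember mixing sketch: solve for the fractions of snow, rain, and
# groundwater in streamflow from two conservative tracers plus the
# constraint that the fractions sum to one.  Values are illustrative only;
# the study itself uses a Bayesian hierarchical model, not this
# deterministic simplification.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    n = 3
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # partial pivot
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Rows: delta-18O balance, chloride balance, sum-to-one constraint.
# Columns: snow, rain, groundwater endmembers (invented signatures).
A = [[-16.0, -9.0, -12.0],   # delta-18O of each endmember (permil)
     [0.2,    1.0,   3.0],   # chloride of each endmember (mg/L)
     [1.0,    1.0,   1.0]]   # fractions must sum to 1
stream = [-12.5, 1.7, 1.0]   # observed stream values + constraint

f_snow, f_rain, f_gw = solve3(A, stream)
```

    A BHM replaces this point solution with full posterior distributions over the fractions, propagating the spatial and temporal variability in the endmember signatures.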

  6. Nonlinear Spatial Inversion Without Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Nawaz, A.

    2017-12-01

    High-dimensional, nonlinear inverse or inference problems usually have non-unique solutions. The distribution of solutions is described by a probability distribution, and these are usually found using Monte Carlo (MC) sampling methods. These take pseudo-random samples of models in parameter space, calculate the probability of each sample given available data and other information, and thus map out high or low probability values of model parameters. However, such methods converge to the solution only as the number of samples tends to infinity; in practice, MC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. We propose a method for Bayesian inversion of categorical variables such as geological facies or rock types in spatial problems, which requires no sampling at all. The method uses a 2-D Hidden Markov Model over a grid of cells, where observations represent localized data constraining the model in each cell. The data in our example application are seismic properties such as P- and S-wave impedances or rock density; our model parameters are the hidden states and represent the geological rock types in each cell. The observations at each location are assumed to depend on the facies at that location only - an assumption referred to as `localized likelihoods'. However, the facies at a location cannot be determined solely by the observation at that location, as it also depends on prior information concerning its correlation with the spatial distribution of facies elsewhere. Such prior information is included in the inversion in the form of a training image, which represents a conceptual depiction of the distribution of local geologies that might be expected, but other forms of prior information can be used in the method as desired. The method provides direct (pseudo-analytic) estimates of posterior marginal probability distributions over each variable, so these do not need to be estimated from samples as is required in MC methods. On a 2-D test example the method is shown to outperform previous methods significantly, and at a fraction of the computational cost. In many foreseeable applications there are therefore no serious impediments to extending the method to 3-D spatial models.
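
    The direct (sampling-free) posterior marginals described above can be illustrated in one dimension with the forward-backward recursions, the chain analogue of the 2-D scheme. The two facies, the seismic observation classes, and all probabilities below are invented for illustration.

```python
# Posterior marginals via forward-backward recursions on a 1-D chain HMM.
# The facies model and probabilities are illustrative only.

def forward_backward(obs, states, start_p, trans_p, emit_p):
    """Return p(state | all observations) at every position."""
    n = len(obs)
    fwd = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    for t in range(1, n):
        fwd.append({s: emit_p[s][obs[t]] *
                    sum(fwd[t - 1][p] * trans_p[p][s] for p in states)
                    for s in states})
    bwd = [dict.fromkeys(states, 1.0) for _ in range(n)]
    for t in range(n - 2, -1, -1):
        bwd[t] = {s: sum(trans_p[s][q] * emit_p[q][obs[t + 1]] * bwd[t + 1][q]
                         for q in states) for s in states}
    marginals = []
    for t in range(n):
        z = sum(fwd[t][s] * bwd[t][s] for s in states)
        marginals.append({s: fwd[t][s] * bwd[t][s] / z for s in states})
    return marginals

states = ("sand", "shale")
start = {"sand": 0.5, "shale": 0.5}
trans = {"sand": {"sand": 0.8, "shale": 0.2},   # spatial continuity prior
         "shale": {"sand": 0.2, "shale": 0.8}}
emit = {"sand": {"fast": 0.7, "slow": 0.3},     # "localized likelihoods"
        "shale": {"fast": 0.2, "slow": 0.8}}

post = forward_backward(["fast", "fast", "slow"], states, start, trans, emit)
```

    Each entry of `post` is an exact marginal given the whole observation sequence, with no sampling and no convergence diagnostics, which is the key contrast with MC methods drawn in the record above.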

  7. Quantification of Operational Risk Using A Data Mining

    NASA Technical Reports Server (NTRS)

    Perera, J. Sebastian

    1999-01-01

    What is Data Mining? Data Mining is the process of finding actionable information hidden in raw data. It helps find hidden patterns, trends, and important relationships often buried in a sea of data. Typically, automated software tools based on advanced statistical analysis and data modeling technology can be utilized to automate the data mining process.

  8. Modelled and observed mass balance of Rikha Samba Glacier, Nepal, Central Himalaya

    NASA Astrophysics Data System (ADS)

    Gurung, T. R.; Kayastha, R. B.; Fujita, K.; Sinisalo, A. K.; Stumm, D.; Joshi, S.; Litt, M.

    2016-12-01

    Glacier mass balance variability has implications for regional water resources and helps us understand the response of glaciers to climate change in the Himalayan region. Several mass balance studies have been started in the Himalayan region since the 1970s, but they are characterized by frequent temporal gaps and poor spatial representativeness. This study aims at bridging the temporal gaps in the long-term mass balance series of the Rikha Samba glacier (5383 - 6475 m a.s.l.), a benchmark glacier located in the Hidden Valley, Mustang, Nepal. The ERA-Interim reanalysis data for the period 2011-2015 are calibrated with the observed meteorological variables from an AWS installed near the glacier terminus. We apply an energy mass balance model, validated with the available in-situ measurements for the years 1998 and 2011-2015. The results show that the glacier is shrinking at a moderate negative mass balance rate for the period 1995 to 2015, and that the high-altitude location of Rikha Samba prevents a larger mass loss compared to other small Himalayan glaciers. Precipitation from July to January and the mean air temperature from June to October are the most influential climatic parameters for the annual mass balance variability of Rikha Samba glacier.

  9. Measuring the usefulness of hidden units in Boltzmann machines with mutual information.

    PubMed

    Berglund, Mathias; Raiko, Tapani; Cho, Kyunghyun

    2015-04-01

    Restricted Boltzmann machines (RBMs) and deep Boltzmann machines (DBMs) are important models in deep learning, but it is often difficult to measure their performance in general, or to measure the importance of individual hidden units in particular. We propose to use mutual information to measure the usefulness of individual hidden units in Boltzmann machines. The measure is fast to compute, and serves as an upper bound for the information the neuron can pass on, enabling detection of a particular kind of poor training result. We confirm experimentally that the proposed measure indicates how much the performance of the model drops when some of the units of an RBM are pruned away. We demonstrate the usefulness of the measure for early detection of poor training in DBMs. Copyright © 2014 Elsevier Ltd. All rights reserved.
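
    The upper-bound idea rests on the general inequality I(h; v) ≤ H(h): a binary hidden unit's activation entropy caps the information it can pass on, so a unit that is almost always on (or off) is nearly useless. A minimal sketch of that entropy computation follows, with invented activation samples; it simplifies, and is not the paper's exact estimator.

```python
# Entropy of a binary hidden unit's activation, an upper bound on the
# mutual information between that unit and anything else.
# Activation samples below are invented for illustration.

from math import log2

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def activation_entropy(samples):
    """Entropy estimate for a hidden unit from sampled 0/1 activations."""
    p = sum(samples) / len(samples)
    return binary_entropy(p)

balanced = activation_entropy([0, 1, 0, 1, 1, 0, 1, 0])   # p = 1/2
saturated = activation_entropy([1, 1, 1, 1, 1, 1, 1, 0])  # p = 7/8
```

    The balanced unit attains the maximum of one bit, while the saturated unit's bound is well below it, flagging the kind of poor training the measure is designed to detect.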

  10. Poisson-Gaussian Noise Reduction Using the Hidden Markov Model in Contourlet Domain for Fluorescence Microscopy Images

    PubMed Central

    Yang, Sejung; Lee, Byung-Uk

    2015-01-01

    In certain image acquisition processes, like in fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model and adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models within a framework that neatly describes the spatial and interscale dependencies, which are properties of the transform coefficients of natural images. In this paper, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm with cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach. PMID:26352138

  11. Modeling the Effects of Cu Content and Deformation Variables on the High-Temperature Flow Behavior of Dilute Al-Fe-Si Alloys Using an Artificial Neural Network.

    PubMed

    Shakiba, Mohammad; Parson, Nick; Chen, X-Grant

    2016-06-30

    The hot deformation behavior of Al-0.12Fe-0.1Si alloys with varied amounts of Cu (0.002-0.31 wt %) was investigated by uniaxial compression tests conducted at different temperatures (400 °C-550 °C) and strain rates (0.01-10 s−1). The results demonstrated that flow stress decreased with increasing deformation temperature and decreasing strain rate, while flow stress increased with increasing Cu content for all deformation conditions studied due to the solute drag effect. Based on the experimental data, an artificial neural network (ANN) model was developed to study the relationship between chemical composition, deformation variables and high-temperature flow behavior. A three-layer feed-forward back-propagation artificial neural network with 20 neurons in a hidden layer was established in this study. The input parameters were Cu content, temperature, strain rate and strain, while the flow stress was the output. The performance of the proposed model was evaluated using the K-fold cross-validation method. The results showed excellent generalization capability of the developed model. Sensitivity analysis indicated that the strain rate is the most important parameter, while the Cu content exhibited a modest but significant influence on the flow stress.

  12. Modeling the Effects of Cu Content and Deformation Variables on the High-Temperature Flow Behavior of Dilute Al-Fe-Si Alloys Using an Artificial Neural Network

    PubMed Central

    Shakiba, Mohammad; Parson, Nick; Chen, X.-Grant

    2016-01-01

    The hot deformation behavior of Al-0.12Fe-0.1Si alloys with varied amounts of Cu (0.002–0.31 wt %) was investigated by uniaxial compression tests conducted at different temperatures (400 °C–550 °C) and strain rates (0.01–10 s−1). The results demonstrated that flow stress decreased with increasing deformation temperature and decreasing strain rate, while flow stress increased with increasing Cu content for all deformation conditions studied due to the solute drag effect. Based on the experimental data, an artificial neural network (ANN) model was developed to study the relationship between chemical composition, deformation variables and high-temperature flow behavior. A three-layer feed-forward back-propagation artificial neural network with 20 neurons in a hidden layer was established in this study. The input parameters were Cu content, temperature, strain rate and strain, while the flow stress was the output. The performance of the proposed model was evaluated using the K-fold cross-validation method. The results showed excellent generalization capability of the developed model. Sensitivity analysis indicated that the strain rate is the most important parameter, while the Cu content exhibited a modest but significant influence on the flow stress. PMID:28773658
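
    The model family used above (a one-hidden-layer feed-forward network trained by back-propagation) can be sketched minimally as follows. The paper's network had 4 inputs and 20 hidden neurons; the tiny network and the synthetic "flow stress" points below are invented for illustration.

```python
# Minimal one-hidden-layer feed-forward network trained by back-propagation.
# Data and network size are illustrative, not the paper's.

import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    def __init__(self, n_in, n_hidden):
        rnd = lambda: random.uniform(-0.5, 0.5)
        self.w1 = [[rnd() for _ in range(n_in)] for _ in range(n_hidden)]
        self.b1 = [rnd() for _ in range(n_hidden)]
        self.w2 = [rnd() for _ in range(n_hidden)]
        self.b2 = rnd()

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(self.w1, self.b1)]
        return sum(w * h for w, h in zip(self.w2, self.h)) + self.b2

    def train_step(self, x, y, lr=0.1):
        out = self.forward(x)
        err = out - y                    # d(loss)/d(out) for 0.5*(out-y)^2
        for j, h in enumerate(self.h):
            dh = err * self.w2[j] * h * (1 - h)   # back-propagated gradient
            self.w2[j] -= lr * err * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * dh * xi
            self.b1[j] -= lr * dh
        self.b2 -= lr * err

# Toy data: normalized (temperature, strain rate) -> normalized flow stress,
# mimicking "stress falls with temperature, rises with strain rate".
data = [((0.0, 0.0), 0.9), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 0.1), ((1.0, 1.0), 0.3)]

net = TinyNet(n_in=2, n_hidden=4)
loss_before = sum(0.5 * (net.forward(x) - y) ** 2 for x, y in data)
for _ in range(2000):
    for x, y in data:
        net.train_step(x, y)
loss_after = sum(0.5 * (net.forward(x) - y) ** 2 for x, y in data)
```

    In practice one would hold out data (as the paper does with K-fold cross-validation) rather than evaluate on the training points as this sketch does.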

  13. Mind wandering at the fingertips: automatic parsing of subjective states based on response time variability

    PubMed Central

    Bastian, Mikaël; Sackur, Jérôme

    2013-01-01

    Research from the last decade has successfully used two kinds of thought reports in order to assess whether the mind is wandering: random thought-probes and spontaneous reports. However, none of these two methods allows any assessment of the subjective state of the participant between two reports. In this paper, we present a step by step elaboration and testing of a continuous index, based on response time variability within Sustained Attention to Response Tasks (N = 106, for a total of 10 conditions). We first show that increased response time variability predicts mind wandering. We then compute a continuous index of response time variability throughout full experiments and show that the temporal position of a probe relative to the nearest local peak of the continuous index is predictive of mind wandering. This suggests that our index carries information about the subjective state of the subject even when he or she is not probed, and opens the way for on-line tracking of mind wandering. Finally we proceed a step further and infer the internal attentional states on the basis of the variability of response times. To this end we use the Hidden Markov Model framework, which allows us to estimate the durations of on-task and off-task episodes. PMID:24046753

  14. Comparison of statistical algorithms for detecting homogeneous river reaches along a longitudinal continuum

    NASA Astrophysics Data System (ADS)

    Leviandier, Thierry; Alber, A.; Le Ber, F.; Piégay, H.

    2012-02-01

    Seven methods designed to delineate homogeneous river segments, belonging to four families, namely — tests of homogeneity, contrast enhancing, spatially constrained classification, and hidden Markov models — are compared, firstly on their principles, then on a case study, and on theoretical templates. These templates contain patterns found in the case study but not considered in the standard assumptions of statistical methods, such as gradients and curvilinear structures. The influence of data resolution, noise and weak satisfaction of the assumptions underlying the methods is investigated. The control of the number of reaches obtained in order to achieve meaningful comparisons is discussed. No method is found that outperforms all the others on all trials. However, the methods with sequential algorithms (keeping at order n + 1 all breakpoints found at order n) fail more often than those running complete optimisation at any order. The Hubert-Kehagias method and Hidden Markov Models are the most successful at identifying subpatterns encapsulated within the templates. Ergodic Hidden Markov Models are, moreover, liable to exhibit transition areas.
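
    The "complete optimisation at any order" contrasted above with sequential splitting can be sketched with dynamic programming: find the K-segment partition of a series that minimizes total within-segment squared deviation, a Hubert-style homogeneity criterion. The toy series below is invented for illustration.

```python
# Optimal segmentation into k homogeneous segments via dynamic programming
# (complete optimisation, not sequential splitting).  Toy data only.

def seg_cost(prefix, prefix_sq, i, j):
    """Sum of squared deviations of x[i:j] from its own mean."""
    n = j - i
    s = prefix[j] - prefix[i]
    sq = prefix_sq[j] - prefix_sq[i]
    return sq - s * s / n

def best_segmentation(x, k):
    """Return (cost, internal breakpoints) of the best split into k segments."""
    n = len(x)
    prefix = [0.0] * (n + 1)
    prefix_sq = [0.0] * (n + 1)
    for i, v in enumerate(x):
        prefix[i + 1] = prefix[i] + v
        prefix_sq[i + 1] = prefix_sq[i] + v * v
    INF = float("inf")
    # cost[m][j]: best cost of splitting x[:j] into m segments
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = cost[m - 1][i] + seg_cost(prefix, prefix_sq, i, j)
                if c < cost[m][j]:
                    cost[m][j], cut[m][j] = c, i
    cuts, j = [], n           # recover breakpoints by walking back
    for m in range(k, 0, -1):
        j = cut[m][j]
        cuts.append(j)
    return cost[k][n], sorted(cuts)[1:]  # drop the leading 0

series = [1.0, 1.1, 0.9, 5.0, 5.2, 4.9, 2.0, 2.1]
total_cost, breaks = best_segmentation(series, 3)
```

    Because every order is optimised jointly, a poor early breakpoint cannot be locked in, which is exactly the failure mode of the sequential algorithms noted in the record above.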

  15. Uncovering hidden heterogeneity: Geo-statistical models illuminate the fine scale effects of boating infrastructure on sediment characteristics and contaminants.

    PubMed

    Hedge, L H; Dafforn, K A; Simpson, S L; Johnston, E L

    2017-06-30

    Infrastructure associated with coastal communities is likely to not only directly displace natural systems, but also leave environmental 'footprints' that stretch over multiple scales. Some coastal infrastructure will, therefore, generate a hidden layer of habitat heterogeneity in sediment systems that is not immediately observable in classical impact assessment frameworks. We examine the hidden heterogeneity associated with one of the most ubiquitous coastal modifications: dense swing mooring fields. Using a model-based geo-statistical framework we highlight the variation in sedimentology throughout mooring fields and reference locations. Moorings were correlated with patches of sediment with larger particle sizes, and associated metal(loid) concentrations in these patches were depressed. Our work highlights two important ideas: i) mooring fields create a mosaic of habitat in which contamination decreases and grain sizes increase close to moorings, and ii) model-based frameworks provide an information-rich, easy-to-interpret way to communicate complex analyses to stakeholders. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  16. Self-Organizing Hidden Markov Model Map (SOHMMM).

    PubMed

    Ferles, Christos; Stafylopatis, Andreas

    2013-12-01

    A hybrid approach combining the Self-Organizing Map (SOM) and the Hidden Markov Model (HMM) is presented. The Self-Organizing Hidden Markov Model Map (SOHMMM) establishes a cross-section between the theoretic foundations and algorithmic realizations of its constituents. The respective architectures and learning methodologies are fused in an attempt to meet the increasing requirements imposed by the properties of deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and protein chain molecules. The fusion and synergy of the SOM unsupervised training and the HMM dynamic programming algorithms bring forth a novel on-line gradient descent unsupervised learning algorithm, which is fully integrated into the SOHMMM. Since the SOHMMM carries out probabilistic sequence analysis with little or no prior knowledge, it can have a variety of applications in clustering, dimensionality reduction and visualization of large-scale sequence spaces, and also, in sequence discrimination, search and classification. Two series of experiments based on artificial sequence data and splice junction gene sequences demonstrate the SOHMMM's characteristics and capabilities. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. A lithology identification method for continental shale oil reservoir based on BP neural network

    NASA Astrophysics Data System (ADS)

    Han, Luo; Fuqiang, Lai; Zheng, Dong; Weixu, Xia

    2018-06-01

    The Dongying Depression and Jiyang Depression of the Bohai Bay Basin consist of continental sedimentary facies with a variable sedimentary environment, and the shale layer system has a variety of lithologies and strong heterogeneity. It is difficult to accurately identify the lithologies with traditional lithology identification methods. The back propagation (BP) neural network was used to predict the lithology of continental shale oil reservoirs. Based on rock slice identification, x-ray diffraction bulk rock mineral analysis, scanning electron microscope analysis, and well logging data, the lithology was divided with carbonate, clay and felsic as end-member minerals. According to the core-electrical relationship, the frequency histogram was then used to calculate the logging response range of each lithology. The lithology-sensitive curves selected from 23 logging curves (GR, AC, CNL, DEN, etc.) were chosen as the input variables. Finally, the BP neural network training model was established to predict the lithology. The lithology in the study area can be divided into four types: mudstone, lime mudstone, lime oil-mudstone, and lime argillaceous oil-shale. The logging responses of the lithologies were complicated, characterized by low values of four indicators and medium values of two indicators. By comparing different numbers of hidden nodes and training iterations, we found that 15 hidden nodes and 1000 training iterations yielded the best training results. The optimal neural network training model was established based on these results. The BP neural network lithology predictions for well XX-1 showed an accuracy rate of over 80%, indicating that the method is suitable for lithology identification of continental shale stratigraphy. The study provides a basis for reservoir quality and oil-bearing evaluation of continental shale reservoirs and is of great significance to shale oil and gas exploration.

  18. Post processing of optically recognized text via second order hidden Markov model

    NASA Astrophysics Data System (ADS)

    Poudel, Srijana

    In this thesis, we describe a postprocessing system for Optical Character Recognition (OCR)-generated text. A second-order Hidden Markov Model (HMM) approach is used to detect and correct OCR-related errors. The reason for choosing a second-order HMM is to keep track of bigrams so that the model can represent the system more accurately. Based on experiments with training data of 159,733 characters and testing of 5,688 characters, the model was able to correct 43.38% of the errors with a precision of 75.34%. However, the precision value indicates that the model introduced some new errors, decreasing the correction percentage to 26.4%.
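
    A second-order HMM can be run with ordinary first-order machinery by treating (previous, current) character pairs as states, which is how such a model keeps track of bigrams. A minimal sketch with an invented two-letter alphabet and invented probabilities:

```python
# Reduce a second-order HMM over characters to a first-order chain whose
# states are (previous, current) character pairs.  Probabilities invented.

from itertools import product

chars = "ab"
# Second-order transition probabilities p(next | prev, cur).
p2 = {("a", "a"): {"a": 0.1, "b": 0.9},
      ("a", "b"): {"a": 0.6, "b": 0.4},
      ("b", "a"): {"a": 0.5, "b": 0.5},
      ("b", "b"): {"a": 0.8, "b": 0.2}}

# Equivalent first-order chain over pair-states: (prev, cur) -> (cur, next).
states = list(product(chars, chars))
trans = {s: {} for s in states}
for (prev, cur), nxt_probs in p2.items():
    for nxt, p in nxt_probs.items():
        trans[(prev, cur)][(cur, nxt)] = p
# Transitions whose pair-states do not overlap (cur must match) are simply
# absent, i.e. have probability zero.

rows_sum = {s: sum(trans[s].values()) for s in states}
```

    Standard first-order algorithms (Viterbi, forward-backward) then apply unchanged to the expanded state space, at the cost of squaring its size.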

  19. Accurate Permittivity Measurements for Microwave Imaging via Ultra-Wideband Removal of Spurious Reflectors

    PubMed Central

    Pelletier, Mathew G.; Viera, Joseph A.; Wanjura, John; Holt, Greg

    2010-01-01

    The use of microwave imaging is becoming more prevalent for detection of interior hidden defects in manufactured and packaged materials. In applications for detection of hidden moisture, microwave tomography can be used to image the material and then perform an inverse calculation to derive an estimate of the variability of the hidden material, such as internal moisture, thereby alerting personnel to damaging levels of hidden moisture before material degradation occurs. One impediment to this type of imaging occurs when nearby objects create strong reflections that produce destructive and constructive interference at the receiver as the material is conveyed past the imaging antenna array. In an effort to remove the influence of reflectors such as metal bale ties, research was conducted to develop an algorithm for removing the influence of local proximity reflectors from the microwave images. This research effort produced a technique, based upon the use of ultra-wideband signals, for the removal of spurious reflections created by local proximity reflectors. This improvement enables accurate microwave measurements of moisture in such products as cotton bales, as well as of other physical properties such as density or material composition. The proposed algorithm was shown to reduce errors by a 4:1 ratio and is an enabling technology for imaging applications in the presence of metal bale ties. PMID:22163668

  20. General gauge mediation in five dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGarrie, Moritz; Russo, Rodolfo

    2010-08-01

    We use the "general gauge mediation" (GGM) formalism to describe a five-dimensional setup with an S{sup 1}/Z{sub 2} orbifold. We first consider a model-independent supersymmetry breaking hidden sector on one boundary and generic chiral matter on another. Using the definition of GGM, the effects of the hidden sector are contained in a set of global symmetry current correlator functions and are mediated through the bulk. We find the gaugino, sfermion and hyperscalar mass formulas for minimal and generalized messengers in different regimes of a large, small and intermediate extra dimension. Then we use the five-dimensional GGM formalism to construct a model in which an SU(5) Intriligator, Seiberg and Shih (ISS) model is located on the hidden boundary. We weakly gauge a global symmetry of the ISS model and associate it with the bulk vector superfield. Compared to four-dimensional GGM, there is a natural way to adjust the gaugino versus sfermion mass ratio by a factor (Ml){sup 2}, where M is a characteristic mass scale of the supersymmetry breaking sector and l is the length of the extra dimension.

  1. SU-E-T-206: Improving Radiotherapy Toxicity Based On Artificial Neural Network (ANN) for Head and Neck Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Daniel D; Wernicke, A Gabriella; Nori, Dattatreyudu

    Purpose/Objective(s): The aim of this study is to build an estimator of toxicity using an artificial neural network (ANN) for head and neck cancer patients. Materials/Methods: An ANN can combine variables into a predictive model during training and consider all possible correlations of variables. We constructed an ANN based on the data from 73 patients with advanced head and neck cancer treated with external beam radiotherapy and/or chemotherapy at our institution. For the toxicity estimator we defined input data including age, sex, site, stage, pathology, status of chemo, technique of external beam radiation therapy (EBRT), length of treatment, dose of EBRT, status of post operation, length of follow-up, and the status of local recurrences and distant metastasis. These data were digitized based on their significance and fed to the ANN as input nodes. We used 20 hidden nodes (for the 13 input nodes) to take care of the correlations of input nodes. For training the ANN, we divided the data into three subsets: a training set, a validation set and a test set. Finally, we built the estimator for the toxicity from the ANN output. Results: We used 13 input variables, including the status of local recurrences and distant metastasis, and 20 hidden nodes for correlations. We used 59 patients for the training set, 7 patients for the validation set and 7 patients for the test set, and fed the inputs to the Matlab neural network fitting tool. We trained the data to within 15% error on the outcome. In the end we obtained a toxicity estimate with 74% accuracy. Conclusion: We proved in principle that an ANN can be a very useful tool for predicting RT outcomes for high-risk head and neck patients. Currently we are improving the results using cross validation.

  2. Nonlinear-drifted Brownian motion with multiple hidden states for remaining useful life prediction of rechargeable batteries

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Zhao, Yang; Yang, Fangfang; Tsui, Kwok-Leung

    2017-09-01

    Brownian motion with adaptive drift has attracted much attention in prognostics because its first hitting time is highly relevant to remaining useful life prediction and it follows the inverse Gaussian distribution. Besides linear degradation modeling, nonlinear-drifted Brownian motion has been developed to model nonlinear degradation. Moreover, the first hitting time distribution of the nonlinear-drifted Brownian motion has been approximated by time-space transformation. In previous studies, the drift coefficient is the only hidden state used in state space modeling of the nonlinear-drifted Brownian motion. Besides the drift coefficient, parameters of a nonlinear function used in the nonlinear-drifted Brownian motion should be treated as additional hidden states of state space modeling to make the nonlinear-drifted Brownian motion more flexible. In this paper, a prognostic method based on nonlinear-drifted Brownian motion with multiple hidden states is proposed and then applied to predict the remaining useful life of rechargeable batteries. Twenty-six sets of rechargeable battery degradation samples are analyzed to validate the effectiveness of the proposed prognostic method. Moreover, some comparisons with a standard particle filter based prognostic method, a spherical cubature particle filter based prognostic method and two classic Bayesian prognostic methods are conducted to highlight the superiority of the proposed prognostic method. Results show that the proposed prognostic method has lower average prediction errors than the particle filter based prognostic methods and the classic Bayesian prognostic methods for battery remaining useful life prediction.
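The degradation process described above can be sketched by Euler simulation of a nonlinear-drifted Brownian motion and its first hitting time of a failure threshold. The power-law drift form, parameter values, and threshold below are illustrative assumptions, not the paper's fitted battery model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameters: power-law drift lam * t**theta plus diffusion sigma.
lam, theta, sigma = 0.8, 0.5, 0.2
threshold, dt, t_max = 5.0, 0.01, 50.0

def first_hitting_time():
    """Euler simulation of a nonlinear-drifted Brownian motion path;
    returns the first time the degradation crosses the failure threshold."""
    x, t = 0.0, 0.0
    while t < t_max:
        x += lam * t**theta * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return t
    return np.nan   # censored: no failure within the simulation horizon

hits = np.array([first_hitting_time() for _ in range(200)])
print("mean simulated hitting time:", np.nanmean(hits))
```

In the paper's setting, the drift parameters themselves are hidden states updated recursively from measured capacity data; here they are fixed for brevity.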

  3. Exploring the Unknown: Detection of Fast Variability of Starlight (Abstract)

    NASA Astrophysics Data System (ADS)

    Stanton, R. H.

    2017-12-01

    (Abstract only) In previous papers the author described a photometer designed for observing high-speed events such as lunar and asteroid occultations, and for searching for new varieties of fast stellar variability. A significant challenge presented by such a system is how one deals with the large quantity of data generated in order to process it efficiently and reveal any hidden information that might be present. This paper surveys some of the techniques used to achieve this goal.

  4. Exploring the Hard and Soft X-ray Emission of Magnetic Cataclysmic Variables

    NASA Astrophysics Data System (ADS)

    de Martino, D.; Anzolin, G.; Bonnet-Bidaud, J.-M.; Falanga, M.; Matt, G.; Mouchet, M.; Mukai, K.; Masetti, N.

    2009-05-01

    A non-negligible fraction of galactic hard (>20 keV) X-ray sources were identified as CVs of the magnetic Intermediate Polar type in INTEGRAL, SWIFT and RXTE surveys, which suggests a still hidden but potentially important population of faint hard X-ray sources. Simbol-X has the unique potential to simultaneously characterize their variable and complex soft and hard X-ray emission, thus allowing us to understand their putative role in galactic populations of X-ray sources.

  5. Hidden axion dark matter decaying through mixing with QCD axion and the 3.5 keV X-ray line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higaki, Tetsutaro; Kitajima, Naoya; Takahashi, Fuminobu, E-mail: thigaki@post.kek.jp, E-mail: kitajima@tuhep.phys.tohoku.ac.jp, E-mail: fumi@tuhep.phys.tohoku.ac.jp

    2014-12-01

    Hidden axions may be coupled to the standard model particles through a kinetic or mass mixing with the QCD axion. We study a scenario in which a hidden axion constitutes a part of or the whole of dark matter and decays into photons through the mixing, explaining the 3.5 keV X-ray line signal. Interestingly, the required long lifetime of the hidden axion dark matter can be realized for the QCD axion decay constant at an intermediate scale, if the mixing is sufficiently small. In such a two component dark matter scenario, the primordial density perturbations of the hidden axion can be highly non-Gaussian, leading to a possible dispersion in the X-ray line strength from various galaxy clusters and near-by galaxies. We also discuss how the parallel and orthogonal alignment of two axions affects their couplings to gauge fields. In particular, the QCD axion decay constant can be much larger than the actual Peccei-Quinn symmetry breaking scale.

  6. Generalizing the Network Scale-Up Method: A New Estimator for the Size of Hidden Populations*

    PubMed Central

    Feehan, Dennis M.; Salganik, Matthew J.

    2018-01-01

    The network scale-up method enables researchers to estimate the size of hidden populations, such as drug injectors and sex workers, using sampled social network data. The basic scale-up estimator offers advantages over other size estimation techniques, but it depends on problematic modeling assumptions. We propose a new generalized scale-up estimator that can be used in settings with non-random social mixing and imperfect awareness about membership in the hidden population. Further, the new estimator can be used when data are collected via complex sample designs and from incomplete sampling frames. However, the generalized scale-up estimator also requires data from two samples: one from the frame population and one from the hidden population. In some situations, these data from the hidden population can be collected by adding a small number of questions to already planned studies. For other situations, we develop interpretable adjustment factors that can be applied to the basic scale-up estimator. We conclude with practical recommendations for the design and analysis of future studies. PMID:29375167
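For context, the basic scale-up estimator that the generalized estimator builds on can be computed in a couple of lines. The respondent counts below are hypothetical, not the paper's data.

```python
import numpy as np

# Hypothetical survey responses:
# d[i] = respondent i's personal network size (people known in the frame),
# y[i] = number of people respondent i knows in the hidden population.
d = np.array([290, 310, 250, 400, 350])
y = np.array([2, 3, 1, 4, 2])
N_F = 1_000_000        # size of the frame population

# Basic scale-up estimator: N_H = N_F * (sum of y) / (sum of d)
N_H = N_F * y.sum() / d.sum()
print(round(N_H))      # prints 7500
```

The generalized estimator replaces this single ratio with separately estimated reporting and visibility terms, which is why it needs the second sample from the hidden population.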

  7. Sacrificial bonds and hidden length in biomaterials -- a kinetic description of strength and toughness in bone

    NASA Astrophysics Data System (ADS)

    Lieou, Charles K. C.; Elbanna, Ahmed E.; Carlson, Jean M.

    2013-03-01

    Sacrificial bonds and hidden length in structural molecules account for the greatly increased fracture toughness of biological materials compared to synthetic materials without such structural features, by providing a molecular-scale mechanism of energy dissipation. One example of occurrence of sacrificial bonds and hidden length is in the polymeric glue connection between collagen fibrils in animal bone. In this talk, we propose a simple kinetic model that describes the breakage of sacrificial bonds and the revelation of hidden length, based on Bell's theory. We postulate a master equation governing the rates of bond breakage and formation, at the mean-field level, allowing for the number of bonds and hidden lengths to take up non-integer values between successive, discrete bond-breakage events. This enables us to predict the mechanical behavior of a quasi-one-dimensional ensemble of polymers at different stretching rates. We find that both the rupture peak heights and maximum stretching distance increase with the stretching rate. In addition, our theory naturally permits the possibility of self-healing in such biological structures.
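The kinetics sketched above can be written compactly. The following equations are a generic rendering of Bell-type bond kinetics with assumed symbols ($k_0$, $\Delta x$, $N_{\text{max}}$), not the authors' exact formulation:

```latex
% Bell's theory: the breaking rate of a sacrificial bond grows
% exponentially with the applied force f (\Delta x: activation length).
k_{\text{break}}(f) = k_0 \, e^{f \Delta x / k_B T}

% Mean-field master equation for the (non-integer) bond number N(t),
% with formation rate k_{\text{form}} and N_{\text{max}} available sites:
\frac{dN}{dt} = -\,k_{\text{break}}(f)\, N + k_{\text{form}}\,\bigl(N_{\text{max}} - N\bigr)
```

The rate's exponential force dependence is what makes both rupture peak heights and maximum stretching distance grow with the stretching rate, as the abstract reports.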

  8. Prediction of hearing loss among the noise-exposed workers in a steel factory using artificial intelligence approach.

    PubMed

    Aliabadi, Mohsen; Farhadian, Maryam; Darvishi, Ebrahim

    2015-08-01

    Prediction of hearing loss in noisy workplaces is considered to be an important aspect of hearing conservation programs. Artificial intelligence, as a new approach, can be used to predict complex phenomena such as hearing loss. Using artificial neural networks, this study aims to present an empirical model for the prediction of the hearing loss threshold among noise-exposed workers. Two hundred and ten workers employed in a steel factory were chosen, and their occupational exposure histories were collected. To determine the hearing loss threshold, an audiometric test was carried out using a calibrated audiometer. The personal noise exposure was also measured using a noise dosimeter at the workers' workstations. Finally, five variables obtained from these data, which can influence hearing loss, were used for the development of the prediction model. Multilayer feed-forward neural networks with different structures were developed using MATLAB software. The neural network structures had one hidden layer with between 5 and 15 neurons. The best developed neural network, with one hidden layer and ten neurons, could accurately predict the hearing loss threshold with RMSE = 2.6 dB and R² = 0.89. The results also confirmed that neural networks could provide more accurate predictions than multiple regressions. Since occupational hearing loss is frequently non-curable, results of accurate prediction can be used by occupational health experts to modify and improve noise exposure conditions.

  9. Causal Modeling the Delayed-Choice Experiment

    NASA Astrophysics Data System (ADS)

    Chaves, Rafael; Lemos, Gabriela Barreto; Pienaar, Jacques

    2018-05-01

    Wave-particle duality has become one of the flagships of quantum mechanics. This counterintuitive concept is highlighted in a delayed-choice experiment, where the experimental setup that reveals either the particle or wave nature of a quantum system is decided after the system has entered the apparatus. Here we consider delayed-choice experiments from the perspective of device-independent causal models and show their equivalence to a prepare-and-measure scenario. Within this framework, we consider Wheeler's original proposal and its variant using a quantum control and show that a simple classical causal model is capable of reproducing the quantum mechanical predictions. Nonetheless, among other results, we show that, in a slight variant of Wheeler's gedanken experiment, a photon in an interferometer can indeed generate statistics incompatible with any nonretrocausal hidden variable model, whose dimensionality is the same as that of the quantum system it is supposed to mimic. Our proposal tolerates arbitrary losses and inefficiencies, making it specially suited to loophole-free experimental implementations.

  10. Unified origin for baryonic visible matter and antibaryonic dark matter.

    PubMed

    Davoudiasl, Hooman; Morrissey, David E; Sigurdson, Kris; Tulin, Sean

    2010-11-19

    We present a novel mechanism for generating both the baryon and dark matter densities of the Universe. A new Dirac fermion X carrying a conserved baryon number charge couples to the standard model quarks as well as a GeV-scale hidden sector. CP-violating decays of X, produced nonthermally in low-temperature reheating, sequester antibaryon number in the hidden sector, thereby leaving a baryon excess in the visible sector. The antibaryonic hidden states are stable dark matter. A spectacular signature of this mechanism is the baryon-destroying inelastic scattering of dark matter that can annihilate baryons at appreciable rates relevant for nucleon decay searches.

  11. Quasi-supervised scoring of human sleep in polysomnograms using augmented input variables.

    PubMed

    Yaghouby, Farid; Sunderam, Sridhar

    2015-04-01

    The limitations of manual sleep scoring make computerized methods highly desirable. Scoring errors can arise from human rater uncertainty or inter-rater variability. Sleep scoring algorithms either come as supervised classifiers that need scored samples of each state to be trained, or as unsupervised classifiers that use heuristics or structural clues in unscored data to define states. We propose a quasi-supervised classifier that models observations in an unsupervised manner but mimics a human rater wherever training scores are available. EEG, EMG, and EOG features were extracted in 30 s epochs from human-scored polysomnograms recorded from 42 healthy human subjects (18-79 years) and archived in an anonymized, publicly accessible database. Hypnograms were modified so that: 1. some states are scored but not others; 2. samples of all states are scored but not for transitional epochs; and 3. two raters with 67% agreement are simulated. A framework for quasi-supervised classification was devised in which unsupervised statistical models (specifically, Gaussian mixtures and hidden Markov models) are estimated from unlabeled training data, but the training samples are augmented with variables whose values depend on available scores. Classifiers were fitted to signal features incorporating partial scores, and used to predict scores for complete recordings. Performance was assessed using Cohen's κ statistic. The quasi-supervised classifier performed significantly better than an unsupervised model and sometimes as well as a completely supervised model despite receiving only partial scores. The quasi-supervised algorithm addresses the need for classifiers that mimic scoring patterns of human raters while compensating for their limitations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Efficiently Exploring Multilevel Data with Recursive Partitioning

    ERIC Educational Resources Information Center

    Martin, Daniel P.; von Oertzen, Timo; Rimm-Kaufman, Sara E.

    2015-01-01

    There is an increasing number of datasets with many participants, variables, or both, in education and other fields that often deal with large, multilevel data structures. Once initial confirmatory hypotheses are exhausted, it can be difficult to determine how best to explore the dataset to discover hidden relationships that could help to inform…

  13. Constraints on hidden photons from current and future observations of CMB spectral distortions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunze, Kerstin E.; Vázquez-Mozo, Miguel Á., E-mail: kkunze@usal.es, E-mail: Miguel.Vazquez-Mozo@cern.ch

    2015-12-01

    A variety of beyond the standard model scenarios contain very light hidden sector U(1) gauge bosons undergoing kinetic mixing with the photon. The resulting oscillation between ordinary and hidden photons leads to spectral distortions of the cosmic microwave background. We update the bounds on the mixing parameter χ{sub 0} and the mass of the hidden photon m{sub γ'} for future experiments measuring CMB spectral distortions, such as PIXIE and PRISM/COrE. For 10{sup −14} eV ∼< m{sub γ'} ∼< 10{sup −13} eV, we find the kinetic mixing angle χ{sub 0} has to be less than 10{sup −8} at 95% CL. These bounds are more than an order of magnitude stronger than those derived from the COBE/FIRAS data.

  14. Modelling malaria incidence with environmental dependency in a locality of Sudanese savannah area, Mali

    PubMed Central

    Gaudart, Jean; Touré, Ousmane; Dessay, Nadine; Dicko, Alassane; Ranque, Stéphane; Forest, Loic; Demongeot, Jacques; Doumbo, Ogobara K

    2009-01-01

    Background The risk of Plasmodium falciparum infection is variable over space and time and this variability is related to environmental variability. Environmental factors affect the biological cycle of both vector and parasite. Despite this strong relationship, environmental effects have rarely been included in malaria transmission models. Remote sensing data on environment were incorporated into a temporal model of the transmission, to forecast the evolution of malaria epidemiology, in a locality of Sudanese savannah area. Methods A dynamic cohort was constituted in June 1996 and followed up until June 2001 in the locality of Bancoumana, Mali. The 15-day composite vegetation index (NDVI), issued from satellite imagery series (NOAA) from July 1981 to December 2006, was used as remote sensing data. The statistical relationship between NDVI and incidence of P. falciparum infection was assessed by ARIMA analysis. ROC analysis provided an NDVI value for the prediction of an increase in incidence of parasitaemia. Malaria transmission was modelled using an SIRS-type model, adapted to Bancoumana's data. Environmental factors influenced vector mortality and aggressiveness, as well as length of the gonotrophic cycle. NDVI observations from 1981 to 2001 were used for the simulation of the extrinsic variable of a hidden Markov chain model. Observations from 2002 to 2006 served as external validation. Results The seasonal pattern of P. falciparum incidence was significantly explained by NDVI, with a delay of 15 days (p = 0.001). An NDVI threshold of 0.361 (p = 0.007) provided a Diagnostic Odds Ratio (DOR) of 2.64 (CI95% [1.26;5.52]). The deterministic transmission model, with stochastic environmental factor, predicted an endemo-epidemic pattern of malaria infection. The incidences of parasitaemia were adequately modelled, using the observed NDVI as well as the NDVI simulations. Transmission patterns have been modelled and observed values were adequately predicted. The error parameters have shown the smallest values for a monthly model of environmental changes. Conclusion Remote-sensed data were coupled with field study data in order to drive a malaria transmission model. Several studies have shown that the NDVI presents significant correlations with climate variables, such as precipitation, particularly in Sudanese savannah environments. A non-linear model combining environmental variables, predisposition factors and transmission patterns can be used for community-level risk evaluation. PMID:19361335
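The SIRS-type structure mentioned above can be sketched with a simple Euler integration. The rates and the sinusoidal stand-in for the environmentally driven (NDVI-dependent) forcing below are assumptions for illustration, not the calibrated Bancoumana model.

```python
import numpy as np

# Generic SIRS dynamics (a sketch, not the paper's fitted model):
#   dS/dt = -beta(t) S I / N + rho R
#   dI/dt =  beta(t) S I / N - gamma I
#   dR/dt =  gamma I - rho R
gamma, rho = 0.1, 0.02          # recovery and loss-of-immunity rates (assumed)
N = 1000.0
S, I, R = 990.0, 10.0, 0.0
dt, days = 0.1, 365

def beta(t):
    # Hypothetical seasonal forcing standing in for the NDVI-driven factor.
    return 0.3 * (1.0 + 0.5 * np.sin(2 * np.pi * t / 365.0))

for step in range(int(days / dt)):
    t = step * dt
    new_inf = beta(t) * S * I / N
    dS = (-new_inf + rho * R) * dt
    dI = (new_inf - gamma * I) * dt
    dR = (gamma * I - rho * R) * dt
    S, I, R = S + dS, I + dI, R + dR

print(f"after one year: S={S:.0f}, I={I:.0f}, R={R:.0f}")
```

In the paper the environmental factor enters through vector mortality, aggressiveness, and the gonotrophic cycle, with the NDVI-driven variable simulated by a hidden Markov chain rather than a fixed sinusoid.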

  15. Modelling malaria incidence with environmental dependency in a locality of Sudanese savannah area, Mali.

    PubMed

    Gaudart, Jean; Touré, Ousmane; Dessay, Nadine; Dicko, A Lassane; Ranque, Stéphane; Forest, Loic; Demongeot, Jacques; Doumbo, Ogobara K

    2009-04-10

    The risk of Plasmodium falciparum infection is variable over space and time and this variability is related to environmental variability. Environmental factors affect the biological cycle of both vector and parasite. Despite this strong relationship, environmental effects have rarely been included in malaria transmission models. Remote sensing data on environment were incorporated into a temporal model of the transmission, to forecast the evolution of malaria epidemiology, in a locality of Sudanese savannah area. A dynamic cohort was constituted in June 1996 and followed up until June 2001 in the locality of Bancoumana, Mali. The 15-day composite vegetation index (NDVI), issued from satellite imagery series (NOAA) from July 1981 to December 2006, was used as remote sensing data. The statistical relationship between NDVI and incidence of P. falciparum infection was assessed by ARIMA analysis. ROC analysis provided an NDVI value for the prediction of an increase in incidence of parasitaemia. Malaria transmission was modelled using an SIRS-type model, adapted to Bancoumana's data. Environmental factors influenced vector mortality and aggressiveness, as well as length of the gonotrophic cycle. NDVI observations from 1981 to 2001 were used for the simulation of the extrinsic variable of a hidden Markov chain model. Observations from 2002 to 2006 served as external validation. The seasonal pattern of P. falciparum incidence was significantly explained by NDVI, with a delay of 15 days (p = 0.001). An NDVI threshold of 0.361 (p = 0.007) provided a Diagnostic Odds Ratio (DOR) of 2.64 (CI95% [1.26;5.52]). The deterministic transmission model, with stochastic environmental factor, predicted an endemo-epidemic pattern of malaria infection. The incidences of parasitaemia were adequately modelled, using the observed NDVI as well as the NDVI simulations. Transmission patterns have been modelled and observed values were adequately predicted. The error parameters have shown the smallest values for a monthly model of environmental changes. Remote-sensed data were coupled with field study data in order to drive a malaria transmission model. Several studies have shown that the NDVI presents significant correlations with climate variables, such as precipitation, particularly in Sudanese savannah environments. A non-linear model combining environmental variables, predisposition factors and transmission patterns can be used for community-level risk evaluation.

  16. ENSO Dynamics and Trends, AN Alternate View

    NASA Astrophysics Data System (ADS)

    Rojo Hernandez, J. D.; Lall, U.; Mesa, O. J.

    2017-12-01

    El Niño - Southern Oscillation (ENSO) is the most important inter-annual climate fluctuation at the planetary level, with great effects on the hydrological cycle, agriculture, ecosystems, health and society. This work demonstrates the use of Non-Homogeneous Hidden Markov Models (NHMM) to characterize ENSO using a set of discrete states with a time-varying transition probability matrix, based on the sea surface temperature anomalies (SSTA) of the Kaplan Extended SST v2 between 120E-90W, 15N-15S from Jan-1856 to Dec-2016. ENSO spatial patterns, their temporal distribution, the transition probabilities between patterns and their temporal evolution are the main results of the NHMM applied to ENSO. The five "hidden" states found appear to represent the different "flavors" described in the literature: the Canonical El Niño, Central El Niño, a Neutral state, Central La Niña and the Canonical La Niña. Using the whole record length of the SSTA it was possible to identify trends in the dynamic system, with a decrease in the probability of occurrence of cold events and a significant increase of warm events, in particular Central El Niño events, whose probability of occurrence has increased dramatically since 1960, coupled with increases in global temperature.

  17. Bayesian structural inference for hidden processes.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ε-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ε-machines, irrespective of estimated transition probabilities. Properties of ε-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.

  18. Bayesian structural inference for hidden processes

    NASA Astrophysics Data System (ADS)

    Strelioff, Christopher C.; Crutchfield, James P.

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.

  19. Modeling Driver Behavior near Intersections in Hidden Markov Model

    PubMed Central

    Li, Juan; He, Qinglian; Zhou, Hang; Guan, Yunlin; Dai, Wei

    2016-01-01

    Intersections are one of the major locations where safety is a big concern to drivers. Inappropriate driver behaviors in response to frequent changes when approaching intersections often lead to intersection-related crashes or collisions. Thus, to better understand driver behaviors at intersections, especially in the dilemma zone, a Hidden Markov Model (HMM) is utilized in this study. After discrete data processing, the observed dynamic data of vehicles are used for the inference of the Hidden Markov Model. The Baum-Welch (B-W) estimation algorithm is applied to calculate the vehicle state transition probability matrix and the observation probability matrix. When combined with the Forward algorithm, the most likely state of the driver can be obtained. Thus the model can be used to measure the stability and risk of driver behavior. It is found that drivers' behaviors in the dilemma zone are of lower stability and higher risk compared with those in other regions around intersections. In addition to the B-W estimation algorithm, the Viterbi Algorithm is utilized to predict the potential dangers of vehicles. The results can be applied to driving assistance systems to warn drivers to avoid possible accidents. PMID:28009838
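The decoding step described above (finding the most likely driver state sequence with the Viterbi algorithm) can be sketched as follows. The two states, observation symbols, and all probabilities are invented for illustration; in the study they would come from Baum-Welch re-estimation on observed vehicle data.

```python
import numpy as np

# Hypothetical two-state driver model (states and numbers are illustrative).
states = ["stable", "risky"]
obs_names = ["steady speed", "braking", "acceleration"]

A = np.array([[0.8, 0.2],       # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.7, 0.2, 0.1],  # observation probabilities per state
              [0.2, 0.5, 0.3]])
pi = np.array([0.9, 0.1])       # initial state distribution

def viterbi(obs):
    """Most likely hidden state sequence for a list of observation indices."""
    T, n = len(obs), len(pi)
    delta = np.zeros((T, n))
    psi = np.zeros((T, n), dtype=int)
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)  # (prev state, next state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                   # backtrack
        path.append(int(psi[t, path[-1]]))
    return [states[s] for s in reversed(path)]

# steady, braking, braking, acceleration
print(viterbi([0, 1, 1, 2]))
```

Working in log probabilities avoids numerical underflow on longer observation sequences.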

  20. PVWatts Version 5 Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2014-09-01

    The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.

  1. Another convex combination of product states for the separable Werner state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azuma, Hiroo; Ban, Masashi; CREST, Japan Science and Technology Agency, 1-1-9 Yaesu, Chuo-ku, Tokyo 103-0028

    2006-03-15

    In this paper, we write down the separable Werner state in a two-qubit system explicitly as a convex combination of product states, which is different from the convex combination obtained by Wootters' method. The Werner state in a two-qubit system has a single real parameter and varies from inseparable to separable according to the value of its parameter. We derive a hidden variable model that is induced by our decomposed form for the separable Werner state. From our explicit form of the convex combination of product states, we understand the following: The critical point of the parameter for separability of the Werner state comes from positivity of local density operators of the qubits.

  2. Solving the quantum many-body problem with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Carleo, Giuseppe; Troyer, Matthias

    2017-02-01

    The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the nontrivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons. We demonstrate a reinforcement-learning scheme capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems. Our approach achieves high accuracy in describing prototypical interacting spin models in one and two dimensions.
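    The variational ansatz described above can be made concrete. In the restricted-Boltzmann-machine form used by Carleo and Troyer, the (unnormalized) amplitude of a spin configuration is a product over hidden units after the hidden degrees of freedom are traced out. The parameter values below are untrained placeholders, purely for illustration.

```python
# Sketch of the RBM wavefunction ansatz (real parameters for simplicity;
# the original work uses complex ones). Weights are illustrative, not trained.
import math

def rbm_amplitude(spins, a, b, W):
    """Unnormalized amplitude
    psi(s) = exp(sum_i a_i s_i) * prod_j 2*cosh(b_j + sum_i W_ij s_i),
    for visible spins s_i in {-1, +1}."""
    visible = math.exp(sum(ai * si for ai, si in zip(a, spins)))
    hidden = 1.0
    for j, bj in enumerate(b):
        theta = bj + sum(W[i][j] * spins[i] for i in range(len(spins)))
        hidden *= 2.0 * math.cosh(theta)
    return visible * hidden

# With all parameters zero, every configuration has amplitude 2**M,
# where M is the number of hidden units (here M = 3).
amp = rbm_amplitude([1, -1], [0.0, 0.0], [0.0, 0.0, 0.0],
                    [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
```

    In the actual method the parameters a, b, W are optimized variationally (the "reinforcement learning" of the abstract) to minimize the energy of a given Hamiltonian.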

  3. Quantum Locality?

    NASA Astrophysics Data System (ADS)

    Stapp, Henry P.

    2012-05-01

    Robert Griffiths has recently addressed, within the framework of a `consistent quantum theory' that he has developed, the issue of whether, as is often claimed, quantum mechanics entails a need for faster-than-light transfers of information over long distances. He argues that the putative proofs of this property that involve hidden variables include in their premises some essentially classical-physics-type assumptions that are not entailed by the precepts of quantum mechanics. Thus whatever is proved is not a feature of quantum mechanics, but is a property of a theory that tries to combine quantum theory with quasi-classical features that go beyond what is entailed by quantum theory itself. One cannot logically prove properties of a system by establishing, instead, properties of a system modified by adding properties alien to the original system. Hence Griffiths' rejection of hidden-variable-based proofs is logically warranted. Griffiths mentions the existence of a certain alternative proof that does not involve hidden variables, and that uses only macroscopically described observable properties. He notes that he had examined in his book proofs of this general kind, and concluded that they provide no evidence for nonlocal influences. But he did not examine the particular proof that he cites. An examination of that particular proof by the method specified by his `consistent quantum theory' shows that the cited proof is valid within that restrictive version of quantum theory. An added section responds to Griffiths' reply, which cites general possibilities of ambiguities that might make what is to be proved ill-defined, and hence render the pertinent `consistent framework' ill-defined. But the vagaries that he cites do not upset the proof in question, which, both by its physical formulation and by explicit identification, specifies the framework to be used. Griffiths confirms the validity of the proof insofar as that pertinent framework is used. The section also shows, in response to Griffiths' challenge, why a putative proof of locality that he has described is flawed.

  4. Detecting Hidden Diversification Shifts in Models of Trait-Dependent Speciation and Extinction.

    PubMed

    Beaulieu, Jeremy M; O'Meara, Brian C

    2016-07-01

    The distribution of diversity can vary considerably from clade to clade. Attempts to understand these patterns often employ state-dependent speciation and extinction models to determine whether the evolution of a particular novel trait has increased speciation rates and/or decreased extinction rates. It is still unclear, however, whether these models are uncovering important drivers of diversification, or whether they are simply pointing to more complex patterns involving many unmeasured and co-distributed factors. Here we describe an extension to the popular state-dependent speciation and extinction models that specifically accounts for the presence of unmeasured factors that could impact diversification rates estimated for the states of any observed trait, addressing at least one major criticism of BiSSE (Binary State Speciation and Extinction) methods. Specifically, our model, which we refer to as HiSSE (Hidden State Speciation and Extinction), assumes that associated with each observed state in the model are "hidden" states that exhibit diversification dynamics and transition rates potentially distinct from those of the observed states in isolation. We also demonstrate how our model can be used as a character-independent diversification model that allows for a complex diversification process that is independent of the evolution of a character. Under rigorous simulation tests and when applied to empirical data, we find that HiSSE performs reasonably well, and can at least detect net diversification rate differences between observed and hidden states and detect when diversification rate differences do not correlate with the observed states. We discuss the remaining issues with state-dependent speciation and extinction models in general, and the important ways in which HiSSE provides a more nuanced understanding of trait-dependent diversification. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. Phase diagram of restricted Boltzmann machines and generalized Hopfield networks with arbitrary priors.

    PubMed

    Barra, Adriano; Genovese, Giuseppe; Sollich, Peter; Tantari, Daniele

    2018-02-01

    Restricted Boltzmann machines are described by the Gibbs measure of a bipartite spin glass, which in turn can be seen as a generalized Hopfield network. This equivalence allows us to characterize the state of these systems in terms of their retrieval capabilities of pure states, at both low and high load. We study the paramagnetic-spin glass and the spin glass-retrieval phase transitions, as the pattern (i.e., weight) distribution and spin (i.e., unit) priors vary smoothly from Gaussian real variables to Boolean discrete variables. Our analysis shows that the presence of a retrieval phase is robust and not peculiar to the standard Hopfield model with Boolean patterns. The retrieval region becomes larger when the pattern entries and retrieval units get more peaked and, conversely, when the hidden units acquire a broader prior and therefore have a stronger response to high fields. Moreover, at low load retrieval always exists below some critical temperature, for every pattern distribution ranging from the Boolean to the Gaussian case.
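    The RBM-Hopfield equivalence the abstract builds on can be checked directly in the Gaussian case: integrating out Gaussian hidden units of the bipartite model yields exactly the Hopfield energy of the visible spins. A minimal numerical check, with arbitrarily chosen spins and patterns:

```python
# Each Gaussian hidden unit contributes integral dh exp(-h^2/2 + theta*h)
# = sqrt(2*pi) * exp(theta^2/2), with theta = (xi . s)/sqrt(N); summing the
# theta^2/2 terms reproduces minus the Hopfield energy (constant dropped).
import math

def hopfield_energy(spins, patterns):
    """Hopfield energy E = -1/(2N) * sum_mu (xi^mu . s)^2 (self-couplings kept)."""
    n = len(spins)
    return -sum(sum(x * s for x, s in zip(xi, spins)) ** 2
                for xi in patterns) / (2 * n)

def rbm_log_marginal(spins, patterns):
    """Log-weight of a visible configuration after integrating out the
    Gaussian hidden units, up to an additive constant."""
    n = len(spins)
    return sum(
        0.5 * (sum(x * s for x, s in zip(xi, spins)) / math.sqrt(n)) ** 2
        for xi in patterns
    )

spins = [1, -1, 1, 1]
patterns = [[1, 1, -1, 1], [-1, 1, 1, -1]]
```

    The abstract's point is that this correspondence, and the resulting retrieval phase, persists as the hidden-unit prior is deformed away from the Gaussian case.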

  6. Neural networks applied to discriminate botanical origin of honeys.

    PubMed

    Anjos, Ofélia; Iglesias, Carla; Peres, Fátima; Martínez, Javier; García, Ángela; Taboada, Javier

    2015-05-15

    The aim of this work is to develop a tool based on neural networks to predict the botanical origin of honeys using physical and chemical parameters. The database consists of 49 honey samples of 2 different classes: monofloral (almond, holm oak, sweet chestnut, eucalyptus, orange, rosemary, lavender, strawberry trees, thyme, heather, sunflower) and multifloral. The moisture content, electrical conductivity, water activity, ash content, pH, free acidity, colorimetric coordinates in CIELAB space (L(∗), a(∗), b(∗)) and total phenols content of the honey samples were evaluated. Those properties were considered as input variables of the predictive model. The neural network was optimised through several tests with different numbers of neurons in the hidden layer and with different input variables. The low error rate (5%) allows us to conclude that the botanical origin of honey can be reliably and quickly determined from the colorimetric information and the electrical conductivity of honey. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Application of data mining techniques to explore predictors of HCC in Egyptian patients with HCV-related chronic liver disease.

    PubMed

    Omran, Dalia Abd El Hamid; Awad, AbuBakr Hussein; Mabrouk, Mahasen Abd El Rahman; Soliman, Ahmad Fouad; Aziz, Ashraf Omar Abdel

    2015-01-01

    Hepatocellular carcinoma (HCC) is the second most common malignancy in Egypt. Data mining is a method of predictive analysis which can explore tremendous volumes of information to discover hidden patterns and relationships. Our aim here was to develop a non-invasive algorithm for prediction of HCC. Such an algorithm should be economical, reliable, easy to apply and acceptable by domain experts. This cross-sectional study enrolled 315 patients with hepatitis C virus (HCV) related chronic liver disease (CLD); 135 with HCC, 116 cirrhotic patients without HCC and 64 patients with chronic hepatitis C. Using data mining analysis, we constructed a decision tree learning algorithm to predict HCC. The decision tree algorithm was able to predict HCC with a recall (sensitivity) of 83.5% and a precision (specificity) of 83.3% using only routine data. The correctly classified instances were 259 (82.2%), and the incorrectly classified instances were 56 (17.8%). Out of 29 attributes, serum alpha fetoprotein (AFP), with an optimal cutoff value of ≥50.3 ng/ml, was selected as the best predictor of HCC. To a lesser extent, male sex, presence of cirrhosis, AST > 64 U/L, and ascites were variables associated with HCC. Data mining analysis allows discovery of hidden patterns and enables the development of models to predict HCC, utilizing routine data as an alternative to CT and liver biopsy. This study has highlighted a new cutoff for AFP (≥50.3 ng/ml). A score of >2 risk variables (out of 5) can successfully predict HCC with a sensitivity of 96% and specificity of 82%.
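    The five-variable risk score reported above (AFP ≥ 50.3 ng/ml, male sex, cirrhosis, AST > 64 U/L, ascites; more than 2 positive predictors flags high risk) is simple enough to sketch directly. The record field names are hypothetical, chosen only for illustration.

```python
# Illustrative scoring of the five predictors reported in the abstract.
# Field names ("afp_ng_ml", etc.) are assumptions, not from the paper.

def hcc_risk_score(patient):
    """Count how many of the five reported risk variables are present."""
    score = 0
    score += patient["afp_ng_ml"] >= 50.3   # AFP at the reported cutoff
    score += patient["sex"] == "male"
    score += patient["cirrhosis"]
    score += patient["ast_u_l"] > 64
    score += patient["ascites"]
    return score

def high_risk(patient):
    """Abstract reports >2 of 5 variables predicts HCC (96% sens., 82% spec.)."""
    return hcc_risk_score(patient) > 2

example = {"afp_ng_ml": 120.0, "sex": "male", "cirrhosis": True,
           "ast_u_l": 55, "ascites": False}
print(hcc_risk_score(example), high_risk(example))  # 3 True
```

    Note this reproduces only the headline decision rule, not the full decision tree the study learned from 29 attributes.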

  8. Under-reported data analysis with INAR-hidden Markov chains.

    PubMed

    Fernández-Fontelo, Amanda; Cabaña, Alejandra; Puig, Pedro; Moriña, David

    2016-11-20

    In this work, we deal with correlated under-reported data through INAR(1)-hidden Markov chain models. These models are very flexible and can be identified through their autocorrelation function, which has a very simple form. A naïve method of parameter estimation is proposed, jointly with the maximum likelihood method based on a revised version of the forward algorithm. The most-probable unobserved time series is reconstructed by means of the Viterbi algorithm. Several examples of application in the field of public health are discussed, illustrating the utility of the models. Copyright © 2016 John Wiley & Sons, Ltd.
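    The data-generating process behind these models can be sketched as follows: a latent INAR(1) count series X_t = α∘X_{t-1} + W_t (binomial thinning plus Poisson innovations) is either fully observed or under-reported (thinned again with probability q). For simplicity the reporting state below is drawn independently each step, a simplification of the paper's hidden Markov chain; all parameter values are illustrative.

```python
# Simulation sketch of an under-reported INAR(1) process. Parameters are
# illustrative placeholders, and the reporting state is i.i.d. rather than
# Markov, to keep the sketch short.
import math
import random

def thin(count, p, rng):
    """Binomial thinning: each of `count` units survives independently w.p. p."""
    return sum(rng.random() < p for _ in range(count))

def poisson(lam, rng):
    """Knuth's method for Poisson variates (adequate for small lam)."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def simulate(n, alpha=0.5, lam=2.0, q=0.4, p_report=0.7, seed=1):
    rng = random.Random(seed)
    x, latent, reported_fully, observed = 0, [], [], []
    for _ in range(n):
        x = thin(x, alpha, rng) + poisson(lam, rng)  # latent INAR(1) step
        fully = rng.random() < p_report              # hidden reporting state
        y = x if fully else thin(x, q, rng)          # under-reported otherwise
        latent.append(x)
        reported_fully.append(fully)
        observed.append(y)
    return latent, reported_fully, observed

latent, reported_fully, observed = simulate(200)
```

    Estimation would then run in the opposite direction: the paper's forward algorithm computes the likelihood of `observed` alone, and Viterbi reconstructs the latent series.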

  9. Symbolic Insight and Inhibitory Control: Two Problems Facing Young Children on Symbolic Retrieval Tasks

    ERIC Educational Resources Information Center

    Kuhlmeier, Valerie

    2005-01-01

    Many recent studies have explored young children's ability to use information from physical representations of space to guide search within the real world. In one commonly used procedure, children are asked to find a hidden toy in a room after observing a smaller toy being hidden in the analogous location in a scale model of the room.…

  10. A Hidden Markov Model for Urban-Scale Traffic Estimation Using Floating Car Data.

    PubMed

    Wang, Xiaomeng; Peng, Ling; Chi, Tianhe; Li, Mengzhu; Yao, Xiaojing; Shao, Jing

    2015-01-01

    Urban-scale traffic monitoring plays a vital role in reducing traffic congestion. Owing to its low cost and wide coverage, floating car data (FCD) serves as a novel approach to collecting traffic data. However, sparse probe data represents the vast majority of the data available on arterial roads in most urban environments. In order to overcome the problem of data sparseness, this paper proposes a hidden Markov model (HMM)-based traffic estimation model, in which the traffic condition on a road segment is considered as a hidden state that can be estimated according to the conditions of road segments having similar traffic characteristics. An algorithm based on clustering and pattern mining rather than on adjacency relationships is proposed to find clusters with road segments having similar traffic characteristics. A multi-clustering strategy is adopted to achieve a trade-off between clustering accuracy and coverage. Finally, the proposed model is designed and implemented on the basis of a real-time algorithm. Results of experiments based on real FCD confirm the applicability, accuracy, and efficiency of the model. In addition, the results indicate that the model is practicable for traffic estimation on urban arterials and works well even when more than 70% of the probe data are missing.

  11. Hidden in plain sight: the formal, informal, and hidden curricula of a psychiatry clerkship.

    PubMed

    Wear, Delese; Skillicorn, Jodie

    2009-04-01

    To examine perceptions of the formal, informal, and hidden curricula in psychiatry as they are observed and experienced by (1) attending physicians who have teaching responsibilities for residents and medical students, (2) residents who are taught by those same physicians and who have teaching responsibilities for medical students, and (3) medical students who are taught by attendings and residents during their psychiatry rotation. From June to November 2007, the authors conducted focus groups with attendings, residents, and students in one midwestern academic setting. The sessions were audiotaped, transcribed, and analyzed for themes surrounding the formal, informal, and hidden curricula. All three groups offered a similar belief that the knowledge, skills, and values of the formal curriculum focused on building relationships. Similarly, all three suggested that elements of the informal and hidden curricula were expressed primarily as the values arising from attendings' role modeling, as the nature and amount of time attendings spend with patients, and as attendings' advice arising from experience and intuition versus "textbook learning." Whereas students and residents offered negative values arising from the informal and hidden curricula, attendings did not, offering instead the more positive values they intended to encourage through the informal and hidden curricula. The process described here has great potential in local settings across all disciplines. Asking teachers and learners in any setting to think about how they experience the educational environment and what sense they make of all curricular efforts can provide a reality check for educators and a values check for learners as they critically reflect on the meanings of what they are learning.

  12. Mediation and moderation of treatment effects in randomised controlled trials of complex interventions.

    PubMed

    Emsley, Richard; Dunn, Graham; White, Ian R

    2010-06-01

    Complex intervention trials should be able to answer both pragmatic and explanatory questions in order to test the theories motivating the intervention and help understand the underlying nature of the clinical problem being tested. Key to this is the estimation of direct effects of treatment and indirect effects acting through intermediate variables which are measured post-randomisation. Using psychological treatment trials as an example of complex interventions, we review statistical methods which crucially evaluate both direct and indirect effects in the presence of hidden confounding between mediator and outcome. We review the historical literature on mediation and moderation of treatment effects. We introduce two methods from within the existing causal inference literature, principal stratification and structural mean models, and demonstrate how these can be applied in a mediation context before discussing approaches and assumptions necessary for attaining identifiability of key parameters of the basic causal model. Assuming that there is modification by baseline covariates of the effect of treatment (i.e. randomisation) on the mediator (i.e. covariate by treatment interactions), but no direct effect on the outcome of these treatment by covariate interactions leads to the use of instrumental variable methods. We describe how moderation can occur through post-randomisation variables, and extend the principal stratification approach to multiple group methods with explanatory models nested within the principal strata. We illustrate the new methodology with motivating examples of randomised trials from the mental health literature.

  13. DM-BLD: differential methylation detection using a hierarchical Bayesian model exploiting local dependency.

    PubMed

    Wang, Xiao; Gu, Jinghua; Hilakivi-Clarke, Leena; Clarke, Robert; Xuan, Jianhua

    2017-01-15

    The advent of high-throughput DNA methylation profiling techniques has enabled the possibility of accurate identification of differentially methylated genes for cancer research. The large number of measured loci facilitates whole genome methylation study, yet posing great challenges for differential methylation detection due to the high variability in tumor samples. We have developed a novel probabilistic approach, Differential Methylation detection using a hierarchical Bayesian model exploiting Local Dependency (DM-BLD), to detect differentially methylated genes based on a Bayesian framework. The DM-BLD approach features a joint model to capture both the local dependency of measured loci and the dependency of methylation change in samples. Specifically, the local dependency is modeled by Leroux conditional autoregressive structure; the dependency of methylation changes is modeled by a discrete Markov random field. A hierarchical Bayesian model is developed to fully take into account the local dependency for differential analysis, in which differential states are embedded as hidden variables. Simulation studies demonstrate that DM-BLD outperforms existing methods for differential methylation detection, particularly when the methylation change is moderate and the variability of methylation in samples is high. DM-BLD has been applied to breast cancer data to identify important methylated genes (such as polycomb target genes and genes involved in transcription factor activity) associated with breast cancer recurrence. A Matlab package of DM-BLD is available at http://www.cbil.ece.vt.edu/software.htm. Contact: Xuan@vt.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. Dissipative hidden sector dark matter

    NASA Astrophysics Data System (ADS)

    Foot, R.; Vagnozzi, S.

    2015-01-01

    A simple way of explaining dark matter without modifying known Standard Model physics is to require the existence of a hidden (dark) sector, which interacts with the visible one predominantly via gravity. We consider a hidden sector containing two stable particles charged under an unbroken U(1)' gauge symmetry, hence featuring dissipative interactions. The massless gauge field associated with this symmetry, the dark photon, can interact via kinetic mixing with the ordinary photon. In fact, such an interaction of strength ε ~ 10^-9 appears to be necessary in order to explain galactic structure. We calculate the effect of this new physics on big bang nucleosynthesis and its contribution to the relativistic energy density at hydrogen recombination. We then examine the process of dark recombination, during which neutral dark states are formed, which is important for large-scale structure formation. Galactic structure is considered next, focusing on spiral and irregular galaxies. For these galaxies we modeled the dark matter halo (at the current epoch) as a dissipative plasma of dark matter particles, where the energy lost due to dissipation is compensated by the energy produced from ordinary supernovae (the core-collapse energy is transferred to the hidden sector via kinetic mixing induced processes in the supernova core). We find that such a dynamical halo model can reproduce several observed features of disk galaxies, including the cored density profile and the Tully-Fisher relation. We also discuss how elliptical and dwarf spheroidal galaxies could fit into this picture. Finally, these analyses are combined to set bounds on the parameter space of our model, which can serve as a guideline for future experimental searches.

  15. Pc-like pentaquarks in a hidden strange sector

    NASA Astrophysics Data System (ADS)

    Huang, Hongxia; Zhu, Xinmei; Ping, Jialun

    2018-05-01

    Analogous to the work of hidden charm molecular pentaquarks, we study possible hidden strange molecular pentaquarks composed of Σ (or Σ*) and K (or K*) in the framework of a quark delocalization color screening model. Our results suggest that the ΣK, ΣK*, and Σ*K* states with I = 1/2, J^P = 1/2^- and the ΣK*, Σ*K, and Σ*K* states with I = 1/2, J^P = 3/2^- are all resonance states generated by coupling to the open channels. The molecular pentaquark Σ*K with quantum numbers I = 1/2, J^P = 3/2^- can be seen as a strange partner of the LHCb Pc(4380) state. The possibility of identifying the resonances as nucleon resonances is proposed.

  16. Linking Costs and Postsecondary Degrees: Key Issues for Policymakers. Working Paper 2011-03

    ERIC Educational Resources Information Center

    Johnson, Nate

    2011-01-01

    In this paper the author offers practical advice for decision-makers who are struggling to rein in college costs while improving productivity. He provides a step-by-step guide to different approaches for calculating costs, highlights the tremendous variability in cost across programs within institutions, and documents some of the "hidden costs" of…

  17. The Hidden Factor in Early Field Experience: Teachers' Perception of the Quality of Life at Work.

    ERIC Educational Resources Information Center

    Divins, Barbara; And Others

    This project identified work environment factors in eight schools where a teacher preparation program placed early field experience students and where the university students reported experiencing positive field placements. The purpose was to determine the impact of certain variables on teachers' perception of the quality of their own professional…

  18. Hidden Broad-line Regions in Seyfert 2 Galaxies: From the Spectropolarimetric Perspective

    NASA Astrophysics Data System (ADS)

    Du, Pu; Wang, Jian-Min; Zhang, Zhi-Xiang

    2017-05-01

    The hidden broad-line regions (BLRs) in Seyfert 2 galaxies, which display broad emission lines (BELs) in their polarized spectra, are a key piece of evidence in support of the unified model for active galactic nuclei (AGNs). However, the detailed kinematics and geometry of hidden BLRs are still not fully understood. The virial factor obtained from reverberation mapping of type 1 AGNs may be a useful diagnostic of the nature of hidden BLRs in type 2 objects. In order to understand the hidden BLRs, we compile six type 2 objects from the literature with polarized BELs and dynamical measurements of black hole masses. All of them contain pseudobulges. We estimate their virial factors, and find the average value is 0.60 and the standard deviation is 0.69, which agree well with the value of type 1 AGNs with pseudobulges. This study demonstrates that (1) the geometry and kinematics of BLR are similar in type 1 and type 2 AGNs of the same bulge type (pseudobulges), and (2) the small values of virial factors in Seyfert 2 galaxies suggest that, similar to type 1 AGNs, BLRs tend to be very thick disks in type 2 objects.

  19. Hidden Broad-line Regions in Seyfert 2 Galaxies: From the Spectropolarimetric Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Pu; Wang, Jian-Min; Zhang, Zhi-Xiang, E-mail: dupu@ihep.ac.cn

    2017-05-01

    The hidden broad-line regions (BLRs) in Seyfert 2 galaxies, which display broad emission lines (BELs) in their polarized spectra, are a key piece of evidence in support of the unified model for active galactic nuclei (AGNs). However, the detailed kinematics and geometry of hidden BLRs are still not fully understood. The virial factor obtained from reverberation mapping of type 1 AGNs may be a useful diagnostic of the nature of hidden BLRs in type 2 objects. In order to understand the hidden BLRs, we compile six type 2 objects from the literature with polarized BELs and dynamical measurements of black hole masses. All of them contain pseudobulges. We estimate their virial factors, and find the average value is 0.60 and the standard deviation is 0.69, which agree well with the value of type 1 AGNs with pseudobulges. This study demonstrates that (1) the geometry and kinematics of BLR are similar in type 1 and type 2 AGNs of the same bulge type (pseudobulges), and (2) the small values of virial factors in Seyfert 2 galaxies suggest that, similar to type 1 AGNs, BLRs tend to be very thick disks in type 2 objects.

  20. Effect of high-frequency spectral components in computer recognition of dysarthric speech based on a Mel-cepstral stochastic model.

    PubMed

    Polur, Prasad D; Miller, Gerald E

    2005-01-01

    Computer speech recognition of individuals with dysarthria, such as cerebral palsy patients, requires a robust technique that can handle conditions of very high variability and limited training data. In this study, a hidden Markov model (HMM) was constructed and conditions investigated that would provide improved performance for a dysarthric speech (isolated word) recognition system intended to act as an assistive/control tool. In particular, we investigated the effect of high-frequency spectral components on the recognition rate of the system to determine if they contributed useful additional information to the system. A small-size vocabulary spoken by three cerebral palsy subjects was chosen. Mel-frequency cepstral coefficients extracted with the use of 15 ms frames served as training input to an ergodic HMM setup. Subsequent results demonstrated that no significant useful information was available to the system for enhancing its ability to discriminate dysarthric speech above 5.5 kHz in the current set of dysarthric data. The level of variability in input dysarthric speech patterns limits the reliability of the system. However, its application as a rehabilitation/control tool to assist dysarthric motor-impaired individuals such as cerebral palsy subjects holds sufficient promise.

  1. Modeling genome coverage in single-cell sequencing

    PubMed Central

    Daley, Timothy; Smith, Andrew D.

    2014-01-01

    Motivation: Single-cell DNA sequencing is necessary for examining genetic variation at the cellular level, which remains hidden in bulk sequencing experiments. But because they begin with such small amounts of starting material, the amount of information that is obtained from a single-cell sequencing experiment is highly sensitive to the choice of protocol employed and variability in library preparation. In particular, the fraction of the genome represented in single-cell sequencing libraries exhibits extreme variability due to quantitative biases in amplification and loss of genetic material. Results: We propose a method to predict the genome coverage of a deep sequencing experiment using information from an initial shallow sequencing experiment mapped to a reference genome. The observed coverage statistics are used in a non-parametric empirical Bayes Poisson model to estimate the gain in coverage from deeper sequencing. This approach allows researchers to know statistical features of deep sequencing experiments without actually sequencing deeply, providing a basis for optimizing and comparing single-cell sequencing protocols or screening libraries. Availability and implementation: The method is available as part of the preseq software package. Source code is available at http://smithlabresearch.org/preseq. Contact: andrewds@usc.edu Supplementary information: Supplementary material is available at Bioinformatics online. PMID:25107873
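    The flavor of the prediction can be seen in a deliberately simplified single-rate version: estimate a per-base Poisson depth from the shallow run, then extrapolate the covered fraction to t-fold deeper sequencing. The actual preseq method replaces the single rate with a non-parametric empirical Bayes mixture over rates, precisely to handle the amplification biases the abstract describes; this sketch ignores that.

```python
# Toy single-rate version of coverage extrapolation: under a homogeneous
# Poisson model, a base is covered at t-fold scaled depth with probability
# 1 - exp(-lambda * t). The real method uses a mixture over rates.
import math

def predict_covered_fraction(mean_depth, t):
    """Fraction of genome with >= 1 read when sequencing effort is scaled
    by a factor t, given the shallow run's mean per-base depth."""
    return 1.0 - math.exp(-mean_depth * t)

# Example: a shallow run averaging 0.2x per-base depth.
shallow = predict_covered_fraction(0.2, 1)    # coverage now
deep = predict_covered_fraction(0.2, 10)      # predicted at 10x the reads
```

    The interesting regime is exactly where the single-rate model fails: amplification bias makes some bases effectively unreachable, so real libraries saturate below the homogeneous-Poisson prediction.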

  2. A Hidden Markov Model for Single Particle Tracks Quantifies Dynamic Interactions between LFA-1 and the Actin Cytoskeleton

    PubMed Central

    Das, Raibatak; Cairo, Christopher W.; Coombs, Daniel

    2009-01-01

    The extraction of hidden information from complex trajectories is a continuing problem in single-particle and single-molecule experiments. Particle trajectories are the result of multiple phenomena, and new methods for revealing changes in molecular processes are needed. We have developed a practical technique that is capable of identifying multiple states of diffusion within experimental trajectories. We model single particle tracks for a membrane-associated protein interacting with a homogeneously distributed binding partner and show that, with certain simplifying assumptions, particle trajectories can be regarded as the outcome of a two-state hidden Markov model. Using simulated trajectories, we demonstrate that this model can be used to identify the key biophysical parameters for such a system, namely the diffusion coefficients of the underlying states, and the rates of transition between them. We use a stochastic optimization scheme to compute maximum likelihood estimates of these parameters. We have applied this analysis to single-particle trajectories of the integrin receptor lymphocyte function-associated antigen-1 (LFA-1) on live T cells. Our analysis reveals that the diffusion of LFA-1 is indeed approximately two-state, and is characterized by large changes in cytoskeletal interactions upon cellular activation. PMID:19893741
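    The generative side of this two-state model is easy to sketch: a hidden state switches between slow and fast diffusion via a Markov chain, and each observed step is a 2D Gaussian displacement with per-axis variance 2·D·dt. The diffusion coefficients, switching probability, and frame time below are illustrative placeholders, not the fitted LFA-1 parameters.

```python
# Simulate a single-particle track under a two-state switching-diffusion
# model. All parameter values are illustrative assumptions.
import math
import random

def simulate_track(n_steps, d_slow=0.01, d_fast=0.1, p_switch=0.05,
                   dt=0.033, seed=7):
    """Return (hidden state per step, positions) for a 2D two-state track."""
    rng = random.Random(seed)
    state, states, xy = 0, [], [(0.0, 0.0)]
    d = (d_slow, d_fast)
    for _ in range(n_steps):
        if rng.random() < p_switch:        # symmetric two-state Markov chain
            state = 1 - state
        sigma = math.sqrt(2.0 * d[state] * dt)   # per-axis step std. dev.
        x, y = xy[-1]
        xy.append((x + rng.gauss(0.0, sigma), y + rng.gauss(0.0, sigma)))
        states.append(state)
    return states, xy

states, xy = simulate_track(100)
```

    The inference problem the paper solves is the inverse of this simulation: given only `xy`, recover the two diffusion coefficients, the transition rates, and the most likely hidden `states` by maximum likelihood over the HMM.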

  3. Application of the Artificial Neural Network model for prediction of monthly Standardized Precipitation and Evapotranspiration Index using hydrometeorological parameters and climate indices in eastern Australia

    NASA Astrophysics Data System (ADS)

    Deo, Ravinesh C.; Şahin, Mehmet

    2015-07-01

    The forecasting of drought based on cumulative influence of rainfall, temperature and evaporation is greatly beneficial for mitigating adverse consequences on water-sensitive sectors such as agriculture, ecosystems, wildlife, tourism, recreation, crop health and hydrologic engineering. Predictive models of drought indices help in assessing water scarcity situations, drought identification and severity characterization. In this paper, we tested the feasibility of the Artificial Neural Network (ANN) as a data-driven model for predicting the monthly Standardized Precipitation and Evapotranspiration Index (SPEI) for eight candidate stations in eastern Australia using predictive variable data from 1915 to 2005 (training) and simulated data for the period 2006-2012. The predictive variables were: monthly rainfall totals, mean temperature, minimum temperature, maximum temperature and evapotranspiration, which were supplemented by large-scale climate indices (Southern Oscillation Index, Pacific Decadal Oscillation, Southern Annular Mode and Indian Ocean Dipole) and the Sea Surface Temperatures (Nino 3.0, 3.4 and 4.0). A total of 30 ANN models were developed with 3-layer ANN networks. To determine the best combination of learning algorithms, hidden transfer and output functions of the optimum model, the Levenberg-Marquardt and Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton backpropagation algorithms were utilized to train the network, tangent and logarithmic sigmoid equations used as the activation functions and the linear, logarithmic and tangent sigmoid equations used as the output function. The best ANN architecture had 18 input neurons, 43 hidden neurons and 1 output neuron, trained using the Levenberg-Marquardt learning algorithm using tangent sigmoid equation as the activation and output functions. 
An evaluation of the model performance based on statistical rules yielded time-averaged values of the Coefficient of Determination, Root Mean Squared Error and Mean Absolute Error ranging from 0.9945 to 0.9990, 0.0466 to 0.1117, and 0.0013 to 0.0130, respectively, for individual stations. The Willmott's Index of Agreement and the Nash-Sutcliffe Coefficient of Efficiency were between 0.932 and 0.959 and between 0.977 and 0.998, respectively. When checked for the severity (S), duration (D) and peak intensity (I) of drought events determined from the simulated and observed SPEI, differences in drought parameters ranged from −1.41 to 0.64%, −2.17 to 1.92% and −3.21 to 1.21%, respectively. Based on these performance evaluation measures, we aver that the Artificial Neural Network model is a useful data-driven tool for forecasting monthly SPEI and its drought-related properties in the region of study.
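
    As an illustrative sketch of the 18-43-1 architecture described above (not the authors' trained model), a forward pass with tanh ("tangent sigmoid") activations in both the hidden and output layers can be written as follows; the weights here are random placeholders that would in practice be fitted with the Levenberg-Marquardt algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # 18 input neurons -> 43 hidden neurons -> 1 output neuron,
    # tanh ("tangent sigmoid") on both hidden and output layers
    W1 = rng.normal(0, 0.1, size=(18, 43)); b1 = np.zeros(43)
    W2 = rng.normal(0, 0.1, size=(43, 1));  b2 = np.zeros(1)

    def predict_spei(x):
        h = np.tanh(x @ W1 + b1)      # hidden layer, 43 neurons
        return np.tanh(h @ W2 + b2)   # output layer, 1 neuron (SPEI)

    x = rng.normal(size=(5, 18))      # e.g. 5 months of 18 standardized predictors
    out = predict_spei(x)
    ```

    Because the output activation is tanh, predictions are bounded in (−1, 1); predicting a raw SPEI value would require rescaling the target to that range, as is standard with bounded output functions.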

  4. The problem of contextuality and the impossibility of experimental metaphysics thereof

    NASA Astrophysics Data System (ADS)

    Hermens, Ronnie

    Recently a new impulse has been given to the experimental investigation of contextuality. In this paper we show that for a widely used definition of contextuality there can be no decisive experiment on the existence of contextuality. To this end, we give a clear presentation of the hidden variable models due to Meyer, Kent and Clifton (MKC), which would supposedly nullify the Kochen-Specker theorem. Although we disagree with this last statement, the models do play a significant role in the discussion on the meaning of contextuality. In fact, we introduce a specific MKC-model of which we show that it is non-contextual and completely in agreement with quantum mechanical predictions. We also investigate the possibility of other definitions of non-contextuality-with an emphasis on operational definitions-and argue that any useful definition relies on the specification of a theoretical framework. It is therefore concluded that no experimental test can yield any conclusions about contextuality on a metaphysical level.

  5. [Determination of process variable pH in solid-state fermentation by FT-NIR spectroscopy and extreme learning machine (ELM)].

    PubMed

    Liu, Guo-hai; Jiang, Hui; Xiao, Xia-hong; Zhang, Dong-juan; Mei, Cong-li; Ding, Yu-han

    2012-04-01

    Fourier transform near-infrared (FT-NIR) spectroscopy was attempted to determine pH, one of the key process parameters in solid-state fermentation of crop straws. First, near-infrared spectra of 140 solid-state fermented product samples were obtained by a near-infrared spectroscopy system in the wavenumber range of 10 000-4 000 cm(-1), and the reference measurements of pH were obtained with a pH meter. Thereafter, the extreme learning machine (ELM) was employed to calibrate the model. In the calibration model, the optimal number of PCs and the optimal number of hidden-layer nodes of the ELM network were determined by cross-validation. Experimental results showed that the optimal ELM model was achieved with a 10-40-1 topology: R(p) = 0.9618 and RMSEP = 0.1044 in the prediction set. The research achievement could provide a technological basis for the on-line measurement of process parameters in solid-state fermentation.
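
    The core of an ELM is cheap to implement: the hidden-layer weights are drawn at random and never trained, and only the output weights are solved for by least squares. A minimal numpy sketch on a toy regression (not the paper's spectral calibration; the 40 hidden nodes mirror the topology above):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def elm_fit(X, y, n_hidden=40):
        """Train a single-hidden-layer extreme learning machine:
        random, fixed hidden weights; output weights by least squares."""
        W = rng.normal(scale=2.0, size=(X.shape[1], n_hidden))  # random input weights
        b = rng.normal(scale=2.0, size=n_hidden)                # random biases
        H = np.tanh(X @ W + b)                                  # hidden activations
        beta = np.linalg.pinv(H) @ y                            # output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # toy problem: recover a smooth 1-D function
    X = np.linspace(-1, 1, 200).reshape(-1, 1)
    y = np.sin(3 * X[:, 0])
    W, b, beta = elm_fit(X, y)
    rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
    ```

    Because no iterative training is involved, fitting reduces to one pseudo-inverse, which is why ELM calibration is fast compared with backpropagation-trained networks.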

  6. Tissue multifractality and hidden Markov model based integrated framework for optimum precancer detection

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sabyasachi; Das, Nandan K.; Kurmi, Indrajit; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2017-10-01

    We report the application of a hidden Markov model (HMM) on multifractal tissue optical properties derived via the Born approximation-based inverse light scattering method for effective discrimination of precancerous human cervical tissue sites from the normal ones. Two global fractal parameters, generalized Hurst exponent and the corresponding singularity spectrum width, computed by multifractal detrended fluctuation analysis (MFDFA), are used here as potential biomarkers. We develop a methodology that makes use of these multifractal parameters by integrating with different statistical classifiers like the HMM and support vector machine (SVM). It is shown that the MFDFA-HMM integrated model achieves significantly better discrimination between normal and different grades of cancer as compared to the MFDFA-SVM integrated model.

  7. Triadic Closure in Configuration Models with Unbounded Degree Fluctuations

    NASA Astrophysics Data System (ADS)

    van der Hofstad, Remco; van Leeuwaarden, Johan S. H.; Stegehuis, Clara

    2018-01-01

    The configuration model generates random graphs with any given degree distribution, and thus serves as a null model for scale-free networks with power-law degrees and unbounded degree fluctuations. For this setting, we study the local clustering c(k), i.e., the probability that two neighbors of a degree-k node are neighbors themselves. We show that c(k) progressively falls off with k and the graph size n, and eventually, for k = Ω(√n), settles on a power law c(k) ~ n^{5-2τ} k^{-2(3-τ)}, with τ ∈ (2,3) the power-law exponent of the degree distribution. This fall-off has been observed in the majority of real-world networks and signals the presence of modular or hierarchical structure. Our results agree with recent results for the hidden-variable model and also give the expected number of triangles in the configuration model when counting triangles only once despite the presence of multi-edges. We show that only triangles consisting of triplets with uniquely specified degrees contribute to the triangle counting.
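
    The quantity c(k) is easy to explore numerically. A sketch of the configuration model via uniform stub matching, with self-loops dropped and multi-edges collapsed (one simple way to handle them; the sequence size and exponent are illustrative):

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(1)

    # power-law-ish degree sequence with exponent tau in (2, 3)
    deg = np.minimum(rng.zipf(2.5, size=500), 50)
    if deg.sum() % 2:          # degree sum must be even to pair all stubs
        deg[0] += 1

    # configuration model: pair up half-edge "stubs" uniformly at random
    stubs = np.repeat(np.arange(deg.size), deg)
    rng.shuffle(stubs)
    adj = [set() for _ in range(deg.size)]
    for u, v in zip(stubs[0::2], stubs[1::2]):
        if u != v:             # drop self-loops; sets collapse multi-edges
            adj[u].add(v); adj[v].add(u)

    def local_clustering(v):
        """Probability that two random neighbours of v are themselves linked."""
        k = len(adj[v])
        if k < 2:
            return 0.0
        links = sum(1 for a, b in combinations(adj[v], 2) if b in adj[a])
        return 2.0 * links / (k * (k - 1))

    c = [local_clustering(v) for v in range(deg.size)]
    ```

    Averaging c over nodes of equal degree gives an empirical c(k) curve whose fall-off with k can be compared against the power law stated above.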

  8. Dimensional Model for Estimating Factors influencing Childhood Obesity: Path Analysis Based Modeling

    PubMed Central

    Kheirollahpour, Maryam; Shohaimi, Shamarina

    2014-01-01

    The main objective of this study is to identify and develop a comprehensive model which estimates and evaluates the overall relations among the factors that lead to weight gain in children by using structural equation modeling. The proposed models in this study explore the connection among the socioeconomic status of the family, parental feeding practice, and physical activity. Six structural models were tested to identify the direct and indirect relationships among socioeconomic status, parental feeding practice, general level of physical activity, and weight status of children. Finally, a comprehensive model was devised to show how these factors relate to each other as well as to the body mass index (BMI) of the children simultaneously. Concerning the methodology of the current study, confirmatory factor analysis (CFA) was applied to reveal the hidden (secondary) effect of socioeconomic factors on feeding practice and ultimately on the weight status of the children and also to determine the degree of model fit. The comprehensive structural model tested in this study suggested that there are significant direct and indirect relationships among the variables of interest. Moreover, the results suggest that parental feeding practice and physical activity are mediators in the structural model. PMID:25097878

  9. Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models.

    PubMed

    Daunizeau, J; Friston, K J; Kiebel, S J

    2009-11-01

    In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high-dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.

  10. Asymmetric author-topic model for knowledge discovering of big data in toxicogenomics.

    PubMed

    Chung, Ming-Hua; Wang, Yuping; Tang, Hailin; Zou, Wen; Basinger, John; Xu, Xiaowei; Tong, Weida

    2015-01-01

    The advancement of high-throughput screening technologies facilitates the generation of massive amounts of biological data, a big data phenomenon in biomedical science. Yet researchers still heavily rely on keyword search and/or literature review to navigate the databases, and analyses are often done at rather small scale. As a result, the rich information of a database has not been fully utilized, particularly the information embedded in the interactive nature between data points, which is largely ignored and buried. For the past 10 years, probabilistic topic modeling has been recognized as an effective machine learning algorithm to annotate the hidden thematic structure of massive collections of documents. The analogy between a text corpus and large-scale genomic data enables the application of text mining tools, like probabilistic topic models, to explore hidden patterns in genomic data and, by extension, altered biological functions. In this paper, we developed a generalized probabilistic topic model to analyze a toxicogenomics dataset consisting of a large number of gene expression profiles from rat livers treated with drugs at multiple doses and time points. We discovered hidden patterns in gene expression associated with the effects of dose and time point of treatment. Finally, we illustrated the ability of our model to identify evidence of potential reduction of animal use.

  11. Adiabatic density perturbations and matter generation from the minimal supersymmetric standard model.

    PubMed

    Enqvist, Kari; Kasuya, Shinta; Mazumdar, Anupam

    2003-03-07

    We propose that the inflaton is coupled to ordinary matter only gravitationally and that it decays into a completely hidden sector. In this scenario both baryonic and dark matter originate from the decay of a flat direction of the minimal supersymmetric standard model, which is shown to generate the desired adiabatic perturbation spectrum via the curvaton mechanism. The requirement that the energy density along the flat direction dominates over the inflaton decay products fixes the flat direction almost uniquely. The present residual energy density in the hidden sector is typically shown to be small.

  12. Mixture Hidden Markov Models in Finance Research

    NASA Astrophysics Data System (ADS)

    Dias, José G.; Vermunt, Jeroen K.; Ramos, Sofia

    Finite mixture models have proven to be a powerful framework whenever unobserved heterogeneity cannot be ignored. We introduce into finance research the Mixture Hidden Markov Model (MHMM), which takes time and space heterogeneity into account simultaneously. This approach is flexible in the sense that it can deal with the specific features of financial time series data, such as asymmetry, kurtosis, and unobserved heterogeneity. The methodology is applied to simultaneously model 12 time series of Asian stock market indexes. Because we selected a heterogeneous sample of countries including both developed and emerging countries, we expected that heterogeneity in market returns due to country idiosyncrasies would show up in the results. The best fitting model was the one with two clusters at the country level, with different dynamics between the two regimes.

  13. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.

    Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples--presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.

  14. Informatic analysis for hidden pulse attack exploiting spectral characteristics of optics in plug-and-play quantum key distribution system

    NASA Astrophysics Data System (ADS)

    Ko, Heasin; Lim, Kyongchun; Oh, Junsang; Rhee, June-Koo Kevin

    2016-10-01

    Quantum channel loopholes due to imperfect implementations of practical devices expose quantum key distribution (QKD) systems to potential eavesdropping attacks. Even though QKD systems are implemented with optical devices that are highly selective in their spectral characteristics, an information-theoretic analysis of a pertinent attack strategy exploiting this selectivity has never been clarified. This paper proposes a new type of Trojan horse attack, called the hidden pulse attack, that can be applied to a plug-and-play QKD system, using general and optimal attack strategies that can extract quantum information from the phase-disturbed quantum states of the eavesdropper's hidden pulses. The attack exploits the spectral characteristics of a photodiode used in a plug-and-play QKD system in order to probe the modulation states of photon qubits. We analyze the security performance of the decoy-state BB84 QKD system under the optimal hidden pulse attack model, which shows enormous performance degradation in terms of both secret key rate and transmission distance.

  15. Development of a brain MRI-based hidden Markov model for dementia recognition

    PubMed Central

    2013-01-01

    Background Dementia is an age-related cognitive decline indicated by early degeneration of cortical and sub-cortical structures. Characterizing those morphological changes can help us understand the disease development and contribute to early disease prediction and prevention. But a model that best captures brain structural variability while remaining valid for both disease classification and interpretation is extremely challenging to build. The current study aimed to establish a computational approach for modeling the magnetic resonance imaging (MRI)-based structural complexity of the brain using the framework of hidden Markov models (HMMs) for dementia recognition. Methods Regularity dimension and semi-variogram were used to extract structural features of the brains, and vector quantization (VQ) was applied to convert the extracted feature vectors to prototype vectors. The output VQ indices were then utilized to estimate parameters for HMMs. To validate accuracy and robustness, experiments were carried out on individuals characterized as non-demented or as having mild Alzheimer's disease. Four HMMs were constructed based on cohorts of non-demented young, middle-aged and elder subjects and demented elder subjects, separately. Classification was carried out using a data set including both non-demented and demented individuals over a wide age range. Results The proposed HMMs succeeded in recognizing individuals with mild Alzheimer's disease and achieved a better classification accuracy than related works using different classifiers. The results show the ability of the proposed modeling to recognize early dementia. Conclusion The findings from this research will allow individual classification to support the early diagnosis and prediction of dementia. 
By using the brain MRI-based HMMs developed in our proposed research, the approach will be more efficient and robust, and can easily be used by clinicians as a computer-aided tool for validating imaging biomarkers for early prediction of dementia. PMID:24564961

  16. Semantic Context Detection Using Audio Event Fusion

    NASA Astrophysics Data System (ADS)

    Chu, Wei-Ta; Cheng, Wen-Huang; Wu, Ja-Ling

    2006-12-01

    Semantic-level content analysis is a crucial issue in achieving efficient content retrieval and management. We propose a hierarchical approach that models audio events over a time series in order to accomplish semantic context detection. Two levels of modeling, audio event and semantic context modeling, are devised to bridge the gap between physical audio features and semantic concepts. In this work, hidden Markov models (HMMs) are used to model four representative audio events, that is, gunshot, explosion, engine, and car braking, in action movies. At the semantic context level, generative (ergodic hidden Markov model) and discriminative (support vector machine (SVM)) approaches are investigated to fuse the characteristics and correlations among audio events, which provide cues for detecting gunplay and car-chasing scenes. The experimental results demonstrate the effectiveness of the proposed approaches and provide a preliminary framework for information mining by using audio characteristics.

  17. A hidden Markov model approach to neuron firing patterns.

    PubMed

    Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G

    1996-11-01

    Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing.
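
    The likelihood computation underlying such a fit can be sketched with a scaled forward algorithm for an HMM whose hidden states emit exponentially distributed interspike intervals; the state count, rates and transition matrix below are illustrative, not the paper's estimates:

    ```python
    import numpy as np

    def hmm_loglik(intervals, A, pi, rates):
        """Scaled forward algorithm; each hidden state emits intervals
        from an exponential density with its own rate."""
        alpha = pi * rates * np.exp(-intervals[0] * rates)
        ll = np.log(alpha.sum())
        alpha /= alpha.sum()
        for x in intervals[1:]:
            alpha = (alpha @ A) * rates * np.exp(-x * rates)
            ll += np.log(alpha.sum())
            alpha /= alpha.sum()
        return ll

    # alternating short/long intervals: a switching 2-state model should
    # explain them far better than a single-rate (1-state) model
    intervals = np.array([0.1, 2.0] * 20)
    A = np.array([[0.1, 0.9], [0.9, 0.1]])   # strong state switching
    pi = np.array([0.5, 0.5])
    ll_two = hmm_loglik(intervals, A, pi, np.array([10.0, 0.5]))
    ll_one = hmm_loglik(intervals, np.ones((1, 1)), np.ones(1), np.ones(1))
    ```

    Maximizing this log-likelihood over the number of states, the per-state densities and the transition probabilities is what yields the state estimates described in the abstract.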

  18. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

    Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model, such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters and therefore the number of model evaluations needed for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100). 
Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km². This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for the different model output variables. The number of parameters is reduced substantially, to approximately 25, for each of the three model outputs. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, e.g., are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
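
    The Elementary Effects idea behind such a screening can be sketched on a toy objective (not NOAH-MP): perturb one parameter at a time at random base points and rank parameters by the mean absolute effect; uninformative parameters score near zero and can be dropped before a full Sobol analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def elementary_effects(f, n_params, n_points=50, delta=0.1):
        """Morris-style screening: mean |EE| per parameter over random
        base points in the unit hypercube flags the informative ones."""
        ee = np.zeros((n_points, n_params))
        for t in range(n_points):
            x = rng.uniform(0, 1 - delta, size=n_params)
            fx = f(x)
            for i in range(n_params):
                xp = x.copy()
                xp[i] += delta                      # one-at-a-time perturbation
                ee[t, i] = (f(xp) - fx) / delta
        return np.abs(ee).mean(axis=0)

    # toy "model": only the first two of five parameters matter
    f = lambda x: 3 * x[0] + 2 * x[1] ** 2
    mu = elementary_effects(f, 5)
    ```

    On this toy objective the first two parameters receive large mean effects while the remaining three score exactly zero, which is the screening behaviour exploited above to shrink the parameter set before calibration.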

  19. Religious Beliefs: A Hidden Variable in the Performance of Science Teachers in the Classroom

    ERIC Educational Resources Information Center

    Mansour, Nasser

    2008-01-01

    This article focuses on some of the challenges of teaching science in a culture where science and religion sometimes appear to be or are set at odds with each other. Apparent conflicts between scholarly claims and religious claims are not limited to science, however--they occur in almost every subject. Many topics included in science education are…

  20. Non-local boxes and their implementation in Minecraft

    NASA Astrophysics Data System (ADS)

    Simnacher, Timo Yannick

    PR-boxes are binary devices connecting two remote parties that satisfy (x AND y) = a + b mod 2, where x and y denote the binary inputs and a and b the respective outcomes, without signaling. These devices are named after their inventors, Sandu Popescu and Daniel Rohrlich, and saturate the Clauser-Horne-Shimony-Holt (CHSH) inequality. This Bell-like inequality bounds the correlation that can exist between two remote, non-signaling, classical systems described by local hidden variable theories. Experiments have now convincingly shown that quantum entanglement cannot be explained by local hidden variable theories. Furthermore, the CHSH inequality provides a method to distinguish quantum systems from super-quantum correlations: the correlation between the outputs of the PR-box goes beyond that of any quantum entanglement. Though PR-boxes would have impressive consequences, as far as we know they are not physically realizable. However, by introducing PR-boxes to Minecraft as part of the redstone system, which simulates electrical components for binary computing, we can experience the consequences of super-quantum correlations. For instance, Wim van Dam proved that two parties can use a sufficient number of PR-boxes to compute any Boolean function f(x,y) with only one bit of communication.
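
    The algebra here is easy to verify directly. Taking a as a shared fair coin and b = a XOR (x AND y) realizes the PR-box condition exactly, and the CHSH combination of correlators reaches its algebraic maximum of 4 (the local hidden variable bound is 2, the quantum bound 2√2 ≈ 2.83):

    ```python
    def pr_box_correlator(x, y):
        """E[(-1)^(a+b)] for the PR box: a is a fair coin, b = a XOR (x AND y)."""
        total = 0.0
        for a in (0, 1):                       # average over the shared fair coin
            b = a ^ (x & y)                    # PR-box output condition
            total += 0.5 * ((-1) ** (a + b))
        return total

    # CHSH combination: E(0,0) + E(0,1) + E(1,0) - E(1,1)
    chsh = (pr_box_correlator(0, 0) + pr_box_correlator(0, 1)
            + pr_box_correlator(1, 0) - pr_box_correlator(1, 1))
    ```

    Note also that each party's marginal output is a fair coin regardless of the other's input, which is the no-signaling property the text emphasizes.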

  1. Exponential gain of randomness certified by quantum contextuality

    NASA Astrophysics Data System (ADS)

    Um, Mark; Zhang, Junhua; Wang, Ye; Wang, Pengfei; Kim, Kihwan

    2017-04-01

    We demonstrate a protocol for exponential gain of randomness certified by quantum contextuality in a trapped-ion system. Genuine randomness can be produced by quantum principles and certified by quantum inequalities. Recently, randomness expansion protocols based on Bell-type inequalities and the Kochen-Specker (KS) theorem have been demonstrated. These schemes have been theoretically refined to exponentially expand the randomness and to amplify randomness from a weak initial random seed. Here, we report experimental evidence of such exponential expansion of randomness. In the experiment, we use three states of a 138Ba+ ion: a ground state and two quadrupole states. In the 138Ba+ ion system, there is no detection loophole, and we apply a method to rule out certain hidden variable models that obey a kind of extended noncontextuality.

  2. 3.55 keV line from exciting dark matter without a hidden sector

    DOE PAGES

    Berlin, Asher; DiFranzo, Anthony; Hooper, Dan

    2015-04-24

    In this study, models in which dark matter particles can scatter into a slightly heavier state which promptly decays to the lighter state and a photon (known as eXciting Dark Matter, or XDM) have been shown to be capable of generating the 3.55 keV line observed from galaxy clusters, while suppressing the flux of such a line from smaller halos, including dwarf galaxies. In most of the XDM models discussed in the literature, this up-scattering is mediated by a new light particle, and dark matter annihilations proceed into pairs of this same light state. In these models, the dark matter and the mediator effectively reside within a hidden sector, without sizable couplings to the Standard Model. In this paper, we explore a model of XDM that does not include a hidden sector. Instead, the dark matter both up-scatters and annihilates through the near-resonant exchange of an O(10^2) GeV pseudoscalar with large Yukawa couplings to the dark matter and smaller, but non-negligible, couplings to Standard Model fermions. The dark matter and the mediator are each mixtures of Standard Model singlets and SU(2)_W doublets. We identify parameter space in which this model can simultaneously generate the 3.55 keV line and the gamma-ray excess observed from the Galactic center, without conflicting with constraints from colliders, direct detection experiments, or observations of dwarf galaxies.

  3. The hidden life of integrative and conjugative elements

    PubMed Central

    Delavat, François; Miyazaki, Ryo; Carraro, Nicolas; Pradervand, Nicolas

    2017-01-01

    Abstract Integrative and conjugative elements (ICEs) are widespread mobile DNA that transmit both vertically, in a host-integrated state, and horizontally, through excision and transfer to new recipients. Different families of ICEs have been discovered with more or less restricted host ranges, which operate by similar mechanisms but differ in regulatory networks, evolutionary origin and the types of variable genes they contribute to the host. Based on a review of recent experimental data, we propose a general model of the ICE life style that explains the transition between vertical and horizontal transmission as the result of a bistable decision in the ICE–host partnership. In the large majority of cells the ICE remains silent and integrated, but at low to very low frequencies specialized host cells appear in the population in which the ICE starts its process of horizontal transmission. This bistable process leads to host cell differentiation, ICE excision and transfer, when suitable recipients are present. The ratio of ICE bistability (i.e. the ratio of horizontal to vertical transmission) is the outcome of a balance between the fitness costs imposed by the ICE horizontal transmission process on the host cell, and selection for ICE distribution (i.e. ICE ‘fitness’). From this emerges a picture of ICEs as elements that have adapted to a mostly confined life style within their host, but with a very effective and dynamic transfer from a subpopulation of dedicated cells. PMID:28369623

  4. Stylistic gait synthesis based on hidden Markov models

    NASA Astrophysics Data System (ADS)

    Tilmanne, Joëlle; Moinet, Alexis; Dutoit, Thierry

    2012-12-01

    In this work we present an expressive gait synthesis system based on hidden Markov models (HMMs), following and modifying a procedure originally developed for speaking style adaptation, in speech synthesis. A large database of neutral motion capture walk sequences was used to train an HMM of average walk. The model was then used for automatic adaptation to a particular style of walk using only a small amount of training data from the target style. The open source toolkit that we adapted for motion modeling also enabled us to take into account the dynamics of the data and to model accurately the duration of each HMM state. We also address the assessment issue and propose a procedure for qualitative user evaluation of the synthesized sequences. Our tests show that the style of these sequences can easily be recognized and look natural to the evaluators.

  5. Statistical learning and adaptive decision-making underlie human response time variability in inhibitory control.

    PubMed

    Ma, Ning; Yu, Angela J

    2015-01-01

    Response time (RT) is an oft-reported behavioral measure in psychological and neurocognitive experiments, but the high level of observed trial-to-trial variability in this measure has often limited its usefulness. Here, we combine computational modeling and psychophysics to examine the hypothesis that fluctuations in this noisy measure reflect dynamic computations in human statistical learning and corresponding cognitive adjustments. We present data from the stop-signal task (SST), in which subjects respond to a go stimulus on each trial, unless instructed not to by a subsequent, infrequently presented stop signal. We model across-trial learning of stop signal frequency, P(stop), and stop-signal onset time, SSD (stop-signal delay), with a Bayesian hidden Markov model, and within-trial decision-making with an optimal stochastic control model. The combined model predicts that RT should increase with both expected P(stop) and SSD. The human behavioral data (n = 20) bear out this prediction, showing P(stop) and SSD both to be significant, independent predictors of RT, with P(stop) being a more prominent predictor in 75% of the subjects, and SSD being more prominent in the remaining 25%. The results demonstrate that humans indeed readily internalize environmental statistics and adjust their cognitive/behavioral strategy accordingly, and that subtle patterns in RT variability can serve as a valuable tool for validating models of statistical learning and decision-making. More broadly, the modeling tools presented in this work can be generalized to a large body of behavioral paradigms, in order to extract insights about cognitive and neural processing from apparently quite noisy behavioral measures. We also discuss how this behaviorally validated model can then be used to conduct model-based analysis of neural data, in order to help identify specific brain areas for representing and encoding key computational quantities in learning and decision-making.
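
    The paper's across-trial learner is a Bayesian hidden Markov model; as a stripped-down stationary version of the same idea, a Beta-Bernoulli update tracks the belief about P(stop) trial by trial (the trial sequence below is illustrative, not the experiment's):

    ```python
    # running posterior over stop-signal frequency P(stop), Beta-Bernoulli
    a, b = 1.0, 1.0                      # uniform Beta(1, 1) prior
    trials = [0, 0, 1, 0, 1, 1, 0, 1]    # 1 = stop trial, 0 = go trial
    estimates = []
    for s in trials:
        estimates.append(a / (a + b))    # predicted P(stop) before the trial
        a, b = a + s, b + (1 - s)        # posterior update after observing s
    ```

    In the model above, a higher running estimate of P(stop) predicts a slower go response on the next trial; the paper's full version additionally lets P(stop) drift across trials via hidden-Markov dynamics, which this stationary sketch omits.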

  6. Normality of raw data in general linear models: The most widespread myth in statistics

    USGS Publications Warehouse

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
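
    The point is easy to demonstrate by simulation. In the sketch below (a hypothetical two-group ANOVA design, not one of the paper's examples), the raw response is strongly bimodal and would fail any normality test, while the residuals about the group means are ordinary standard-normal noise.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two-group design with well-separated means: the raw response is
# bimodal (non-normal), but the residuals are plain N(0, 1) noise.
group = np.repeat([0, 1], 500)
y = 10.0 * group + rng.normal(0.0, 1.0, size=1000)

group_means = np.where(group == 0, y[group == 0].mean(), y[group == 1].mean())
resid = y - group_means

# Excess kurtosis: strongly negative for the bimodal raw data,
# near zero for the normal residuals.
raw_kurtosis = stats.kurtosis(y)
resid_kurtosis = stats.kurtosis(resid)
```

    Testing normality on `y` here would wrongly push an analyst toward nonparametric methods or transformations, even though a t test on the group difference is perfectly valid.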

  7. Feedforward neural network model estimating pollutant removal process within mesophilic upflow anaerobic sludge blanket bioreactor treating industrial starch processing wastewater.

    PubMed

    Antwi, Philip; Li, Jianzheng; Meng, Jia; Deng, Kaiwen; Koblah Quashie, Frank; Li, Jiuling; Opoku Boadi, Portia

    2018-06-01

    In this study, a three-layered feedforward-backpropagation artificial neural network (BPANN) model was developed and employed to evaluate COD removal in an upflow anaerobic sludge blanket (UASB) reactor treating industrial starch processing wastewater. At the end of UASB operation, microbial community characterization revealed a satisfactory composition of microbes, whose morphology was that of rod-shaped archaea. pH, COD, NH4+, VFA, OLR and biogas yield were selected by principal component analysis and used as input variables. The tangent sigmoid function (tansig) and linear function (purelin) were assigned as activation functions at the hidden layer and output layer, respectively, and the optimum BPANN architecture was achieved with the Levenberg-Marquardt algorithm (trainlm) after eleven training algorithms had been tested. Based on performance indicators such as the mean squared error, fractional variance, index of agreement and coefficient of determination (R²), the BPANN model demonstrated significant performance, with R² reaching 87%. The study revealed that control and optimization of an anaerobic digestion process with a BPANN model is feasible. Copyright © 2018 Elsevier Ltd. All rights reserved.
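
    The layer structure described above (tansig hidden units, purelin output) corresponds to a standard forward pass. The sketch below uses random placeholder weights and an assumed hidden-layer size, since the study's trained parameters are not reproduced here.

```python
import numpy as np

# Forward pass of the three-layered BPANN described above: tansig
# (hyperbolic tangent) activations at the hidden layer, purelin
# (identity) at the output. The hidden-layer size and the random
# weights are placeholders, not the trained values from the study.
rng = np.random.default_rng(0)
n_in, n_hidden = 6, 8  # six PCA-selected inputs; hidden size assumed

W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(1, n_hidden)), np.zeros(1)

def bpann_forward(x):
    h = np.tanh(W1 @ x + b1)  # tansig hidden layer
    return W2 @ h + b2        # purelin output: predicted COD removal

y = bpann_forward(np.ones(n_in))
```

    Training with Levenberg-Marquardt then amounts to fitting W1, b1, W2, b2 to minimize squared error on the observed removal data.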

  8. Hyperspectral material identification on radiance data using single-atmosphere or multiple-atmosphere modeling

    NASA Astrophysics Data System (ADS)

    Mariano, Adrian V.; Grossmann, John M.

    2010-11-01

    Reflectance-domain methods convert hyperspectral data from radiance to reflectance using an atmospheric compensation model. Material detection and identification are performed by comparing the compensated data to target reflectance spectra. We introduce two radiance-domain approaches, Single atmosphere Adaptive Cosine Estimator (SACE) and Multiple atmosphere ACE (MACE) in which the target reflectance spectra are instead converted into sensor-reaching radiance using physics-based models. For SACE, known illumination and atmospheric conditions are incorporated in a single atmospheric model. For MACE the conditions are unknown so the algorithm uses many atmospheric models to cover the range of environmental variability, and it approximates the result using a subspace model. This approach is sometimes called the invariant method, and requires the choice of a subspace dimension for the model. We compare these two radiance-domain approaches to a Reflectance-domain ACE (RACE) approach on a HYDICE image featuring concealed materials. All three algorithms use the ACE detector, and all three techniques are able to detect most of the hidden materials in the imagery. For MACE we observe a strong dependence on the choice of the material subspace dimension. Increasing this value can lead to a decline in performance.
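
    All three variants score pixels with the same ACE statistic; they differ only in the domain in which the target spectra are expressed. A minimal sketch of the detector (background mean and inverse covariance assumed known; variable names are mine) is:

```python
import numpy as np

def ace(x, s, mu, cov_inv):
    """Adaptive Cosine Estimator score for pixel x against target spectrum s.
    mu and cov_inv are the background mean and inverse covariance.
    Illustrative sketch of the statistic shared by SACE, MACE, and RACE."""
    xc, sc = x - mu, s - mu
    num = (sc @ cov_inv @ xc) ** 2
    den = (sc @ cov_inv @ sc) * (xc @ cov_inv @ xc)
    return num / den

s = np.array([1.0, 0.0, 0.0])
mu = np.zeros(3)
I = np.eye(3)
aligned = ace(2.0 * s, s, mu, I)               # parallel to target -> 1
orthogonal = ace(np.array([0.0, 1.0, 0.0]), s, mu, I)  # -> 0
```

    The score is the squared cosine between target and pixel in background-whitened coordinates, so it is invariant to pixel scaling, which is what makes it usable in both the radiance and reflectance domains.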

  9. Bayesian networks for maritime traffic accident prevention: benefits and challenges.

    PubMed

    Hänninen, Maria

    2014-12-01

    Bayesian networks are quantitative modeling tools whose applications to the maritime traffic safety context are becoming more popular. This paper discusses the utilization of Bayesian networks in maritime safety modeling. Based on literature and the author's own experiences, the paper studies what Bayesian networks can offer to maritime accident prevention and safety modeling and discusses a few challenges in their application to this context. It is argued that the capability of representing rather complex, not necessarily causal but uncertain relationships makes Bayesian networks an attractive modeling tool for maritime safety and accidents. Furthermore, as maritime accident and safety data are still rather scarce and have some quality problems, the ability to combine data with expert knowledge, and the ease of updating the model after acquiring more evidence, further enhance their feasibility. However, eliciting the probabilities from maritime experts might be challenging and model validation can be tricky. It is concluded that with the utilization of several data sources, Bayesian updating, dynamic modeling, and hidden nodes for latent variables, Bayesian networks are rather well-suited tools for maritime safety management and decision-making. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Optimization of neural network architecture for classification of radar jamming FM signals

    NASA Astrophysics Data System (ADS)

    Soto, Alberto; Mendoza, Ariadna; Flores, Benjamin C.

    2017-05-01

    The purpose of this study is to investigate several artificial Neural Network (NN) architectures in order to design a cognitive radar system capable of optimally distinguishing linear Frequency-Modulated (FM) signals from bandlimited Additive White Gaussian Noise (AWGN). The goal is to create a theoretical framework to determine an optimal NN architecture to achieve a Probability of Detection (PD) of 95% or higher and a Probability of False Alarm (PFA) of 1.5% or lower at 5 dB Signal to Noise Ratio (SNR). Literature research reveals that frequency-domain power spectral densities characterize a signal more efficiently than their time-domain counterparts. Therefore, the input data is preprocessed by calculating the magnitude square of the Discrete Fourier Transform of the digitally sampled bandlimited AWGN and linear FM signals to populate a matrix containing N number of samples and M number of spectra. This matrix is used as input for the NN, and the spectra are divided as follows: 70% for training, 15% for validation, and 15% for testing. The study begins by experimentally deducing the optimal number of hidden neurons (1-40 neurons), then the optimal number of hidden layers (1-5 layers), and lastly, the most efficient learning algorithm. The training algorithms examined are: Resilient Backpropagation, Scaled Conjugate Gradient, Conjugate Gradient with Powell/Beale Restarts, Polak-Ribière Conjugate Gradient, and Variable Learning Rate Backpropagation. We determine that an architecture with ten hidden neurons (or higher), one hidden layer, and the Scaled Conjugate Gradient training algorithm constitutes an optimal architecture for our application.
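
    The preprocessing pipeline described above can be sketched as follows; the chirp parameters, snapshot length, and noise level are placeholders, not the study's settings.

```python
import numpy as np

# Sketch of the preprocessing pipeline: magnitude-squared DFT spectra of
# noisy linear-FM (chirp) snapshots and of AWGN-only snapshots, stacked
# into an M x N matrix and split 70/15/15. Chirp parameters, snapshot
# length, and noise level are placeholders, not the study's settings.
rng = np.random.default_rng(0)
fs, n = 1000.0, 256
t = np.arange(n) / fs

def power_spectrum(x):
    return np.abs(np.fft.fft(x)) ** 2  # |DFT|^2 of one snapshot

def linear_fm(f0, f1):
    # Chirp sweeping from f0 to f1 Hz over the snapshot duration.
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t ** 2)
    return np.cos(phase)

M = 20  # number of spectra (rows)
X = np.vstack(
    [power_spectrum(linear_fm(50, 200) + 0.5 * rng.normal(size=n)) for _ in range(M // 2)]
    + [power_spectrum(rng.normal(size=n)) for _ in range(M // 2)]
)

i1, i2 = int(0.7 * M), int(0.85 * M)  # 70% train, 15% validation, 15% test
train_set, val_set, test_set = X[:i1], X[i1:i2], X[i2:]
```

    Each row of `X` is one labeled spectrum; the NN then learns to separate the chirp rows (energy concentrated along the sweep band) from the flat AWGN rows.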

  11. Hidden U(1) gauge symmetry realizing a neutrinophilic two-Higgs-doublet model with dark matter

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2018-04-01

    We propose a neutrinophilic two-Higgs-doublet model with hidden local U(1) symmetry, where active neutrinos are Dirac type, and a fermionic dark matter (DM) candidate is naturally induced as a result of remnant symmetry even after the spontaneous symmetry breaking. In addition, a physical Goldstone boson arises as a consequence of two types of gauge singlet bosons and contributes to the DM phenomenologies as well as an additional neutral gauge boson. Then, we analyze the relic density of DM within the safe range of direct detection searches and show the allowed region of dark matter mass.

  12. Memetic Approaches for Optimizing Hidden Markov Models: A Case Study in Time Series Prediction

    NASA Astrophysics Data System (ADS)

    Bui, Lam Thu; Barlow, Michael

    We propose a methodology for employing memetics (local search) within the framework of evolutionary algorithms to optimize parameters of hidden Markov models. With this proposal, the rate and frequency of using local search are automatically changed over time either at a population or individual level. At the population level, we allow the rate of using local search to decay over time to zero (at the final generation). At the individual level, each individual is equipped with information of when it will do local search and for how long. This information evolves over time alongside the main elements of the chromosome representing the individual.
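
    The population-level schedule can be sketched as a simple decay rule; the linear shape and the starting rate `p0` are assumptions, since the abstract only specifies decay to zero at the final generation.

```python
# Population-level schedule: probability of applying local search to an
# individual, decaying to zero at the final generation. The linear shape
# and initial rate p0 are illustrative assumptions.
def local_search_rate(generation, max_generation, p0=0.5):
    return p0 * (1.0 - generation / max_generation)

rates = [local_search_rate(g, 100) for g in (0, 50, 100)]
```

    At the individual level, the analogous idea would be to carry the "when" and "how long" of local search as extra genes that evolve with the HMM parameters themselves.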

  13. A model for metastable magnetism in the hidden-order phase of URu2Si2

    NASA Astrophysics Data System (ADS)

    Boyer, Lance; Yakovenko, Victor M.

    2018-01-01

    We propose an explanation for the experiment by Schemm et al. (2015) where the polar Kerr effect (PKE), indicating time-reversal symmetry (TRS) breaking, was observed in the hidden-order (HO) phase of URu2Si2. The PKE signal on warmup was seen only if a training magnetic field was present on cool-down. Using a Ginzburg-Landau model for a complex order parameter, we show that the system can have a metastable ferromagnetic state producing the PKE, even if the HO ground state respects TRS. We predict that a strong reversed magnetic field should reset the PKE to zero.

  14. A radiative neutrino mass model in light of DAMPE excess with hidden gauged U(1) symmetry

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi; Wu, Peiwen

    2018-05-01

    We propose a one-loop induced neutrino mass model with hidden U(1) gauge symmetry, in which we successfully involve a bosonic dark matter (DM) candidate propagating inside a loop diagram in neutrino mass generation to explain the e⁺e⁻ excess recently reported by the DArk Matter Particle Explorer (DAMPE) experiment. In our scenario, dark matter annihilates into four leptons through the Z' boson, as DM DM → Z' Z' (Z' → l⁺l⁻), where the Z' decays into leptons via a one-loop effect. We then investigate the branching ratios of the Z', taking into account lepton flavor violation and neutrino oscillation data.

  15. Curvature and temperature of complex networks.

    PubMed

    Krioukov, Dmitri; Papadopoulos, Fragkiskos; Vahdat, Amin; Boguñá, Marián

    2009-09-01

    We show that heterogeneous degree distributions in observed scale-free topologies of complex networks can emerge as a consequence of the exponential expansion of hidden hyperbolic space. Fermi-Dirac statistics provides a physical interpretation of hyperbolic distances as energies of links. The hidden space curvature affects the heterogeneity of the degree distribution, while clustering is a function of temperature. We embed the internet into the hyperbolic plane and find a remarkable congruency between the embedding and our hyperbolic model. Besides proving our model realistic, this embedding may be used for routing with only local information, which holds significant promise for improving the performance of internet routing.
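
    The model's two ingredients, hyperbolic distance acting as an "energy" and a Fermi-Dirac connection probability, can be sketched directly. The polar-coordinate distance formula and the role of the parameters `mu_R` (a distance cutoff playing the role of a chemical potential) and `temperature` follow the usual conventions for this class of models and are assumptions here, not quotations from the paper.

```python
import math

# Sketch: points in the hyperbolic plane at polar coordinates (r, theta);
# the link probability takes the Fermi-Dirac form, with hyperbolic
# distance playing the role of energy and mu_R a chemical potential.
def hyperbolic_distance(r1, t1, r2, t2):
    dtheta = math.pi - abs(math.pi - abs(t1 - t2))  # angle along the circle
    x = (math.cosh(r1) * math.cosh(r2)
         - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(x, 1.0))  # guard against rounding below 1

def connection_prob(d, mu_R, temperature):
    return 1.0 / (1.0 + math.exp((d - mu_R) / (2.0 * temperature)))
```

    In this picture, raising the temperature softens the step in the connection probability, which lowers clustering, while the curvature enters through the radial coordinates and shapes the degree distribution.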

  16. Self-Organizing Hidden Markov Model Map (SOHMMM): Biological Sequence Clustering and Cluster Visualization.

    PubMed

    Ferles, Christos; Beaufort, William-Scott; Ferle, Vanessa

    2017-01-01

    The present study devises mapping methodologies and projection techniques that visualize and demonstrate biological sequence data clustering results. The Sequence Data Density Display (SDDD) and Sequence Likelihood Projection (SLP) visualizations represent the input symbolical sequences in a lower-dimensional space in such a way that the clusters and relations of data elements are depicted graphically. Both operate in combination/synergy with the Self-Organizing Hidden Markov Model Map (SOHMMM). The resulting unified framework can analyze raw sequence data automatically and directly. This analysis is carried out with little, or even a complete absence of, prior information/domain knowledge.

  17. Dark matter freeze-out in a nonrelativistic sector

    NASA Astrophysics Data System (ADS)

    Pappadopulo, Duccio; Ruderman, Joshua T.; Trevisan, Gabriele

    2016-08-01

    A thermally decoupled hidden sector of particles, with a mass gap, generically enters a phase of cannibalism in the early Universe. The Standard Model sector becomes exponentially colder than the hidden sector. We propose the cannibal dark matter framework, where dark matter resides in a cannibalizing sector with a relic density set by 2-to-2 annihilations. Observable signals of cannibal dark matter include a boosted rate for indirect detection, new relativistic degrees of freedom, and warm dark matter.

  18. Modeling Multiple Risks: Hidden Domain of Attraction

    DTIC Science & Technology

    2012-01-01

    improve joint tail probability approximation but the deficiency can be remedied by a more general approach which we call hidden domain of attraction (HDA)... HRV is a special case of HDA. If the distribution of X does not have MRV but (1.2) still holds, we may retrieve the MRV setup by transforming the... potential advantage in some circumstances of the notion of HDA is that it does not require that we transform components. Performing such transformations on

  19. Biological engineering applications of feedforward neural networks designed and parameterized by genetic algorithms.

    PubMed

    Ferentinos, Konstantinos P

    2005-09-01

    Two neural network (NN) applications in the field of biological engineering are developed, designed and parameterized by an evolutionary method based on the evolutionary process of genetic algorithms. The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect or 'weak specification' representation was used for the encoding of NN topologies and training parameters into genes of the genetic algorithm (GA). This approach requires some a priori knowledge of the network topology demands of the specific application, so that the infinite search space of the problem is limited to a reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA also encoded the type of activation function in both hidden and output nodes of the NN and the type of minimization algorithm used by the backpropagation algorithm for training the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach that is usually used for these tasks.
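
    A 'weak specification' encoding of this kind can be sketched as a chromosome of discrete design choices. The field names and option lists below are assumptions for illustration, not the paper's exact gene layout.

```python
import random

# Illustrative 'weak specification' chromosome: each gene selects a
# discrete design choice rather than encoding weights directly. Field
# names and option lists are assumptions, not the paper's encoding.
random.seed(3)
GENE_SPACE = {
    "n_hidden_layers": [1, 2],
    "hidden_units": [4, 8, 16, 32],
    "hidden_activation": ["logsig", "tansig"],
    "output_activation": ["purelin", "logsig"],
    "train_algorithm": ["gradient-descent", "conjugate-gradient", "quasi-newton"],
}

def random_chromosome():
    return {k: random.choice(v) for k, v in GENE_SPACE.items()}

def mutate(chrom, rate=0.2):
    # Each gene is resampled from its own option list with probability `rate`.
    return {k: (random.choice(GENE_SPACE[k]) if random.random() < rate else v)
            for k, v in chrom.items()}

parent = random_chromosome()
child = mutate(parent)
```

    Fitness evaluation then trains the NN decoded from each chromosome and scores it on held-out data, so the GA searches over architectures rather than weights.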

  20. An Indoor Pedestrian Positioning Method Using HMM with a Fuzzy Pattern Recognition Algorithm in a WLAN Fingerprint System

    PubMed Central

    Ni, Yepeng; Liu, Jianbo; Liu, Shan; Bai, Yaxin

    2016-01-01

    With the rapid development of smartphones and wireless networks, indoor location-based services have become more and more prevalent. Due to the sophisticated propagation of radio signals, the Received Signal Strength Indicator (RSSI) shows a significant variation during pedestrian walking, which introduces critical errors in deterministic indoor positioning. To solve this problem, we present a novel method to improve the indoor pedestrian positioning accuracy by embedding a fuzzy pattern recognition algorithm into a Hidden Markov Model. The fuzzy pattern recognition algorithm follows the rule that the RSSI fading has a positive correlation to the distance between the measuring point and the AP location even during a dynamic positioning measurement. Through this algorithm, we use the RSSI variation trend to replace the specific RSSI value to achieve a fuzzy positioning. The transition probability of the Hidden Markov Model is trained by the fuzzy pattern recognition algorithm with pedestrian trajectories. Using the Viterbi algorithm with the trained model, we can obtain a set of hidden location states. In our experiments, we demonstrate that, compared with the deterministic pattern matching algorithm, our method can greatly improve the positioning accuracy and shows robust environmental adaptability. PMID:27618053
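
    The decoding step can be sketched with a log-domain Viterbi pass. The two-state numbers below are toy values for illustration, not the trained fingerprint model.

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely hidden location-state path for an observation sequence.
    pi: initial state probabilities, A[i, j]: transition probabilities
    (the quantities trained by the fuzzy pattern recognition step),
    B[s, o]: emission probability of observation o in state s."""
    T = len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, len(pi)), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)   # previous state x next state
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):            # trace back the best path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy two-state example: sticky-ish states with distinct emission profiles.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
path = viterbi(pi, A, B, [0, 0, 1, 1])
```

    In the positioning system, the observations would be fuzzified RSSI trend symbols and the hidden states candidate locations; a sliding window limits the latency of the backtracking step.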

  1. Hidden sector behind the CKM matrix

    NASA Astrophysics Data System (ADS)

    Okawa, Shohei; Omura, Yuji

    2017-08-01

    The small quark mixing, described by the Cabibbo-Kobayashi-Maskawa (CKM) matrix in the standard model, may be a clue to reveal new physics around the TeV scale. We consider a simple scenario in which extra particles in a hidden sector radiatively mediate the flavor violation to the quark sector around the TeV scale and effectively realize the observed CKM matrix. The lightest particle in the hidden sector, whose contribution to the CKM matrix is expected to be dominant, is a good dark matter (DM) candidate. There are many possible setups to describe this scenario, so we investigate some universal predictions of this kind of model, focusing on the contribution of DM to the quark mixing and flavor physics. In this scenario, there is an explicit relation between the CKM matrix and flavor violating couplings, such as four-quark couplings, because both are radiatively induced by the particles in the hidden sector. Then, we can explicitly find the DM mass region and the size of Yukawa couplings between the DM and quarks, based on the study of flavor physics and DM physics. In conclusion, we show that the DM mass in our scenario is around the TeV scale, and the Yukawa couplings are between O(0.01) and O(1). The spin-independent DM scattering cross section is estimated as O(10⁻⁹) pb. An extra colored particle is also predicted at the O(10) TeV scale.

  2. Quasifixed points from scalar sequestering and the little hierarchy problem in supersymmetry

    NASA Astrophysics Data System (ADS)

    Martin, Stephen P.

    2018-02-01

    In supersymmetric models with scalar sequestering, superconformal strong dynamics in the hidden sector suppresses the low-energy couplings of mass dimension 2, compared to the squares of the dimension-1 parameters. Taking into account restrictions on the anomalous dimensions in superconformal theories, I point out that the interplay between the hidden and visible sector renormalizations gives rise to quasifixed point running for the supersymmetric Standard Model squared mass parameters, rather than driving them to 0. The extent to which this dynamics can ameliorate the little hierarchy problem in supersymmetry is studied. Models of this type in which the gaugino masses do not unify are arguably more natural, and are certainly more likely to be accessible, eventually, to the Large Hadron Collider.

  3. Asymmetric dark matter and the hadronic spectra of hidden QCD

    NASA Astrophysics Data System (ADS)

    Lonsdale, Stephen J.; Schroor, Martine; Volkas, Raymond R.

    2017-09-01

    The idea that dark matter may be a composite state of a hidden non-Abelian gauge sector has received great attention in recent years. Frameworks such as asymmetric dark matter motivate the idea that dark matter may have similar mass to the proton, while mirror matter and G×G grand unified theories provide rationales for additional gauge sectors which may have minimal interactions with standard model particles. In this work we explore the hadronic spectra that these dark QCD models can allow. The effects of the number of light colored particles and the value of the confinement scale on the lightest stable state, the dark matter candidate, are examined in the hyperspherical constituent quark model for baryonic and mesonic states.

  4. Hidden Markov Item Response Theory Models for Responses and Response Times.

    PubMed

    Molenaar, Dylan; Oberski, Daniel; Vermunt, Jeroen; De Boeck, Paul

    2016-01-01

    Current approaches to model responses and response times to psychometric tests solely focus on between-subject differences in speed and ability. Within subjects, speed and ability are assumed to be constants. Violations of this assumption are generally absorbed in the residual of the model. As a result, within-subject departures from the between-subject speed and ability level remain undetected. These departures may be of interest to the researcher as they reflect differences in the response processes adopted on the items of a test. In this article, we propose a dynamic approach for responses and response times based on hidden Markov modeling to account for within-subject differences in responses and response times. A simulation study is conducted to demonstrate acceptable parameter recovery and acceptable performance of various fit indices in distinguishing between different models. In addition, both a confirmatory and an exploratory application are presented to demonstrate the practical value of the modeling approach.

  5. A hidden Markov model approach to neuron firing patterns.

    PubMed Central

    Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G

    1996-01-01

    Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing. PMID:8913581

  6. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, which effectively makes determining the zero velocity interval play a key role during normal walking. However, as walking gaits are complicated, and vary from person to person, it is difficult to detect walking gaits with a fixed threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of analyzing the characteristics of the pedestrian walk, a single direction angular rate gyro output is used to classify gait features. The angular rate data are modeled into a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm and then the sliding window Viterbi algorithm is used to decode the gait. Walking data are collected through eight subjects walking along the same route at three different speeds; the leave-one-subject-out cross validation method is conducted to test the model. Experimental results show that the proposed algorithm can accurately detect different walking gaits of zero velocity interval. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% when compared to the angular rate threshold method.

  7. Nonrecurrence and Bell-like inequalities

    NASA Astrophysics Data System (ADS)

    Danforth, Douglas G.

    2017-12-01

    The general class, Λ, of Bell hidden variables is composed of two subclasses ΛR and ΛN such that ΛR ∪ ΛN = Λ and ΛR ∩ ΛN = {}. The class ΛN is very large and contains random variables whose domain is the continuum, the reals. There are uncountably many reals. Every instance of a real random variable is unique: the probability of two instances being equal is exactly zero. ΛN induces sample independence. All correlations are context dependent, but not in the usual sense; there is no "spooky action at a distance". Random variables belonging to ΛN are independent from one experiment to the next. The existence of the class ΛN makes it impossible to derive any of the standard Bell inequalities used to define quantum entanglement.

  8. Inherent characteristics of sawtooth cycles can explain different glacial periodicities

    NASA Astrophysics Data System (ADS)

    Omta, Anne Willem; Kooi, Bob W.; van Voorn, George A. K.; Rickaby, Rosalind E. M.; Follows, Michael J.

    2016-01-01

    At the Mid-Pleistocene Transition about 1 Ma, the dominant periodicity of the glacial-interglacial cycles shifted from 40 to 100 kyr. Here, we use a previously developed mathematical model to investigate the possible dynamical origin of these different periodicities. The model has two variables, one of which exhibits sawtooth oscillations, resembling the glacial-interglacial cycles, whereas the other variable exhibits spikes at the rapid transitions. When applying a sinusoidal forcing with a fixed period, there emerges a rich variety of cycles with different periodicities, each being a multiple of the forcing period. Furthermore, the dominant periodicity of the system can change, while the forcing periodicity remains fixed, due to either random variations or different frequency components of the orbital forcing. Two key relationships stand out as predictions to be tested against observations: (1) the amplitude and the periodicity of the cycles are approximately linearly proportional to each other, a relationship that is also found in the δ¹⁸O temperature proxy. (2) The magnitude of the spikes increases with increasing periodicity and amplitude of the sawtooth. This prediction could be used to identify one or more currently hidden spiking variables driving the glacial-interglacial transitions. Essentially, the quest would be for any proxy record, concurrent with a dynamical model prediction, that exhibits deglacial spikes which increase at times when the amplitude/periodicity of the glacial cycles increases. In the specific context of our calcifier-alkalinity mechanism, the records of interest would be calcifier productivity and calcite accumulation. We believe that such a falsifiable hypothesis should provide a strong motivation for the collection of further records.

  9. Quasi-Supervised Scoring of Human Sleep in Polysomnograms Using Augmented Input Variables

    PubMed Central

    Yaghouby, Farid; Sunderam, Sridhar

    2015-01-01

    The limitations of manual sleep scoring make computerized methods highly desirable. Scoring errors can arise from human rater uncertainty or inter-rater variability. Sleep scoring algorithms either come as supervised classifiers that need scored samples of each state to be trained, or as unsupervised classifiers that use heuristics or structural clues in unscored data to define states. We propose a quasi-supervised classifier that models observations in an unsupervised manner but mimics a human rater wherever training scores are available. EEG, EMG, and EOG features were extracted in 30s epochs from human-scored polysomnograms recorded from 42 healthy human subjects (18 to 79 years) and archived in an anonymized, publicly accessible database. Hypnograms were modified so that: 1. Some states are scored but not others; 2. Samples of all states are scored but not for transitional epochs; and 3. Two raters with 67% agreement are simulated. A framework for quasi-supervised classification was devised in which unsupervised statistical models—specifically Gaussian mixtures and hidden Markov models—are estimated from unlabeled training data, but the training samples are augmented with variables whose values depend on available scores. Classifiers were fitted to signal features incorporating partial scores, and used to predict scores for complete recordings. Performance was assessed using Cohen's K statistic. The quasi-supervised classifier performed significantly better than an unsupervised model and sometimes as well as a completely supervised model despite receiving only partial scores. The quasi-supervised algorithm addresses the need for classifiers that mimic scoring patterns of human raters while compensating for their limitations. PMID:25679475

  10. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    PubMed Central

    Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide

    2017-01-01

    Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889

  11. Smoothing tautologies, hidden dynamics, and sigmoid asymptotics for piecewise smooth systems

    NASA Astrophysics Data System (ADS)

    Jeffrey, Mike R.

    2015-10-01

    Switches in real systems take many forms, such as impacts, electronic relays, mitosis, and the implementation of decisions or control strategies. To understand what is lost, and what can be retained, when we model a switch as an instantaneous event, requires a consideration of so-called hidden terms. These are asymptotically vanishing outside the switch, but can be encoded in the form of nonlinear switching terms. A general expression for the switch can be developed in the form of a series of sigmoid functions. We review the key steps in extending Filippov's method of sliding modes to such systems. We show how even slight nonlinear effects can hugely alter the behaviour of an electronic control circuit, and lead to "hidden" attractors inside the switching surface.
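
    The idea of a term that vanishes asymptotically outside the switch but acts inside it can be sketched with a single sigmoid; the tanh form and the coefficient `k` below are illustrative choices, not the paper's general series expression.

```python
import math

# Sketch of a 'hidden term' in a smoothed switch: the switching
# multiplier tanh(x/eps) plus a nonlinear term (1 - tanh^2(x/eps))
# that vanishes asymptotically for |x| >> eps but reshapes the
# dynamics inside the switching layer. Form and coefficient assumed.
def smoothed_switch(x, eps=1e-2, k=0.8):
    s = math.tanh(x / eps)          # smooth version of sign(x)
    return s + k * (1.0 - s * s)    # hidden term: ~0 outside the layer

outside = smoothed_switch(1.0)   # far from the switch: plain +/-1 behavior
inside = smoothed_switch(0.0)    # on the switching surface: hidden term acts
```

    Any model that records only the outer limits ±1 discards the hidden term entirely, which is how two switching models that agree everywhere outside the surface can disagree about attractors inside it.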

  12. Multistability and hidden attractors in a relay system with hysteresis

    NASA Astrophysics Data System (ADS)

    Zhusubaliyev, Zhanybai T.; Mosekilde, Erik; Rubanov, Vasily G.; Nabokov, Roman A.

    2015-06-01

    For nonlinear dynamic systems with switching control, the concept of a "hidden attractor" naturally applies to a stable dynamic state that either (1) coexists with the stable switching cycle or (2), if the switching cycle is unstable, has a basin of attraction that does not intersect with the neighborhood of that cycle. We show how the equilibrium point of a relay system disappears in a boundary-equilibrium bifurcation as the system enters the region of autonomous switching dynamics and demonstrate experimentally how a relay system can exhibit large amplitude chaotic oscillations at high values of the supply voltage. By investigating a four-dimensional model of the experimental relay system we finally show how a variety of hidden periodic, quasiperiodic and chaotic attractors arise, transform and disappear through different bifurcations.

  13. Smoothing tautologies, hidden dynamics, and sigmoid asymptotics for piecewise smooth systems.

    PubMed

    Jeffrey, Mike R

    2015-10-01

    Switches in real systems take many forms, such as impacts, electronic relays, mitosis, and the implementation of decisions or control strategies. To understand what is lost, and what can be retained, when we model a switch as an instantaneous event, requires a consideration of so-called hidden terms. These are asymptotically vanishing outside the switch, but can be encoded in the form of nonlinear switching terms. A general expression for the switch can be developed in the form of a series of sigmoid functions. We review the key steps in extending Filippov's method of sliding modes to such systems. We show how even slight nonlinear effects can hugely alter the behaviour of an electronic control circuit, and lead to "hidden" attractors inside the switching surface.

  14. Quantum learning of classical stochastic processes: The completely positive realization problem

    NASA Astrophysics Data System (ADS)

    Monràs, Alex; Winter, Andreas

    2016-01-01

    Among the several tasks in Machine Learning, an especially important one is the problem of inferring the latent variables of a system and their causal relations with the observed behavior. A paradigmatic instance of this is the task of inferring the hidden Markov model underlying a given stochastic process. This is known as the positive realization problem (PRP) [L. Benvenuti and L. Farina, IEEE Trans. Autom. Control 49(5), 651-664 (2004)], and constitutes a central problem in machine learning. The PRP and its solutions have far-reaching consequences in many areas of systems and control theory, and are nowadays an important piece of the broad field of positive systems theory. We consider the scenario where the latent variables are quantum (i.e., quantum states of a finite-dimensional system) and the system dynamics is constrained only by physical transformations on the quantum system. The observable dynamics is then described by a quantum instrument, and the task is to determine which quantum instrument — if any — yields the process at hand by iterative application. We take as a starting point the theory of quasi-realizations, whence a description of the dynamics of the process is given in terms of linear maps on state vectors and probabilities are given by linear functionals on the state vectors. This description, despite its remarkable resemblance to the hidden Markov model, or the iterated quantum instrument, is however devoid of any stochastic or quantum mechanical interpretation, as said maps fail to satisfy any positivity conditions. The completely positive realization problem then consists in determining whether an equivalent quantum mechanical description of the same process exists. We generalize some key results of stochastic realization theory, and show that the problem has deep connections with operator systems theory, giving possible insight into the lifting problem in quotient operator systems. Our results have potential applications in quantum machine learning, device-independent characterization and reverse-engineering of stochastic processes and quantum processors, and more generally, of dynamical processes with quantum memory [M. Guţă, Phys. Rev. A 83(6), 062324 (2011); M. Guţă and N. Yamamoto, e-print arXiv:1303.3771 (2013)].

  15. Extended Friedberg-Lee hidden symmetries, quark masses, and CP violation with four generations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bar-Shalom, Shaouly; Oaknin, David; Soni, Amarjit

    2009-07-01

    Motivated in part by the several observed anomalies involving CP asymmetries of B and B_s decays, we consider the standard model with a 4th sequential family (SM4) which seems to offer a rather simple resolution. We initially assume T-invariance by taking the up and down-quark 4x4 mass matrix to be real. Following Friedberg and Lee (FL), we then impose a hidden symmetry on the unobserved (hidden) up and down-quark SU(2) states. The hidden symmetry for four generations ensures the existence of two zero-mass eigenstates, which we take to be the (u,c) and (d,s) states in the up and down-quark sectors, respectively. Then, we simultaneously break T-invariance and the hidden symmetry by introducing two phase factors in each sector. This breaking mechanism generates the small quark masses m_u, m_c and m_d, m_s, which, along with the orientation of the hidden symmetry, determine the size of CP-violation in the SM4. For illustration we choose a specific physical picture for the hidden symmetry and the breaking mechanism that reproduces the observed quark masses, mixing angles and CP-violation, and at the same time allows us to further obtain very interesting relations/predictions for the mixing angles of t and t'. For example, with this choice we get V_td ≈ (V_cb/V_cd - V_ts/V_us) + O(λ²) and V_t'b ≈ V_t'd·(V_cb/V_cd), V_tb' ≈ V_t'd·(V_ts/V_us), implying that V_t'd > V_t'b, V_tb'.
    We furthermore find that the Cabibbo angle is related to the orientation of the hidden symmetry and that the key CP-violating quantity of our model at high energies, J_SM4 ≡ Im(V_tb V_t'b* V_t'b' V_tb'*), which is the high-energy analogue of the Jarlskog invariant of the SM, is proportional to the light-quark masses and the measured Cabibbo-Kobayashi-Maskawa quark-mixing matrix angles: |J_SM4| ≈ A³λ⁵ × (√(m_u/m_t) + √(m_c/m_t') - √(m_d/m_b) + √(m_s/m_b')) ≈ 10⁻⁵, where A ≈ 0.81 and λ = 0.2257 are the Wolfenstein parameters. Other choices for the orientation of the hidden symmetry and/or the breaking mechanism may lead to different physical outcomes. A general solution, obtained numerically, will be presented in a forthcoming paper.

  16. Extended Friedberg-Lee hidden symmetries, quark masses,and CP violation with four generations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bar-Shalom, S.; Soni, A.; Oaknin, D.

    2009-07-16

    Motivated in part by the several observed anomalies involving CP asymmetries of B and B_s decays, we consider the standard model with a 4th sequential family (SM4) which seems to offer a rather simple resolution. We initially assume T-invariance by taking the up and down-quark 4x4 mass matrix to be real. Following Friedberg and Lee (FL), we then impose a hidden symmetry on the unobserved (hidden) up and down-quark SU(2) states. The hidden symmetry for four generations ensures the existence of two zero-mass eigenstates, which we take to be the (u,c) and (d,s) states in the up and down-quark sectors, respectively. Then, we simultaneously break T-invariance and the hidden symmetry by introducing two phase factors in each sector. This breaking mechanism generates the small quark masses m_u, m_c and m_d, m_s, which, along with the orientation of the hidden symmetry, determine the size of CP-violation in the SM4. For illustration we choose a specific physical picture for the hidden symmetry and the breaking mechanism that reproduces the observed quark masses, mixing angles and CP-violation, and at the same time allows us to further obtain very interesting relations/predictions for the mixing angles of t and t'. For example, with this choice we get V_td ≈ (V_cb/V_cd - V_ts/V_us) + O(λ²) and V_t'b ≈ V_t'd·(V_cb/V_cd), V_tb' ≈ V_t'd·(V_ts/V_us), implying that V_t'd > V_t'b, V_tb'.
    We furthermore find that the Cabibbo angle is related to the orientation of the hidden symmetry and that the key CP-violating quantity of our model at high energies, J_SM4 ≡ Im(V_tb V_t'b* V_t'b' V_tb'*), which is the high-energy analogue of the Jarlskog invariant of the SM, is proportional to the light-quark masses and the measured Cabibbo-Kobayashi-Maskawa quark-mixing matrix angles: |J_SM4| ≈ A³λ⁵ × (√(m_u/m_t) + √(m_c/m_t') - √(m_d/m_b) + √(m_s/m_b')) ≈ 10⁻⁵, where A ≈ 0.81 and λ = 0.2257 are the Wolfenstein parameters. Other choices for the orientation of the hidden symmetry and/or the breaking mechanism may lead to different physical outcomes. A general solution, obtained numerically, will be presented in a forthcoming paper.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabello, Adan

    We introduce two two-player quantum pseudotelepathy games based on two recently proposed all-versus-nothing (AVN) proofs of Bell's theorem [A. Cabello, Phys. Rev. Lett. 95, 210401 (2005); Phys. Rev. A 72, 050101(R) (2005)]. These games prove that Broadbent and Methot's claim that these AVN proofs do not rule out local-hidden-variable theories in which it is possible to exchange unlimited information inside the same light cone (quant-ph/0511047) is incorrect.

  18. Repositioned Lives: Language, Ethnicity, and Narrative Identity among Chinese-Vietnamese Community College Students in Los Angeles' San Gabriel Valley.

    ERIC Educational Resources Information Center

    Frank, Russell Alan

    Chinese speakers from Vietnam are a distinctive but hidden ethnolinguistic minority group in the San Gabriel Valley region of Los Angeles. Many variables present barriers to their full participation in society from both the values and norms of dominant American society and non-Chinese co-nationals from Vietnam as well as higher status co-ethnics…

  19. Nonassociativity, supersymmetry, and hidden variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dzhunushaliev, Vladimir

    2008-04-15

    It is shown that supersymmetric quantum mechanics has an octonionic generalization. The generalization is based on the inclusion of quaternions into octonions. The elements of the coset octonions/quaternions are unobservable because, as a consequence of their nonassociativity, they cannot be treated as quantum operators. The idea that the octonionic generalization of supersymmetric quantum mechanics describes an observable particle formed from unobservable "particles" is presented.

  20. Fundamental Study on Quantum Nanojets

    DTIC Science & Technology

    2004-08-01

    Pergamon Press. Bell, J. S. 1966 On the problem of hidden variables in quantum mechanics. Rev. of Modern Phys., 38, 447. Berndl, K., Daumer, M...fluid dynamics based on two quantum mechanical perspectives; Schrödinger’s wave mechanics and quantum fluid dynamics based on Hamilton-Jacobi...References 8 2). Direct Problems a). Quantum fluid dynamics formalism based on Hamilton-Jacobi equation are adapted for the numerical

  1. Quantum Locality?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stapp, Henry

    Robert Griffiths has recently addressed, within the framework of a ‘consistent quantum theory’ (CQT) that he has developed, the issue of whether, as is often claimed, quantum mechanics entails a need for faster-than-light transfers of information over long distances. He argues, on the basis of his examination of certain arguments that claim to demonstrate the existence of such nonlocal influences, that such influences do not exist. However, his examination was restricted mainly to hidden-variable-based arguments that include in their premises some essentially classical-physics-type assumptions that are fundamentally incompatible with the precepts of quantum physics. One cannot logically prove properties of a system by attributing to the system properties alien to that system. Hence Griffiths’ rejection of hidden-variable-based proofs is logically warranted. Griffiths mentions the existence of a certain alternative proof that does not involve hidden variables, and that uses only macroscopically described observable properties. He notes that he had examined in his book proofs of this general kind, and concluded that they provide no evidence for nonlocal influences. But he did not examine the particular proof that he cites. An examination of that particular proof by the method specified by his ‘consistent quantum theory’ shows that the cited proof is valid within that restrictive framework. This necessary existence, within the ‘consistent’ framework, of long range essentially instantaneous influences refutes the claim made by Griffiths that his ‘consistent’ framework is superior to the orthodox quantum theory of von Neumann because it does not entail instantaneous influences. An added section responds to Griffiths’ reply, which cites a litany of ambiguities that seem to restrict, devastatingly, the scope of his CQT formalism, apparently to buttress his claim that my use of that formalism to validate the nonlocality theorem is flawed. But the vagaries that he cites do not upset the proof in question. It is shown here in detail why the precise statement of this theorem justifies the specified application of CQT. It is also shown, in response to his challenge, why a putative proof of locality that he has proposed is not valid.

  2. Hidden and antiferromagnetic order as a rank-5 superspin in URu2Si2

    NASA Astrophysics Data System (ADS)

    Rau, Jeffrey G.; Kee, Hae-Young

    2012-06-01

    We propose a candidate for the hidden order in URu2Si2: a rank-5 E type spin-density wave between uranium 5f crystal-field doublets Γ7(1) and Γ7(2), breaking time-reversal and lattice tetragonal symmetry in a manner consistent with recent torque measurements [Okazaki et al., Science 331, 439 (2011); doi:10.1126/science.1197358]. We argue that coupling of this order parameter to magnetic probes can be hidden by crystal-field effects, while still having significant effects on transport, thermodynamics, and magnetic susceptibilities. In a simple tight-binding model for the heavy quasiparticles, we show the connection between the hidden order and antiferromagnetic phases arises since they form different components of this single rank-5 pseudospin vector. Using a phenomenological theory, we show that the experimental pressure-temperature phase diagram can be qualitatively reproduced by tuning terms which break pseudospin rotational symmetry. As a test of our proposal, we predict the presence of small magnetic moments in the basal plane oriented in the [110] direction ordered at the wave vector (0,0,1).

  3. A latent discriminative model-based approach for classification of imaginary motor tasks from EEG data.

    PubMed

    Saa, Jaime F Delgado; Çetin, Müjdat

    2012-04-01

    We consider the problem of classification of imaginary motor tasks from electroencephalography (EEG) data for brain-computer interfaces (BCIs) and propose a new approach based on hidden conditional random fields (HCRFs). HCRFs are discriminative graphical models that are attractive for this problem because they (1) exploit the temporal structure of EEG; (2) include latent variables that can be used to model different brain states in the signal; and (3) involve learned statistical models matched to the classification task, avoiding some of the limitations of generative models. Our approach involves spatial filtering of the EEG signals and estimation of power spectra based on autoregressive modeling of temporal segments of the EEG signals. Given this time-frequency representation, we select certain frequency bands that are known to be associated with execution of motor tasks. These selected features constitute the data that are fed to the HCRF, parameters of which are learned from training data. Inference algorithms on the HCRFs are used for the classification of motor tasks. We experimentally compare this approach to the best performing methods in BCI competition IV as well as a number of more recent methods and observe that our proposed method yields better classification accuracy.
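
    A minimal sketch of the kind of band-power feature extraction this abstract describes. A plain periodogram stands in here for the autoregressive spectral estimate used in the paper, and the band edges and function name are illustrative assumptions.

    ```python
    import numpy as np

    def band_power_features(eeg, fs, bands=((8.0, 12.0), (18.0, 26.0))):
        """eeg: array (n_channels, n_samples) for one temporal segment.
        Returns one band-power value per (band, channel). A periodogram
        stands in for the autoregressive spectral estimate used in the
        paper; the band edges (mu/beta-like ranges) are illustrative."""
        n = eeg.shape[1]
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        psd = np.abs(np.fft.rfft(eeg, axis=1)) ** 2 / (fs * n)
        feats = [psd[:, (freqs >= lo) & (freqs <= hi)].mean(axis=1)
                 for lo, hi in bands]
        return np.concatenate(feats)
    ```

    Features computed per temporal segment like this form the observation sequence that a latent-variable classifier such as an HCRF would consume.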

  4. Ancestral state reconstruction, rate heterogeneity, and the evolution of reptile viviparity.

    PubMed

    King, Benedict; Lee, Michael S Y

    2015-05-01

    Virtually all models for reconstructing ancestral states for discrete characters make the crucial assumption that the trait of interest evolves at a uniform rate across the entire tree. However, this assumption is unlikely to hold in many situations, particularly as ancestral state reconstructions are being performed on increasingly large phylogenies. Here, we show how failure to account for such variable evolutionary rates can cause highly anomalous (and likely incorrect) results, while three methods that accommodate rate variability yield the opposite, more plausible, and more robust reconstructions. The random local clock method, implemented in BEAST, estimates the position and magnitude of rate changes on the tree; split BiSSE estimates separate rate parameters for pre-specified clades; and the hidden rates model partitions each character state into a number of rate categories. Simulations show the inadequacy of traditional models when characters evolve with both asymmetry (different rates of change between states within a character) and heterotachy (different rates of character evolution across different clades). The importance of accounting for rate heterogeneity in ancestral state reconstruction is highlighted empirically with a new analysis of the evolution of viviparity in squamate reptiles, which reveals a predominance of forward (oviparous-viviparous) transitions and very few reversals.
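
    The asymmetry discussed above (different forward and reverse rates within a character) can be illustrated with the closed-form transition matrix of a two-state continuous-time Markov model; rate-heterogeneous approaches such as the hidden rates model additionally let these rates differ across hidden rate categories. The function and rate values below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def transition_probs(q01, q10, t):
        """Transition matrix of an asymmetric two-state Markov model for a
        binary character (e.g. 0 = oviparous, 1 = viviparous) after branch
        length t. This is the closed form of P(t) = expm(Q*t) for
        Q = [[-q01, q01], [q10, -q10]]; rows index the starting state."""
        s = q01 + q10
        decay = 1.0 - np.exp(-s * t)
        p01 = q01 / s * decay
        p10 = q10 / s * decay
        return np.array([[1.0 - p01, p01],
                         [p10, 1.0 - p10]])
    ```

    With q01 ≠ q10 the stationary probabilities q01/s and q10/s differ, which is exactly the kind of directional bias (many forward transitions, few reversals) that a uniform-rate model can misattribute.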

  5. Deep greedy learning under thermal variability in full diurnal cycles

    NASA Astrophysics Data System (ADS)

    Rauss, Patrick; Rosario, Dalton

    2017-08-01

    We study the generalization and scalability behavior of a deep belief network (DBN) applied to a challenging long-wave infrared hyperspectral dataset, consisting of radiance from several manmade and natural materials within a fixed site located 500 m from an observation tower. The collections cover multiple full diurnal cycles and include different atmospheric conditions. Using complementary priors, a DBN can be trained with a greedy algorithm that learns deep, directed belief networks one layer at a time, with the top two layers forming an undirected associative memory. The greedy algorithm initializes a slower learning procedure, which fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of spectral data and their labels, despite significant data variability between and within classes due to environmental and temperature variation occurring within and between full diurnal cycles. We argue, however, that more questions than answers are raised regarding the generalization capacity of these deep nets through experiments aimed at investigating their training and augmented learning behavior.

  6. Geologic and Seismologic Investigation

    DTIC Science & Technology

    1988-12-01

    Descriptions, Hidden and Buchanan Dams 4 1.6.1 Hidden Dam 4 1.6.2 Buchanan Dam 5 2 TECTONIC SETTING 2.1 General 7 2.2 Cretaceous-Cenozoic Tectonic ...Activity 7 2.2.1 Cretaceous-Paleogene 8 2.2.2 Neogene 9 2.2.3 Late Cenozoic Tectonic Model 9 3 REGIONAL GEOLOGY 3.1 General 11 3.2 Geologic Units 11...detected by the imagery analysis which indicates there has been no tectonic movement from about 100,000 to 400,000 years ago to the present. The field

  7. Probabilistic and Statistical Modeling of Complex Systems Exhibiting Long Range Dependence and Heavy Tails

    DTIC Science & Technology

    2010-07-01

    cluster input can look like a Fractional Brownian motion even in the slow growth regime’’. Advances in Applied Probability, 41(2), 393-427. Yeghiazarian, L... Brownian motion ? Ann. Appl. Probab., 12(1):23–68, 2002. [10] A. Mitra and S.I. Resnick. Hidden domain of attraction: extension of hidden regular variation...variance? A paradox and an explanation’’. Quantitative Finance , 1, 11 pages. Hult, H. and Samorodnitsky, G. (2010) ``Large deviations for point

  8. Solving the "Hidden Line" Problem

    NASA Technical Reports Server (NTRS)

    1984-01-01

    David Hedgley Jr., a mathematician at Dryden Flight Research Center, has developed an accurate computer program that considers whether a line in a graphic model of a three-dimensional object should or should not be visible. The Hidden Line Computer Code automatically removes superfluous lines and permits the computer to display an object from specific viewpoints, just as the human eye would see it. Users include Rowland Institute for Science in Cambridge, MA, several departments of Lockheed Georgia Co., and Nebraska Public Power District (NPPD).

  9. Passive Acoustic Leak Detection for Sodium Cooled Fast Reactors Using Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Marklund, A. Riber; Kishore, S.; Prakash, V.; Rajan, K. K.; Michel, F.

    2016-06-01

    Acoustic leak detection for steam generators of sodium fast reactors has been an active research topic since the early 1970s, and several methods have been tested over the years. Inspired by its success in the field of automatic speech recognition, we here apply hidden Markov models (HMM) in combination with Gaussian mixture models (GMM) to the problem. To achieve this, we propose a new feature calculation scheme based on the temporal evolution of the power spectral density (PSD) of the signal. The proposed method is tested using acoustic signals recorded during steam/water injection experiments done at the Indira Gandhi Centre for Atomic Research (IGCAR). We perform parametric studies on the HMM+GMM model size and demonstrate that the proposed method a) performs well without a priori knowledge of injection noise, b) can incorporate several noise models and c) has an output distribution that simplifies false alarm rate control.
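
    A minimal numpy sketch of a feature scheme based on the temporal evolution of the PSD, in the spirit of the one proposed above; the window length, hop size, and log transform are assumptions, not the paper's exact scheme.

    ```python
    import numpy as np

    def psd_sequence(signal, fs, win=256, hop=128):
        """Temporal evolution of the power spectral density: one windowed
        periodogram per frame, giving the observation sequence that would
        be fed to a GMM/HMM. Window and hop sizes are illustrative."""
        frames = []
        for start in range(0, len(signal) - win + 1, hop):
            seg = signal[start:start + win] * np.hanning(win)
            psd = np.abs(np.fft.rfft(seg)) ** 2 / (fs * win)
            frames.append(np.log(psd + 1e-12))   # log-PSD feature vector
        return np.array(frames)
    ```

    Each row is one observation vector; an HMM over this sequence can then track how the spectrum evolves between background and injection-noise states.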

  10. Failure monitoring in dynamic systems: Model construction without fault training data

    NASA Technical Reports Server (NTRS)

    Smyth, P.; Mellstrom, J.

    1993-01-01

    Advances in the use of autoregressive models, pattern recognition methods, and hidden Markov models for on-line health monitoring of dynamic systems (such as DSN antennas) have recently been reported. However, the algorithms described in previous work have the significant drawback that data acquired under fault conditions are assumed to be available in order to train the model used for monitoring the system under observation. This article reports that this assumption can be relaxed and that hidden Markov monitoring models can be constructed using only data acquired under normal conditions and prior knowledge of the system characteristics being measured. The method is described and evaluated on data from the DSS 13 34-m beam wave guide antenna. The primary conclusion from the experimental results is that the method is indeed practical and holds considerable promise for application at the 70-m antenna sites where acquisition of fault data under controlled conditions is not realistic.

  11. Hidden Markov model-derived structural alphabet for proteins: the learning of protein local shapes captures sequence specificity.

    PubMed

    Camproux, A C; Tufféry, P

    2005-08-05

    Understanding and predicting protein structures depend on the complexity and the accuracy of the models used to represent them. We have recently set up a Hidden Markov Model to optimally compress protein three-dimensional conformations into a one-dimensional series of letters of a structural alphabet. Such a model learns simultaneously the shape of representative structural letters describing the local conformation and the logic of their connections, i.e. the transition matrix between the letters. Here, we move one step further and report some evidence that such a model of protein local architecture also captures some accurate amino acid features. All the letters have specific and distinct amino acid distributions. Moreover, we show that words of amino acids can have significant propensities for some letters. Perspectives point towards the prediction of the series of letters describing the structure of a protein from its amino acid sequence.

  12. A Hybrid of Deep Network and Hidden Markov Model for MCI Identification with Resting-State fMRI.

    PubMed

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2015-10-01

    In this paper, we propose a novel method for modelling functional dynamics in resting-state fMRI (rs-fMRI) for Mild Cognitive Impairment (MCI) identification. Specifically, we devise a hybrid architecture by combining Deep Auto-Encoder (DAE) and Hidden Markov Model (HMM). The roles of DAE and HMM are, respectively, to discover hierarchical non-linear relations among features, by which we transform the original features into a lower dimension space, and to model dynamic characteristics inherent in rs-fMRI, i.e., internal state changes. By building a generative model with HMMs for each class individually, we estimate the data likelihood of a test subject as MCI or normal healthy control, based on which we identify the clinical label. In our experiments, we achieved the maximal accuracy of 81.08% with the proposed method, outperforming state-of-the-art methods in the literature.

  13. A Hybrid of Deep Network and Hidden Markov Model for MCI Identification with Resting-State fMRI

    PubMed Central

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2015-01-01

    In this paper, we propose a novel method for modelling functional dynamics in resting-state fMRI (rs-fMRI) for Mild Cognitive Impairment (MCI) identification. Specifically, we devise a hybrid architecture by combining Deep Auto-Encoder (DAE) and Hidden Markov Model (HMM). The roles of DAE and HMM are, respectively, to discover hierarchical non-linear relations among features, by which we transform the original features into a lower dimension space, and to model dynamic characteristics inherent in rs-fMRI, i.e., internal state changes. By building a generative model with HMMs for each class individually, we estimate the data likelihood of a test subject as MCI or normal healthy control, based on which we identify the clinical label. In our experiments, we achieved the maximal accuracy of 81.08% with the proposed method, outperforming state-of-the-art methods in the literature. PMID:27054199

  14. Grand average ERP-image plotting and statistics: A method for comparing variability in event-related single-trial EEG activities across subjects and conditions

    PubMed Central

    Delorme, Arnaud; Miyakoshi, Makoto; Jung, Tzyy-Ping; Makeig, Scott

    2014-01-01

    With the advent of modern computing methods, modeling trial-to-trial variability in biophysical recordings including electroencephalography (EEG) has become of increasing interest. Yet no widely used method exists for comparing variability in ordered collections of single-trial data epochs across conditions and subjects. We have developed a method based on an ERP-image visualization tool in which potential, spectral power, or some other measure at each time point in a set of event-related single-trial data epochs is represented as a color-coded horizontal line; these lines are then stacked to form a 2-D colored image. Moving-window smoothing across trial epochs can make otherwise hidden event-related features in the data more perceptible. Stacking trials in different orders, for example ordered by subject reaction time, by context-related information such as inter-stimulus interval, or by some other characteristic of the data (e.g., latency-window mean power or phase of some EEG source) can reveal aspects of the multifold complexities of trial-to-trial EEG data variability. This study demonstrates new methods for computing and visualizing grand ERP-image plots across subjects and for performing robust statistical testing on the resulting images. These methods have been implemented and made freely available in the EEGLAB signal-processing environment that we maintain and distribute. PMID:25447029
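
    The core of the ERP-image construction described above (sort trials by some per-trial variable, moving-average across neighbouring trials, display the result as an image) can be sketched in a few lines of numpy; this is a simplified illustration, not the EEGLAB implementation.

    ```python
    import numpy as np

    def erp_image(epochs, order=None, smooth=10):
        """epochs: (n_trials, n_times). Optionally sort trials by a
        per-trial variable (e.g. reaction time), then apply a moving
        average over `smooth` neighbouring trials; the returned array is
        what would be displayed as a colour-coded image."""
        x = epochs if order is None else epochs[np.argsort(order)]
        if smooth > 1:
            # cumulative-sum trick: windowed mean down the trial axis
            c = np.vstack([np.zeros((1, x.shape[1])), np.cumsum(x, axis=0)])
            x = (c[smooth:] - c[:-smooth]) / smooth
        return x
    ```

    The smoothing trades single-trial detail for visibility of slow trends across the trial ordering, which is what makes otherwise hidden event-related structure perceptible.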

  15. Regionalisation of Hydrological Indices to Assess Land-Use Change Impacts in the Tropical Andes

    NASA Astrophysics Data System (ADS)

    Buytaert, W.; Ochoa Tocachi, B. F.

    2014-12-01

    Andean ecosystems are major water sources for cities and communities located in the Tropical Andes; however, there is a considerable lack of knowledge about their hydrology. Two problems are especially important: (i) the lack of monitoring to assess the impacts of historical land-use and cover change and degradation (LUCCD) at catchment scale, and (ii) the high variability in climatic and hydrological conditions that complicates the evaluation of land management practices. This study analyses how a reliable LUCCD impact assessment can be performed in an environment of high variability combined with data scarcity and low-quality records. We use data from participatory hydrological monitoring activities in 20 catchments distributed along the tropical Andes. A set of 46 hydrological indices is calculated and regionalised by relating them to 42 physical catchment properties. Principal Component Analysis (PCA) is performed to maximise available data while minimising redundancy in the sets of variables. Hydrological model parameters are constrained by the estimated indices, and different behavioural predictions are assembled to provide a generalised response on which we assess LUCCD impacts. Results from this methodology show that the attributed effects of LUCCD in pair-wise catchment comparisons may be overstated or hidden by different sources of uncertainty, including measurement inaccuracies and model structural errors. We propose extrapolation and evaluation in ungauged catchments as a way to regionalise LUCCD predictions and to provide statistically significant conclusions in the Andean region. These estimations may deliver reliable knowledge to evaluate the hydrological impact of different watershed management practices.

  16. Quantitative thickness prediction of tectonically deformed coal using Extreme Learning Machine and Principal Component Analysis: a case study

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Li, Yan; Chen, Tongjun; Yan, Qiuyan; Ma, Li

    2017-04-01

    The thickness of tectonically deformed coal (TDC) has a positive correlation with gas outbursts. In order to predict the TDC thickness of coal beds, we propose a new quantitative prediction method using an extreme learning machine (ELM) algorithm, a principal component analysis (PCA) algorithm, and seismic attributes. At first, we build an ELM prediction model using the PCA attributes of a synthetic seismic section. The results suggest that the ELM model can produce a reliable and accurate prediction of the TDC thickness for synthetic data, preferring a sigmoid activation function and 20 hidden nodes. Then, we analyze the applicability of the ELM model to thickness prediction of the TDC with real application data. Through cross validation of near-well traces, the results suggest that the ELM model can produce a reliable and accurate prediction of the TDC. After that, we use 250 near-well traces from 10 wells to build an ELM prediction model and use the model to forecast the TDC thickness of the No. 15 coal in the study area using the PCA attributes as the inputs. Comparing the predicted results, it is noted that the trained ELM model with two selected PCA attributes yields better prediction results than those from the other combinations of the attributes. Finally, the trained ELM model with real seismic data has a different number of hidden nodes (10) than the trained ELM model with synthetic seismic data. In summary, it is feasible to use an ELM model to predict the TDC thickness using the calculated PCA attributes as the inputs. However, the input attributes, the activation function, and the number of hidden nodes in the ELM model should be selected and tested carefully based on the individual application.
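
    An ELM of the kind described trains by drawing random hidden-layer weights and solving for the output weights by least squares. The sketch below uses a sigmoid activation and 20 hidden nodes, as the abstract prefers for the synthetic case; the inputs and targets are synthetic stand-ins for the PCA seismic attributes and TDC thickness, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, y, n_hidden=20):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ y                  # output weights by pseudoinverse
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

X = rng.normal(size=(250, 2))            # 250 near-well traces, 2 PCA attributes (synthetic)
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + 3.0  # synthetic "thickness" target
model = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(model, X) - y) ** 2))
```

    Because only the output weights are fitted, training reduces to one pseudoinverse, which is what makes ELMs fast to retrain when testing different attribute combinations.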

  17. Life-cycle modification in open oceans accounts for genome variability in a cosmopolitan phytoplankton.

    PubMed

    von Dassow, Peter; John, Uwe; Ogata, Hiroyuki; Probert, Ian; Bendif, El Mahdi; Kegel, Jessica U; Audic, Stéphane; Wincker, Patrick; Da Silva, Corinne; Claverie, Jean-Michel; Doney, Scott; Glover, David M; Flores, Daniella Mella; Herrera, Yeritza; Lescot, Magali; Garet-Delmas, Marie-José; de Vargas, Colomban

    2015-06-01

    Emiliania huxleyi is the most abundant calcifying plankton in modern oceans with substantial intraspecific genome variability and a biphasic life cycle involving sexual alternation between calcified 2N and flagellated 1N cells. We show that high genome content variability in Emiliania relates to erosion of 1N-specific genes and loss of the ability to form flagellated cells. Analysis of 185 E. huxleyi strains isolated from world oceans suggests that loss of flagella occurred independently in lineages inhabiting oligotrophic open oceans over short evolutionary timescales. This environmentally linked physiogenomic change suggests life cycling is not advantageous in very large/diluted populations experiencing low biotic pressure and low ecological variability. Gene loss did not appear to reflect pressure for genome streamlining in oligotrophic oceans as previously observed in picoplankton. Life-cycle modifications might be common in plankton and cause major functional variability to be hidden from traditional taxonomic or molecular markers.

  18. Space coding for sensorimotor transformations can emerge through unsupervised learning.

    PubMed

    De Filippo De Grazia, Michele; Cutini, Simone; Lisi, Matteo; Zorzi, Marco

    2012-08-01

    The posterior parietal cortex (PPC) is fundamental for sensorimotor transformations because it combines multiple sensory inputs and posture signals into different spatial reference frames that drive motor programming. Here, we present a computational model mimicking the sensorimotor transformations occurring in the PPC. A recurrent neural network with one layer of hidden neurons (restricted Boltzmann machine) learned a stochastic generative model of the sensory data without supervision. After the unsupervised learning phase, the activity of the hidden neurons was used to compute a motor program (a population code on a bidimensional map) through a simple linear projection and delta rule learning. The average motor error, calculated as the difference between the expected and the computed output, was less than 3°. Importantly, analyses of the hidden neurons revealed gain-modulated visual receptive fields, thereby showing that space coding for sensorimotor transformations similar to that observed in the PPC can emerge through unsupervised learning. These results suggest that gain modulation is an efficient coding strategy to integrate visual and postural information toward the generation of motor commands.
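
    A sketch of the readout stage only, under a stated simplification: the trained RBM that would supply the hidden activations is replaced here by a fixed random projection, and a linear motor readout is trained with the delta rule. All sizes and signals are illustrative, not the paper's sensory encoding.

```python
import numpy as np

rng = np.random.default_rng(9)

inputs = rng.normal(size=(500, 6))                        # visual + posture signals (synthetic)
proj = rng.normal(size=(6, 40)) / np.sqrt(6)              # stand-in for learned RBM weights
hidden = np.tanh(inputs @ proj)                           # stand-in for hidden-unit activity
target = inputs[:, :2]                                    # desired motor coordinates

w = np.zeros((40, 2))
lr = 0.05
for _ in range(2000):
    err = target - hidden @ w
    w += lr * hidden.T @ err / len(hidden)                # delta rule (batch form)

motor_err = np.mean(np.abs(target - hidden @ w))
```

    The delta rule is plain gradient descent on the squared readout error, so the mapping from hidden activity to the motor map stays a simple linear projection, as in the model.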

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Mark S.; Son Wonmin; Heaney, Libby

    Recently, it was demonstrated by Son et al., Phys. Rev. Lett. 102, 110404 (2009), that a separable bipartite continuous-variable quantum system can violate the Clauser-Horne-Shimony-Holt (CHSH) inequality via operationally local transformations. Operationally local transformations are parametrized only by local variables; however, in order to allow violation of the CHSH inequality, a maximally entangled ancilla was necessary. The use of the entangled ancilla in this scheme caused the state under test to become dependent on the measurement choice one uses to calculate the CHSH inequality, thus violating one of the assumptions used in deriving a Bell inequality, namely, the free will or statistical independence assumption. The novelty in this scheme, however, is that the measurement settings can be external free parameters. In this paper, we generalize these operationally local transformations for multipartite Bell inequalities (with dichotomic observables) and provide necessary and sufficient conditions for violation within this scheme. Namely, a violation of a multipartite Bell inequality in this setting is contingent on whether an ancillary system admits any realistic local hidden variable model (i.e., whether the ancilla violates the given Bell inequality). These results indicate that violation of a Bell inequality performed on a system does not necessarily imply that the system is nonlocal. In fact, the system under test may be completely classical. However, nonlocality must have resided somewhere; it may have been in the environment, in the physical variables used to manipulate the system, or in the detectors themselves, provided the measurement settings are external free variables.

  20. A formal method for identifying distinct states of variability in time-varying sources: SGR A* as an example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, L.; Witzel, G.; Ghez, A. M.

    2014-08-10

    Continuously time variable sources are often characterized by their power spectral density and flux distribution. These quantities can undergo dramatic changes over time if the underlying physical processes change. However, some changes can be subtle and not distinguishable using standard statistical approaches. Here, we report a methodology that aims to identify distinct but similar states of time variability. We apply this method to the Galactic supermassive black hole, where 2.2 μm flux is observed from a source associated with Sgr A* and where two distinct states have recently been suggested. Our approach is taken from mathematical finance and works with conditional flux density distributions that depend on the previous flux value. The discrete, unobserved (hidden) state variable is modeled as a stochastic process and the transition probabilities are inferred from the flux density time series. Using the most comprehensive data set to date, in which all Keck and a majority of the publicly available Very Large Telescope data have been merged, we show that Sgr A* is sufficiently described by a single intrinsic state. However, the observed flux densities exhibit two states: noise dominated and source dominated. Our methodology reported here will prove extremely useful to assess the effects of the putative gas cloud G2 that is on its way toward the black hole and might create a new state of variability.
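
    The counting step behind inferring transition probabilities from a flux time series can be sketched as follows. This is a toy two-state version with synthetic lognormal flux values, not the Sgr A* data or the authors' full hidden-state model.

```python
import numpy as np

rng = np.random.default_rng(2)
flux = rng.lognormal(mean=0.0, sigma=0.5, size=5000)   # synthetic flux densities
states = (flux > np.median(flux)).astype(int)          # 0 = faint, 1 = bright

counts = np.zeros((2, 2))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1                                  # count observed transitions
P = counts / counts.sum(axis=1, keepdims=True)         # row-stochastic transition matrix
```

    In a full hidden-state treatment the states are not observed directly and the transition matrix is estimated by maximum likelihood (e.g. Baum-Welch) rather than by direct counting.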

  1. In search of superluminal quantum communications: recent experiments and possible improvements

    NASA Astrophysics Data System (ADS)

    Cocciaro, B.; Faetti, S.; Fronzoni, L.

    2013-06-01

    As shown in the famous EPR paper (Einstein, Podolsky and Rosen, 1935), Quantum Mechanics is non-local. The Bell theorem and the experiments by Aspect and many others ruled out the possibility of explaining quantum correlations between entangled particles using local hidden variables models (except for implausible combinations of loopholes). Some authors (Bell, Eberhard, Bohm and Hiley) suggested that quantum correlations could be due to superluminal communications (tachyons) that propagate isotropically with velocity vt > c in a preferred reference frame. For finite values of vt, Quantum Mechanics and superluminal models lead to different predictions. Some years ago a Geneva group and our group performed experiments on entangled photons to look for possible discrepancies between experimental results and quantum predictions. Since no discrepancy was found, these experiments established only lower bounds for the possible tachyon velocities vt. Here we propose an improved experiment that should allow us to explore a much larger range of possible tachyon velocities vt for any possible direction of the velocity V of the tachyons' preferred frame.

  2. Security of BB84 with weak randomness and imperfect qubit encoding

    NASA Astrophysics Data System (ADS)

    Zhao, Liang-Yuan; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Fang, Xi; Han, Zheng-Fu; Huang, Wei

    2018-03-01

    The main threats to practical Bennett-Brassard 1984 (BB84) quantum key distribution (QKD) systems are that their encoding is inaccurate and their measurement devices may be vulnerable to particular attacks. Thus, a general physical model or security proof to tackle these loopholes simultaneously and quantitatively is highly desired. Here we give a framework on the security of BB84 when imperfect qubit encoding and vulnerability of the measurement device are both considered. In our analysis, the potential attacks on the measurement device are generalized by the recently proposed weak randomness model, which assumes the input random numbers are partially biased depending on a hidden variable planted by an eavesdropper. The inevitable encoding inaccuracy is also introduced. From a fundamental view, our work reveals the potential information leakage due to encoding inaccuracy and weak randomness input. For applications, our result can be viewed as a useful tool to quantitatively evaluate the security of a practical QKD system.

  3. Changes in corticostriatal connectivity during reinforcement learning in humans.

    PubMed

    Horga, Guillermo; Maia, Tiago V; Marsh, Rachel; Hao, Xuejun; Xu, Dongrong; Duan, Yunsuo; Tau, Gregory Z; Graniello, Barbara; Wang, Zhishun; Kangarlu, Alayar; Martinez, Diana; Packard, Mark G; Peterson, Bradley S

    2015-02-01

    Many computational models assume that reinforcement learning relies on changes in synaptic efficacy between cortical regions representing stimuli and striatal regions involved in response selection, but this assumption has thus far lacked empirical support in humans. We recorded hemodynamic signals with fMRI while participants navigated a virtual maze to find hidden rewards. We fitted a reinforcement-learning algorithm to participants' choice behavior and evaluated the neural activity and the changes in functional connectivity related to trial-by-trial learning variables. Activity in the posterior putamen during choice periods increased progressively during learning. Furthermore, the functional connections between the sensorimotor cortex and the posterior putamen strengthened progressively as participants learned the task. These changes in corticostriatal connectivity differentiated participants who learned the task from those who did not. These findings provide a direct link between changes in corticostriatal connectivity and learning, thereby supporting a central assumption common to several computational models of reinforcement learning. © 2014 Wiley Periodicals, Inc.
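
    The trial-by-trial learning variables in such studies come from fitting a reinforcement-learning algorithm to choice behavior. A minimal sketch of one common such algorithm (a Rescorla-Wagner value update with softmax choice) is below; the parameters and two-armed task are illustrative, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(8)

alpha, beta = 0.3, 5.0            # learning rate, softmax inverse temperature
Q = np.zeros(2)                   # action values
p_reward = np.array([0.8, 0.2])   # hidden reward probabilities
choices_good = 0

for t in range(1000):
    p = np.exp(beta * Q) / np.sum(np.exp(beta * Q))   # softmax choice policy
    a = rng.choice(2, p=p)
    r = float(rng.random() < p_reward[a])
    Q[a] += alpha * (r - Q[a])    # prediction-error (delta) update
    choices_good += (a == 0)

frac_good = choices_good / 1000
```

    The prediction error `r - Q[a]` is the trial-by-trial regressor typically correlated with striatal activity in fMRI analyses of this kind.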

  4. SYNAPTIC DEPRESSION IN DEEP NEURAL NETWORKS FOR SPEECH PROCESSING.

    PubMed

    Zhang, Wenhao; Li, Hanyu; Yang, Minda; Mesgarani, Nima

    2016-03-01

    A characteristic property of biological neurons is their ability to dynamically change the synaptic efficacy in response to variable input conditions. This mechanism, known as synaptic depression, significantly contributes to the formation of normalized representation of speech features. Synaptic depression also contributes to the robust performance of biological systems. In this paper, we describe how synaptic depression can be modeled and incorporated into deep neural network architectures to improve their generalization ability. We observed that when synaptic depression is added to the hidden layers of a neural network, it reduces the effect of changing background activity in the node activations. In addition, we show that when synaptic depression is included in a deep neural network trained for phoneme classification, the performance of the network improves under noisy conditions not included in the training phase. Our results suggest that more complete neuron models may further reduce the gap between the biological performance and artificial computing, resulting in networks that better generalize to novel signal conditions.

  5. Generalized species sampling priors with latent Beta reinforcements

    PubMed Central

    Airoldi, Edoardo M.; Costa, Thiago; Bassetti, Federico; Leisen, Fabrizio; Guindani, Michele

    2014-01-01

    Many popular Bayesian nonparametric priors can be characterized in terms of exchangeable species sampling sequences. However, in some applications, exchangeability may not be appropriate. We introduce a novel and probabilistically coherent family of non-exchangeable species sampling sequences characterized by a tractable predictive probability function with weights driven by a sequence of independent Beta random variables. We compare their theoretical clustering properties with those of the Dirichlet Process and the two parameters Poisson-Dirichlet process. The proposed construction provides a complete characterization of the joint process, differently from existing work. We then propose the use of such process as prior distribution in a hierarchical Bayes modeling framework, and we describe a Markov Chain Monte Carlo sampler for posterior inference. We evaluate the performance of the prior and the robustness of the resulting inference in a simulation study, providing a comparison with popular Dirichlet Processes mixtures and Hidden Markov Models. Finally, we develop an application to the detection of chromosomal aberrations in breast cancer by leveraging array CGH data. PMID:25870462

  6. Foundational Forces & Hidden Variables in Technology Commercialization

    NASA Astrophysics Data System (ADS)

    Barnett, Brandon

    2011-03-01

    The science of physics seems vastly different from the process of technology commercialization. Physics strives to understand our world through the experimental deduction of immutable laws and dependent variables and the resulting macro-scale phenomena. In comparison, the goal of business is to make a profit by addressing the needs, preferences, and whims of individuals in a market. It may seem that this environment is too dynamic to identify all the hidden variables and deduce the foundational forces that impact a business's ability to commercialize innovative technologies. One example of a business ``force'' is found in the semiconductor industry. In 1965, Intel co-founder Gordon Moore predicted that the number of transistors incorporated in a chip will approximately double every 24 months. Known as Moore's Law, this prediction has become the guiding principle for the semiconductor industry for the last 40 years. Of course, Moore's Law is not really a law of nature; rather it is the result of efforts by Intel and the entire semiconductor industry. A closer examination suggests that there are foundational principles of business that underlie the macro-scale phenomenon of Moore's Law. Principles of profitability, incentive, and strategic alignment have resulted in a coordinated influx of resources that has driven technologies to market, increasing the profitability of the semiconductor industry and optimizing the fitness of its participants. New innovations in technology are subject to these same principles. So, in addition to traditional market forces, these often unrecognized forces and variables create challenges for new technology commercialization. In this talk, I will draw from ethnographic research, complex adaptive theory, and industry data to suggest a framework with which to think about new technology commercialization. Intel's bio-silicon initiative provides a case study.

  7. Rainfall extremes, weather and climatic characterization over complex terrain: A data-driven approach based on signal enhancement methods and extreme value modeling

    NASA Astrophysics Data System (ADS)

    Pineda, Luis E.; Willems, Patrick

    2017-04-01

    Weather and climatic characterization of rainfall extremes is of both scientific and societal value for hydrometeorological risk management, yet discrimination of local and large-scale forcing remains challenging in data-scarce and complex terrain environments. Here, we present an analysis framework that separates weather (seasonal) regimes and climate (inter-annual) influences using data-driven process identification. The approach is based on signal-to-noise separation methods and extreme value (EV) modeling of multisite rainfall extremes. The EV models use semi-automatic parameter learning [1] for model identification across temporal scales. At the weather scale, the EV models are combined with a state-based hidden Markov model [2] to represent the spatio-temporal structure of rainfall as persistent weather states. At the climatic scale, the EV models are used to decode the drivers leading to the shift of weather patterns. The decoding is performed in a climate-to-weather signal subspace, built via dimension reduction of climate model proxies (e.g. sea surface temperature and atmospheric circulation). We apply the framework to the Western Andean Ridge (WAR) in Ecuador and Peru (0-6°S) using ground data from the second half of the 20th century. We find that the meridional component of winds is what matters for the in-year and inter-annual variability of high rainfall intensities along the northern WAR (0-2.5°S). There, low-level southerly winds act as advection drivers for oceanic moisture during the normal rainy season and weak/moderate El Niño (EN) events; during the strong EN type, however, its unique moisture surplus is advected locally at the lowlands in the central WAR. Moreover, the coastal ridges south of 3°S dampen meridional airflows, leaving local hygrothermal gradients to control the in-year distribution of rainfall extremes and their anomalies.
    Overall, we show that the framework, which does not make any prior assumption on the explanatory power of the weather and climate drivers, allows identification of well-known features of the regional climate in a purely data-driven fashion. Thus, this approach shows potential for the characterization of precipitation extremes in data-scarce and orographically complex regions in which model reconstructions are the only climate proxies. References: [1] Mínguez, R., F.J. Méndez, C. Izaguirre, M. Menéndez, and I.J. Losada (2010), Pseudo-optimal parameter selection of non-stationary generalized extreme value models for environmental variables, Environ. Modell. Softw. 25, 1592-1607. [2] Pineda, L., P. Willems (2016), Multisite downscaling of seasonal predictions to daily rainfall characteristics over Pacific-Andean river basins in Ecuador and Peru using a non-homogeneous hidden Markov model, J. Hydrometeor., 17(2), 481-498, doi:10.1175/JHM-D-15-0040.1, http://journals.ametsoc.org/doi/full/10.1175/JHM-D-15-0040.1
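
    The basic EV-modelling step used throughout such frameworks can be sketched as fitting a generalized extreme value (GEV) distribution to annual rainfall maxima and reading off a return level. The maxima below are synthetic and stationary; the study's models are non-stationary and multisite, so this is an illustration of the building block only.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(6)
# synthetic annual rainfall maxima (mm), 80 "years"
annual_max = genextreme.rvs(c=-0.1, loc=50.0, scale=10.0, size=80, random_state=rng)

c, loc, scale = genextreme.fit(annual_max)                 # maximum-likelihood GEV fit
rl20 = genextreme.ppf(1 - 1 / 20, c, loc=loc, scale=scale)  # 20-year return level
```

    Note that SciPy's shape parameter `c` has the opposite sign convention to the usual GEV shape ξ.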

  8. Hidden Fermi liquid; the moral: a good effective low-energy theory is worth all of Monte Carlo with Las Vegas thrown in

    NASA Astrophysics Data System (ADS)

    Anderson, Philip W.; Casey, Philip A.

    2010-04-01

    We present a formalism for dealing directly with the effects of the Gutzwiller projection implicit in the t-J model which is widely believed to underlie the phenomenology of the high-Tc cuprates. We suggest that a true Bardeen-Cooper-Schrieffer condensation from a Fermi liquid state takes place, but in the unphysical space prior to projection. At low doping, however, instead of a hidden Fermi liquid one gets a 'hidden' non-superconducting resonating valence bond state which develops hole pockets upon doping. The theory which results upon projection does not follow conventional rules of diagram theory and in fact in the normal state is a Z = 0 non-Fermi liquid. Anomalous properties of the 'strange metal' normal state are predicted and compared against experimental findings.

  9. A physical and economic model of the nuclear fuel cycle

    NASA Astrophysics Data System (ADS)

    Schneider, Erich Alfred

    A model of the nuclear fuel cycle that is suitable for use in strategic planning and economic forecasting is presented. The model, to be made available as a stand-alone software package, requires only a small set of fuel cycle and reactor specific input parameters. Critical design criteria include ease of use by nonspecialists, suppression of errors to within a range dictated by unit cost uncertainties, and limitation of runtime to under one minute on a typical desktop computer. Collision probability approximations to the neutron transport equation that lead to a computationally efficient decoupling of the spatial and energy variables are presented and implemented. The energy dependent flux, governed by coupled integral equations, is treated by multigroup or continuous thermalization methods. The model's output includes a comprehensive nuclear materials flowchart that begins with ore requirements, calculates the buildup of 24 actinides as well as fission products, and concludes with spent fuel or reprocessed material composition. The costs, direct and hidden, of the fuel cycle under study are also computed. In addition to direct disposal and plutonium recycling strategies in current use, the model addresses hypothetical cycles. These include cycles chosen for minor actinide burning and for their low weapons-usable content.

  10. Quantum protocols within Spekkens' toy model

    NASA Astrophysics Data System (ADS)

    Disilvestro, Leonardo; Markham, Damian

    2017-05-01

    Quantum mechanics is known to provide significant improvements in information processing tasks when compared to classical models. These advantages range from computational speedups to security improvements. A key question is where these advantages come from. The toy model developed by Spekkens [R. W. Spekkens, Phys. Rev. A 75, 032110 (2007), 10.1103/PhysRevA.75.032110] mimics many of the features of quantum mechanics, such as entanglement and no cloning, regarded as being important in this regard, despite being a local hidden variable theory. In this work, we study several protocols within Spekkens' toy model where we see it can also mimic the advantages and limitations shown in the quantum case. We first provide explicit proofs for the impossibility of toy bit commitment and the existence of a toy error correction protocol and consequent k -threshold secret sharing. Then, defining a toy computational model based on the quantum one-way computer, we prove the existence of blind and verified protocols. Importantly, these two last quantum protocols are known to achieve a better-than-classical security. Our results suggest that such quantum improvements need not arise from any Bell-type nonlocality or contextuality, but rather as a consequence of steering correlations.

  11. Bias correction in the realized stochastic volatility model for daily volatility on the Tokyo Stock Exchange

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2018-06-01

    The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.
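
    The quantities examined here are easy to state in code: realized variance as the sum of squared intraday returns, and daily returns standardized by it. The sketch below uses synthetic Gaussian intraday returns (not the Tokyo Stock Exchange data), under which the standardized returns should indeed look standard normal, mirroring the low-frequency finding.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_intraday = 500, 78                           # e.g. 5-minute sampling
r = rng.normal(scale=0.01, size=(n_days, n_intraday))  # synthetic intraday returns

rv = np.sum(r**2, axis=1)                              # daily realized variance
daily = r.sum(axis=1)                                  # daily return
z = daily / np.sqrt(rv)                                # standardized returns
```

    In real data, microstructure noise biases `rv` at high sampling frequencies, which is exactly what the model's bias-correction parameter is meant to absorb.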

  12. Hidden Markov Models and Neural Networks for Fault Detection in Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic

    1994-01-01

    No abstract given. (From the conclusion:) Neural networks combined with Hidden Markov Models (HMMs) can provide excellent detection and false-alarm-rate performance in fault detection applications. Modified models allow for novelty detection. The paper also covers key contributions of the neural network model and the application status.

  13. A Heavy Tailed Expectation Maximization Hidden Markov Random Field Model with Applications to Segmentation of MRI

    PubMed Central

    Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego

    2017-01-01

    A wide range of segmentation approaches assumes that intensity histograms extracted from magnetic resonance images (MRI) have a distribution for each brain tissue that can be modeled by a Gaussian distribution or a mixture of them. Nevertheless, intensity histograms of White Matter and Gray Matter are not symmetric and they exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF), modeling the components using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model on synthetic data and brain MRI data. The proposed methodology presents two main advantages: firstly, it is more robust to outliers; secondly, we obtain results similar to those of the Gaussian model when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194
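
    Why the α-stable choice matters can be seen by sampling: for stability index α < 2 the law has heavy tails and produces far larger extreme values than a Gaussian, which is the behavior the Gaussian EM-HMRF cannot accommodate. The parameters below are illustrative, not fitted to MRI intensities.

```python
import numpy as np
from scipy.stats import levy_stable, norm

rng = np.random.default_rng(7)
stable = levy_stable.rvs(alpha=1.2, beta=0.0, size=5000, random_state=rng)
gauss = norm.rvs(size=5000, random_state=rng)

tail_stable = np.max(np.abs(stable))   # extreme values under heavy tails
tail_gauss = np.max(np.abs(gauss))     # extreme values under the Gaussian
```

    With α = 2 the α-stable law reduces to the Gaussian, which is why the proposed model is a strict generalization of the Gaussian EM-HMRF.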

  14. Phytoremediation of palm oil mill secondary effluent (POMSE) by Chrysopogon zizanioides (L.) using artificial neural networks.

    PubMed

    Darajeh, Negisa; Idris, Azni; Fard Masoumi, Hamid Reza; Nourani, Abolfazl; Truong, Paul; Rezania, Shahabaldin

    2017-05-04

    Artificial neural networks (ANNs) have been widely used to solve such problems because of their reliable, robust, and salient characteristics in capturing the nonlinear relationships between variables in complex systems. In this study, ANN was applied for modeling of Chemical Oxygen Demand (COD) and biodegradable organic matter (BOD) removal from palm oil mill secondary effluent (POMSE) by a vetiver system. The independent variables, including POMSE concentration, vetiver slip density, and removal time, were considered as input parameters to optimize the network, while the removal percentages of COD and BOD were selected as outputs. To determine the number of hidden layer nodes, the root mean squared error of the testing set was minimized, and the topologies of the algorithms were compared by coefficient of determination and absolute average deviation. The comparison indicated that the quick propagation (QP) algorithm had the minimum root mean squared error and absolute average deviation, and the maximum coefficient of determination. The importance values of the variables were 42.41% for vetiver slip density, 29.8% for removal time, and 27.79% for POMSE concentration, showing that none of them is negligible. Results show that the ANN has great potential for predicting COD and BOD removal from POMSE, with a residual standard error (RSE) of less than 0.45%.

  15. Learning a single-hidden layer feedforward neural network using a rank correlation-based strategy with application to high dimensional gene expression and proteomic spectra datasets in cancer detection.

    PubMed

    Belciug, Smaranda; Gorunescu, Florin

    2018-06-08

    Methods based on microarrays (MA), mass spectrometry (MS), and machine learning (ML) algorithms have evolved rapidly in recent years, allowing for early detection of several types of cancer. A pitfall of these approaches, however, is the overfitting of data due to the large number of attributes and small number of instances -- a phenomenon known as the 'curse of dimensionality'. A potentially fruitful idea to avoid this drawback is to develop algorithms that combine fast computation with a filtering module for the attributes. The goal of this paper is to propose a statistical strategy to initiate the hidden nodes of a single-hidden layer feedforward neural network (SLFN) by using both the knowledge embedded in data and a filtering mechanism for attribute relevance. To attest to its feasibility, the proposed model has been tested on five publicly available high-dimensional datasets: breast, lung, colon, and ovarian cancer regarding gene expression and proteomic spectra provided by cDNA arrays, DNA microarray, and MS. The novel algorithm, called adaptive SLFN (aSLFN), has been compared with four major classification algorithms: traditional ELM, radial basis function network (RBF), single-hidden layer feedforward neural network trained by the backpropagation algorithm (BP-SLFN), and support vector machine (SVM). Experimental results showed that the classification performance of aSLFN is competitive with the comparison models. Copyright © 2018. Published by Elsevier Inc.

  16. Comparison of RF spectrum prediction methods for dynamic spectrum access

    NASA Astrophysics Data System (ADS)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
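
    The simulated-data setup described (spectrum occupancy as an alternating renewal process) can be sketched as below, with exponentially distributed busy/idle holding times sampled onto a discrete slot grid. The rates are illustrative, not those of the measured spectrum.

```python
import numpy as np

rng = np.random.default_rng(5)

def occupancy(n_slots, rate_busy=0.2, rate_idle=0.5):
    """Alternating renewal occupancy: 0 = idle, 1 = busy."""
    state, t = 0, 0.0
    out = np.zeros(n_slots, dtype=int)
    while t < n_slots:
        rate = rate_idle if state == 0 else rate_busy
        dur = rng.exponential(1.0 / rate)          # exponential holding time
        out[int(t):min(n_slots, int(t + dur))] = state
        t += dur
        state ^= 1                                 # alternate busy/idle
    return out

occ = occupancy(10000)
busy_frac = occ.mean()   # long-run busy fraction ~ (1/0.2) / (1/0.2 + 1/0.5)
```

    Sequences like `occ` then serve as training/test channel-state series for the HMM, MLP, and RNN predictors being compared.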

  17. Adaptive hidden Markov model-based online learning framework for bearing fault detection and performance degradation monitoring

    NASA Astrophysics Data System (ADS)

    Yu, Jianbo

    2017-01-01

    This study proposes an adaptive-learning-based method for machine fault detection and health degradation monitoring. The kernel of the proposed method is an "evolving" model that uses an unsupervised online learning scheme, in which an adaptive hidden Markov model (AHMM) learns online the dynamic health changes of machines over their full life. A statistical index is developed for recognizing new health states in the machines. Those new health states are then described online by adding new hidden states to the AHMM. Furthermore, the health degradation of machines is quantified online by an AHMM-based health index (HI) that measures the similarity between two density distributions describing the historic and current health states, respectively. When necessary, the proposed method characterizes the distinct operating modes of the machine, and it can learn online both abrupt and gradual health changes. Our method overcomes some drawbacks of HIs based on fixed monitoring models constructed in the offline phase (e.g., their relatively low comprehensibility and applicability). Results from its application in a bearing life test reveal that the proposed method is effective in online detection and adaptive assessment of machine health degradation. This study provides a useful guide for developing a condition-based maintenance (CBM) system that uses an online learning method without considerable human intervention.
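    The idea of a health index that measures similarity between two density distributions can be illustrated with a simple Gaussian sketch. The symmetric-KL form, the mapping to (0, 1], and the synthetic "vibration feature" data below are assumptions for illustration; they are not the paper's AHMM-based HI.

    ```python
    import numpy as np

    # Illustrative health index: fit a Gaussian to historic "healthy" feature
    # samples and to current samples, then map their symmetric KL divergence
    # to (0, 1], where 1 means the current state matches the healthy baseline.
    def gaussian_kl(mu0, var0, mu1, var1):
        """KL divergence between two univariate Gaussians."""
        return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

    def health_index(historic, current):
        """exp(-symmetric KL) between Gaussian fits of the two sample sets."""
        kl = gaussian_kl(historic.mean(), historic.var(), current.mean(), current.var())
        kl += gaussian_kl(current.mean(), current.var(), historic.mean(), historic.var())
        return float(np.exp(-kl))

    rng = np.random.default_rng(2)
    healthy = rng.normal(0.0, 1.0, 500)    # baseline feature distribution
    degraded = rng.normal(1.5, 1.6, 500)   # mean shift + spread after wear
    hi_same = health_index(healthy, rng.normal(0.0, 1.0, 500))  # near 1
    hi_bad = health_index(healthy, degraded)                    # well below 1
    ```

    Tracking such an index online, and re-baselining when a new hidden state is added, mirrors the monitoring loop the abstract describes.
    
    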

  18. Proposed Test of Relative Phase as Hidden Variable in Quantum Mechanics

    DTIC Science & Technology

    2012-01-01

    ...implicitly due to its ubiquity in quantum theory, but searches for dependence of measurement outcome on other parameters have been lacking. For a two-state... implementation for the specific case of an atomic two-state system with laser-induced fluorescence for measurement. Keywords: Quantum measurement ... Measurement postulate · Born rule. 1 Introduction. 1.1 Problems with Quantum Measurement. Quantum theory prescribes probabilities for outcomes of measurements...

  19. Lessons from conducting trans-national Internet-mediated participatory research with hidden populations of cannabis cultivators.

    PubMed

    Barratt, Monica J; Potter, Gary R; Wouters, Marije; Wilkins, Chris; Werse, Bernd; Perälä, Jussi; Pedersen, Michael Mulbjerg; Nguyen, Holly; Malm, Aili; Lenton, Simon; Korf, Dirk; Klein, Axel; Heyde, Julie; Hakkarainen, Pekka; Frank, Vibeke Asmussen; Decorte, Tom; Bouchard, Martin; Blok, Thomas

    2015-03-01

    Internet-mediated research methods are increasingly used to access hidden populations. The International Cannabis Cultivation Questionnaire (ICCQ) is an online survey designed to facilitate international comparisons of the relatively under-researched but increasingly significant phenomenon of domestic cannabis cultivation. The Global Cannabis Cultivation Research Consortium has used the ICCQ to survey over 6000 cannabis cultivators across 11 countries. In this paper, we describe and reflect upon our methodological approach, focusing on the digital and traditional recruitment methods used to access this hidden population and the challenges of working across multiple countries, cultures and languages. Descriptive statistics show eligibility and completion rates, and recruitment source by country of residence. Over three-quarters of eligible respondents who were presented with the survey were included in the final sample of n=6528. English-speaking countries expended more effort to recruit participants than non-English-speaking countries. The most effective recruitment modes were cannabis websites/groups (33%), Facebook (14%) and news articles (11%). While respondents recruited through news articles were older, growing-practice variables were strikingly similar across these main recruitment modes. Through this process, we learnt that there are trade-offs between hosting multiple surveys in each country vs. using one integrated database. We also found that although perceived anonymity is routinely assumed to be a benefit of using digital research methodologies, there are significant limits to research participant anonymity in the current era of mass digital surveillance, especially when the target group is particularly concerned about evading law enforcement. Finally, we list a number of specific recommendations for future researchers utilising Internet-mediated approaches to researching hidden populations. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Efficient free energy calculations by combining two complementary tempering sampling methods.

    PubMed

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-14

    Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from the difficulty of identifying the correct RCs, or from the high dimensionality required of the defined RCs, for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height will remain in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden-barrier situation, here we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD, while the sampling of the remaining DOFs with lower but non-negligible barriers is enhanced by ITS. The performance of ITS-TAMD has been examined on three systems whose processes involve hidden barriers. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency is improved at least fivefold, even in the presence of hidden energy barriers. (2) The canonical distribution is more accurately recovered, from which thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of the necessary RCs can be reduced. Our work shows further potential applications of the ITS-TAMD method as an efficient and powerful tool for investigating a broad range of interesting cases.
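    The hidden-barrier problem the hybrid method targets can be illustrated with a toy Metropolis sampler on a one-dimensional double-well potential: at low temperature the sampler stays trapped in one well, while the raised effective temperature exploited by tempering schemes such as ITS crosses the barrier freely. All parameters here are illustrative assumptions; this sketch is not an implementation of ITS-TAMD.

    ```python
    import numpy as np

    # Double-well toy: U(x) = 8 (x^2 - 1)^2 has minima at x = +/-1 separated
    # by a barrier of height 8 (in kT units at beta = 1).
    rng = np.random.default_rng(3)

    def potential(x):
        return 8.0 * (x**2 - 1.0) ** 2

    def metropolis(beta, n_steps=20_000, step=0.5, x0=-1.0):
        """Plain Metropolis sampling of exp(-beta * U) starting in the left well."""
        xs, x = [], x0
        for _ in range(n_steps):
            x_new = x + rng.uniform(-step, step)
            if rng.random() < np.exp(-beta * (potential(x_new) - potential(x))):
                x = x_new                    # accept the trial move
            xs.append(x)
        return np.array(xs)

    frac_low = (metropolis(beta=4.0) > 0).mean()   # cold: trapped on the left
    frac_high = (metropolis(beta=0.2) > 0).mean()  # hot: samples both wells
    ```

    The cold run never visits the right well, while the hot run spends roughly half its time there; combining barrier-targeted acceleration on the major RCs with broad tempering over the remaining DOFs is the division of labor the ITS-TAMD abstract describes.
    
    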
