Generalized species sampling priors with latent Beta reinforcements
Airoldi, Edoardo M.; Costa, Thiago; Bassetti, Federico; Leisen, Fabrizio; Guindani, Michele
2014-01-01
Many popular Bayesian nonparametric priors can be characterized in terms of exchangeable species sampling sequences. However, in some applications exchangeability may not be appropriate. We introduce a novel and probabilistically coherent family of non-exchangeable species sampling sequences characterized by a tractable predictive probability function with weights driven by a sequence of independent Beta random variables. We compare their theoretical clustering properties with those of the Dirichlet process and the two-parameter Poisson-Dirichlet process. Unlike existing work, the proposed construction provides a complete characterization of the joint process. We then propose the use of such a process as a prior distribution in a hierarchical Bayes modeling framework, and we describe a Markov chain Monte Carlo sampler for posterior inference. We evaluate the performance of the prior and the robustness of the resulting inference in a simulation study, providing a comparison with popular Dirichlet process mixtures and hidden Markov models. Finally, we develop an application to the detection of chromosomal aberrations in breast cancer by leveraging array CGH data. PMID:25870462
Infinite hidden conditional random fields for human behavior analysis.
Bousmalis, Konstantinos; Zafeiriou, Stefanos; Morency, Louis-Philippe; Pantic, Maja
2013-01-01
Hidden conditional random fields (HCRFs) are discriminative latent variable models that have been shown to successfully learn the hidden structure of a given classification problem (provided an appropriate validation of the number of hidden states). In this brief, we present the infinite HCRF (iHCRF), which is a nonparametric model based on hierarchical Dirichlet processes and is capable of automatically learning the optimal number of hidden states for a classification task. We show how we learn the model hyperparameters with an effective Markov-chain Monte Carlo sampling technique, and we explain the process that underlies our iHCRF model with the Restaurant Franchise Rating Agencies analogy. We show that the iHCRF is able to converge to a correct number of represented hidden states, and outperforms the best finite HCRFs--chosen via cross-validation--for the difficult tasks of recognizing instances of agreement, disagreement, and pain. Moreover, the iHCRF manages to achieve this performance in significantly less total training, validation, and testing time.
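The restaurant-franchise analogy invoked above can be made concrete with a small generative sketch (an illustrative simulation of the standard two-level Chinese restaurant process, not the authors' iHCRF code; all names and parameter values are assumptions): each group seats customers at tables by a CRP with concentration alpha, and each newly opened table orders a dish, a shared hidden state, from a franchise-level CRP with concentration gamma.

```python
import numpy as np

def chinese_restaurant_franchise(group_sizes, alpha, gamma, rng):
    """Draw table and dish assignments for each group (restaurant).

    alpha: concentration of the per-restaurant CRP (tables),
    gamma: concentration of the franchise-level CRP (dishes).
    """
    dish_counts = []   # number of tables serving each global dish
    assignments = []   # per group: dish index of each customer
    for n in group_sizes:
        table_counts = []   # customers at each table in this restaurant
        table_dish = []     # dish served at each table
        customer_dishes = []
        for _ in range(n):
            # seat at an existing table w.p. proportional to its occupancy,
            # or open a new table w.p. proportional to alpha
            probs = np.array(table_counts + [alpha], dtype=float)
            t = rng.choice(len(probs), p=probs / probs.sum())
            if t == len(table_counts):          # new table: order a dish
                dprobs = np.array(dish_counts + [gamma], dtype=float)
                d = rng.choice(len(dprobs), p=dprobs / dprobs.sum())
                if d == len(dish_counts):       # brand-new dish
                    dish_counts.append(0)
                dish_counts[d] += 1
                table_counts.append(0)
                table_dish.append(d)
            table_counts[t] += 1
            customer_dishes.append(table_dish[t])
        assignments.append(customer_dishes)
    return assignments, len(dish_counts)

rng = np.random.default_rng(0)
groups, n_dishes = chinese_restaurant_franchise([30, 30], alpha=1.0,
                                                gamma=1.0, rng=rng)
```

Because dishes are drawn from a single franchise-level urn, the two groups share hidden states while keeping their own seating (transition) statistics, which is the sharing mechanism the HDP-based models above rely on.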
Hu, Weiming; Tian, Guodong; Kang, Yongxin; Yuan, Chunfeng; Maybank, Stephen
2017-09-25
In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics in an analogy with topic models for document processing), but have unique transition distributions. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. By combining the learnt sources and sinks, semantic motion regions, and the learnt sequence of atomic activities, the action represented by the trajectory can be described in natural language in a largely automatic way. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene.
ERIC Educational Resources Information Center
Li, Dingcheng
2011-01-01
Coreference resolution (CR) and entity relation detection (ERD) aim at finding predefined relations between pairs of entities in text. CR focuses on resolving identity relations while ERD focuses on detecting non-identity relations. Both CR and ERD are important as they can potentially improve other natural language processing (NLP) related tasks…
Griffin, William A.; Li, Xun
2016-01-01
Sequential affect dynamics generated during the interaction of intimate dyads, such as married couples, are associated with a cascade of effects—some good and some bad—on each partner, close family members, and other social contacts. Although the effects are well documented, the probabilistic structures associated with micro-social processes connected to the varied outcomes remain enigmatic. Using extant data we developed a method of classifying and subsequently generating couple dynamics using a Hierarchical Dirichlet Process Hidden semi-Markov Model (HDP-HSMM). Our findings indicate that several key aspects of existing models of marital interaction are inadequate: affect state emissions and their durations, along with the expected variability differences between distressed and nondistressed couples are present but highly nuanced; and most surprisingly, heterogeneity among highly satisfied couples necessitate that they be divided into subgroups. We review how this unsupervised learning technique generates plausible dyadic sequences that are sensitive to relationship quality and provide a natural mechanism for computational models of behavioral and affective micro-social processes. PMID:27187319
Meta-analysis using Dirichlet process.
Muthukumarana, Saman; Tiwari, Ram C
2016-02-01
This article develops a Bayesian approach for meta-analysis using the Dirichlet process. The key aspect of the Dirichlet process in meta-analysis is the ability to assess evidence of statistical heterogeneity, or variation in the underlying effects across studies, while relaxing the distributional assumptions. We assume that the study effects are generated from a Dirichlet process. Under a Dirichlet process model, the study-effect parameters have support on a discrete space, enabling borrowing of information across studies while facilitating clustering among studies. We illustrate the proposed method by applying it to a dataset on the Program for International Student Assessment on 30 countries. Results from the data analysis, simulation studies, and the log pseudo-marginal likelihood model selection procedure indicate that the Dirichlet process model performs better than conventional alternative methods. © The Author(s) 2012.
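The discreteness of the Dirichlet process, which drives the borrowing of information and clustering across studies, can be sketched with the Polya urn predictive rule (a generic illustration under an assumed standard-normal base measure, not the article's model):

```python
import numpy as np

def polya_urn_draws(n, alpha, base_sampler, rng):
    """Draw n study effects from a Dirichlet process via the Polya urn:
    the i-th draw copies an earlier effect w.p. proportional to its
    multiplicity, or is a fresh draw from the base measure G0 w.p.
    proportional to alpha, so exact ties (clusters) occur."""
    effects = []
    for i in range(n):
        if rng.random() < alpha / (alpha + i):
            effects.append(base_sampler(rng))          # fresh draw from G0
        else:
            effects.append(effects[rng.integers(i)])   # tie with an earlier study
    return effects

rng = np.random.default_rng(1)
effects = polya_urn_draws(30, alpha=1.0,
                          base_sampler=lambda r: r.normal(0.0, 1.0), rng=rng)
n_clusters = len(set(effects))   # distinct values = clusters of studies
```

With a continuous G0, any repeated value is a genuine tie produced by the urn, which is exactly the clustering-among-studies behavior the abstract describes.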
A stochastic diffusion process for Lochner's generalized Dirichlet distribution
Bakosi, J.; Ristorcelli, J. R.
2013-10-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner's generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed previously for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.
Augmenting Latent Dirichlet Allocation and Rank Threshold Detection with Ontologies
2010-03-01
Probabilistic Latent Semantic Indexing (PLSI) is an automated indexing information retrieval model [20]. It is based on a statistical latent class model which … uses a statistical foundation that is more accurate in finding hidden semantic relationships [20]. The model uses factor analysis of count data … principle of statistical inference, which asserts that all of the information in a sample is contained in the likelihood function [20].
A Stochastic Diffusion Process for the Dirichlet Distribution
Bakosi, J.; Ristorcelli, J. R.
2013-03-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
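The univariate (beta-invariant) special case mentioned above admits a minimal Monte Carlo check (an illustrative Euler-Maruyama discretization with assumed drift/diffusion coefficients and parameters, not the authors' scheme): the SDE dX = [a(1-X) - bX] dt + sqrt(2X(1-X)) dW has Beta(a, b) as its stationary density, so a long-run ensemble mean should approach a/(a+b).

```python
import numpy as np

def simulate_beta_diffusion(a, b, n_paths=4000, dt=1e-3, n_steps=5000, seed=0):
    """Euler-Maruyama ensemble for dX = [a(1-X) - bX] dt + sqrt(2X(1-X)) dW.

    The multiplicative noise vanishes at 0 and 1, keeping samples in the
    unit interval; a small clip guards against discretization overshoot.
    """
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, 0.5)
    eps = 1e-9
    for _ in range(n_steps):
        drift = a * (1.0 - x) - b * x
        diffusion = np.sqrt(2.0 * x * (1.0 - x))
        x = x + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal(n_paths)
        x = np.clip(x, eps, 1.0 - eps)
    return x

samples = simulate_beta_diffusion(2.0, 3.0)
# the stationary mean of Beta(2, 3) is 2/5
```

The clipping step plays the role of the boundedness guarantee that, in the exact process, comes from the degeneracy of the diffusion coefficient at the endpoints.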
Feature extraction for document text using Latent Dirichlet Allocation
NASA Astrophysics Data System (ADS)
Prihatini, P. M.; Suryawan, I. K.; Mandia, IN
2018-01-01
Feature extraction is one of the stages in an information retrieval system that is used to extract the unique feature values of a text document. Feature extraction can be done by several methods, one of which is Latent Dirichlet Allocation. However, research on text feature extraction using the Latent Dirichlet Allocation method is rarely found for Indonesian text. Therefore, through this research, text feature extraction is implemented for Indonesian text. The research method consists of data acquisition, text pre-processing, initialization, topic sampling and evaluation. The evaluation is done by comparing the Precision, Recall and F-Measure values of Latent Dirichlet Allocation against Term Frequency-Inverse Document Frequency with K-Means, which is commonly used for feature extraction. The evaluation results show that the Precision, Recall and F-Measure values of the Latent Dirichlet Allocation method are higher than those of the Term Frequency-Inverse Document Frequency with K-Means method. This shows that the Latent Dirichlet Allocation method is able to extract features and cluster Indonesian text better than the Term Frequency-Inverse Document Frequency with K-Means method.
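The topic-sampling stage can be sketched with a collapsed Gibbs sampler for LDA (a generic textbook implementation with assumed hyperparameters and a toy corpus, not the paper's code); the per-document topic proportions it returns are the extracted features.

```python
import numpy as np

def lda_gibbs(docs, n_vocab, K=2, alpha=0.1, beta=0.01, n_iter=200, seed=0):
    """Collapsed Gibbs sampling for LDA over docs given as lists of word ids.

    alpha: symmetric Dirichlet prior on doc-topic proportions,
    beta:  symmetric Dirichlet prior on topic-word distributions.
    Returns theta, the per-document topic proportions (the features).
    """
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))   # doc-topic counts
    nkw = np.zeros((K, n_vocab))     # topic-word counts
    nk = np.zeros(K)                 # total words per topic
    z = [rng.integers(K, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]          # remove the word's current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional for the word's topic
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)

# toy corpus over a 4-word vocabulary: two docs use words {0,1}, two use {2,3}
docs = [[0, 1, 0, 1, 0], [1, 0, 1, 1], [2, 3, 2, 3, 3], [3, 2, 2, 3]]
theta = lda_gibbs(docs, n_vocab=4)
```

Each row of `theta` is a K-dimensional topic-proportion vector for one document, which is the kind of feature vector that would then be fed to evaluation or clustering.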
Prior Design for Dependent Dirichlet Processes: An Application to Marathon Modeling
F. Pradier, Melanie; J. R. Ruiz, Francisco; Perez-Cruz, Fernando
2016-01-01
This paper presents a novel application of Bayesian nonparametrics (BNP) for marathon data modeling. We make use of two well-known BNP priors, the single-p dependent Dirichlet process and the hierarchical Dirichlet process, in order to address two different problems. First, we study the impact of age, gender and environment on the runners’ performance. We derive a fair grading method that allows direct comparison of runners regardless of their age and gender. Unlike current grading systems, our approach is based not only on top world records, but on the performances of all runners. The presented methodology for comparison of densities can be adopted straightforwardly in many other applications, providing an interesting perspective for building dependent Dirichlet processes. Second, we analyze the running patterns of the marathoners over time, obtaining information that can be valuable for training purposes. We also show that these running patterns can be used to predict finishing time given intermediate interval measurements. We apply our models to the New York City, Boston and London marathons. PMID:26821155
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G_0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
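The link between α and the level of clustering rests on a standard identity: under a Dirichlet process prior, the expected number of clusters among n observations is E[K_n] = Σ_{i=0}^{n-1} α/(α+i). A small sketch of how a prior belief about cluster counts translates into values of α (a generic illustration, not the paper's elicitation procedure):

```python
import numpy as np

def expected_clusters(alpha, n):
    """E[K_n] = sum_{i=0}^{n-1} alpha / (alpha + i) under a DP(alpha, G0)."""
    i = np.arange(n)
    return float(np.sum(alpha / (alpha + i)))

# small alpha implies few clusters a priori, large alpha implies many;
# scanning this map lets one pick a prior on alpha matching prior beliefs
# about the level of clustering among, say, n = 50 sample locations
table = {a: round(expected_clusters(a, 50), 2) for a in (0.1, 1.0, 10.0)}
```

Inverting this monotone map numerically gives the α consistent with an elicited prior mean number of clusters.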
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
NASA Astrophysics Data System (ADS)
Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.
2018-06-01
We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet process Gaussian-mixture model, a fully Bayesian nonparametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ~10^4-10^5 Mpc^3, corresponding to ~10^2-10^3 potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ϱ_net^-6. Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
Semiparametric Bayesian classification with longitudinal markers
De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter
2013-01-01
Summary We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871
Modeling unobserved sources of heterogeneity in animal abundance using a Dirichlet process prior
Dorazio, R.M.; Mukherjee, B.; Zhang, L.; Ghosh, M.; Jelks, H.L.; Jordan, F.
2008-01-01
In surveys of natural populations of animals, a sampling protocol is often spatially replicated to collect a representative sample of the population. In these surveys, differences in abundance of animals among sample locations may induce spatial heterogeneity in the counts associated with a particular sampling protocol. For some species, the sources of heterogeneity in abundance may be unknown or unmeasurable, leading one to specify the variation in abundance among sample locations stochastically. However, choosing a parametric model for the distribution of unmeasured heterogeneity is potentially subject to error and can have profound effects on predictions of abundance at unsampled locations. In this article, we develop an alternative approach wherein a Dirichlet process prior is assumed for the distribution of latent abundances. This approach allows for uncertainty in model specification and for natural clustering in the distribution of abundances in a data-adaptive way. We apply this approach in an analysis of counts based on removal samples of an endangered fish species, the Okaloosa darter. Results of our data analysis and simulation studies suggest that our implementation of the Dirichlet process prior has several attractive features not shared by conventional, fully parametric alternatives. © 2008, The International Biometric Society.
Quantum "violation" of Dirichlet boundary condition
NASA Astrophysics Data System (ADS)
Park, I. Y.
2017-02-01
Dirichlet boundary conditions have been widely used in general relativity. They seem at odds with the holographic property of gravity simply because a boundary configuration can be varying and dynamic instead of dying out as required by the conditions. In this work we report a tension between the Dirichlet boundary conditions and quantum gravitational effects, and show that a quantum-corrected black hole solution of the 1PI action no longer obeys, in the naive manner one may expect, the Dirichlet boundary conditions imposed at the classical level. We attribute the 'violation' of the Dirichlet boundary conditions to a certain mechanism of information storage on the boundary.
USING DIRICHLET TESSELLATION TO HELP ESTIMATE MICROBIAL BIOMASS CONCENTRATIONS
Dirichlet tessellation was applied to estimate microbial concentrations from microscope well slides. The use of microscopy/Dirichlet tessellation to quantify biomass was illustrated with two species of morphologically distinct cyanobacteria, and validated empirically by comparison…
Study on monostable and bistable reaction-diffusion equations by iteration of travelling wave maps
NASA Astrophysics Data System (ADS)
Yi, Taishan; Chen, Yuming
2017-12-01
In this paper, based on the iterative properties of travelling wave maps, we develop a new method to obtain spreading speeds and asymptotic propagation for monostable and bistable reaction-diffusion equations. Precisely, for Dirichlet problems of monostable reaction-diffusion equations on the half line, by making links between travelling wave maps and integral operators associated with the Dirichlet diffusion kernel (the latter is NOT invariant under translation), we obtain some iteration properties of the Dirichlet diffusion and some a priori estimates on nontrivial solutions of Dirichlet problems under travelling wave transformation. We then provide the asymptotic behavior of nontrivial solutions in the space-time region for Dirichlet problems. These enable us to develop a unified method to obtain results on heterogeneous steady states, travelling waves, spreading speeds, and asymptotic spreading behavior for Dirichlet problem of monostable reaction-diffusion equations on R+ as well as of monostable/bistable reaction-diffusion equations on R.
Hierarchical Dirichlet process model for gene expression clustering
2013-01-01
Clustering is an important data processing tool for interpreting microarray data and genomic network inference. In this article, we propose a clustering algorithm based on the hierarchical Dirichlet process (HDP). The HDP clustering introduces a hierarchical structure in the statistical model which captures the hierarchical features prevalent in biological data such as gene expression data. We develop a Gibbs sampling algorithm based on the Chinese restaurant metaphor for the HDP clustering. We apply the proposed HDP algorithm to both regulatory network segmentation and gene expression clustering. The HDP algorithm is shown to outperform several popular clustering algorithms by revealing the underlying hierarchical structure of the data. For the yeast cell cycle data, we compare the HDP result to the standard result and show that the HDP algorithm provides more information and reduces the unnecessary clustering fragments. PMID:23587447
Scalable Topic Modeling: Online Learning, Diagnostics, and Recommendation
2017-03-01
Chinese restaurant processes. Journal of Machine Learning Research, 12:2461–2488, 2011. 15. L. Hannah, D. Blei and W. Powell. Dirichlet process mixtures of...34. S. Ghosh, A. Ungureanu, E. Sudderth, and D. Blei. A Spatial distance dependent Chinese restaurant process for image segmentation. In Neural
Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.
Yu, Kezi; Quirk, J Gerald; Djurić, Petar M
2017-01-01
In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one that represents healthy and another, non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.
Spectral multigrid methods for elliptic equations 2
NASA Technical Reports Server (NTRS)
Zang, T. A.; Wong, Y. S.; Hussaini, M. Y.
1983-01-01
A detailed description of spectral multigrid methods is provided. This includes the interpolation and coarse-grid operators for both periodic and Dirichlet problems. The spectral methods for periodic problems use Fourier series and those for Dirichlet problems are based upon Chebyshev polynomials. An improved preconditioning for Dirichlet problems is given. Numerical examples and practical advice are included.
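The Chebyshev treatment of Dirichlet problems described above can be illustrated with a minimal collocation solve (a standard differentiation-matrix construction with an assumed one-dimensional test problem, not the paper's multigrid code): solve u''(x) = e^{4x} on (-1, 1) with u(±1) = 0, whose exact solution is u = (e^{4x} - x sinh 4 - cosh 4)/16.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))     # diagonal via negative row sums
    return D, x

N = 16
D, x = cheb(N)
D2 = (D @ D)[1:N, 1:N]              # Dirichlet BCs: drop boundary rows/cols
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(D2, np.exp(4.0 * x[1:N]))
u_exact = (np.exp(4.0 * x) - x * np.sinh(4.0) - np.cosh(4.0)) / 16.0
```

Even at N = 16 the error is near machine precision, which is the spectral accuracy that makes such operators attractive as fine-grid discretizations in a multigrid cycle.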
Quantum Gravitational Effects on the Boundary
NASA Astrophysics Data System (ADS)
James, F.; Park, I. Y.
2018-04-01
Quantum gravitational effects might hold the key to some of the outstanding problems in theoretical physics. We analyze the perturbative quantum effects on the boundary of a gravitational system and the Dirichlet boundary condition imposed at the classical level. Our analysis reveals that for a black hole solution there is a contradiction between the quantum effects and the Dirichlet boundary condition: the black hole solution of the one-particle-irreducible action no longer satisfies the Dirichlet boundary condition, contrary to what one would naively expect. The analysis also suggests that the tension between the Dirichlet boundary condition and loop effects is connected with a certain mechanism of information storage on the boundary.
Memoized Online Variational Inference for Dirichlet Process Mixture Models
2014-06-27
…stick-breaking process [7], which places artificially large mass on the final component. It is more efficient and broadly applicable than an alternative truncation…models. In Uncertainty in Artificial Intelligence, 2008. [13] N. Le Roux, M. Schmidt, and F. Bach. A stochastic gradient method with an exponential…
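The truncated stick-breaking construction alluded to in the snippet can be sketched directly (a generic illustration with assumed α and truncation level): Dirichlet process weights are w_k = v_k Π_{j<k}(1 - v_j) with v_k ~ Beta(1, α), and a truncation at level T forces v_T = 1, which dumps all leftover stick mass on the final component.

```python
import numpy as np

def truncated_stick_breaking(alpha, T, rng):
    """Weights of a DP truncated at T atoms via stick breaking.

    v_k ~ Beta(1, alpha); the last break is forced to 1 so the leftover
    mass lands on atom T, the 'artificially large' final component.
    """
    v = rng.beta(1.0, alpha, size=T)
    v[-1] = 1.0                         # absorb the remaining stick
    remaining = np.cumprod(np.hstack([1.0, 1.0 - v[:-1]]))
    return v * remaining                # w_k = v_k * prod_{j<k} (1 - v_j)

rng = np.random.default_rng(0)
w = truncated_stick_breaking(alpha=5.0, T=10, rng=rng)
```

On average the forced final atom carries (α/(α+1))^{T-1} of the mass, so for large α a shallow truncation visibly distorts the weight distribution, which motivates truncation-free alternatives such as the memoized variational approach in the title.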
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mejri, Youssef, E-mail: josef-bizert@hotmail.fr; Dép. des Mathématiques, Faculté des Sciences de Bizerte, 7021 Jarzouna; Laboratoire de Modélisation Mathématique et Numérique dans les Sciences de l’Ingénieur, ENIT BP 37, Le Belvedere, 1002 Tunis
In this article, we study the boundary inverse problem of determining the aligned magnetic field appearing in the magnetic Schrödinger equation in a periodic quantum cylindrical waveguide, by knowledge of the Dirichlet-to-Neumann map. We prove a Hölder stability estimate with respect to the Dirichlet-to-Neumann map, by means of the geometrical optics solutions of the magnetic Schrödinger equation.
Constructing Weyl group multiple Dirichlet series
NASA Astrophysics Data System (ADS)
Chinta, Gautam; Gunnells, Paul E.
2010-01-01
Let Φ be a reduced root system of rank r. A Weyl group multiple Dirichlet series for Φ is a Dirichlet series in r complex variables s_1, …, s_r, initially converging for Re(s_i) sufficiently large, that has meromorphic continuation to ℂ^r and satisfies functional equations under the transformations of ℂ^r corresponding to the Weyl group of Φ. A heuristic definition of such a series was given by Brubaker, Bump, Chinta, Friedberg, and Hoffstein, and such series have been investigated in certain special cases by others. In this paper we generalize results by Chinta and Gunnells to construct Weyl group multiple Dirichlet series by a uniform method and show in all cases that they have the expected properties.
Yu, Zhiguo; Nguyen, Thang; Dhombres, Ferdinand; Johnson, Todd; Bodenreider, Olivier
2018-01-01
Extracting and understanding information, themes and relationships from large collections of documents is an important task for biomedical researchers. Latent Dirichlet Allocation is an unsupervised topic modeling technique using the bag-of-words assumption that has been applied extensively to unveil hidden thematic information within large sets of documents. In this paper, we added MeSH descriptors to the bag-of-words assumption to generate ‘hybrid topics’, which are mixed vectors of words and descriptors. We evaluated this approach on the quality and interpretability of topics in both a general corpus and a specialized corpus. Our results demonstrated that the coherence of ‘hybrid topics’ is higher than that of regular bag-of-words topics in the specialized corpus. We also found that the proportion of topics that are not associated with MeSH descriptors is higher in the specialized corpus than in the general corpus. PMID:29295179
A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times
Heath, Tracy A.
2012-01-01
In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
Negative Binomial Process Count and Mixture Modeling.
Zhou, Mingyuan; Carin, Lawrence
2015-02-01
The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson-distributed finite number of distinct atoms, each of which is associated with a logarithmically distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
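The gamma-Poisson construction at the heart of the NB process can be checked in miniature (an illustrative simulation with assumed parameters, not the paper's hierarchical model): mixing a Poisson rate over a gamma distribution yields negative binomial counts with mean r·p/(1-p) and variance r·p/(1-p)^2.

```python
import numpy as np

def gamma_poisson_counts(r, p, size, rng):
    """Draw NB(r, p) counts by gamma-Poisson mixing:
    lambda ~ Gamma(shape=r, scale=p/(1-p)), count ~ Poisson(lambda)."""
    lam = rng.gamma(shape=r, scale=p / (1.0 - p), size=size)
    return rng.poisson(lam)

rng = np.random.default_rng(0)
counts = gamma_poisson_counts(r=2.0, p=0.5, size=200_000, rng=rng)
# for r = 2, p = 0.5: NB mean = r*p/(1-p) = 2, NB variance = r*p/(1-p)^2 = 4
```

The overdispersion (variance exceeding the mean) is exactly what distinguishes the NB count model from the plain Poisson, and is why inferring the dispersion parameter r matters in the applications above.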
Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers
ERIC Educational Resources Information Center
Anaya, Leticia H.
2011-01-01
In the Information Age, a proliferation of unstructured electronic text documents exists. Processing these documents by humans is a daunting task, as humans have limited cognitive abilities for processing large volumes of documents that can often be extremely lengthy. To address this problem, computer algorithms for processing text data are being developed.…
A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION
We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...
Bounded solutions in a T-shaped waveguide and the spectral properties of the Dirichlet ladder
NASA Astrophysics Data System (ADS)
Nazarov, S. A.
2014-08-01
The Dirichlet problem is considered on the junction of thin quantum waveguides (of thickness h ≪ 1) in the shape of an infinite two-dimensional ladder. Passage to the limit as h → +0 is discussed. It is shown that the asymptotically correct transmission conditions at nodes of the corresponding one-dimensional quantum graph are Dirichlet conditions rather than the conventional Kirchhoff transmission conditions. The result is obtained by analyzing bounded solutions of a problem in the T-shaped waveguide that exhibits the boundary layer phenomenon.
General stability of memory-type thermoelastic Timoshenko beam acting on shear force
NASA Astrophysics Data System (ADS)
Apalara, Tijani A.
2018-03-01
In this paper, we consider a linear thermoelastic Timoshenko system with memory effects where the thermoelastic coupling is acting on shear force under Neumann-Dirichlet-Dirichlet boundary conditions. The same system with fully Dirichlet boundary conditions was considered by Messaoudi and Fareh (Nonlinear Anal TMA 74(18):6895-6906, 2011, Acta Math Sci 33(1):23-40, 2013), but they obtained a general stability result which depends on the speeds of wave propagation. In our case, we obtained a general stability result irrespective of the wave speeds of the system.
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
Diffusion Processes Satisfying a Conservation Law Constraint
Bakosi, J.; Ristorcelli, J. R.
2014-03-04
We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.
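The role of a diffusion term that vanishes on the boundary can be seen in a minimal Euler-Maruyama sketch of the K = 2 (Wright-Fisher-type) case; all coefficient values below are illustrative, and the clip is only a guard against discretization overshoot, not part of the continuous-time model:

```python
import math
import random

def wright_fisher_path(x0, kappa, mu, b, dt, nsteps, rng):
    """Euler-Maruyama discretization of the scalar diffusion
        dx = kappa*(mu - x) dt + sqrt(b * x * (1 - x)) dW.
    The diffusion coefficient vanishes at x = 0 and x = 1, which is what
    keeps the pair (x, 1-x) non-negative with unit sum in continuous time;
    the clip guards against finite-step overshoot."""
    x = x0
    path = [x]
    for _ in range(nsteps):
        drift = kappa * (mu - x)
        diff = math.sqrt(max(b * x * (1.0 - x), 0.0))
        x += drift * dt + diff * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = min(max(x, 0.0), 1.0)  # numerical guard at the boundary
        path.append(x)
    return path

rng = random.Random(3)
path = wright_fisher_path(0.5, kappa=1.0, mu=0.3, b=0.5,
                          dt=0.01, nsteps=1000, rng=rng)
```

Note that an additive (constant) diffusion term in place of sqrt(b·x·(1−x)) would eventually push the path outside [0, 1], which is exactly the realizability failure the constraints in the paper rule out.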
Application of the perfectly matched layer in 3-D marine controlled-source electromagnetic modelling
NASA Astrophysics Data System (ADS)
Li, Gang; Li, Yuguo; Han, Bo; Liu, Zhan
2018-01-01
In this study, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 3-D frequency-domain marine controlled-source electromagnetic (CSEM) field modelling. The Dirichlet boundary, which is usually used within the traditional framework of EM modelling algorithms, assumes that the electric or magnetic field values are zero at the boundaries. This requires the boundaries to be sufficiently far away from the area of interest. To mitigate the boundary artefacts, a large modelling area may be necessary even though cell sizes are allowed to grow toward the boundaries due to the diffusion of the electromagnetic wave propagation. Compared with the conventional Dirichlet boundary, the PML boundary is preferred, as the modelling area of interest can be restricted to the target region and only a few surrounding absorbing layers can effectively suppress the artificial boundary effect without losing numerical accuracy. Furthermore, for joint inversion of seismic and marine CSEM data, if we use the PML for CSEM field simulation instead of the conventional Dirichlet boundary, the modelling area for these two different geophysical data sets collected from the same survey area can be the same, which is convenient for joint inversion grid matching. We apply the CFS-PML boundary to 3-D marine CSEM modelling by using the staggered finite-difference discretization. Numerical tests indicate that the modelling algorithm using the CFS-PML shows good accuracy compared to the Dirichlet boundary. Furthermore, the modelling algorithm using the CFS-PML offers savings in computational time and memory compared with the Dirichlet boundary. For the 3-D example in this study, the memory saving using the PML is nearly 42 per cent and the time saving is around 48 per cent compared to using the Dirichlet boundary.
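The coordinate stretching behind the CFS-PML can be sketched with the standard stretching factor s(ω) = κ + σ/(α + iω); the polynomial grading profile and all parameter values below are assumptions for illustration, not taken from the paper:

```python
import math

def cfs_stretch(sigma, kappa, alpha, omega):
    """Complex frequency-shifted PML stretching factor
        s(omega) = kappa + sigma / (alpha + i*omega).
    Coordinates inside the absorbing layer are scaled by s, which damps
    outgoing waves; alpha > 0 shifts the pole off the real axis, improving
    absorption of low-frequency and evanescent energy."""
    return kappa + sigma / complex(alpha, omega)

def sigma_profile(depth_frac, sigma_max, order=2):
    """A common choice: sigma grows polynomially toward the outer PML edge."""
    return sigma_max * depth_frac ** order

# Stretching factor halfway into a hypothetical PML, at 1 Hz.
s = cfs_stretch(sigma_profile(0.5, 10.0), kappa=1.0, alpha=0.1,
                omega=2 * math.pi * 1.0)
```

Outside the layer sigma is zero, so s reduces to kappa and the governing equations are unchanged; only the few absorbing layers see the complex stretch.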
A Dirichlet-Multinomial Bayes Classifier for Disease Diagnosis with Microbial Compositions.
Gao, Xiang; Lin, Huaiying; Dong, Qunfeng
2017-01-01
Dysbiosis of microbial communities is associated with various human diseases, raising the possibility of using microbial compositions as biomarkers for disease diagnosis. We have developed a Bayes classifier by modeling microbial compositions with Dirichlet-multinomial distributions, which are widely used to model multicategorical count data with extra variation. The parameters of the Dirichlet-multinomial distributions are estimated from training microbiome data sets based on maximum likelihood. The posterior probability of a microbiome sample belonging to a disease or healthy category is calculated based on Bayes' theorem, using the likelihood values computed from the estimated Dirichlet-multinomial distribution, as well as a prior probability estimated from the training microbiome data set or previously published information on disease prevalence. When tested on real-world microbiome data sets, our method, called DMBC (for Dirichlet-multinomial Bayes classifier), shows better classification accuracy than the only existing Bayesian microbiome classifier based on a Dirichlet-multinomial mixture model and the popular random forest method. The advantage of DMBC is its built-in automatic feature selection, capable of identifying a subset of microbial taxa with the best classification accuracy between different classes of samples based on cross-validation. This unique ability enables DMBC to maintain and even improve its accuracy at modeling species-level taxa. The R package for DMBC is freely available at https://github.com/qunfengdong/DMBC. IMPORTANCE By incorporating prior information on disease prevalence, Bayes classifiers have the potential to estimate disease probability better than other common machine-learning methods. Thus, it is important to develop Bayes classifiers specifically tailored for microbiome data. 
Our method shows higher classification accuracy than the only existing Bayesian classifier and the popular random forest method, and thus provides an alternative option for using microbial compositions for disease diagnosis.
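The heart of such a classifier is the Dirichlet-multinomial log-likelihood, which can be computed stably with log-gamma functions. In the sketch below, the class-specific Dirichlet parameters and the priors are hypothetical placeholders, not fitted values from DMBC:

```python
from math import lgamma, log

def dm_loglik(counts, alpha):
    """Dirichlet-multinomial log-likelihood of one sample's taxon counts,
    up to the multinomial coefficient (identical across classes, so it
    cancels in the Bayes classifier)."""
    A, n = sum(alpha), sum(counts)
    ll = lgamma(A) - lgamma(n + A)
    for x, a in zip(counts, alpha):
        ll += lgamma(x + a) - lgamma(a)
    return ll

def classify(counts, class_alphas, priors):
    """Return the class with the highest posterior log-probability."""
    scores = {c: log(priors[c]) + dm_loglik(counts, a)
              for c, a in class_alphas.items()}
    return max(scores, key=scores.get)

# Hypothetical fitted parameters: 'disease' samples skew toward taxon 0.
alphas = {"healthy": [1.0, 5.0, 5.0], "disease": [8.0, 1.0, 1.0]}
label = classify([30, 2, 1], alphas, {"healthy": 0.5, "disease": 0.5})
```

Replacing the flat 0.5/0.5 prior with a published disease-prevalence estimate is the step that distinguishes this Bayes classifier from a plain likelihood-ratio rule.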
NASA Astrophysics Data System (ADS)
Feehan, Paul M. N.
2017-09-01
We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with "mixed" boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the "degenerate" and "non-degenerate" boundaries touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite "slab". The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16].
Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].
Organizing Books and Authors by Multilayer SOM.
Zhang, Haijun; Chow, Tommy W S; Wu, Q M Jonathan
2016-12-01
This paper introduces a new framework for the organization of electronic books (e-books) and their corresponding authors using a multilayer self-organizing map (MLSOM). An author is modeled by a rich tree-structured representation, and an MLSOM-based system is used as an efficient solution to the organizational problem of structured data. The tree-structured representation formulates author features in a hierarchy of author biography, books, pages, and paragraphs. To efficiently tackle the tree-structured representation, we used an MLSOM algorithm that serves as a clustering technique to handle e-books and their corresponding authors. A book and author recommender system is then implemented using the proposed framework. The effectiveness of our approach was examined in a large-scale data set containing 3868 authors along with the 10500 e-books that they wrote. We also provided visualization results of MLSOM for revealing the relevance patterns hidden from presented author clusters. The experimental results corroborate that the proposed method outperforms other content-based models (e.g., rate adapting Poisson, latent Dirichlet allocation, probabilistic latent semantic indexing, and so on) and offers a promising solution to book recommendation, author recommendation, and visualization.
Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Luo, Xiangfeng
2015-12-01
Graph mining has been a popular research area because of its numerous application scenarios. Many unstructured and structured data can be represented as graphs, such as documents, chemical molecular structures, and images. However, an issue in relation to current research on graphs is that they cannot adequately discover the topics hidden in graph-structured data, which can be beneficial for both the unsupervised learning and supervised learning of the graphs. Although topic models have proved to be very successful in discovering latent topics, the standard topic models cannot be directly applied to graph-structured data due to the "bag-of-words" assumption. In this paper, an innovative graph topic model (GTM) is proposed to address this issue, which uses Bernoulli distributions to model the edges between nodes in a graph. It can, therefore, make the edges in a graph contribute to latent topic discovery and further improve the accuracy of the supervised and unsupervised learning of graphs. The experimental results on two different types of graph datasets show that the proposed GTM outperforms the latent Dirichlet allocation on classification by using the unveiled topics of these two models to represent graphs.
Sine-gordon type field in spacetime of arbitrary dimension. II: Stochastic quantization
NASA Astrophysics Data System (ADS)
Kirillov, A. I.
1995-11-01
Using the theory of Dirichlet forms, we prove the existence of a distribution-valued diffusion process such that the Nelson measure of a field with a bounded interaction density is its invariant probability measure. A Langevin equation in mathematically correct form is formulated which is satisfied by the process. The drift term of the equation is interpreted as a renormalized Euclidean current operator.
NASA Astrophysics Data System (ADS)
Ben Amara, Jamel; Bouzidi, Hedi
2018-01-01
In this paper, we consider a linear hybrid system which is composed by two non-homogeneous rods connected by a point mass with Dirichlet boundary conditions on the left end and a boundary control acts on the right end. We prove that this system is null controllable with Dirichlet or Neumann boundary controls. Our approach is mainly based on a detailed spectral analysis together with the moment method. In particular, we show that the associated spectral gap in both cases (Dirichlet or Neumann boundary controls) is positive without further conditions on the coefficients other than the regularities.
NASA Astrophysics Data System (ADS)
Grobbelaar-Van Dalsen, Marié
2015-02-01
In this article, we are concerned with the polynomial stabilization of a two-dimensional thermoelastic Mindlin-Timoshenko plate model with no mechanical damping. The model is subject to Dirichlet boundary conditions on the elastic as well as the thermal variables. The work complements our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 64:1305-1325, 2013) on the polynomial stabilization of a Mindlin-Timoshenko model in a radially symmetric domain under Dirichlet boundary conditions on the displacement and thermal variables and free boundary conditions on the shear angle variables. In particular, our aim is to investigate the effect of the Dirichlet boundary conditions on all the variables on the polynomial decay rate of the model. By once more applying a frequency domain method, in which we make critical use of an inequality for the trace of Sobolev functions on the boundary of a bounded, open connected set, we show that the decay is slower than in the model considered in the cited work. A comparison of our result with our polynomial decay result for a magnetoelastic Mindlin-Timoshenko model subject to Dirichlet boundary conditions on the elastic variables in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) also indicates a correlation between the robustness of the coupling between parabolic and hyperbolic dynamics and the polynomial decay rate in the two models.
Latent Dirichlet Allocation (LDA) Model and kNN Algorithm to Classify Research Project Selection
NASA Astrophysics Data System (ADS)
Safi’ie, M. A.; Utami, E.; Fatta, H. A.
2018-03-01
Universitas Sebelas Maret has a teaching staff of more than 1500 people, one of whose tasks is to carry out research. On the other hand, funding support for research and community service is limited, so proposals for research and devotion to society (P2M) must be evaluated to determine which submissions receive support. At the selection stage, research proposal documents are collected as unstructured data, and the volume of stored data is very large. Extracting the information contained in these documents requires text mining technology, which is applied to gain knowledge from the documents by automating information extraction. In this article we apply Latent Dirichlet Allocation (LDA) to the documents as a model in the feature extraction process, to obtain terms that represent each document. We then use the k-Nearest Neighbour (kNN) algorithm to classify the documents based on these terms.
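Once LDA has reduced each proposal to a topic-proportion vector, the kNN step can be sketched as a majority vote among the nearest training vectors. The topic vectors and labels below are hypothetical, standing in for the fitted LDA output:

```python
import math
from collections import Counter

def knn_predict(query, train, k=3):
    """Classify a document's LDA topic-proportion vector by majority vote
    among its k nearest training vectors (Euclidean distance)."""
    dists = sorted((math.dist(query, vec), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical topic proportions for already-labeled proposals.
train = [
    ([0.8, 0.1, 0.1], "accepted"),
    ([0.7, 0.2, 0.1], "accepted"),
    ([0.1, 0.1, 0.8], "rejected"),
    ([0.2, 0.1, 0.7], "rejected"),
]
pred = knn_predict([0.75, 0.15, 0.10], train, k=3)
```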
A Bayesian Semiparametric Item Response Model with Dirichlet Process Priors
ERIC Educational Resources Information Center
Miyazaki, Kei; Hoshino, Takahiro
2009-01-01
In Item Response Theory (IRT), item characteristic curves (ICCs) are illustrated through logistic models or normal ogive models, and the probability that examinees give the correct answer is usually a monotonically increasing function of their ability parameters. However, since only limited patterns of shapes can be obtained from logistic models…
Using Dirichlet Processes for Modeling Heterogeneous Treatment Effects across Sites
ERIC Educational Resources Information Center
Miratrix, Luke; Feller, Avi; Pillai, Natesh; Pati, Debdeep
2016-01-01
Modeling the distribution of site level effects is an important problem, but it is also an incredibly difficult one. Current methods rely on distributional assumptions in multilevel models for estimation. There it is hoped that the partial pooling of site level estimates with overall estimates, designed to take into account individual variation as…
Nonparametric Bayesian predictive distributions for future order statistics
Richard A. Johnson; James W. Evans; David W. Green
1999-01-01
We derive the predictive distribution for a specified order statistic, determined from a future random sample, under a Dirichlet process prior. Two variants of the approach are treated and some limiting cases studied. A practical application to monitoring the strength of lumber is discussed including choices of prior expectation and comparisons made to a Bayesian...
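The DP predictive can be sketched via the Blackwell-MacQueen urn: each future observation either copies a past value (with probability proportional to the current sample size) or is a fresh draw from the base measure (with probability proportional to the concentration parameter). The data values, base measure, and parameters below are illustrative, not taken from the lumber application:

```python
import random

def predictive_order_statistic(data, alpha, base_draw, m, r, nsim, rng):
    """Monte Carlo draws from the predictive distribution of the r-th order
    statistic of a future sample of size m, under a Dirichlet process prior
    with concentration alpha and base measure sampler base_draw."""
    sims = []
    for _ in range(nsim):
        pool = list(data)
        future = []
        for _ in range(m):
            n = len(pool)
            if rng.random() < n / (n + alpha):
                v = rng.choice(pool)        # copy a previously seen value
            else:
                v = base_draw(rng)          # fresh draw from the base measure
            pool.append(v)
            future.append(v)
        future.sort()
        sims.append(future[r - 1])
    return sims

rng = random.Random(11)
data = [5.1, 5.4, 5.9, 6.2, 6.8]  # hypothetical strength measurements
sims = predictive_order_statistic(
    data, alpha=2.0, base_draw=lambda g: g.uniform(4.0, 8.0),
    m=10, r=1, nsim=2000, rng=rng)
```

The empirical distribution of `sims` approximates the predictive distribution of the sample minimum (r = 1) of the next m observations, which is the kind of quantity the lumber-monitoring application tracks.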
An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.
Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei
2013-05-01
Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
Chen, Yun; Yang, Hui
2016-01-01
In the era of big data, there is increasing interest in clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
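The mutual-information measure between discretized variables can be computed from a joint frequency table. The sketch below (with illustrative data) shows it detecting a deterministic nonlinear dependence while vanishing for an unrelated constant variable:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discretized
    variables, computed from their joint frequency table."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n
        # pj / (p(x) * p(y)) written to avoid building probability dicts
        mi += pj * math.log(pj * n * n / (px[x] * py[y]))
    return mi

# A deterministic nonlinear dependence (y = x^2 mod 3) yields high MI;
# an unrelated constant variable yields zero.
xs = [i % 3 for i in range(300)]
ys = [(x * x) % 3 for x in xs]
zs = [0] * 300
mi_xy = mutual_information(xs, ys)
```

Because y is a deterministic function of x here, mi_xy equals the entropy of y (about 0.637 nats), whereas a linear correlation coefficient could miss such a relation entirely.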
Gross, Alexander; Murthy, Dhiraj
2014-10-01
This paper explores a variety of methods for applying the Latent Dirichlet Allocation (LDA) automated topic modeling algorithm to the modeling of the structure and behavior of virtual organizations found within modern social media and social networking environments. As the field of Big Data reveals, an increase in the scale of social data available presents new challenges which are not tackled by merely scaling up hardware and software. Rather, they necessitate new methods and, indeed, new areas of expertise. Natural language processing provides one such method. This paper applies LDA to the study of scientific virtual organizations whose members employ social technologies. Because of the vast data footprint in these virtual platforms, we found that natural language processing was needed to 'unlock' and render visible latent, previously unseen conversational connections across large textual corpora (spanning profiles, discussion threads, forums, and other social media incarnations). We introduce variants of LDA and ultimately make the argument that natural language processing is a critical interdisciplinary methodology to make better sense of social 'Big Data' and we were able to successfully model nested discussion topics from forums and blog posts using LDA. Importantly, we found that LDA can move us beyond the state-of-the-art in conventional Social Network Analysis techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
On the Dirichlet's Box Principle
ERIC Educational Resources Information Center
Poon, Kin-Keung; Shiu, Wai-Chee
2008-01-01
In this note, we will focus on several applications of Dirichlet's box principle in Discrete Mathematics lessons and number theory lessons. In addition, the main result is an innovative game on a triangular board developed by the authors. The game has been used in teaching and learning mathematics in Discrete Mathematics and some high schools in…
Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave
2014-01-01
We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...
Uniform gradient estimates on manifolds with a boundary and applications
NASA Astrophysics Data System (ADS)
Cheng, Li-Juan; Thalmaier, Anton; Thompson, James
2018-04-01
We revisit the problem of obtaining uniform gradient estimates for Dirichlet and Neumann heat semigroups on Riemannian manifolds with boundary. As applications, we obtain isoperimetric inequalities, using Ledoux's argument, and uniform quantitative gradient estimates, firstly for C^2_b functions with boundary conditions and then for the unit spectral projection operators of Dirichlet and Neumann Laplacians.
Dirichlet to Neumann operator for Abelian Yang-Mills gauge fields
NASA Astrophysics Data System (ADS)
Díaz-Marín, Homero G.
We consider the Dirichlet to Neumann operator for Abelian Yang-Mills boundary conditions. The aim is to construct a complex structure for the symplectic space of boundary conditions of Euler-Lagrange solutions modulo gauge for space-time manifolds with smooth boundary. We thus prepare a suitable scenario for geometric quantization within the reduced symplectic space of boundary conditions of Abelian gauge fields.
An improved approximate-Bayesian model-choice method for estimating shared evolutionary history
2014-01-01
Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate that the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process prior providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
Stochastic search, optimization and regression with energy applications
NASA Astrophysics Data System (ADS)
Hannah, Lauren A.
Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values---which depend on the selected portfolio---to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet Process mixtures of Generalized Linear Models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response.
We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel based weights and more generally that nonparametric estimation methods provide good solutions to otherwise intractable problems.
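The Dirichlet process weights used above can be sketched with the standard stick-breaking construction: each weight is a Beta(1, alpha) fraction of the stick remaining after earlier breaks. The truncation level and concentration parameter below are arbitrary illustrative choices, not values from the thesis.

```python
# Truncated stick-breaking sketch of Dirichlet process weights.
import random

def stick_breaking(alpha, truncation):
    """Return approximate DP weights via truncated stick-breaking."""
    weights, remaining = [], 1.0
    for _ in range(truncation):
        v = random.betavariate(1.0, alpha)  # Beta(1, alpha) break fraction
        weights.append(remaining * v)
        remaining *= 1.0 - v
    weights.append(remaining)  # mass left on the truncated tail
    return weights

w = stick_breaking(alpha=2.0, truncation=100)
```

Larger alpha spreads mass over more components; smaller alpha concentrates it on the first few, which is why the concentration parameter controls the effective number of clusters.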
NASA Astrophysics Data System (ADS)
Ciarlet, P.
1994-09-01
Hereafter, we describe and analyze, from both a theoretical and a numerical point of view, an iterative method for efficiently solving symmetric elliptic problems with possibly discontinuous coefficients. In the following, we use the Preconditioned Conjugate Gradient method to solve the symmetric positive definite linear systems which arise from the finite element discretization of the problems. We focus our interest on sparse and efficient preconditioners. In order to define the preconditioners, we perform two steps: first we reorder the unknowns and then we carry out a (modified) incomplete factorization of the original matrix. We study numerically and theoretically two preconditioners, the second preconditioner corresponding to the one investigated by Brand and Heinemann [2]. We prove convergence results about the Poisson equation with either Dirichlet or periodic boundary conditions. For a mesh size h, Brand proved that the condition number of the preconditioned system is bounded by O(h^{-1/2}) for Dirichlet boundary conditions. By slightly modifying the preconditioning process, we prove that the condition number is bounded by O(h^{-1/3}).
Li, Xian-Ying; Hu, Shi-Min
2013-02-01
Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
Machine learning in sentiment reconstruction of the simulated stock market
NASA Astrophysics Data System (ADS)
Goykhman, Mikhail; Teimouri, Ali
2018-02-01
In this paper we continue the study of the simulated stock market framework defined by the driving sentiment processes. We focus on the market environment driven by the buy/sell trading sentiment process of the Markov chain type. We apply the methodology of Hidden Markov Models and Recurrent Neural Networks to reconstruct the transition probability matrix of the Markov sentiment process and recover the underlying sentiment states from the observed stock price behavior. We demonstrate that the Hidden Markov Model can successfully recover the transition probability matrix for the hidden sentiment process of the Markov chain type. We also demonstrate that the Recurrent Neural Network can successfully recover the hidden sentiment states from the observed simulated stock price time series.
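Hidden-state recovery of this kind can be illustrated with the standard Viterbi algorithm for a two-state HMM. All parameters below (states, transition and emission probabilities, observations) are invented for the sketch; the paper estimates such quantities from simulated price data rather than assuming them.

```python
# Viterbi decoding for a two-state hidden Markov model (illustrative).
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for discrete observations."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t-1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t-1][prev] + math.log(trans_p[prev][s])
                       + math.log(emit_p[s][obs[t]]))
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

states = ("buy", "sell")
start_p = {"buy": 0.5, "sell": 0.5}
trans_p = {"buy": {"buy": 0.9, "sell": 0.1}, "sell": {"buy": 0.1, "sell": 0.9}}
emit_p = {"buy": {"up": 0.8, "down": 0.2}, "sell": {"up": 0.2, "down": 0.8}}
obs = ["up", "up", "down", "down", "down", "up"]
path = viterbi(obs, states, start_p, trans_p, emit_p)
```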
Generalized Riemann hypothesis and stochastic time series
NASA Astrophysics Data System (ADS)
Mussardo, Giuseppe; LeClair, André
2018-06-01
Using the Dirichlet theorem on the equidistribution of residue classes modulo q and the Lemke Oliver–Soundararajan conjecture on the distribution of pairs of residues on consecutive primes, we show that the domain of convergence of the infinite product of Dirichlet L-functions of non-principal characters can be extended from Re(s) > 1 down to Re(s) > 1/2, without encountering any zeros before reaching this critical line. The possibility of doing so can be traced back to a universal diffusive random walk behavior of a series C_N over the primes which underlies the convergence of the infinite product of the Dirichlet functions. The series C_N presents several aspects in common with stochastic time series, and its control requires addressing a problem similar to the single Brownian trajectory problem in statistical mechanics. In the case of the Dirichlet functions of non-principal characters, we show that this problem can be solved in terms of a self-averaging procedure based on an ensemble of block variables computed on extended intervals of primes. Those intervals, called inertial intervals, ensure the ergodicity and stationarity of the time series underlying the quantity C_N. The infinity of primes also ensures the absence of rare events which would have been responsible for a different scaling behavior than the universal law of random walks.
How hidden are hidden processes? A primer on crypticity and entropy convergence
NASA Astrophysics Data System (ADS)
Mahoney, John R.; Ellison, Christopher J.; James, Ryan G.; Crutchfield, James P.
2011-09-01
We investigate a stationary process's crypticity—a measure of the difference between its hidden state information and its observed information—using the causal states of computational mechanics. Here, we motivate crypticity and cryptic order as physically meaningful quantities that monitor how hidden a hidden process is. This is done by recasting previous results on the convergence of block entropy and block-state entropy in a geometric setting, one that is more intuitive and that leads to a number of new results. For example, we connect crypticity to how an observer synchronizes to a process. We show that the block-causal-state entropy is a convex function of block length. We give a complete analysis of spin chains. We present a classification scheme that surveys stationary processes in terms of their possible cryptic and Markov orders. We illustrate related entropy convergence behaviors using a new form of foliated information diagram. Finally, along the way, we provide a variety of interpretations of crypticity and cryptic order to establish their naturalness and pervasiveness. This is also a first step in developing applications in spatially extended and network dynamical systems.
DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.
Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei
2018-01-01
Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifier (UMI). Despite the technology advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. Particularly, model-based approaches for clustering large-scale single cell transcriptomic data are still under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available on www.pitt.edu/∼wec47/singlecell.html. wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online. © The Author 2017. 
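The building block of a Dirichlet mixture model for count vectors such as UMI counts is the Dirichlet-multinomial marginal likelihood, which can be used to score a cell against candidate clusters. This is a generic sketch, not DIMM-SC itself: the cluster parameters are invented, and the multinomial coefficient is omitted because it is identical across clusters and does not affect the comparison.

```python
# Dirichlet-multinomial scoring of a count vector against clusters
# (illustrative sketch; alphas are made up, multinomial coefficient
# dropped since it is constant across clusters).
from math import lgamma

def dirichlet_multinomial_loglik(counts, alpha):
    """log p(counts | alpha) under a Dirichlet-multinomial (up to a constant)."""
    n, A = sum(counts), sum(alpha)
    ll = lgamma(A) - lgamma(A + n)
    for x, a in zip(counts, alpha):
        ll += lgamma(a + x) - lgamma(a)
    return ll

cluster_alphas = {
    "cluster1": [10.0, 1.0, 1.0],   # profile concentrated on gene 0
    "cluster2": [1.0, 1.0, 10.0],   # profile concentrated on gene 2
}
cell = [30, 2, 1]                    # UMI counts for three genes in one cell
best = max(cluster_alphas,
           key=lambda c: dirichlet_multinomial_loglik(cell, cluster_alphas[c]))
```

In a full mixture model these per-cluster likelihoods would be combined with mixing weights to give the per-cell posterior cluster probabilities that quantify clustering uncertainty.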
Ages of Records in Random Walks
NASA Astrophysics Data System (ADS)
Szabó, Réka; Vető, Bálint
2016-12-01
We consider random walks with continuous and symmetric step distributions. We prove universal asymptotics for the average proportion of the age of the kth longest lasting record for k = 1, 2, … and for the probability that the record of the kth longest age is broken at step n. Due to the relation to the Chinese restaurant process, the ranked sequence of proportions of ages converges to the Poisson-Dirichlet distribution.
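The Chinese restaurant process mentioned above is easy to simulate: customer n+1 joins an existing table with probability proportional to its occupancy, or opens a new table with probability proportional to a concentration parameter theta. The ranked occupancy proportions of such a simulation are a finite-n approximation of a Poisson-Dirichlet sample (the value theta = 1 below is an arbitrary illustrative choice).

```python
# Chinese restaurant process simulation (illustrative sketch).
import random

def chinese_restaurant_process(n, theta=1.0):
    """Return table occupancy counts after seating n customers."""
    tables = []  # occupancy counts, one entry per table
    for _ in range(n):
        weights = tables + [theta]  # existing tables, plus a new one
        k = random.choices(range(len(weights)), weights=weights)[0]
        if k == len(tables):
            tables.append(1)        # open a new table
        else:
            tables[k] += 1
    return tables

random.seed(0)
tables = chinese_restaurant_process(1000)
proportions = sorted((t / 1000 for t in tables), reverse=True)
```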
Lu, Yisu; Jiang, Jun; Yang, Wei; Feng, Qianjin; Chen, Wufan
2014-01-01
Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without the initialization of the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides the segmentation of single-modal brain-tumor images, we developed the algorithm to segment multimodal brain-tumor images using magnetic resonance (MR) multimodal features and to obtain the active tumor and edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance and great potential for practical real-time clinical use.
Reuter, Martin; Wolter, Franz-Erich; Shenton, Martha; Niethammer, Marc
2009-01-01
This paper proposes the use of the surface-based Laplace-Beltrami and the volumetric Laplace eigenvalues and -functions as shape descriptors for the comparison and analysis of shapes. These spectral measures are isometry invariant and therefore allow for shape comparisons with minimal shape pre-processing. In particular, no registration, mapping, or remeshing is necessary. The discriminatory power of the 2D surface and 3D solid methods is demonstrated on a population of female caudate nuclei (a subcortical gray matter structure of the brain, involved in memory function, emotion processing, and learning) of normal control subjects and of subjects with schizotypal personality disorder. The behavior and properties of the Laplace-Beltrami eigenvalues and -functions are discussed extensively for both the Dirichlet and Neumann boundary conditions, showing advantages of the Neumann vs. the Dirichlet spectra in 3D. Furthermore, topological analyses employing the Morse-Smale complex (on the surfaces) and the Reeb graph (in the solids) are performed on selected eigenfunctions, yielding shape descriptors that are capable of localizing geometric properties and detecting shape differences by indirectly registering topological features such as critical points, level sets and integral lines of the gradient field across subjects. The use of these topological features of the Laplace-Beltrami eigenfunctions in 2D and 3D for statistical shape analysis is novel. PMID:20161035
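As a minimal numerical illustration of a Dirichlet spectrum (unrelated to the paper's own pipeline), inverse power iteration recovers the smallest Dirichlet eigenvalue of -u'' on the unit interval, whose exact value is pi^2. The grid size and iteration count below are arbitrary choices for the sketch.

```python
# Smallest Dirichlet eigenvalue of -u'' on (0,1) by finite differences
# and inverse power iteration (pure-Python illustrative sketch).
import math

def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm: a = sub-, b = main, c = super-diagonal, d = RHS."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 200
h = 1.0 / n
m = n - 1                          # interior grid points
sub = [-1.0 / h**2] * m
diag = [2.0 / h**2] * m
sup = [-1.0 / h**2] * m

v = [1.0] * m                      # starting vector
for _ in range(50):                # inverse iteration: solve A v_new = v
    v = solve_tridiagonal(sub, diag, sup, v)
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]

# Rayleigh quotient v . (A v) approximates the smallest eigenvalue.
Av = [diag[i] * v[i]
      + (sub[i] * v[i - 1] if i > 0 else 0.0)
      + (sup[i] * v[i + 1] if i < m - 1 else 0.0)
      for i in range(m)]
lam = sum(x * y for x, y in zip(Av, v))
```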
Image segmentation using hidden Markov Gauss mixture models.
Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M
2007-07-01
Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criteria and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameter and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.
NASA Astrophysics Data System (ADS)
Alessandrini, Giovanni; de Hoop, Maarten V.; Gaburro, Romina
2017-12-01
We discuss the inverse problem of determining the, possibly anisotropic, conductivity of a body Ω ⊂ R^n when the so-called Neumann-to-Dirichlet map is locally given on a non-empty curved portion Σ of the boundary ∂Ω. We prove that anisotropic conductivities that are a priori known to be piecewise constant matrices on a given partition of Ω with curved interfaces can be uniquely determined in the interior from the knowledge of the local Neumann-to-Dirichlet map.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plotnikov, Mikhail G
2011-02-11
Multiple Walsh series (S) on the group G^m are studied. It is proved that every at most countable set is a uniqueness set for series (S) under convergence over cubes. The recovery problem is solved for the coefficients of series (S) that converge outside countable sets or outside sets of Dirichlet type. A number of analogues of the de la Vallée Poussin theorem are established for series (S). Bibliography: 28 titles.
Sedghi, Aliasghar; Rezaei, Behrooz
2016-11-20
Using the Dirichlet-to-Neumann map method, we have calculated the photonic band structure of two-dimensional metallodielectric photonic crystals having square and triangular lattices of circular metal rods in a dielectric background. We have selected the transverse electric mode of electromagnetic waves, and the resulting band structures showed the existence of a photonic bandgap in these structures. We also theoretically study the effect of the background dielectric on the photonic bandgap.
NASA Astrophysics Data System (ADS)
Leonov, G. A.; Kuznetsov, N. V.
From a computational point of view, in nonlinear dynamical systems, attractors can be regarded as self-excited or hidden attractors. Self-excited attractors can be localized numerically by a standard computational procedure, in which, after a transient process, a trajectory starting from a point of an unstable manifold in a neighborhood of an equilibrium reaches a state of oscillation, so one can easily identify it. In contrast, for a hidden attractor, the basin of attraction does not intersect with small neighborhoods of equilibria. Since classical attractors are self-excited, they can be obtained numerically by the standard computational procedure. For the localization of hidden attractors it is necessary to develop special procedures, since there are no similar transient processes leading to such attractors. At first, the problem of investigating hidden oscillations arose in the second part of Hilbert's 16th problem (1900). The first nontrivial results were obtained in Bautin's works, which were devoted to constructing nested limit cycles in quadratic systems and showed the necessity of studying hidden oscillations for solving this problem. Later, the problem of analyzing hidden oscillations arose in engineering problems in automatic control. In the 1950s and 60s, investigations of the widely known Markus-Yamabe, Aizerman, and Kalman conjectures on absolute stability led to the finding of hidden oscillations in automatic control systems with a unique stable stationary point. In 1961, Gubar revealed a gap in Kapranov's work on phase-locked loops (PLL) and showed the possibility of the existence of hidden oscillations in PLL. At the end of the last century, difficulties in analyzing hidden oscillations arose in simulations of drilling systems and aircraft control systems (anti-windup), which caused crashes.
Further investigations of hidden oscillations were greatly encouraged by the present authors' discovery, in 2010, of the first chaotic hidden attractor in Chua's circuit. This survey is dedicated to efficient analytical-numerical methods for the study of hidden oscillations. Here, an attempt is made to reflect the current trends in the synthesis of analytical and numerical methods.
A fast hidden line algorithm with contour option. M.S. Thesis
NASA Technical Reports Server (NTRS)
Thue, R. E.
1984-01-01
The JonesD algorithm was modified to allow the processing of N-sided elements and implemented in conjunction with a 3-D contour generation algorithm. The total hidden line and contour subsystem is implemented in the MOVIE.BYU display package and is compared to the subsystems already existing in the MOVIE.BYU package. The comparison reveals that the modified JonesD hidden line and contour subsystem yields substantial processing-time savings when processing moderate-sized models comprising 1000 elements or fewer. There are, however, some limitations to the modified JonesD subsystem.
A Deep and Autoregressive Approach for Topic Modeling of Multimodal Data.
Zheng, Yin; Zhang, Yu-Jin; Larochelle, Hugo
2016-06-01
Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.
ERIC Educational Resources Information Center
Crouse, Janice Shaw; Crouse, Gilbert L.
Communication scholars have only recently begun to consider internal processes of thought as essential components of interpersonal communication. In 1964 a reorientation of thinking to include intrapersonal processes as integral to the communication process was first urged. The "hidden other" refers to the wellspring of the mind and its…
ERIC Educational Resources Information Center
Hubbard, Barry
2010-01-01
Understanding the influential factors at work within an online learning environment is a growing area of interest. Hidden or implicit expectations, skill sets, knowledge, and social process can help or hinder student achievement, belief systems, and persistence. This qualitative study investigated how hidden curricular issues transpired in an…
Structure and Randomness of Continuous-Time, Discrete-Event Processes
NASA Astrophysics Data System (ADS)
Marzen, Sarah E.; Crutchfield, James P.
2017-10-01
Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and applying new information-theoretic methods to stochastic processes.
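The continuous-time semi-Markov setting is beyond a short sketch, but its discrete-time analogue is direct: for a unifilar model the entropy rate reduces to that of the internal Markov chain, h = -Σ_i π_i Σ_j p_ij log2 p_ij, with π the stationary distribution. The two-state transition matrix below is invented for illustration.

```python
# Entropy rate of a two-state Markov chain (illustrative sketch;
# the discrete-time analogue of the unifilar case in the paper).
import math

P = [[0.9, 0.1],
     [0.5, 0.5]]

# Stationary distribution of a 2-state chain from pi P = pi:
# pi_0 * p01 = pi_1 * p10.
p01, p10 = P[0][1], P[1][0]
pi = [p10 / (p01 + p10), p01 / (p01 + p10)]

# h = - sum_i pi_i sum_j p_ij log2 p_ij  (bits per step)
h = -sum(pi[i] * sum(P[i][j] * math.log2(P[i][j])
                     for j in range(2) if P[i][j] > 0)
         for i in range(2))
```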
A New Family of Solvable Pearson-Dirichlet Random Walks
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2011-07-01
An n-step Pearson-Gamma random walk in ℝ^d starts at the origin and consists of n independent steps with gamma distributed lengths and uniform orientations. The gamma distribution of each step length has a shape parameter q > 0. Constrained random walks of n steps in ℝ^d are obtained from the latter walks by imposing that the sum of the step lengths is equal to a fixed value. Simple closed-form expressions were obtained in particular for the distribution of the endpoint of such constrained walks for any d ≥ d_0 and any n ≥ 2 when q is either q = d/2 - 1 (d_0 = 3) or q = d - 1 (d_0 = 2) (Le Caër in J. Stat. Phys. 140:728-751, 2010). When the total walk length is chosen, without loss of generality, to be equal to 1, then the constrained step lengths have a Dirichlet distribution whose parameters are all equal to q and the associated walk is thus named a Pearson-Dirichlet random walk. The density of the endpoint position of an n-step planar walk of this type (n ≥ 2), with q = d = 2, was shown recently to be a weighted mixture of 1 + floor(n/2) endpoint densities of planar Pearson-Dirichlet walks with q = 1 (Beghin and Orsingher in Stochastics 82:201-229, 2010). The previous result is generalized to any walk space dimension and any number of steps n ≥ 2 when the parameter of the Pearson-Dirichlet random walk is q = d > 1. We rely on the connection between an unconstrained random walk and a constrained one, which both have the same n and the same q = d, to obtain a closed-form expression of the endpoint density. The latter is a weighted mixture of 1 + floor(n/2) densities with simple forms, equivalently expressed as a product of a power and a Gauss hypergeometric function. The weights are products of factors which depend both on d and n and of Bessel numbers independent of d.
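A planar Pearson-Dirichlet walk is simple to sample: draw step lengths from a symmetric Dirichlet (via normalized gamma draws) so they sum to 1, then give each step an independent uniform orientation. The values n = 4 and q = 2 below are arbitrary illustrative choices.

```python
# Sample one planar Pearson-Dirichlet random walk (illustrative sketch).
import math
import random

def pearson_dirichlet_walk_2d(n, q):
    """Return (step lengths summing to 1, endpoint) of an n-step walk."""
    g = [random.gammavariate(q, 1.0) for _ in range(n)]
    s = sum(g)
    lengths = [x / s for x in g]        # Dirichlet(q, ..., q) step lengths
    x = y = 0.0
    for L in lengths:
        phi = random.uniform(0.0, 2.0 * math.pi)  # uniform orientation
        x += L * math.cos(phi)
        y += L * math.sin(phi)
    return lengths, (x, y)

lengths, end = pearson_dirichlet_walk_2d(n=4, q=2.0)
```

Since the total walk length is fixed at 1, the endpoint always lies inside the unit disk.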
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthias C. M. Troffaes; Gero Walter; Dana Kelly
In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model.
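The lower and upper expectations in Walley's imprecise Dirichlet model take a simple standard form: after observing n_i events of a type out of N total, the probability bounds are n_i/(N+s) and (n_i+s)/(N+s), where s is the learning parameter. This sketch shows that generic form only; the paper's alpha-factor elicitation details are not reproduced.

```python
# Imprecise Dirichlet model bounds (standard form, illustrative inputs).
def idm_bounds(n_i, N, s):
    """Lower/upper expectation of a probability after n_i of N observations."""
    lower = n_i / (N + s)
    upper = (n_i + s) / (N + s)
    return lower, upper

lo1, up1 = idm_bounds(n_i=3, N=10, s=2.0)     # few data: wide interval
lo2, up2 = idm_bounds(n_i=30, N=100, s=2.0)   # more data: narrower interval
```

The interval width is s/(N+s), so a larger learning parameter s keeps the bounds cautious for longer, which is exactly why its elicitation matters.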
A TWO-STATE MIXED HIDDEN MARKOV MODEL FOR RISKY TEENAGE DRIVING BEHAVIOR
Jackson, John C.; Albert, Paul S.; Zhang, Zhiwei
2016-01-01
This paper proposes a joint model for longitudinal binary and count outcomes. We apply the model to a unique longitudinal study of teen driving where risky driving behavior and the occurrence of crashes or near crashes are measured prospectively over the first 18 months of licensure. Of scientific interest is relating the two processes and predicting crash and near crash outcomes. We propose a two-state mixed hidden Markov model whereby the hidden state characterizes the mean for the joint longitudinal crash/near crash outcomes and elevated g-force events which are a proxy for risky driving. Heterogeneity is introduced in both the conditional model for the count outcomes and the hidden process using a shared random effect. An estimation procedure is presented using the forward–backward algorithm along with adaptive Gaussian quadrature to perform numerical integration. The estimation procedure readily yields hidden state probabilities as well as providing for a broad class of predictors. PMID:27766124
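The forward-backward algorithm used in the estimation procedure can be illustrated for a plain two-state HMM: it yields the posterior probability of each hidden state at every time point given all observations. The parameters below are invented, and the paper's random effects and count outcomes are omitted from this sketch.

```python
# Forward-backward smoothing for a two-state HMM (illustrative sketch).
def forward_backward(obs, start, trans, emit):
    """Posterior state probabilities at each time given all observations."""
    n, S = len(obs), len(start)
    # Forward pass: alpha[t][s] = p(obs[0..t], state_t = s)
    alpha = [[start[s] * emit[s][obs[0]] for s in range(S)]]
    for t in range(1, n):
        alpha.append([emit[s][obs[t]]
                      * sum(alpha[t-1][p] * trans[p][s] for p in range(S))
                      for s in range(S)])
    # Backward pass: beta[t][s] = p(obs[t+1..] | state_t = s)
    beta = [[1.0] * S for _ in range(n)]
    for t in range(n - 2, -1, -1):
        beta[t] = [sum(trans[s][q] * emit[q][obs[t+1]] * beta[t+1][q]
                       for q in range(S))
                   for s in range(S)]
    post = []
    for t in range(n):
        w = [alpha[t][s] * beta[t][s] for s in range(S)]
        z = sum(w)
        post.append([x / z for x in w])
    return post

start = [0.5, 0.5]
trans = [[0.95, 0.05], [0.10, 0.90]]   # state 0 = low risk, 1 = high risk
emit = [[0.9, 0.1], [0.3, 0.7]]        # P(event | state)
obs = [0, 0, 1, 1, 1, 0]               # 1 = elevated g-force event
post = forward_backward(obs, start, trans, emit)
```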
Priming Interpretations: Contextual Impact on the Processing of Garden Path Jokes
ERIC Educational Resources Information Center
Mayerhofer, Bastian; Maier, Katja; Schacht, Annekathrin
2016-01-01
In garden path (GP) jokes, a first dominant interpretation is detected as incoherent and subsequently substituted by a hidden joke interpretation. Two important factors for the processing of GP jokes are salience of the initial interpretation and accessibility of the hidden interpretation. Both factors are assumed to be affected by contextual…
Thermodynamic Identities and Symmetry Breaking in Short-Range Spin Glasses
NASA Astrophysics Data System (ADS)
Arguin, L.-P.; Newman, C. M.; Stein, D. L.
2015-10-01
We present a technique to generate relations connecting pure state weights, overlaps, and correlation functions in short-range spin glasses. These are obtained directly from the unperturbed Hamiltonian and hold for general coupling distributions. All are satisfied in phases with simple thermodynamic structure, such as the droplet-scaling and chaotic pairs pictures. If instead nontrivial mixed-state pictures hold, the relations suggest that replica symmetry is broken as described by a Derrida-Ruelle cascade, with pure state weights distributed as a Poisson-Dirichlet process.
Low frequency acoustic and electromagnetic scattering
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Maccamy, R. C.
1986-01-01
This paper deals with two classes of problems arising from acoustic and electromagnetic scattering in the low frequency regime. The first class of problems involves solving the Helmholtz equation with Dirichlet boundary conditions on an arbitrary two dimensional body, while the second is an interior-exterior interface problem with the Helmholtz equation in the exterior. Low frequency analysis shows that there are two intermediate problems which solve the above problems accurately to O(k² log k), where k is the frequency. These solutions differ greatly from the zero frequency approximations. For the Dirichlet problem, numerical examples are shown to verify the theoretical estimates.
The first eigenvalue of the p-Laplacian on quantum graphs
NASA Astrophysics Data System (ADS)
Del Pezzo, Leandro M.; Rossi, Julio D.
2016-12-01
We study the first eigenvalue of the p-Laplacian (with 1 < p < ∞) on quantum graphs.
Detecting Anisotropic Inclusions Through EIT
NASA Astrophysics Data System (ADS)
Cristina, Jan; Päivärinta, Lassi
2017-12-01
We study the evolution equation ∂_t u = −Λ_t u, where Λ_t is the Dirichlet-to-Neumann operator of a decreasing family of Riemannian manifolds with boundary Σ_t. We derive a lower bound for the solution of such an equation, and apply it to a quantitative density estimate for the restriction of harmonic functions on M = Σ_0 to the boundaries ∂Σ_t. Consequently we are able to derive a lower bound for the difference of the Dirichlet-to-Neumann maps in terms of the difference of a background metric g and an inclusion metric g + χ_Σ(h − g) on a manifold M.
Hidden Statistics of Schroedinger Equation
NASA Technical Reports Server (NTRS)
Zak, Michail
2011-01-01
Work was carried out to determine the mathematical origin of randomness in quantum mechanics and to create a hidden statistics of the Schrödinger equation; i.e., to expose the transitional stochastic process as a "bridge" to the quantum world. The governing equations of hidden statistics would preserve such properties of quantum physics as superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods.
The Hidden Messages of Secondary Reading Programs: What Students Learn vs. What Teachers Teach.
ERIC Educational Resources Information Center
Battraw, Judith L.
Hidden messages are part of the culture of reading at any school, particularly at the secondary level. In many schools, the overt message that reading is essential to success on state-mandated tests and in society is jeopardized due to hidden messages about the nature of the reading process and the place of reading in everyday life. A qualitative…
Quantification of Operational Risk Using A Data Mining
NASA Technical Reports Server (NTRS)
Perera, J. Sebastian
1999-01-01
What is Data Mining? Data Mining is the process of finding actionable information hidden in raw data. It helps find hidden patterns, trends, and important relationships often buried in a sea of data. Typically, automated software tools based on advanced statistical analysis and data modeling technology can be utilized to automate the data mining process.
NASA Astrophysics Data System (ADS)
Hill, Peter; Shanahan, Brendan; Dudson, Ben
2017-04-01
We present a technique for handling Dirichlet boundary conditions with the Flux Coordinate Independent (FCI) parallel derivative operator with arbitrary-shaped material geometry in general 3D magnetic fields. The FCI method constructs a finite difference scheme for ∇∥ by following field lines between poloidal planes and interpolating within planes. Doing so removes the need for field-aligned coordinate systems that suffer from singularities in the metric tensor at null points in the magnetic field (or equivalently, when q → ∞). One cost of this method is that as the field lines are not on the mesh, they may leave the domain at any point between neighbouring planes, complicating the application of boundary conditions. The Leg Value Fill (LVF) boundary condition scheme presented here involves an extrapolation/interpolation of the boundary value onto the field line end point. The usual finite difference scheme can then be used unmodified. We implement the LVF scheme in BOUT++ and use the Method of Manufactured Solutions to verify the implementation in a rectangular domain, and show that it does not modify the error scaling of the finite difference scheme. The use of LVF for arbitrary wall geometry is outlined. We also demonstrate the feasibility of using the FCI approach in non-axisymmetric configurations for a simple diffusion model in a "straight stellarator" magnetic field. A Gaussian blob diffuses along the field lines, tracing out flux surfaces. Dirichlet boundary conditions impose a last closed flux surface (LCFS) that confines the density. Including a poloidal limiter moves the LCFS to a smaller radius. The expected scaling of the numerical perpendicular diffusion, which is a consequence of the FCI method, in stellarator-like geometry is recovered. A novel technique for increasing the parallel resolution during post-processing, in order to reduce artefacts in visualisations, is described.
NASA Astrophysics Data System (ADS)
Nakamura, Gen; Wang, Haibing
2017-05-01
Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what the input for the linear sampling method is. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. Finally, using a finite sequence of transient inputs over a time interval, we propose a new sampling method over the time interval based on a single measurement, which is more likely to be practical.
NASA Astrophysics Data System (ADS)
Reimer, Ashton S.; Cheviakov, Alexei F.
2013-03-01
A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of large system of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
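The core of such a finite-difference Poisson solver can be illustrated with a minimal sketch, written here in Python rather than Matlab, and using plain Gauss-Seidel relaxation instead of the package's optimized sparse routines; the function name, grid layout, and fixed iteration count are all illustrative choices.

```python
# A minimal sketch (not the distributed Matlab code) of a 5-point
# finite-difference solver for u_xx + u_yy = f on the unit square with
# Dirichlet boundary values g, relaxed with Gauss-Seidel iteration.

def solve_poisson(n, f, g, iters=5000):
    """Solve u_xx + u_yy = f on an (n+1)x(n+1) grid with u = g on the boundary."""
    h = 1.0 / n
    # initialise with the boundary values of g, zero in the interior
    u = [[g(i * h, j * h) if i in (0, n) or j in (0, n) else 0.0
          for j in range(n + 1)] for i in range(n + 1)]
    for _ in range(iters):
        for i in range(1, n):
            for j in range(1, n):
                # 5-point stencil: 4u = sum of neighbours - h^2 f
                u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                                  + u[i][j-1] + u[i][j+1]
                                  - h * h * f(i * h, j * h))
        # (a production solver would check a convergence criterion here)
    return u
```

For example, with f = 4 and boundary data g(x, y) = x² + y², the exact solution u = x² + y² is quadratic, so the 5-point scheme reproduces it at the grid points up to the iteration tolerance.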
2016-01-01
Identifying the hidden state is important for solving problems with hidden state. We prove that any deterministic partially observable Markov decision process (POMDP) can be represented by a minimal, looping hidden state transition model, and we propose a heuristic algorithm for constructing such a state transition model. A new spatiotemporal associative memory network (STAMN) is proposed to realize the minimal, looping hidden state transition model. STAMN utilizes neuroactivity decay to realize short-term memory, connection weights between different nodes to represent long-term memory, and presynaptic potentials and a synchronized activation mechanism to complete identification and recall simultaneously. Finally, we give empirical illustrations of the STAMN and compare the performance of the STAMN model with that of other methods. PMID:27891146
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. 
This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
2009-12-18
...cannot be detected with univariate techniques, but require multivariate analysis instead (Kamitani and Tong [2005]). Two other time series analysis ...learning for time series analysis. The historical record of DBNs can be traced back to Dean and Kanazawa [1988] and Dean and Wellman [1991], with... Keywords: Hidden Process Models, probabilistic time series modeling, functional Magnetic Resonance Imaging
A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions
NASA Astrophysics Data System (ADS)
Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.
2014-01-01
We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of either for different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.
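The outer inverse-power loop can be illustrated on a tiny standard eigenvalue problem. This is a simplified sketch: a 2×2 system is solved directly by Cramer's rule where the authors use a Gauss-Seidel inner solver, and the generalized (mass-matrix) weighting of the finite element problem is omitted.

```python
# Illustrative inverse power iteration: repeatedly solving A y = x drives x
# toward the eigenvector of the smallest-magnitude eigenvalue of A. The
# direct 2x2 solve stands in for an iterative inner solver such as
# Gauss-Seidel on a large finite element system.

def inverse_power(A, iters=50):
    """Smallest-magnitude eigenvalue of a 2x2 matrix A by inverse iteration."""
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    det = a * d - b * c
    x = [1.0, 0.0]                        # arbitrary starting vector
    for _ in range(iters):
        # solve A y = x exactly (Cramer's rule)
        y = [(d * x[0] - b * x[1]) / det,
             (a * x[1] - c * x[0]) / det]
        norm = max(abs(y[0]), abs(y[1]))  # renormalise to avoid overflow
        x = [y[0] / norm, y[1] / norm]
    # Rayleigh quotient gives the eigenvalue estimate
    Ax = [a * x[0] + b * x[1], c * x[0] + d * x[1]]
    return (Ax[0] * x[0] + Ax[1] * x[1]) / (x[0] * x[0] + x[1] * x[1])
```

For A = [[2, −1], [−1, 2]], whose eigenvalues are 1 and 3, the iteration converges to the smallest eigenvalue 1.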
On the exterior Dirichlet problem for Hessian quotient equations
NASA Astrophysics Data System (ADS)
Li, Dongsheng; Li, Zhisu
2018-06-01
In this paper, we establish the existence and uniqueness theorem for solutions of the exterior Dirichlet problem for Hessian quotient equations with prescribed asymptotic behavior at infinity. This extends the previous related results on the Monge-Ampère equations and on the Hessian equations, and rearranges them in a systematic way. Based on Perron's method, the main ingredient of this paper is to construct some appropriate subsolutions of the Hessian quotient equation, which is realized by introducing some new quantities about the elementary symmetric polynomials and using them to analyze the corresponding ordinary differential equation related to the generalized radially symmetric subsolutions of the original equation.
A three dimensional Dirichlet-to-Neumann map for surface waves over topography
NASA Astrophysics Data System (ADS)
Nachbin, Andre; Andrade, David
2016-11-01
We consider three dimensional surface water waves in the potential theory regime. The bottom topography can have a quite general profile. In the case of linear waves the Dirichlet-to-Neumann operator is formulated in a matrix decomposition form. Computational simulations illustrate the performance of the method. Two dimensional periodic bottom variations are considered in both the Bragg resonance regime as well as the rapidly varying (homogenized) regime. In the three-dimensional case we use the Luneburg lens-shaped submerged mound, which promotes the focusing of the underlying rays. FAPERJ Cientistas do Nosso Estado Grant 102917/2011 and ANP/PRH-32.
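For context, in the simplest linearized flat-bottom setting (a textbook special case, not the variable-topography operator treated in the paper) the Dirichlet-to-Neumann operator reduces to a Fourier multiplier:

```latex
G_0\phi = \partial_n \Phi\big|_{z=0},
\qquad
\widehat{G_0\phi}(k) = |k|\tanh\!\big(|k|\,h\big)\,\widehat{\phi}(k),
```

where Φ is the harmonic extension of the surface potential φ into the fluid layer of constant depth h. Variable topography destroys this simple symbol, which is what motivates the matrix decomposition formulation of the abstract.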
Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar
2005-11-04
The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.
Two-point correlation function for Dirichlet L-functions
NASA Astrophysics Data System (ADS)
Bogomolny, E.; Keating, J. P.
2013-03-01
The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy-Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question.
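The conjectured random-matrix form referred to above is the standard GUE pair-correlation function, stated here for reference:

```latex
R_2(x) = 1 - \left(\frac{\sin \pi x}{\pi x}\right)^{2},
```

for zeros rescaled to unit mean spacing; the finite-E corrections computed in the paper are deviations from this limiting form.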
Sheng, Yin; Zhang, Hao; Zeng, Zhigang
2017-10-01
This paper is concerned with synchronization for a class of reaction-diffusion neural networks with Dirichlet boundary conditions and infinite discrete time-varying delays. By utilizing theories of partial differential equations, Green's formula, inequality techniques, and the concept of comparison, algebraic criteria are presented to guarantee master-slave synchronization of the underlying reaction-diffusion neural networks via a designed controller. Additionally, sufficient conditions on exponential synchronization of reaction-diffusion neural networks with finite time-varying delays are established. The proposed criteria herein enhance and generalize some published ones. Three numerical examples are presented to substantiate the validity and merits of the obtained theoretical results.
Doja, Asif; Bould, M Dylan; Clarkin, Chantalle; Eady, Kaylee; Sutherland, Stephanie; Writer, Hilary
2016-04-01
The hidden and informal curricula refer to learning in response to unarticulated processes and constraints, falling outside the formal medical curriculum. The hidden curriculum has been identified as requiring attention across all levels of learning. We sought to assess the knowledge and perceptions of the hidden and informal curricula across the continuum of learning at a single institution. Focus groups were held with undergraduate and postgraduate learners and faculty to explore knowledge and perceptions relating to the hidden and informal curricula. Thematic analysis was conducted both inductively by research team members and deductively using questions structured by the existing literature. Participants highlighted several themes related to the presence of the hidden and informal curricula in medical training and practice, including: the privileging of some specialties over others; the reinforcement of hierarchies within medicine; and a culture of tolerance towards unprofessional behaviors. Participants acknowledged the importance of role modeling in the development of professional identities and discussed the deterioration in idealism that occurs. Common issues pertaining to the hidden curriculum exist across all levels of learners, including faculty. Increased awareness of these issues could allow for the further development of methods to address learning within the hidden curriculum.
Hafler, Janet P; Ownby, Allison R; Thompson, Britta M; Fasser, Carl E; Grigsby, Kevin; Haidet, Paul; Kahn, Marc J; Hafferty, Frederic W
2011-04-01
Medical student literature has broadly established the importance of differentiating between formal-explicit and hidden-tacit dimensions of the physician education process. The hidden curriculum refers to cultural mores that are transmitted, but not openly acknowledged, through formal and informal educational endeavors. The authors extend the concept of the hidden curriculum from students to faculty, and in so doing, they frame the acquisition by faculty of knowledge, skills, and values as a more global process of identity formation. This process includes a subset of formal, formative activities labeled "faculty development programs" that target specific faculty skills such as teaching effectiveness or leadership; however, it also includes informal, tacit messages that faculty absorb. As faculty members are socialized into faculty life, they often encounter conflicting messages about their role. In this article, the authors examine how faculty development programs have functioned as a source of conflict, and they ask how these programs might be retooled to assist faculty in understanding the tacit institutional culture shaping effective socialization and in managing the inconsistencies that so often dominate faculty life. © by the Association of American Medical Colleges.
Generalization and capacity of extensively large two-layered perceptrons.
Rosen-Zvi, Michal; Engel, Andreas; Kanter, Ido
2002-09-01
The generalization ability and storage capacity of a treelike two-layered neural network with a number of hidden units scaling as the input dimension is examined. The mapping from the input to the hidden layer is via Boolean functions; the mapping from the hidden layer to the output is done by a perceptron. The analysis is within the replica framework where an order parameter characterizing the overlap between two networks in the combined space of Boolean functions and hidden-to-output couplings is introduced. The maximal capacity of such networks is found to scale linearly with the logarithm of the number of Boolean functions per hidden unit. The generalization process exhibits a first-order phase transition from poor to perfect learning for the case of discrete hidden-to-output couplings. The critical number of examples per input dimension, α_c, at which the transition occurs, again scales linearly with the logarithm of the number of Boolean functions. In the case of continuous hidden-to-output couplings, the generalization error decreases according to the same power law as for the perceptron, with the prefactor being different.
Multilayer neural networks with extensively many hidden units.
Rosen-Zvi, M; Engel, A; Kanter, I
2001-08-13
The information processing abilities of a multilayer neural network with a number of hidden units scaling as the input dimension are studied using statistical mechanics methods. The mapping from the input layer to the hidden units is performed by general symmetric Boolean functions, whereas the hidden layer is connected to the output by either discrete or continuous couplings. Introducing an overlap in the space of Boolean functions as order parameter, the storage capacity is found to scale with the logarithm of the number of implementable Boolean functions. The generalization behavior is smooth for continuous couplings and shows a discontinuous transition to perfect generalization for discrete ones.
Estimating Density and Temperature Dependence of Juvenile Vital Rates Using a Hidden Markov Model
McElderry, Robert M.
2017-01-01
Organisms in the wild have cryptic life stages that are sensitive to changing environmental conditions and can be difficult to survey. In this study, I used mark-recapture methods to repeatedly survey Anaea aidea (Nymphalidae) caterpillars in nature, then modeled caterpillar demography as a hidden Markov process to assess if temporal variability in temperature and density influence the survival and growth of A. aidea over time. Individual encounter histories result from the joint likelihood of being alive and observed in a particular stage, and I have included hidden states by separating demography and observations into parallel and independent processes. I constructed a demographic matrix containing the probabilities of all possible fates for each stage, including hidden states, e.g., eggs and pupae. I observed both dead and live caterpillars with high probability. Peak caterpillar abundance attracted multiple predators, and survival of fifth instars declined as per capita predation rate increased through spring. A time lag between predator and prey abundance was likely the cause of improved fifth instar survival estimated at high density. Growth rates showed an increase with temperature, but the preferred model did not include temperature. This work illustrates how state-space models can include unobservable stages and hidden state processes to evaluate how environmental factors influence vital rates of cryptic life stages in the wild. PMID:28505138
Student portfolios and the hidden curriculum on gender: mapping exclusion.
Phillips, Christine B
2009-09-01
The hidden curriculum - the norms, values and practices that are transmitted to students through modelling by preceptors and teachers, and decisions about curricular exclusions and inclusions - can be profoundly important in the socialising of trainee doctors. However, tracking the hidden curriculum as it evolves can be challenging for medical schools. This study aimed to explore the content of student e-portfolios on gender issues, a key perspective often taught through a hidden curriculum. Online posts for a gender and medicine e-portfolio task completed by two cohorts of students in Year 3 of a 4-year medical course (n = 167, 66% female) were analysed using a grounded theory approach. A process of gendered 'othering' was applied to both men and women in the medical school using different pedagogical strategies. Curricular emphases on women's health and lack of support for male students to acquire gynaecological examination skills were seen as explicit ways of excluding males. For female medical students, exclusion tended to be implicit, operating through modelling and aphoristic comments about so-called 'female-friendly' career choices and the negative impact of motherhood on career. E-portfolios can be a useful way of tracking the hidden curriculum as it evolves. Responses to gendered exclusion may be developed more readily for the explicit processes impacting on male students than for the implicit processes impacting on female students, which often reflect structural issues related to training and employment.
NASA Astrophysics Data System (ADS)
Smith, Keith; Ricaud, Benjamin; Shahid, Nauman; Rhodes, Stephen; Starr, John M.; Ibáñez, Augustin; Parra, Mario A.; Escudero, Javier; Vandergheynst, Pierre
2017-02-01
Visual short-term memory binding tasks are a promising early marker for Alzheimer’s disease (AD). To uncover functional deficits of AD in these tasks it is meaningful to first study unimpaired brain function. Electroencephalogram recordings were obtained from encoding and maintenance periods of tasks performed by healthy young volunteers. We probe the task’s transient physiological underpinnings by contrasting shape only (Shape) and shape-colour binding (Bind) conditions, displayed in the left and right sides of the screen, separately. Particularly, we introduce and implement a novel technique named Modular Dirichlet Energy (MDE) which allows robust and flexible analysis of the functional network with unprecedented temporal precision. We find that connectivity in the Bind condition is less integrated with the global network than in the Shape condition in occipital and frontal modules during the encoding period of the right screen condition. Using MDE we are able to discern driving effects in the occipital module between 100-140 ms, coinciding with the P100 visually evoked potential, followed by a driving effect in the frontal module between 140-180 ms, suggesting that the differences found constitute an information processing difference between these modules. This provides temporally precise information over a heterogeneous population in promising tasks for the detection of AD.
NASA Astrophysics Data System (ADS)
Bertini, Lorenzo; Gabrielli, Davide; Landim, Claudio
2009-07-01
We consider the weakly asymmetric exclusion process on a bounded interval with particle reservoirs at the endpoints. The hydrodynamic limit for the empirical density, obtained in the diffusive scaling, is given by the viscous Burgers equation with Dirichlet boundary conditions. In the case in which the bulk asymmetry is in the same direction as the drift due to the boundary reservoirs, we prove that the quasi-potential can be expressed in terms of the solution to a one-dimensional boundary value problem which has been introduced by Enaud and Derrida [16]. We consider the strongly asymmetric limit of the quasi-potential and recover the functional derived by Derrida, Lebowitz, and Speer [15] for the asymmetric exclusion process.
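Up to the normalization conventions of the paper, the viscous Burgers equation with Dirichlet boundary conditions arising as such a hydrodynamic limit can be written as:

```latex
\partial_t \rho = \partial_x^2 \rho - E\,\partial_x\big(\rho(1-\rho)\big),
\qquad \rho(t,0)=\rho_a,\quad \rho(t,1)=\rho_b,
```

where E measures the strength of the bulk asymmetry and ρ_a, ρ_b are the densities imposed by the boundary reservoirs.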
Asymptotic stability of a nonlinear Korteweg-de Vries equation with critical lengths
NASA Astrophysics Data System (ADS)
Chu, Jixun; Coron, Jean-Michel; Shang, Peipei
2015-10-01
We study an initial-boundary-value problem of a nonlinear Korteweg-de Vries equation posed on the finite interval (0, 2kπ), where k is a positive integer. The system has a Dirichlet boundary condition at the left end-point, and both homogeneous Dirichlet and Neumann boundary conditions at the right end-point. It is known that the origin is not asymptotically stable for the linearized system around the origin. We prove that the origin is (locally) asymptotically stable for the nonlinear system if the integer k is such that the kernel of the linear Korteweg-de Vries stationary equation is of dimension 1. This is for example the case if k = 1.
The Hidden Cost of Buying a Computer.
ERIC Educational Resources Information Center
Johnson, Michael
1983-01-01
In order to process data in a computer, application software must be either developed or purchased. Costs for modifications of the software package and maintenance are often hidden. The decision to buy or develop software packages should be based upon factors of time and maintenance. (MLF)
Hidden asymmetry and forward-backward correlations
NASA Astrophysics Data System (ADS)
Bialas, A.; Zalewski, K.
2010-09-01
A model-independent method of studying the forward-backward correlations in symmetric high-energy processes is developed. The method allows a systematic study of the properties of various particle sources and allows one to uncover asymmetric structures hidden in symmetric hadron-hadron and nucleus-nucleus inelastic reactions.
Pirani, Monica; Best, Nicky; Blangiardo, Marta; Liverani, Silvia; Atkinson, Richard W.; Fuller, Gary W.
2015-01-01
Background Airborne particles are a complex mix of organic and inorganic compounds, with a range of physical and chemical properties. Estimation of how simultaneous exposure to air particles affects the risk of adverse health response represents a challenge for scientific research and air quality management. In this paper, we present a Bayesian approach that can tackle this problem within the framework of time series analysis. Methods We used Dirichlet process mixture models to cluster time points with similar multipollutant and response profiles, while adjusting for seasonal cycles, trends and temporal components. Inference was carried out via Markov Chain Monte Carlo methods. We illustrated our approach using daily data of a range of particle metrics and respiratory mortality for London (UK) 2002–2005. To better quantify the average health impact of these particles, we measured the same set of metrics in 2012, and we computed and compared the posterior predictive distributions of mortality under the exposure scenario in 2012 vs 2005. Results The model resulted in a partition of the days into three clusters. We found a relative risk of 1.02 (95% credible intervals (CI): 1.00, 1.04) for respiratory mortality associated with days characterised by high posterior estimates of non-primary particles, especially nitrate and sulphate. We found a consistent reduction in the airborne particles in 2012 vs 2005 and the analysis of the posterior predictive distributions of respiratory mortality suggested an average annual decrease of − 3.5% (95% CI: − 0.12%, − 5.74%). Conclusions We proposed an effective approach that enabled the better understanding of hidden structures in multipollutant health effects within time series analysis. It allowed the identification of exposure metrics associated with respiratory mortality and provided a tool to assess the changes in health effects from various policies to control the ambient particle matter mixtures. PMID:25795926
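The Dirichlet process mixture used above induces a prior over partitions of the days; that prior can be sketched via the Chinese restaurant process, a standard equivalent construction. The function name and the day count are illustrative, not the authors' code:

```python
import random

def chinese_restaurant_process(n, alpha, seed=0):
    """Sample a partition of n items from the CRP, the partition prior
    implied by a Dirichlet process with concentration alpha."""
    random.seed(seed)
    tables = []   # sizes of the clusters created so far
    labels = []   # cluster label of each item
    for i in range(n):
        # item i joins an existing cluster with prob proportional to its
        # size, or opens a new cluster with prob alpha / (i + alpha)
        weights = tables + [alpha]
        r = random.uniform(0, i + alpha)
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(tables):
            tables.append(1)
        else:
            tables[k] += 1
        labels.append(k)
    return labels

labels = chinese_restaurant_process(1460, alpha=1.0)  # ~4 years of daily data
```

With alpha = 1 the expected number of clusters grows only logarithmically in n, which is why such models can return a small number of day-types (three, in the paper) without fixing that number in advance.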
Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.
Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry
2016-09-01
Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms, they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and it is compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway and it is shown that the multi-compartment model fits the experimental data better. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software konstantinos.koutroumpas@ecp.fr. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
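The ABC-SMC machinery with DPM kernels is elaborate; the underlying likelihood-free idea can be sketched with plain rejection ABC, a deliberately simpler scheme than the authors'. The toy model and all names are illustrative assumptions:

```python
import random

def abc_rejection(simulate, observed, prior_sample, eps, n_accept):
    """Plain rejection ABC: keep parameter draws whose simulated data
    fall within distance eps of the observation."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample()
        if abs(simulate(theta) - observed) <= eps:
            accepted.append(theta)
    return accepted

random.seed(0)
# Toy model with intractable-likelihood stand-in: data = theta + noise.
# True theta is 2.0; suppose we observed the value 2.1.
simulate = lambda th: th + random.gauss(0, 0.5)
posterior = abc_rejection(simulate, observed=2.1,
                          prior_sample=lambda: random.uniform(-5, 5),
                          eps=0.2, n_accept=200)
est = sum(posterior) / len(posterior)  # posterior mean, concentrated near 2.1
```

ABC-SMC improves on this by replacing the fixed prior proposal with a sequence of adapted proposals (in the paper, DPM-based kernels), which is what makes the exploration of multimodal parameter spaces efficient.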
Scalar Casimir densities and forces for parallel plates in cosmic string spacetime
NASA Astrophysics Data System (ADS)
Bezerra de Mello, E. R.; Saharian, A. A.; Abajyan, S. V.
2018-04-01
We analyze the Green function, the Casimir densities and forces associated with a massive scalar quantum field confined between two parallel plates in a higher dimensional cosmic string spacetime. The plates are placed orthogonal to the string, and the field obeys Robin boundary conditions on them. The boundary-induced contributions are explicitly extracted from the vacuum expectation values (VEVs) of the field squared and of the energy-momentum tensor for both the single-plate and two-plate geometries. The VEV of the energy-momentum tensor, in addition to the diagonal components, contains an off-diagonal component corresponding to the shear stress. The latter vanishes on the plates in the special cases of Dirichlet and Neumann boundary conditions. For points outside the string core the topological contributions in the VEVs are finite on the plates. Near the string the VEVs are dominated by the boundary-free part, whereas at large distances the boundary-induced contributions dominate. Due to the nonzero off-diagonal component of the vacuum energy-momentum tensor, in addition to the normal component, the Casimir forces have a nonzero component parallel to the boundary (shear force). Unlike the problem on the Minkowski bulk, the normal forces acting on the separate plates, in general, do not coincide if the corresponding Robin coefficients are different. Another difference is that in the presence of the cosmic string the Casimir forces for Dirichlet and Neumann boundary conditions differ. For the Dirichlet boundary condition the normal Casimir force does not depend on the curvature coupling parameter. This is not the case for other boundary conditions. A new qualitative feature induced by the cosmic string is the appearance of the shear stress acting on the plates. The corresponding force is directed along the radial coordinate and vanishes for Dirichlet and Neumann boundary conditions. Depending on the parameters of the problem, the radial component of the shear force can be either positive or negative.
Matrix Determination of Reflectance of Hidden Object via Indirect Photography
2012-03-01
…the hidden object. This thesis provides an alternative method of processing the camera images by modeling the system as a set of transport and … Distribution Function (BRDF). Figure 1: Indirect photography with camera field of view dictated by point of illumination. … would need to be modeled using radiometric principles. A large amount of the improvement in this process was due to the use of a blind …
Effect of baking on reduction of free and hidden fumonisins in gluten-free bread.
Bryła, Marcin; Roszko, Marek; Szymczyk, Krystyna; Jędrzejczak, Renata; Słowik, Elżbieta; Obiedziński, Mieczysław W
2014-10-22
The aim of the present work was to assess the influence of the baking process on the fumonisin content in gluten-free bread. The dough was made using two methods: without sourdough and with sourdough. Fumonisins were determined using high-performance liquid chromatography with ion-trap mass spectrometry. This study showed that the bread baking process caused a statistically significant drop in the mean concentration of free fumonisins: the reduction levels were 30 and 32% for the direct and sourdough-based methods, respectively. The lower reduction after baking was observed for hidden fumonisins: 19 and 10%, respectively. The presence of some compounds (such as proteins or starch) capable of stabilizing fumonisins during the baking process might be responsible for the observed increase in the hidden-to-free ratio from an initial 0.72 in flour to 0.83 in bread made from sourdough and to 0.95 in sourdough-free bread.
"Glitch Logic" and Applications to Computing and Information Security
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Katkoori, Srinivas
2009-01-01
This paper introduces a new method of information processing in digital systems, and discusses its potential benefits to computing and information security. The new method exploits glitches caused by delays in logic circuits for carrying and processing information. Glitch processing is hidden from conventional logic analyses and undetectable by traditional reverse engineering techniques. It enables the creation of new logic design methods that allow for an additional controllable "glitch logic" processing layer embedded into conventional synchronous digital circuits as a hidden/covert information flow channel. The combination of synchronous logic with specific glitch logic design acting as an additional computing channel reduces the number of equivalent logic designs resulting from synthesis, thus implicitly reducing the possibility of modification and/or tampering with the design. The hidden information channel produced by the glitch logic can be used: 1) for covert computing/communication, 2) to prevent reverse engineering, tampering, and alteration of design, and 3) to act as a channel for information infiltration/exfiltration and propagation of viruses/spyware/Trojan horses.
Doing School Time: The Hidden Curriculum Goes to Prison
ERIC Educational Resources Information Center
García, José; De Lissovoy, Noah
2013-01-01
The hidden curriculum is generally understood as the process by which daily exposure to school expectations and routines transmits norms and values of the dominant society to students. In the present, through the regimentation of thought, control of bodies and movement, and proliferation of punishment, contemporary accountability and testing…
Hidden Dimensions in the So-Called Reality of a Mathematics Classroom.
ERIC Educational Resources Information Center
Bauersfeld, Heinrich
1980-01-01
Teaching and learning mathematics in classrooms is interpreted as human interaction in an institutionalized setting. Using theories and categories from different disciplines, a classroom episode is reanalyzed. Four hidden dimensions in the classroom process and thus deficient areas of research are identified. Consequences for teacher training are…
A Meinardus Theorem with Multiple Singularities
NASA Astrophysics Data System (ADS)
Granovsky, Boris L.; Stark, Dudley
2012-09-01
Meinardus proved a general theorem about the asymptotics of the number of weighted partitions, when the Dirichlet generating function for weights has a single pole on the positive real axis. Continuing (Granovsky et al., Adv. Appl. Math. 41:307-328, 2008), we derive asymptotics for the numbers of three basic types of decomposable combinatorial structures (or, equivalently, ideal gas models in statistical mechanics) of size n, when their Dirichlet generating functions have multiple simple poles on the positive real axis. Examples to which our theorem applies include ones related to vector partitions and quantum field theory. Our asymptotic formula for the number of weighted partitions disproves the belief accepted in the physics literature that the main term in the asymptotics is determined by the rightmost pole.
NASA Astrophysics Data System (ADS)
Ding, Xiao-Li; Nieto, Juan J.
2017-11-01
In this paper, we consider the analytical solutions of coupling fractional partial differential equations (FPDEs) with Dirichlet boundary conditions on a finite domain. Firstly, the method of successive approximations is used to obtain the analytical solutions of coupling multi-term time fractional ordinary differential equations. Then, the technique of spectral representation of the fractional Laplacian operator is used to convert the coupling FPDEs to the coupling multi-term time fractional ordinary differential equations. By applying the obtained analytical solutions to the resulting multi-term time fractional ordinary differential equations, the desired analytical solutions of the coupling FPDEs are given. Our results are applied to derive the analytical solutions of some special cases to demonstrate their applicability.
Bayesian structural inference for hidden processes.
Strelioff, Christopher C; Crutchfield, James P
2014-04-01
We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ε-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ε-machines, irrespective of estimated transition probabilities. Properties of ε-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.
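The analytic Bayesian estimates mentioned in the abstract are not reproduced here, but the two basic quantities BSI targets, transition probabilities and the Shannon entropy rate, can be illustrated with a simple count-based stand-in. The function names, the frequentist estimator, and the golden-mean example are illustrative assumptions, not the paper's method:

```python
import random
from collections import Counter
from math import log2

def estimate_transitions(seq):
    """Count-based estimate of P(next | current) from a symbol series."""
    counts = Counter(zip(seq, seq[1:]))
    totals = Counter(s for s, _ in counts.elements())
    return {(s, t): c / totals[s] for (s, t), c in counts.items()}

def entropy_rate(trans, stationary):
    """Shannon entropy rate: sum_s pi_s * sum_t -P(t|s) * log2 P(t|s)."""
    return -sum(stationary[s] * p * log2(p) for (s, t), p in trans.items())

# Golden-mean process, a classic epsilon-machine example: a '1' is never
# followed by another '1'; after a '0' both symbols are equally likely.
random.seed(1)
bits = ['0']
for _ in range(5000):
    bits.append('0' if bits[-1] == '1' else random.choice('01'))
trans = estimate_transitions(''.join(bits))
h = entropy_rate(trans, {'0': 2 / 3, '1': 1 / 3})  # true rate: 2/3 bit/symbol
```

BSI replaces these point counts with posterior distributions over both the transition probabilities and the candidate uHMM topologies, which is what lets it quantify the uncertainty this sketch ignores.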
May Stakeholders be Involved in Design Without Informed Consent? The Case of Hidden Design.
Pols, A J K
2017-06-01
Stakeholder involvement in design is desirable from both a practical and an ethical point of view. It is difficult to do well, however, and some problems recur, both of a practical nature, e.g. stakeholders acting strategically rather than openly, and of an ethical nature, e.g. power imbalances unduly affecting the outcome of the process. Hidden Design has been proposed as a method to deal with the practical problems of stakeholder involvement. It aims to do so by taking the observation of stakeholder actions, rather than the outcomes of a deliberative process, as its input. Furthermore, it hides from stakeholders the fact that a design process is taking place, so that they will not behave differently than they otherwise would. Both aspects of Hidden Design have raised ethical worries. In this paper I make an ethical analysis of what it means for a design process to leave participants uninformed or deceived rather than acquiring their informed consent beforehand, and to use observation of actions rather than deliberation as input for design, using Hidden Design as a case study. This analysis is based on two sets of normative guidelines: the ethical guidelines for psychological research involving deception or uninformed participants from two professional psychological organisations, and Habermasian norms for a fair and just (deliberative) process. It supports the conclusion that stakeholder involvement in design organised in this way can be ethically acceptable, though under a number of conditions and constraints.
NASA Astrophysics Data System (ADS)
Li, Dong; Guo, Shangjiang
Chemotaxis is an observed phenomenon in which a biological individual moves preferentially toward a relatively high concentration, which is contrary to the process of natural diffusion. In this paper, we study a reaction-diffusion model with chemotaxis and nonlocal delay effect under Dirichlet boundary condition by using Lyapunov-Schmidt reduction and the implicit function theorem. The existence, multiplicity, stability and Hopf bifurcation of spatially nonhomogeneous steady state solutions are investigated. Moreover, our results are illustrated by an application to the model with a logistic source, homogeneous kernel and one-dimensional spatial domain.
Inference for dynamics of continuous variables: the extended Plefka expansion with hidden nodes
NASA Astrophysics Data System (ADS)
Bravi, B.; Sollich, P.
2017-06-01
We consider the problem of a subnetwork of observed nodes embedded into a larger bulk of unknown (i.e. hidden) nodes, where the aim is to infer these hidden states given information about the subnetwork dynamics. The biochemical networks underlying many cellular and metabolic processes are important realizations of such a scenario as typically one is interested in reconstructing the time evolution of unobserved chemical concentrations starting from the experimentally more accessible ones. We present an application to this problem of a novel dynamical mean field approximation, the extended Plefka expansion, which is based on a path integral description of the stochastic dynamics. As a paradigmatic model we study the stochastic linear dynamics of continuous degrees of freedom interacting via random Gaussian couplings. The resulting joint distribution is known to be Gaussian and this allows us to fully characterize the posterior statistics of the hidden nodes. In particular the equal-time hidden-to-hidden variance—conditioned on observations—gives the expected error at each node when the hidden time courses are predicted based on the observations. We assess the accuracy of the extended Plefka expansion in predicting these single node variances as well as error correlations over time, focussing on the role of the system size and the number of observed nodes.
Job Clubs: Getting into the Hidden Labor Market.
ERIC Educational Resources Information Center
Kimeldorf, Martin; Tornow, Janice A.
1984-01-01
A job club approach for secondary disabled youth focuses on mastering job seeking skills by behaviorally sequenced steps learned in situational experiences within a self-help group process framework. Students learn to penetrate the hidden job market, to use social networking via the telephone, and to participate successfully in job interviews. (CL)
ERIC Educational Resources Information Center
Oseroff-Varnell, Dee
1998-01-01
Examines the socialization process of newcomers to a residential high school for performing arts. Finds that communication appeared particularly useful in reducing affective uncertainty and providing students with reassurance and support. Analyzes the hidden curriculum of this school, identifying four aspects: control versus freedom, inclusion…
Progressing to University: Hidden Messages at Two State Schools
ERIC Educational Resources Information Center
Donnelly, Michael
2015-01-01
This paper considers some of the ways that schools play a role in shaping higher education (HE) decision-making. Through their everyday practices and processes, schools can carry hidden messages about progression to HE, including choice of university. The sorts of routine aspects of school life dealt with here include events and activities,…
Intelligent classifier for dynamic fault patterns based on hidden Markov model
NASA Astrophysics Data System (ADS)
Xu, Bo; Feng, Yuguang; Yu, Jinsong
2006-11-01
It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method, an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). This classifier consists of the dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network and a Hidden Markov Model. First, after the dynamic observation vector in the measurement space is processed by DTW, an error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from the dynamic process vectors of a complex system such as an aeroengine, and makes it possible to diagnose such systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.
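The DTW step above can be illustrated with the textbook dynamic-programming recurrence; this generic scalar version is an illustration, not necessarily the exact variant used by the authors:

```python
def dtw_distance(a, b):
    """Classic DTW distance between two sequences: minimal cumulative
    |a_i - b_j| cost over monotone alignments of the two time axes."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of the three predecessor alignments
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A time-shifted copy of a waveform has zero DTW distance, even though
# its pointwise (Euclidean) distance to the original is large.
ref = [0, 1, 2, 3, 2, 1, 0]
shifted = [0, 0, 1, 2, 3, 2, 1, 0]
d = dtw_distance(ref, shifted)  # → 0
```

This insensitivity to time warping is exactly what makes DTW useful for comparing dynamic process vectors whose features drift in time.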
Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images
Zhou, Mingyuan; Chen, Haojun; Paisley, John; Ren, Lu; Li, Lingbo; Xing, Zhengming; Dunson, David; Sapiro, Guillermo; Carin, Lawrence
2013-01-01
Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete, and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, uniformly selected at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature. PMID:21693421
The Casimir effect for parallel plates revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawakami, N. A.; Nemes, M. C.; Wreszinski, Walter F.
2007-10-15
The Casimir effect for a massless scalar field with Dirichlet and periodic boundary conditions (bc's) on infinite parallel plates is revisited in the local quantum field theory (lqft) framework introduced by Kay [Phys. Rev. D 20, 3052 (1979)]. The model displays a number of more realistic features than the ones he treated. In addition to local observables, such as the energy density, we propose to consider intensive variables, such as the energy per unit area ε, as fundamental observables. Adopting this view, lqft rejects Dirichlet (the same result may be proved for Neumann or mixed) bc, and accepts periodic bc: in the former case ε diverges, in the latter it is finite, as is shown by an expression for the local energy density obtained from lqft through the use of the Poisson summation formula. Another way to see this uses methods from the Euler summation formula: in the proof of regularization independence of the energy per unit area, a regularization-dependent surface term arises upon use of Dirichlet bc, but not periodic bc. For the conformally invariant scalar quantum field, this surface term is absent due to the condition of zero trace of the energy-momentum tensor, as remarked by De Witt [Phys. Rep. 19, 295 (1975)]. The latter property does not hold in the application to the dark energy problem in cosmology, in which we argue that periodic bc might play a distinguished role.
Hidden attractors in dynamical systems
NASA Astrophysics Data System (ADS)
Dudkowski, Dawid; Jafari, Sajad; Kapitaniak, Tomasz; Kuznetsov, Nikolay V.; Leonov, Gennady A.; Prasad, Awadhesh
2016-06-01
Complex dynamical systems, ranging from the climate and ecosystems to financial markets and engineering applications, typically have many coexisting attractors. This property of the system is called multistability. The final state, i.e., the attractor on which the multistable system evolves, strongly depends on the initial conditions. Additionally, such systems are very sensitive to noise and system parameters, so a sudden shift to a contrasting regime may occur. To understand the dynamics of these systems one has to identify all possible attractors and their basins of attraction. Recently, it has been shown that multistability is connected with the occurrence of unpredictable attractors, which have been called hidden attractors. The basins of attraction of the hidden attractors do not touch unstable fixed points (if they exist) and are located far away from such points. Numerical localization of the hidden attractors is not straightforward, since there are no transient processes leading to them from the neighborhoods of unstable fixed points, and one has to use special analytical-numerical procedures. From the viewpoint of applications, the identification of hidden attractors is the major issue. The knowledge about the emergence and properties of hidden attractors can increase the likelihood that the system will remain on the most desirable attractor and reduce the risk of a sudden jump to undesired behavior. We review the most representative examples of hidden attractors, discuss their theoretical properties and experimental observations. We also describe numerical methods which allow identification of the hidden attractors.
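Hidden attractors require the special analytical-numerical procedures the abstract mentions; the more basic notion of multistability, coexisting attractors selected by the initial condition, can be illustrated with the bistable flow dx/dt = x - x^3. Its two attractors at x = ±1 are self-excited (their basins touch the unstable fixed point at the origin), so this is a sketch of multistability only, not of hidden attractors:

```python
def attractor_of(x0, dt=0.01, steps=2000):
    """Euler-integrate the bistable flow dx/dt = x - x^3 from x0 and
    report which stable fixed point (+1 or -1) the trajectory reaches."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return round(x)

# Scan initial conditions to map the two basins of attraction; the
# unstable fixed point x = 0 separates them and is excluded.
basins = {x0 / 10: attractor_of(x0 / 10) for x0 in range(-20, 21) if x0 != 0}
```

For a hidden attractor no such scan seeded near unstable fixed points would succeed, which is precisely why their numerical localization is difficult.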
Single-hidden-layer feed-forward quantum neural network based on Grover learning.
Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min
2013-09-01
In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on some concepts and principles of quantum theory. By combining the quantum mechanism with the feed-forward neural network, we defined quantum hidden neurons and connected quantum weights, and used them as the fundamental information processing unit in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover search algorithm iteratively finds the optimal parameter setting, making very efficient neural network learning possible. The quantum neurons and weights, along with Grover-search-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training, and promising future applications. Simulations are carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
Nicholls, David P
2018-04-01
The faithful modelling of the propagation of linear waves in a layered, periodic structure is of paramount importance in many branches of the applied sciences. In this paper, we present a novel numerical algorithm for the simulation of such problems which is free of the artificial singularities present in related approaches. We advocate for a surface integral formulation which is phrased in terms of impedance-impedance operators that are immune to the Dirichlet eigenvalues which plague the Dirichlet-Neumann operators that appear in classical formulations. We demonstrate a high-order spectral algorithm to simulate these latter operators based upon a high-order perturbation of surfaces methodology which is rapid, robust and highly accurate. We demonstrate the validity and utility of our approach with a sequence of numerical simulations.
A three-dimensional Dirichlet-to-Neumann operator for water waves over topography
NASA Astrophysics Data System (ADS)
Andrade, D.; Nachbin, A.
2018-06-01
Surface water waves are considered propagating over highly variable non-smooth topographies. For this three dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed reducing the numerical modeling and evolution to the two dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care and a Galerkin method is provided accordingly. One dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large amplitude, rapidly varying topography. An alternative, conformal mapping based, method is used for benchmarking. A two dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three dimensional DtN operator.
NASA Astrophysics Data System (ADS)
Nicholls, David P.
2018-04-01
The faithful modelling of the propagation of linear waves in a layered, periodic structure is of paramount importance in many branches of the applied sciences. In this paper, we present a novel numerical algorithm for the simulation of such problems which is free of the artificial singularities present in related approaches. We advocate a surface integral formulation phrased in terms of impedance-impedance operators that are immune to the Dirichlet eigenvalues which plague the Dirichlet-Neumann operators appearing in classical formulations. We describe a high-order spectral algorithm to simulate the latter operators, based upon a high-order perturbation of surfaces methodology, which is rapid, robust and highly accurate. We then demonstrate the validity and utility of our approach with a sequence of numerical simulations.
Modification of Classical SPM for Slightly Rough Surface Scattering with Low Grazing Angle Incidence
NASA Astrophysics Data System (ADS)
Guo, Li-Xin; Wei, Guo-Hui; Kim, Cheyoung; Wu, Zhen-Sen
2005-11-01
Based on impedance/admittance rough boundaries, the reflection coefficients and the scattering cross section with low grazing angle incidence are obtained for both VV and HH polarizations. The error of the classical perturbation method at grazing angles is overcome for vertical polarization at a rough Neumann boundary of infinite extent. The derivation of the formulae and the numerical results show that the backscattering cross section depends on the grazing angle to the fourth power for both Neumann and Dirichlet boundary conditions with low grazing angle incidence. Our results reduce to those of the classical small perturbation method when the Neumann and Dirichlet boundary conditions are neglected. This project was supported by the National Natural Science Foundation of China under Grant No. 60101001 and the National Defense Foundation of China.
What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.
Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A
The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
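A toy one-dimensional version of a MAP-DP-style assignment pass can be sketched as follows. This is an illustrative simplification, not the authors' full algorithm: it assumes a Gaussian likelihood with known variance, and the hyperparameter values (`alpha`, `sigma`, `prior_mu`, `prior_sigma`) are hypothetical. The key difference from K-means is visible in the extra "open a new cluster" option weighted by the Chinese-restaurant-process concentration alpha.

```python
import numpy as np

# Greedy MAP assignment under a CRP prior (1-D sketch; hyperparameters assumed).
def map_dp_1d(x, alpha=1.0, sigma=1.0, prior_mu=0.0, prior_sigma=10.0, iters=10):
    n = len(x)
    z = np.zeros(n, dtype=int)                      # all points start together
    for _ in range(iters):
        for i in range(n):
            others = np.arange(n) != i
            costs, labels = [], []
            for k in np.unique(z[others]):
                members = x[others & (z == k)]
                # -log of (cluster size) x (Gaussian likelihood), up to constants
                costs.append(-np.log(len(members))
                             + 0.5 * ((x[i] - members.mean()) / sigma) ** 2)
                labels.append(k)
            # option: open a new cluster (CRP weight alpha, prior predictive)
            s = np.hypot(sigma, prior_sigma)
            costs.append(-np.log(alpha) + 0.5 * ((x[i] - prior_mu) / s) ** 2)
            labels.append(z.max() + 1)
            z[i] = labels[int(np.argmin(costs))]
    return np.unique(z, return_inverse=True)[1]     # relabel clusters 0..K-1

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-5.0, 1.0, 50), rng.normal(5.0, 1.0, 50)])
z = map_dp_1d(x)                                    # K is inferred, not fixed
```

On two well-separated Gaussian groups the pass settles on two clusters without K ever being specified, which is the behavior the abstract contrasts with K-means.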
The Effect of Hidden Curriculum on Character Education Process of Primary School Students
ERIC Educational Resources Information Center
Cubukcu, Zuhal
2012-01-01
Character education is defined as a planned and systematic approach to instilling self-respect, responsibility, honesty, and similar values in order to raise good citizens. The elements of the hidden curriculum present in schools are values, beliefs, attitudes, and norms, which are important parts of school functioning, ceremonies, and the quality of…
ARPA surveillance technology for detection of targets hidden in foliage
NASA Astrophysics Data System (ADS)
Hoff, Lawrence E.; Stotts, Larry B.
1994-02-01
The processing of large quantities of synthetic aperture radar data in real time is a complex problem. Even the image formation process taxes today's most advanced computers. The use of complex algorithms with multiple channels adds another dimension to the computational problem. The Advanced Research Projects Agency (ARPA) is currently planning to use the Paragon parallel processor for this task. The Paragon is small enough to allow its use in a sensor aircraft. Candidate algorithms will be implemented on the Paragon for evaluation for real-time processing. In this paper, ARPA technology developments for detecting targets hidden in foliage are reviewed, and examples of signal processing techniques applied to field-collected data are presented.
Sound effects: Multimodal input helps infants find displaced objects.
Shinskey, Jeanne L
2017-09-01
Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition. Statement of contribution What is already known on this subject Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion. This suggests they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input. What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. 
Object processing becomes more sensitive to bimodal input as multisensory functions develop across the first year. © 2016 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Yun, Ana; Shin, Jaemin; Li, Yibao; Lee, Seunggyu; Kim, Junseok
We numerically investigate periodic traveling wave solutions for a diffusive predator-prey system with landscape features. The landscape features are modeled through a homogeneous Dirichlet boundary condition imposed at the edge of the obstacle domain. To treat the Dirichlet boundary condition effectively, we employ a robust and accurate numerical technique using a boundary control function. We also propose a robust algorithm for calculating the numerical periodicity of the traveling wave solution. In numerical experiments, we show that periodic traveling waves which move out and away from the obstacle are effectively generated. We explain the formation of the traveling waves by comparing the wavelengths. The spatial asynchrony is shown in quantitative detail for various obstacles. Furthermore, we apply our numerical technique to complicated, real landscape features.
Sound-turbulence interaction in transonic boundary layers
NASA Astrophysics Data System (ADS)
Lelostec, Ludovic; Scalo, Carlo; Lele, Sanjiva
2014-11-01
Acoustic wave scattering in a transonic boundary layer is investigated through a novel approach. Instead of simulating directly the interaction of an incoming oblique acoustic wave with a turbulent boundary layer, suitable Dirichlet conditions are imposed at the wall to reproduce only the reflected wave resulting from the interaction of the incident wave with the boundary layer. The method is first validated using the laminar boundary layer profiles in a parallel flow approximation. For this scattering problem an exact inviscid solution can be found in the frequency domain which requires numerical solution of an ODE. The Dirichlet conditions are imposed in a high-fidelity unstructured compressible flow solver for Large Eddy Simulation (LES), CharLESx. The acoustic field of the reflected wave is then solved and the interaction between the boundary layer and sound scattering can be studied.
Step scaling and the Yang-Mills gradient flow
NASA Astrophysics Data System (ADS)
Lüscher, Martin
2014-06-01
The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0 , T] and all fields satisfy Dirichlet boundary conditions at time 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.
Heat kernel for the elliptic system of linear elasticity with boundary conditions
NASA Astrophysics Data System (ADS)
Taylor, Justin; Kim, Seick; Brown, Russell
2014-10-01
We consider the elliptic system of linear elasticity with bounded measurable coefficients in a domain where the second Korn inequality holds. We construct the heat kernel of the system subject to Dirichlet, Neumann, or mixed boundary conditions under the assumption that weak solutions of the elliptic system are Hölder continuous in the interior. Moreover, we show that if weak solutions of the mixed problem are Hölder continuous up to the boundary, then the corresponding heat kernel has a Gaussian bound. In particular, if the domain is a two-dimensional Lipschitz domain satisfying a corkscrew or non-tangential accessibility condition on the set where we specify the Dirichlet boundary condition, then we show that the heat kernel has a Gaussian bound. As an application, we construct the Green's function for the elliptic mixed problem in such a domain.
Saint-Hilary, Gaelle; Cadour, Stephanie; Robert, Veronique; Gasparini, Mauro
2017-05-01
Quantitative methodologies have been proposed to support decision making in drug development and monitoring. In particular, multicriteria decision analysis (MCDA) and stochastic multicriteria acceptability analysis (SMAA) are useful tools to assess the benefit-risk ratio of medicines according to the performances of the treatments on several criteria, accounting for the preferences of the decision makers regarding the relative importance of these criteria. However, even in its probabilistic form, MCDA requires exact elicitation of the criteria weights by the decision makers, which may be difficult to achieve in practice. SMAA allows for more flexibility and can be used with unknown or partially known preferences, but it is less popular due to its increased complexity and the high degree of uncertainty in its results. In this paper, we propose a simple model that generalizes both MCDA and SMAA by placing a Dirichlet distribution on the criteria weights and varying its parameters. This single model encompasses both MCDA and SMAA and allows a more extensive exploration of the benefit-risk assessment of treatments. The precision of its results depends on the precision parameter of the Dirichlet distribution, which can be naturally interpreted as the strength of the decision makers' confidence in their elicited preferences. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
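The role of the Dirichlet precision parameter can be sketched numerically. The elicited mean weights w0 and the precision values s below are assumed purely for illustration: weights are drawn as w ~ Dirichlet(s·w0), so small s yields SMAA-like diffuse weights while large s approaches fixed-weight MCDA.

```python
import numpy as np

# Criteria weights drawn from Dirichlet(s * w0): the precision s controls how
# tightly the random weights concentrate on the elicited means w0 (assumed here).
rng = np.random.default_rng(1)
w0 = np.array([0.5, 0.3, 0.2])          # illustrative elicited mean weights
spreads = {}
for s in (1.0, 10.0, 1000.0):
    w = rng.dirichlet(s * w0, size=20000)
    spreads[s] = w.std(axis=0)          # per-criterion spread shrinks as s grows
```

The theoretical standard deviation of component i is sqrt(w0_i(1-w0_i)/(s+1)), so increasing s by a factor of 1000 shrinks the spread roughly 30-fold while the mean stays at w0.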
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.
Multiple Detector Optimization for Hidden Radiation Source Detection
2015-03-26
important in achieving operationally useful methods for optimizing detector emplacement, the 2-D attenuation model approach promises to speed up the ... process of hidden source detection significantly. The model focused on detection of the full-energy peak of a radiation source. Methods to optimize ... radioisotope identification is possible without using a computationally intensive stochastic model such as the Monte Carlo N-Particle (MCNP) code
Parsing Social Network Survey Data from Hidden Populations Using Stochastic Context-Free Grammars
Poon, Art F. Y.; Brouwer, Kimberly C.; Strathdee, Steffanie A.; Firestone-Cruz, Michelle; Lozada, Remedios M.; Kosakovsky Pond, Sergei L.; Heckathorn, Douglas D.; Frost, Simon D. W.
2009-01-01
Background Human populations are structured by social networks, in which individuals tend to form relationships based on shared attributes. Certain attributes that are ambiguous, stigmatized or illegal can create a 'hidden' population, so-called because its members are difficult to identify. Many hidden populations are also at an elevated risk of exposure to infectious diseases. Consequently, public health agencies are presently adopting modern survey techniques that traverse social networks in hidden populations by soliciting individuals to recruit their peers, e.g., respondent-driven sampling (RDS). The concomitant accumulation of network-based epidemiological data, however, is rapidly outpacing the development of computational methods for analysis. Moreover, current analytical models rely on unrealistic assumptions, e.g., that the traversal of social networks can be modeled by a Markov chain rather than a branching process. Methodology/Principal Findings Here, we develop a new methodology based on stochastic context-free grammars (SCFGs), which are well-suited to modeling the tree-like structure of the RDS recruitment process. We apply this methodology to an RDS case study of injection drug users (IDUs) in Tijuana, México, a hidden population at high risk of blood-borne and sexually-transmitted infections (i.e., HIV, hepatitis C virus, syphilis). Survey data were encoded as text strings that were parsed using our custom implementation of the inside-outside algorithm in a publicly-available software package (HyPhy), which uses either expectation maximization or direct optimization methods and permits constraints on model parameters for hypothesis testing.
We identified significant latent variability in the recruitment process that violates assumptions of Markov chain-based methods for RDS analysis: firstly, IDUs tended to emulate the recruitment behavior of their own recruiter; and secondly, the recruitment of like peers (homophily) was dependent on the number of recruits. Conclusions SCFGs provide a rich probabilistic language that can articulate complex latent structure in survey data derived from the traversal of social networks. Such structure that has no representation in Markov chain-based models can interfere with the estimation of the composition of hidden populations if left unaccounted for, raising critical implications for the prevention and control of infectious disease epidemics. PMID:19738904
Stifter, Cynthia A; Rovine, Michael
2015-01-01
The focus of the present longitudinal study, to examine mother-infant interaction during the administration of immunizations at two and six months of age, used hidden Markov modeling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a 4-state model for the dyadic responses to a two-month inoculation whereas a 6-state model best described the dyadic process at six months. Two of the states at two months and three of the states at six months suggested a progression from high intensity crying to no crying with parents using vestibular and auditory soothing methods. The use of feeding and/or pacifying to soothe the infant characterized one two-month state and two six-month states. These data indicate that with maturation and experience, the mother-infant dyad is becoming more organized around the soothing interaction. Using hidden Markov modeling to describe individual differences, as well as normative processes, is also presented and discussed.
Multiple Positive Solutions in the Second Order Autonomous Nonlinear Boundary Value Problems
NASA Astrophysics Data System (ADS)
Atslega, Svetlana; Sadyrbaev, Felix
2009-09-01
We construct the second order autonomous equations with arbitrarily large number of positive solutions satisfying homogeneous Dirichlet boundary conditions. Phase plane approach and bifurcation of solutions are the main tools.
Variational Problems with Long-Range Interaction
NASA Astrophysics Data System (ADS)
Soave, Nicola; Tavares, Hugo; Terracini, Susanna; Zilio, Alessandro
2018-06-01
We consider a class of variational problems for densities that repel each other at a distance. Typical examples are given by the Dirichlet functional D(u) = \sum_{i=1}^k \int_{Ω} |\nabla u_i|^2 \, dx and by the Rayleigh functional.
Lessard, Jean-Philippe; Weinstein, Ben G; Borregaard, Michael K; Marske, Katharine A; Martin, Danny R; McGuire, Jimmy A; Parra, Juan L; Rahbek, Carsten; Graham, Catherine H
2016-01-01
A persistent challenge in ecology is to tease apart the influence of multiple processes acting simultaneously and interacting in complex ways to shape the structure of species assemblages. We implement a heuristic approach that relies on explicitly defining species pools and permits assessment of the relative influence of the main processes thought to shape assemblage structure: environmental filtering, dispersal limitations, and biotic interactions. We illustrate our approach using data on the assemblage composition and geographic distribution of hummingbirds, a comprehensive phylogeny, and morphological traits. The implementation of several process-based species pool definitions in null models suggests that temperature, but not precipitation or dispersal limitation, acts as the main regional filter of assemblage structure. Incorporating this environmental filter directly into the definition of assemblage-specific species pools revealed an otherwise hidden pattern of phylogenetic evenness, indicating that biotic interactions might further influence hummingbird assemblage structure. Such hidden patterns of assemblage structure call for a reexamination of a multitude of phylogenetic- and trait-based studies that did not explicitly consider potentially important processes in their definition of the species pool. Our heuristic approach provides a transparent way to explore patterns and refine interpretations of the underlying causes of assemblage structure.
On the production of hidden-flavored hadronic states at high energy
NASA Astrophysics Data System (ADS)
Wang, Wei
2018-04-01
I discuss the production mechanism of hidden-flavored hadrons at high energy. Using e+e- collisions and light-meson pair production in high energy exclusive processes, I demonstrate that hidden quark pairs do not necessarily participate in short-distance hard scattering. Implications are then explored in a few examples. Finally, I discuss the production mechanism of X(3872) in hadron collisions, where some misunderstandings have arisen in the literature. Supported by the Thousand Talents Plan for Young Professionals, National Natural Science Foundation of China (11575110, 11655002, 11735010, 11747611), Natural Science Foundation of Shanghai (15DZ2272100) and the Scientific Research Foundation for Returned Overseas Chinese Scholars, Ministry of Education
Wikipedia mining of hidden links between political leaders
NASA Astrophysics Data System (ADS)
Frahm, Klaus M.; Jaffrès-Runser, Katia; Shepelyansky, Dima L.
2016-12-01
We describe a new method, the reduced Google matrix, which allows one to establish direct and hidden links between a subset of nodes of a large directed network. This approach draws parallels with quantum scattering theory, developed for processes in nuclear and mesoscopic physics and quantum chaos. The method is applied to the Wikipedia networks in different language editions, analyzing several groups of political leaders of the USA, UK, Germany, France, Russia and the G20. We demonstrate that this approach reliably recovers direct and hidden links among political leaders. We argue that the reduced Google matrix method can form the mathematical basis for studies in the social and political sciences analyzing Leader-Member eXchange (LMX).
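The full Google matrix that the reduced-matrix method starts from can be sketched as follows. This is a generic PageRank-style construction on a toy 3-node network; the paper's reduction step, which projects this object onto a chosen node subset, is not reproduced here.

```python
import numpy as np

# Google matrix G = d*S + (1-d)/n on a toy directed network, with
# adj[i, j] = 1 meaning a link j -> i. (Generic construction; the paper's
# reduced Google matrix GR is derived from G for a node subset.)
def google_matrix(adj, damping=0.85):
    n = adj.shape[0]
    colsum = adj.sum(axis=0)
    S = adj / np.where(colsum == 0, 1.0, colsum)   # column-stochastic links
    S[:, colsum == 0] = 1.0 / n                    # dangling nodes -> uniform
    return damping * S + (1.0 - damping) / n

adj = np.array([[0.0, 1.0, 1.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
G = google_matrix(adj)
vals, vecs = np.linalg.eig(G)
pr = np.real(vecs[:, np.argmax(np.real(vals))])    # leading (Perron) eigenvector
pr = pr / pr.sum()                                 # PageRank vector
```

The leading eigenvector of G, normalized to sum to one, is the PageRank; all the "direct and hidden link" analysis in the paper operates on matrix elements of the reduced counterpart of G.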
Short-term memory in networks of dissociated cortical neurons.
Dranias, Mark R; Ju, Han; Rajaram, Ezhilarasan; VanDongen, Antonius M J
2013-01-30
Short-term memory refers to the ability to store small amounts of stimulus-specific information for a short period of time. It is supported by both fading and hidden memory processes. Fading memory relies on recurrent activity patterns in a neuronal network, whereas hidden memory is encoded using synaptic mechanisms, such as facilitation, which persist even when neurons fall silent. We have used a novel computational and optogenetic approach to investigate whether these same memory processes hypothesized to support pattern recognition and short-term memory in vivo, exist in vitro. Electrophysiological activity was recorded from primary cultures of dissociated rat cortical neurons plated on multielectrode arrays. Cultures were transfected with ChannelRhodopsin-2 and optically stimulated using random dot stimuli. The pattern of neuronal activity resulting from this stimulation was analyzed using classification algorithms that enabled the identification of stimulus-specific memories. Fading memories for different stimuli, encoded in ongoing neural activity, persisted and could be distinguished from each other for as long as 1 s after stimulation was terminated. Hidden memories were detected by altered responses of neurons to additional stimulation, and this effect persisted longer than 1 s. Interestingly, network bursts seem to eliminate hidden memories. These results are similar to those that have been reported from similar experiments in vivo and demonstrate that mechanisms of information processing and short-term memory can be studied using cultured neuronal networks, thereby setting the stage for therapeutic applications using this platform.
A method of hidden Markov model optimization for use with geophysical data sets
NASA Technical Reports Server (NTRS)
Granat, R. A.
2003-01-01
Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.
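As background, the likelihood computation at the core of any HMM analysis is the forward algorithm; a minimal discrete-observation sketch with arbitrary toy parameters (not the paper's geophysical model) is:

```python
import numpy as np

# Forward algorithm for a discrete-observation HMM: computes P(obs sequence).
def hmm_forward(obs, pi, A, B):
    """pi: initial state dist, A[i, j] = P(j | i), B[i, o] = P(o | state i)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate states, weight by emission
    return float(alpha.sum())

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p = hmm_forward([0, 1, 0], pi, A, B)     # likelihood of the toy sequence
```

Training methods like those the abstract alludes to wrap an optimizer around exactly this likelihood (or its scaled/log version for long sequences).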
Study of a mixed dispersal population dynamics model
Chugunova, Marina; Jadamba, Baasansuren; Kao, Chiu -Yen; ...
2016-08-27
In this study, we consider a mixed dispersal model with periodic and Dirichlet boundary conditions and its corresponding linear eigenvalue problem. This model describes the time evolution of a population which disperses both locally and non-locally. We investigate how the long-time dynamics depend on the parameter values. Furthermore, we study the minimization of the principal eigenvalue under the constraints that the resource function is bounded from above and below, and has a fixed total integral. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for the species to die out more slowly or survive more easily. Our numerical simulations indicate that the optimal favorable region tends to be a simply-connected domain. Numerous results are shown to demonstrate various scenarios of optimal favorable regions for periodic and Dirichlet boundary conditions.
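A one-dimensional analogue of the linear Dirichlet eigenvalue problem can be checked numerically. This is a standard finite-difference sketch for -u'' on (0, 1), not the paper's mixed-dispersal operator; the grid size is an arbitrary choice.

```python
import numpy as np

# Principal Dirichlet eigenvalue of -u'' on (0, 1) by finite differences;
# the exact value is pi^2 ≈ 9.8696. (Illustrative analogue, not the paper's model.)
n = 200
h = 1.0 / n
A = (np.diag(2.0 * np.ones(n - 1))
     + np.diag(-np.ones(n - 2), 1)
     + np.diag(-np.ones(n - 2), -1)) / h ** 2
lam1 = np.linalg.eigvalsh(A)[0]          # smallest eigenvalue of the tridiagonal
```

In the population model the sign and size of such a principal eigenvalue governs extinction versus persistence, which is why the paper minimizes it over resource arrangements.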
NASA Astrophysics Data System (ADS)
Cardone, G.; Durante, T.; Nazarov, S. A.
2017-07-01
We consider the spectral Dirichlet problem for the Laplace operator in the plane Ω∘ with a double-periodic perforation, as well as in the domain Ω• with a semi-infinite foreign inclusion, so that the Floquet-Bloch technique and the Gelfand transform do not apply directly. We describe waves which are localized near the inclusion and propagate along it. We give a formulation of the problem with radiation conditions that provides a Fredholm operator of index zero. The main conclusion concerns the spectra σ∘ and σ• of the problems in Ω∘ and Ω•; namely, we present a concrete geometry which supports the relation σ∘ ⫋ σ• due to a new non-empty spectral band caused by the semi-infinite inclusion, called an open waveguide in the double-periodic medium.
Dirichlet Component Regression and its Applications to Psychiatric Data.
Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel
2008-08-15
We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook's distance, and a local jackknife influence metric.
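The defining property the Dirichlet distribution contributes here, that component proportions sum to one with means α_i/Σα, can be verified directly. The α values below are arbitrary illustrations, not fitted PANSS estimates.

```python
import numpy as np

# Dirichlet draws are compositions: each sample sums to one and component i
# has mean alpha_i / sum(alpha). (Alpha values are illustrative only.)
rng = np.random.default_rng(42)
alpha = np.array([2.0, 3.0, 5.0])       # e.g. three hypothetical subscales
samples = rng.dirichlet(alpha, size=100000)
```

A Dirichlet regression then lets covariates shift the α parameters, moving the relative contributions of the components while the total remains fixed at one.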
Unstable Mode Solutions to the Klein-Gordon Equation in Kerr-anti-de Sitter Spacetimes
NASA Astrophysics Data System (ADS)
Dold, Dominic
2017-03-01
For any cosmological constant Λ = -3/ℓ² < 0 and any α < 9/4, we find a Kerr-AdS spacetime (M, g_KAdS), in which the Klein-Gordon equation □_{g_KAdS}ψ + (α/ℓ²)ψ = 0 has an exponentially growing mode solution satisfying a Dirichlet boundary condition at infinity. The spacetime violates the Hawking-Reall bound r₊² > |a|ℓ. We obtain an analogous result for Neumann boundary conditions if 5/4 < α < 9/4. Moreover, in the Dirichlet case, one can prove that, for any Kerr-AdS spacetime violating the Hawking-Reall bound, there exists an open family of masses α such that the corresponding Klein-Gordon equation permits exponentially growing mode solutions. Our result adopts methods of Shlapentokh-Rothman developed in (Commun. Math. Phys. 329:859-891, 2014) and provides the first rigorous construction of a superradiant instability for negative cosmological constant.
NASA Astrophysics Data System (ADS)
Gross, Markus
2018-03-01
We consider a one-dimensional fluctuating interfacial profile governed by the Edwards–Wilkinson or the stochastic Mullins-Herring equation for periodic, standard Dirichlet and Dirichlet no-flux boundary conditions. The minimum action path of an interfacial fluctuation conditioned to reach a given maximum height M at a finite (first-passage) time T is calculated within the weak-noise approximation. Dynamic and static scaling functions for the profile shape are obtained in the transient and the equilibrium regime, i.e. for first-passage times T smaller or larger than the characteristic relaxation time, respectively. In both regimes, the profile approaches the maximum height M with a universal algebraic time dependence characterized solely by the dynamic exponent of the model. It is shown that, in the equilibrium regime, the spatial shape of the profile depends sensitively on boundary conditions and conservation laws, but it is essentially independent of them in the transient regime.
Stereochemistry of silicon in oxygen-containing compounds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serezhkin, V. N., E-mail: Serezhkin@samsu.ru; Urusov, V. S.
2017-01-15
Specific stereochemical features of silicon in oxygen-containing compounds, including hybrid silicates with all oxygen atoms of SiO_n groups (n = 4, 5, or 6) entering into the composition of organic anions or molecules, are described by characteristics of Voronoi-Dirichlet polyhedra. It is found that in rutile-like stishovite and post-stishovite phases with structures similar to those of CaCl_2, α-PbO_2, or pyrite FeS_2, the volume of the Voronoi-Dirichlet polyhedra of silicon and oxygen atoms decreases linearly as pressure increases to 268 GPa. Based on these results, the possibility of formation of new post-stishovite phases is shown, namely, a fluorite-like structure (transition predicted at ~400 GPa) and a body-centered cubic lattice with statistical arrangement of silicon and oxygen atoms (~900 GPa).
Hidden in plain sight: the formal, informal, and hidden curricula of a psychiatry clerkship.
Wear, Delese; Skillicorn, Jodie
2009-04-01
To examine perceptions of the formal, informal, and hidden curricula in psychiatry as they are observed and experienced by (1) attending physicians who have teaching responsibilities for residents and medical students, (2) residents who are taught by those same physicians and who have teaching responsibilities for medical students, and (3) medical students who are taught by attendings and residents during their psychiatry rotation. From June to November 2007, the authors conducted focus groups with attendings, residents, and students in one midwestern academic setting. The sessions were audiotaped, transcribed, and analyzed for themes surrounding the formal, informal, and hidden curricula. All three groups offered a similar belief that the knowledge, skills, and values of the formal curriculum focused on building relationships. Similarly, all three suggested that elements of the informal and hidden curricula were expressed primarily as the values arising from attendings' role modeling, as the nature and amount of time attendings spend with patients, and as attendings' advice arising from experience and intuition versus "textbook learning." Whereas students and residents offered negative values arising from the informal and hidden curricula, attendings did not, offering instead the more positive values they intended to encourage through the informal and hidden curricula. The process described here has great potential in local settings across all disciplines. Asking teachers and learners in any setting to think about how they experience the educational environment and what sense they make of all curricular efforts can provide a reality check for educators and a values check for learners as they critically reflect on the meanings of what they are learning.
NASA Astrophysics Data System (ADS)
Hwong, Y. L.; Oliver, C.; Van Kranendonk, M. J.
2016-12-01
The rise of social media has transformed the way the public engages with scientists and science organisations. `Retweet', `Like', `Share' and `Comment' are a few ways users engage with messages on Twitter and Facebook, two of the most popular social media platforms. Despite the availability of big data from these digital footprints, research into social media science communication is scant. This paper presents the results of an empirical study into the processes and outcomes of space science related social media communications using machine learning. The study is divided into two main parts. The first part is dedicated to the use of supervised learning methods to investigate the features of highly engaging messages, e.g. highly retweeted tweets and highly shared Facebook posts. It is hypothesised that these messages contain certain psycholinguistic features that are unique to the field of space science. We built a predictive model to forecast the engagement levels of social media posts. By using four feature sets (n-grams, psycholinguistics, grammar and social media), we were able to achieve prediction accuracies in the vicinity of 90% using three supervised learning algorithms (Naive Bayes, linear classifier and decision tree). We conducted the same experiments on social media messages from three other fields (politics, business and non-profit) and discovered several features that are exclusive to space science communications: anger, authenticity, hashtags, visual descriptions and a tentative tone. The second part of the study focuses on the extraction of topics from a corpus of texts using topic modelling. This part of the study is exploratory in nature and uses an unsupervised method called Latent Dirichlet Allocation (LDA) to uncover previously unknown topics within a large body of documents.
Preliminary results indicate a strong potential of topic model algorithms to automatically uncover themes hidden within social media chatter on space-related issues, with keywords such as `exoplanet', `water' and `life' being clustered together to form a topic (i.e., `Astrobiology'). Results also demonstrate the freewheeling nature of social media conversations, while providing evidence for the role of these platforms in facilitating meaningful exchanges among science audiences.
Mining FDA drug labels using an unsupervised learning technique--topic modeling.
Bisgin, Halil; Liu, Zhichao; Fang, Hong; Xu, Xiaowei; Tong, Weida
2011-10-18
The Food and Drug Administration (FDA) approved drug labels contain a broad array of information, ranging from adverse drug reactions (ADRs) to drug efficacy, risk-benefit considerations, and more. However, the labeling language used to describe this information is free text, often containing ambiguous semantic descriptions, which poses a great challenge in retrieving useful information from the labeling text in a consistent and accurate fashion for comparative analysis across drugs. Consequently, this task has largely relied on the manual reading of the full text by experts, which is time consuming and labor intensive. In this study, a novel unsupervised text mining method, topic modeling, was applied to the drug labels with the goal of discovering "topics" that group together drugs with similar safety concerns and/or therapeutic uses. A total of 794 FDA-approved drug labels were used in this study. First, the three labeling sections (i.e., Boxed Warning, Warnings and Precautions, Adverse Reactions) of each drug label were processed with the Medical Dictionary for Regulatory Activities (MedDRA) to convert the free text of each label to standard ADR terms. Next, topic modeling with latent Dirichlet allocation (LDA) was applied to generate 100 topics, each associated with a set of drugs grouped together based on the probability analysis. Lastly, the efficacy of the topic modeling was evaluated based on known information about the therapeutic uses and safety data of the drugs. The results demonstrate that drugs grouped by topics are associated with the same safety concerns and/or therapeutic uses with statistical significance (P<0.05). The identified topics have distinct contexts that can be directly linked to specific adverse events (e.g., liver injury or kidney injury) or therapeutic applications (e.g., anti-infectives for systemic use).
We were also able to identify, via topics, potential adverse events that might arise from specific medications. The successful application of topic modeling to FDA drug labeling demonstrates its potential utility as a hypothesis-generation tool for inferring hidden relationships among concepts in biomedical documents, such as, in this study, drug safety and therapeutic use.
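The LDA machinery behind studies like this one can be sketched with a tiny collapsed Gibbs sampler. The toy corpus below (six short "label" documents over an invented six-word ADR vocabulary) and the hyperparameters are illustrative only, not the paper's 794-label dataset or its 100-topic configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny corpus: documents as lists of term ids; by construction, docs 0/1/4
# share one vocabulary cluster and docs 2/3/5 another.
vocab = ["rash", "nausea", "fever", "hepatic", "renal", "dizziness"]
docs = [[0, 1, 2, 0, 1], [0, 2, 1, 2], [3, 4, 5, 3, 4],
        [3, 5, 4, 3], [0, 1, 0, 2, 1], [4, 3, 5, 4, 3]]

K, V, alpha, beta = 2, len(vocab), 0.1, 0.01
z = [[rng.integers(K) for _ in d] for d in docs]        # random topic init
ndk = np.zeros((len(docs), K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

for _ in range(200):                                    # collapsed Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1  # remove current token
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())            # resample its topic
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)  # doc-topic mix
```

After sampling, documents drawn from the two disjoint vocabulary clusters end up dominated by different topics, which is the grouping effect the study exploits at scale.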
Near-Native Protein Loop Sampling Using Nonparametric Density Estimation Accommodating Sparcity
Day, Ryan; Lennox, Kristin P.; Sukhanov, Paul; Dahl, David B.; Vannucci, Marina; Tsai, Jerry
2011-01-01
Unlike the core structural elements of a protein, such as regular secondary structure, loop regions are difficult for template-based modeling (TBM) due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD, with a worst case of 3.66 Å, were produced. For canonical loops such as the immunoglobulin complementarity-determining regions (mean RMSD <2.0 Å), the DPM-HMM method performs as well as or better than the best templates, demonstrating that our automated method recaptures these canonical loops without inclusion of any IgG-specific terms or manual intervention. In cases with poor or few good templates (mean RMSD >7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct comparison with the Loopy algorithm, our method samples nearer-native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, under the realistic test conditions of the CASP9 experiment, successful application of the DPM-HMM to 90 loops from 45 TBM targets shows the general applicability of our sampling method to the loop modeling problem. These results demonstrate that our DPM-HMM provides an advantage by consistently sampling near-native loop structure.
The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/. PMID:22028638
A systems approach for analysis of high content screening assay data with topic modeling.
Bisgin, Halil; Chen, Minjun; Wang, Yuping; Kelly, Reagan; Fang, Hong; Xu, Xiaowei; Tong, Weida
2013-01-01
High Content Screening (HCS) has become an important tool for toxicity assessment, partly due to its advantage of handling multiple measurements simultaneously. This approach has provided insight and contributed to the understanding of systems biology at the cellular level. To fully realize this potential, the simultaneously measured multiple endpoints from a live cell should be considered in a probabilistic relationship to assess the cell's condition in responding to stress from a treatment, which poses a great challenge in extracting hidden knowledge and relationships from these measurements. In this work, we applied a text mining method, Latent Dirichlet Allocation (LDA), to analyze cellular endpoints from in vitro HCS assays and related the findings to in vivo histopathological observations. We measured multiple HCS assay endpoints for 122 drugs. Since LDA requires the data to be represented in document-term format, we first converted the continuous values of the measurements to word frequencies that can be processed by the text mining tool. For each drug, we generated a document for each of the 4 time points. Thus, we ended up with 488 documents (drug-hour), each having different values for the 10 endpoints, which are treated as words. We extracted three topics using LDA and examined these to identify diagnostic topics for 45 common drugs included in in vivo experiments from the Japanese Toxicogenomics Project (TGP), observing their necrosis findings at 6 and 24 hours after treatment. We found that assay endpoints assigned to particular topics were in concordance with the histopathology observed. Drugs showing necrosis at 6 hours were linked to severe damage events such as Steatosis, DNA Fragmentation, Mitochondrial Potential, and Lysosome Mass. DNA Damage and Apoptosis were associated with drugs causing necrosis at 24 hours, suggesting an interplay of the two pathways in these drugs.
Drugs with no sign of necrosis were related to the Cell Loss and Nuclear Size assays, which is suggestive of hepatocyte regeneration. The evidence from this study suggests that topic modeling with LDA can enable us to relate the endpoints of in vitro assays to an in vivo histological finding, necrosis. The effectiveness of this approach may add substantially to our understanding of systems biology.
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2018-05-01
In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields sufficiently accurately for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of the different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D-groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
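Standard POD-Galerkin reduction of a transient groundwater-like model can be sketched in a few lines. The example below is a minimal 1D heat/diffusion model with homogeneous Dirichlet boundaries baked into the operator; it shows the snapshot-SVD-projection pipeline only, not the boundary-splitting extension proposed in the paper, and all grid and time-step choices are illustrative.

```python
import numpy as np

n, dt, steps, r = 50, 1e-3, 100, 3
x = np.linspace(0.0, 1.0, n + 2)[1:-1]           # interior grid; u = 0 at both ends
h = x[1] - x[0]
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2       # Dirichlet Laplacian
M = np.eye(n) - dt * L                           # implicit Euler system matrix

u = np.sin(np.pi * x)
snaps = [u.copy()]
for _ in range(steps):                           # full-order model: collect snapshots
    u = np.linalg.solve(M, u)
    snaps.append(u.copy())

Phi = np.linalg.svd(np.array(snaps).T, full_matrices=False)[0][:, :r]  # POD basis
Mr = np.eye(r) - dt * Phi.T @ L @ Phi            # Galerkin-reduced operator
a = Phi.T @ np.sin(np.pi * x)                    # reduced initial condition
for _ in range(steps):                           # reduced-order model
    a = np.linalg.solve(Mr, a)

err = np.linalg.norm(Phi @ a - u) / np.linalg.norm(u)
```

Because the initial sine mode is an exact eigenvector of the discrete Dirichlet Laplacian, even a small basis reproduces the full trajectory here; the paper's point is that general boundary conditions are not preserved this cleanly by the plain projection.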
More Than Meets the Eye: Split-Second Social Perception.
Freeman, Jonathan B; Johnson, Kerri L
2016-05-01
Recent research suggests that visual perception of social categories is shaped not only by facial features but also by higher-order social cognitive processes (e.g., stereotypes, attitudes, goals). Building on neural computational models of social perception, we outline a perspective of how multiple bottom-up visual cues are flexibly integrated with a range of top-down processes to form perceptions, and we identify a set of key brain regions involved. During this integration, 'hidden' social category activations are often triggered which temporarily impact perception without manifesting in explicit perceptual judgments. Importantly, these hidden impacts and other aspects of the perceptual process predict downstream social consequences - from politicians' electoral success to several evaluative biases - independently of the outcomes of that process. Copyright © 2016 Elsevier Ltd. All rights reserved.
On the connection between multigrid and cyclic reduction
NASA Technical Reports Server (NTRS)
Merriam, M. L.
1984-01-01
A technique is shown whereby it is possible to relate a particular multigrid process to cyclic reduction using purely mathematical arguments. This technique suggests methods for solving Poisson's equation in one, two, or three dimensions with Dirichlet or Neumann boundary conditions. In one dimension the method is exact and, in fact, reduces to cyclic reduction. This provides a valuable reference point for understanding multigrid techniques. The particular multigrid process analyzed is referred to here as Approximate Cyclic Reduction (ACR) and is one of a class known in the literature as Multigrid Reduction methods. It involves one approximation with a known error term. It is possible to relate the error term in this approximation to certain eigenvector components of the error. These are sharply reduced in amplitude by classical relaxation techniques. The approximation can thus be made a very good one.
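The one-dimensional case where the method is exact can be demonstrated directly: cyclic reduction eliminates every other unknown of the tridiagonal Poisson system, recurses on the half-size system, and back-substitutes. A minimal sketch (problem setup and grid size chosen for illustration):

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by cyclic reduction.
    a, b, c are the sub-, main and super-diagonals (a[0] = c[-1] = 0);
    the size must be 2**k - 1 so each reduction step halves the system."""
    n = len(b)
    if n == 1:
        return np.array([d[0] / b[0]])
    i = np.arange(1, n - 1, 2)                  # unknowns kept on the coarse level
    alpha = -a[i] / b[i - 1]
    gamma = -c[i] / b[i + 1]
    coarse = cyclic_reduction(alpha * a[i - 1],
                              b[i] + alpha * c[i - 1] + gamma * a[i + 1],
                              gamma * c[i + 1],
                              d[i] + alpha * d[i - 1] + gamma * d[i + 1])
    x = np.zeros(n)
    x[i] = coarse
    xpad = np.concatenate(([0.0], x, [0.0]))    # zero "ghost" boundary values
    j = np.arange(0, n, 2)                      # back-substitute eliminated unknowns
    x[j] = (d[j] - a[j] * xpad[j] - c[j] * xpad[j + 2]) / b[j]
    return x

# 1D Poisson -u'' = f with u(0) = u(1) = 0 on n = 2**4 - 1 interior points.
n = 15
h = 1.0 / (n + 1)
xg = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * xg)               # exact solution u = sin(pi x)
a = np.full(n, -1.0 / h**2); a[0] = 0.0
c = np.full(n, -1.0 / h**2); c[-1] = 0.0
b = np.full(n, 2.0 / h**2)
u = cyclic_reduction(a, b, c, f)
```

Each level plays the role of a coarse grid, which is the correspondence to multigrid that the paper formalizes.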
On degenerate coupled transport processes in porous media with memory phenomena
NASA Astrophysics Data System (ADS)
Beneš, Michal; Pažanin, Igor
2018-06-01
In this paper we prove the existence of weak solutions to degenerate parabolic systems arising from the fully coupled moisture movement, solute transport of dissolved species and heat transfer through porous materials. Physically relevant mixed Dirichlet-Neumann boundary conditions and initial conditions are considered. Existence of a global weak solution of the problem is proved by means of semidiscretization in time, proving necessary uniform estimates and by passing to the limit from discrete approximations. Degeneration occurs in the nonlinear transport coefficients which are not assumed to be bounded below and above by positive constants. Degeneracies in transport coefficients are overcome by proving suitable a priori $L^\infty$-estimates based on the De Giorgi and Moser iteration technique.
A Case Study on Sepsis Using PubMed and Deep Learning for Ontology Learning.
Arguello Casteleiro, Mercedes; Maseda Fernandez, Diego; Demetriou, George; Read, Warren; Fernandez Prieto, Maria Jesus; Des Diz, Julio; Nenadic, Goran; Keane, John; Stevens, Robert
2017-01-01
We investigate the application of distributional semantics models for facilitating unsupervised extraction of biomedical terms from unannotated corpora. Term extraction is used as the first step of an ontology learning process that aims at (semi-)automatic annotation of biomedical concepts and relations from more than 300K PubMed titles and abstracts. We experimented with both traditional distributional semantics methods such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) as well as the neural language models CBOW and Skip-gram from Deep Learning. The evaluation conducted concentrates on sepsis, a major life-threatening condition, and shows that Deep Learning models outperform LSA and LDA with much higher precision.
Casimir interaction between spheres in ( D + 1)-dimensional Minkowski spacetime
NASA Astrophysics Data System (ADS)
Teo, L. P.
2014-05-01
We consider the Casimir interaction between two spheres in (D+1)-dimensional Minkowski spacetime due to the vacuum fluctuations of scalar fields, for combinations of Dirichlet and Neumann boundary conditions. The TGTG formula for the Casimir interaction energy is derived. The computation of the T matrices of the two spheres is straightforward. To compute the two G matrices, known as translation matrices, which relate the hyper-spherical waves in two spherical coordinate frames differing by a translation, we generalize the operator approach employed in [39]. The result is expressed in terms of an integral over Gegenbauer polynomials. In contrast to the D=3 case, we do not re-express the integral in terms of 3j-symbols and hyper-spherical waves, which in principle can be done but does not simplify the formula. Using our expression for the Casimir interaction energy, we derive the large-separation and small-separation asymptotic expansions of the Casimir interaction energy. In the large-separation regime, we find that the Casimir interaction energy is of order L^{-2D+3}, L^{-2D+1} and L^{-2D-1}, respectively, for Dirichlet-Dirichlet, Dirichlet-Neumann and Neumann-Neumann boundary conditions, where L is the center-to-center distance of the two spheres. In the small-separation regime, we confirm that the leading term of the Casimir interaction agrees with the proximity force approximation, which is of order , where d is the distance between the two spheres. Another main result of this work is the analytic computation of the next-to-leading-order term in the small-separation asymptotic expansion. This term is computed using careful order analysis as well as a perturbation method. In the case where the radius of one of the spheres goes to infinity, we find that the results agree with those we derive for the sphere-plate configuration. When D=3, we also recover previously known results. We find that when D is large, the ratio of the next-to-leading-order term to the leading-order term is linear in D, indicating a larger correction at higher dimensions. The methodologies employed in this work and the results obtained can be used to study the one-loop effective action of the system of two spherical objects in the universe.
Extracting hidden messages in steganographic images
Quach, Tu-Thach
2014-07-17
The eventual goal of steganalytic forensics is to extract the hidden messages embedded in steganographic images. A promising technique that partially addresses this problem is steganographic payload location, an approach that reveals the message bits, but not their logical order. It works by finding modified pixels, or residuals, as an artifact of the embedding process. This technique is successful against simple least-significant-bit steganography and group-parity steganography. The actual messages, however, remain hidden, as no logical order can be inferred from the located payload. This paper establishes an important result addressing this shortcoming: we show that the expected mean residuals contain enough information to logically order the located payload, provided that the size of the payload in each stego image is not fixed. The located payload can be ordered as prescribed by the mean residuals to obtain the hidden messages without knowledge of the embedding key, exposing the vulnerability of these embedding algorithms. We provide experimental results to support our analysis.
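The payload-location principle, averaging residuals over many stego images so that pixels on the embedding path stand out, can be illustrated with a toy simulation. This is not the paper's estimator: the residuals here are synthetic indicator variables, with flip probabilities (0.5 on payload pixels, 0.02 elsewhere) chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_images = 10_000, 400
payload = rng.choice(n_pixels, size=1_000, replace=False)   # hidden embedding path

# Synthetic residual indicators: embedding flips an LSB with prob. 1/2 at
# payload pixels; elsewhere only weak noise produces spurious residuals.
resid = rng.random((n_images, n_pixels)) < 0.02
for k in range(n_images):
    resid[k, payload] = rng.random(payload.size) < 0.5

mean_resid = resid.mean(axis=0)                   # average residual per pixel
located = np.argsort(mean_resid)[-payload.size:]  # top-k candidate locations
accuracy = np.isin(located, payload).mean()       # fraction correctly located
```

With enough images, the mean residual at payload pixels concentrates near 0.5 while noise pixels stay near 0.02, so thresholding locates the payload almost perfectly; the ordering of those bits is the harder problem the paper solves.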
Optimized hardware framework of MLP with random hidden layers for classification applications
NASA Astrophysics Data System (ADS)
Zyarah, Abdullah M.; Ramesh, Abhishek; Merkel, Cory; Kudithipudi, Dhireesha
2016-05-01
Multilayer perceptron networks with random hidden layers are very efficient at automatic feature extraction and offer significant performance improvements in the training process. They essentially employ a large collection of fixed, random features and are expedient for form-factor-constrained embedded platforms. In this work, a reconfigurable and scalable architecture is proposed for MLPs with random hidden layers, with a customized building block based on the CORDIC algorithm. The proposed architecture also exploits fixed-point operations for area efficiency. The design is validated for classification on two different datasets. An accuracy of ~90% for the MNIST dataset and 75% for gender classification on the LFW dataset was observed. The hardware achieves a 299× speed-up over the corresponding software realization.
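The training shortcut such networks exploit can be sketched in software: the hidden weights are drawn once and frozen, so only the linear readout needs fitting, which reduces training to a least-squares solve. The toy two-blob dataset and all sizes below are illustrative, and this sketch omits the fixed-point/CORDIC hardware aspects entirely.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class problem: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-2.0, 1.0, (200, 2)), rng.normal(2.0, 1.0, (200, 2))])
y = np.hstack([np.zeros(200), np.ones(200)])

# Fixed random hidden layer: weights are drawn once and never trained.
n_hidden = 64
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                      # random feature expansion

# Only the linear readout is fit, via regularized least squares.
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ y)
acc = ((H @ beta > 0.5) == y).mean()        # training accuracy
```

Because no backpropagation through the hidden layer is needed, the whole "training" step is one linear solve, which is what makes the scheme attractive for constrained hardware.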
Time series modeling by a regression approach based on a latent process.
Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice
2009-01-01
Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows different polynomial regression models to be activated smoothly or abruptly. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
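The model class can be made concrete by simulating from it: a logistic gate over time selects which polynomial regime generates each observation, smoothly for a shallow gate or abruptly for a steep one. The two regimes, the gate steepness, and the noise level below are invented for illustration; the EM/IRLS estimation procedure of the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 300)

# Two polynomial regimes and a logistic gate that switches between them.
regimes = np.stack([1.0 + 2.0 * t, 4.0 - 3.0 * t**2])   # regime mean curves
lam = 30.0                                              # gate steepness (abruptness)
pi1 = 1.0 / (1.0 + np.exp(-lam * (t - 0.5)))            # P(regime 2 | t)
z = (rng.random(t.size) < pi1).astype(int)              # discrete hidden process
y = regimes[z, np.arange(t.size)] + 0.05 * rng.normal(size=t.size)
```

Shrinking `lam` toward zero blends the regimes smoothly, while a large `lam` reproduces a near-instant switch, which is the flexibility the hidden logistic process provides.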
Boundary conditions in Chebyshev and Legendre methods
NASA Technical Reports Server (NTRS)
Canuto, C.
1984-01-01
Two different ways of treating non-Dirichlet boundary conditions in Chebyshev and Legendre collocation methods are discussed for second order differential problems. An error analysis is provided. The effect of preconditioning the corresponding spectral operators by finite difference matrices is also investigated.
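One standard way of imposing a non-Dirichlet boundary condition in a collocation method is to replace the boundary rows of the spectral operator: an identity row for a Dirichlet value, and a row of the differentiation matrix for a Neumann value. A minimal sketch for u'' = eˣ with mixed boundary conditions (the test problem and N are chosen for illustration; the differentiation matrix follows Trefethen's standard construction, which is one of several possible treatments discussed in such papers):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and nodes x (Trefethen's construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                 # diagonal via negative row sums
    return D, x

N = 16
D, x = cheb(N)
A = D @ D                                       # collocation operator for u''
rhs = np.exp(x)                                 # u'' = e^x has exact solution u = e^x
A[0, :] = 0.0; A[0, 0] = 1.0; rhs[0] = np.exp(1.0)   # Dirichlet row at x = +1
A[N, :] = D[N, :]; rhs[N] = np.exp(-1.0)             # Neumann row u'(-1) = e^{-1}
u = np.linalg.solve(A, rhs)
err = np.max(np.abs(u - np.exp(x)))
```

Even with N = 16 the error is near machine precision, illustrating the spectral accuracy that makes the boundary treatment, rather than the interior discretization, the delicate part of such methods.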
Complex Sequencing Rules of Birdsong Can be Explained by Simple Hidden Markov Processes
Katahira, Kentaro; Suzuki, Kenta; Okanoya, Kazuo; Okada, Masato
2011-01-01
Complex sequencing rules observed in birdsongs provide an opportunity to investigate the neural mechanism for generating complex sequential behaviors. To relate the findings from studying birdsongs to other sequential behaviors such as human speech and musical performance, it is crucial to characterize the statistical properties of the sequencing rules in birdsongs. However, the properties of the sequencing rules in birdsongs have not yet been fully addressed. In this study, we investigate the statistical properties of the complex birdsong of the Bengalese finch (Lonchura striata var. domestica). Based on manually annotated syllable labels, we first show that there are significant higher-order context dependencies in Bengalese finch songs, that is, which syllable appears next depends on more than one previous syllable. We then analyze acoustic features of the song and show that higher-order context dependencies can be explained using first-order hidden state transition dynamics with redundant hidden states. This model corresponds to hidden Markov models (HMMs), well-known statistical models with a wide range of applications in time series modeling. The song annotation with these models with first-order hidden state dynamics agreed well with manual annotation; the score was comparable to that of a second-order HMM, and surpassed the zeroth-order model (the Gaussian mixture model; GMM), which does not use context information. Our results imply that the hierarchical representation with hidden state dynamics may underlie the neural implementation for generating complex behavioral sequences with higher-order dependencies. PMID:21915345
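The core idea, that a first-order hidden chain with redundant states can encode a second-order rule, can be demonstrated without any fitting. The toy grammar below (three syllables, with the successor of "b" depending on what preceded it) is invented for illustration; hidden states are (previous, current) pairs, so the chain over them is first-order yet the emitted syllables obey a second-order dependency.

```python
import numpy as np

rng = np.random.default_rng(3)
syllables = ["a", "b", "c"]

# Hidden states are (previous, current) pairs: redundant copies of each
# syllable, one per context, so a first-order chain over these states
# can represent a second-order sequencing rule.
states = [(p, q) for p in syllables for q in syllables]
idx = {s: k for k, s in enumerate(states)}

T = np.zeros((9, 9))
for (p, q) in states:
    if q == "b":
        nxt = "c" if p == "a" else "a"      # "ab" -> c, otherwise "?b" -> a
    else:
        nxt = "b"                           # any non-"b" syllable -> b
    T[idx[(p, q)], idx[(q, nxt)]] = 1.0

# Generate a syllable sequence from the first-order hidden chain.
s = idx[("a", "b")]
seq = ["a", "b"]
for _ in range(200):
    s = rng.choice(9, p=T[s])
    seq.append(states[s][1])

# Verify the emitted syllables respect the second-order rule.
ok = all(seq[i + 2] == ("c" if seq[i] == "a" else "a")
         for i in range(len(seq) - 2) if seq[i + 1] == "b")
```

A plain first-order chain over the three raw syllables could not express this rule, which is exactly the role the redundant hidden states play in the HMM account of the finch song.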
Muth, Claudia; Raab, Marius H; Carbon, Claus-Christian
2016-01-01
Research in the field of psychological aesthetics points to the appeal of stimuli which defy easy recognition by being "semantically unstable" but which still allow for creating meaning, either in the ongoing process of elaborative perception or as an end product of the entire process. Such effects were reported for hidden images (Muth and Carbon, 2013) as well as Cubist artworks concealing detectable, although fragmented, objects (Muth et al., 2013). To test the stability of the relationship between semantic determinacy and appreciation across different episodic contexts, 30 volunteers continuously evaluated an artistic movie on visual determinacy or liking via the Continuous Evaluation Procedure (CEP; Muth et al., 2015b). The movie consisted of five episodes with emerging Gestalts. In the first between-participants condition, the hidden Gestalts in the movie episodes were of increasing determinacy; in the second condition, the episodes showed decreasing determinacies of hidden Gestalts. In the increasing-determinacy group, visual determinacy was rated higher and showed better predictive quality for liking than in the decreasing-determinacy group. Furthermore, when the movie started with low visual determinacy of hidden Gestalts, unexpectedly strong increases in visual determinacy had a bigger effect on liking than in the condition which allowed for weaker Gestalt recognition after having started with highly determinate Gestalts. The resulting pattern calls for consideration of the episodic context when examining art appreciation.
Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot
Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki
2018-01-01
In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception method for the MHDP that uses the information gain (IG) maximization criterion and the lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. The results support our theoretical outcomes. PMID:29872389
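The lazy greedy step this abstract leans on exploits submodularity: a marginal information gain can only shrink as the selected set grows, so stale priority-queue entries are valid upper bounds and most re-evaluations can be skipped. A minimal sketch with a generic `gain` callback (the function names and the toy coverage objective below are illustrative, not from the paper):

```python
import heapq

def lazy_greedy(candidates, gain, budget):
    """Select up to `budget` actions for a submodular, non-decreasing set function.

    `gain(a, selected)` returns the marginal gain of action `a` given the
    already-selected list; submodularity guarantees gains only shrink, so a
    stale heap entry is a safe upper bound (the "lazy" trick).
    """
    selected = []
    # Max-heap of (-upper_bound, action); initial bounds are singleton gains.
    heap = [(-gain(a, []), a) for a in candidates]
    heapq.heapify(heap)
    while heap and len(selected) < budget:
        neg_bound, a = heapq.heappop(heap)
        g = gain(a, selected)              # re-evaluate against current set
        if not heap or g >= -heap[0][0]:
            selected.append(a)             # still beats every stale bound: commit
        else:
            heapq.heappush(heap, (-g, a))  # push back with refreshed bound
    return selected
```

With the IG of the MHDP as `gain`, this greedy selection inherits the classic (1 - 1/e) approximation guarantee for monotone submodular maximization.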
A Dirichlet process model for classifying and forecasting epidemic curves.
Nsoesie, Elaine O; Leman, Scotland C; Marathe, Madhav V
2014-01-09
A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997-2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. 
Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods' performance was comparable. Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial.
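The clustering mechanism behind a Dirichlet process model can be illustrated with its Chinese-restaurant-process predictive rule: a new curve joins an existing cluster with probability proportional to that cluster's size, or opens a new cluster with probability proportional to the concentration parameter. A toy sketch of just this prior draw (names and parameters are illustrative; the paper's model additionally conditions on the observed epidemic curve):

```python
import numpy as np

def crp_labels(n, alpha, rng):
    """Sequentially assign n items to clusters via the Chinese restaurant
    process, the predictive rule of the Dirichlet process prior."""
    labels = [0]                         # first item opens the first cluster
    for i in range(1, n):
        counts = np.bincount(labels)     # current cluster sizes
        probs = np.append(counts, alpha) / (i + alpha)
        labels.append(int(rng.choice(len(probs), p=probs)))
    return labels
```

Because a new cluster always has positive probability, the model can flag an epidemic curve unlike anything previously observed, which is exactly the behavior a fully supervised classifier such as RF cannot reproduce.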
Hidden symmetries and equilibrium properties of multiplicative white-noise stochastic processes
NASA Astrophysics Data System (ADS)
González Arenas, Zochil; Barci, Daniel G.
2012-12-01
Multiplicative white-noise stochastic processes continue to attract attention in a wide area of scientific research. The variety of prescriptions available for defining them makes the development of general tools for their characterization difficult. In this work, we study equilibrium properties of Markovian multiplicative white-noise processes. For this, we define the time reversal transformation for such processes, taking into account that the asymptotic stationary probability distribution depends on the prescription. Representing the stochastic process in a functional Grassmann formalism, we avoid the necessity of fixing a particular prescription. In this framework, we analyze equilibrium properties and study hidden symmetries of the process. We show that, using a careful definition of the equilibrium distribution and taking into account the appropriate time reversal transformation, usual equilibrium properties are satisfied for any prescription. Finally, we present a detailed deduction of a covariant supersymmetric formulation of a multiplicative Markovian white-noise process and study some of the constraints that it imposes on correlation functions using Ward-Takahashi identities.
NASA Astrophysics Data System (ADS)
Brown-Dymkoski, Eric; Kasimov, Nurlybek; Vasilyev, Oleg V.
2014-04-01
In order to introduce solid obstacles into flows, several different methods are used, including volume penalization methods which prescribe appropriate boundary conditions by applying local forcing to the constitutive equations. One well known method is Brinkman penalization, which models solid obstacles as porous media. While it has been adapted for compressible, incompressible, viscous and inviscid flows, it is limited in the types of boundary conditions that it imposes, as are most volume penalization methods. Typically, approaches are limited to Dirichlet boundary conditions. In this paper, Brinkman penalization is extended for generalized Neumann and Robin boundary conditions by introducing hyperbolic penalization terms with characteristics pointing inward on solid obstacles. This Characteristic-Based Volume Penalization (CBVP) method is a comprehensive approach to conditions on immersed boundaries, providing for homogeneous and inhomogeneous Dirichlet, Neumann, and Robin boundary conditions on hyperbolic and parabolic equations. This CBVP method can be used to impose boundary conditions for both integrated and non-integrated variables in a systematic manner that parallels the prescription of exact boundary conditions. Furthermore, the method does not depend upon a physical model, as with porous media approach for Brinkman penalization, and is therefore flexible for various physical regimes and general evolutionary equations. Here, the method is applied to scalar diffusion and to direct numerical simulation of compressible, viscous flows. With the Navier-Stokes equations, both homogeneous and inhomogeneous Neumann boundary conditions are demonstrated through external flow around an adiabatic and heated cylinder. Theoretical and numerical examination shows that the error from penalized Neumann and Robin boundary conditions can be rigorously controlled through an a priori penalization parameter η. 
The error on a transient boundary is found to converge as O(η), which is more favorable than the error convergence of the already established Dirichlet boundary condition.
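The basic Brinkman idea that the paper generalizes can be shown on a 1-D scalar diffusion problem: inside the obstacle mask, a stiff relaxation term drags the solution toward the prescribed Dirichlet value, with the error controlled by the penalization parameter η. A sketch under illustrative parameters (this is the plain Dirichlet penalization only, not the paper's CBVP formulation with its hyperbolic terms for Neumann/Robin conditions):

```python
import numpy as np

def penalized_diffusion(n=200, steps=2000, eta=1e-4, u_wall=1.0):
    """Explicit 1-D heat equation with Brinkman-style volume penalization.

    The obstacle occupies x in (0.4, 0.6); inside it the term -(u - u_wall)/eta
    drives the solution to the Dirichlet value u_wall (error ~ O(eta)).
    """
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    dt = 0.25 * dx**2                               # stable explicit step
    chi = ((x > 0.4) & (x < 0.6)).astype(float)     # obstacle mask
    u = np.zeros(n)
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * (lap - chi * (u - u_wall) / eta)
        u[0] = u[-1] = 0.0                          # exterior Dirichlet walls
    return x, u, chi
```

Shrinking `eta` tightens the boundary condition inside the mask but stiffens the penalty term, which is the trade-off the a priori error analysis in the paper quantifies.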
A numerical technique for linear elliptic partial differential equations in polygonal domains.
Hashemzadeh, P; Fokas, A S; Smitheman, S A
2015-03-08
Integral representations for the solution of linear elliptic partial differential equations (PDEs) can be obtained using Green's theorem. However, these representations involve both the Dirichlet and the Neumann values on the boundary, and for a well-posed boundary-value problem (BVP) one of these functions is unknown. A new transform method for solving BVPs for linear and integrable nonlinear PDEs, usually referred to as the unified transform (or the Fokas transform), was introduced by the second author in the late 1990s. For linear elliptic PDEs, this method can be considered as the analogue of Green's function approach, but now it is formulated in the complex Fourier plane instead of the physical plane. It employs two global relations, also formulated in the Fourier plane, which couple the Dirichlet and the Neumann boundary values. These relations can be used to characterize the unknown boundary values in terms of the given boundary data, yielding an elegant approach for determining the Dirichlet-to-Neumann map. The numerical implementation of the unified transform can be considered as the counterpart in the Fourier plane of the well-known boundary integral method, which is formulated in the physical plane. For this implementation, one must choose (i) a suitable basis for expanding the unknown functions and (ii) an appropriate set of complex values, which we refer to as collocation points, at which to evaluate the global relations. Here, by employing a variety of examples we present simple guidelines of how the above choices can be made. Furthermore, we provide concrete rules for choosing the collocation points so that the condition number of the matrix of the associated linear system remains low.
The Hidden Curriculum: What Are We Actually Teaching about the Fundamentals of Care?
MacMillan, Kathleen
2016-01-01
The issues of missed or inadequately provided basic nursing care and related complications are being identified as worldwide phenomena of interest. Without being aware of it, educators and practicing nurses may be teaching nursing students that fundamental nursing care is unimportant, uncomplicated and not really nursing's responsibility. This paper explores the concept of the "hidden curriculum" in nursing education, as it relates to fundamental nursing care and calls for greater partnerships between education and service to uncover the hidden curriculum; to effectively shape it to achieve alignment between classroom and practice; and, ultimately, to improve care processes and patient outcomes through collaboration. A renewed focus on the vital importance of what is considered "basics" to patient outcomes is required in nursing education. Copyright © 2016 Longwoods Publishing.
Hidden flows and waste processing--an analysis of illustrative futures.
Schiller, F; Raffield, T; Angus, A; Herben, M; Young, P J; Longhurst, P J; Pollard, S J T
2010-12-14
An existing materials flow model is adapted (using Excel and AMBER model platforms) to account for waste and hidden material flows within a domestic environment. Supported by national waste data, the implications of legislative change, domestic resource depletion and waste technology advances are explored. The revised methodology offers additional functionality for economic parameters that influence waste generation and disposal. We explore this accounting system under hypothetical future waste and resource management scenarios, illustrating the utility of the model. A sensitivity analysis confirms that imports, domestic extraction and their associated hidden flows impact mostly on waste generation. The model offers enhanced utility for policy and decision makers with regard to economic mass balance and strategic waste flows, and may promote further discussion about waste technology choice in the context of reducing carbon budgets.
Discovering functional modules by topic modeling RNA-Seq based toxicogenomic data.
Yu, Ke; Gong, Binsheng; Lee, Mikyung; Liu, Zhichao; Xu, Joshua; Perkins, Roger; Tong, Weida
2014-09-15
Toxicogenomics (TGx) endeavors to elucidate the underlying molecular mechanisms through exploring gene expression profiles in response to toxic substances. Recently, RNA-Seq is increasingly regarded as a more powerful alternative to microarrays in TGx studies. However, realizing RNA-Seq's full potential requires novel approaches to extracting information from the complex TGx data. Considering read counts as the number of times a word occurs in a document, gene expression profiles from RNA-Seq are analogous to a word-by-document matrix used in text mining. Topic modeling, which aims to discover the latent structures in text corpora, is therefore well suited to exploring RNA-Seq based TGx data. In this study, topic modeling was applied on a typical RNA-Seq based TGx data set to discover hidden functional modules. The RNA-Seq based gene expression profiles were transformed into "documents", on which latent Dirichlet allocation (LDA) was used to build a topic model. We found samples treated by the compounds with the same modes of action (MoAs) could be clustered based on topic similarities. The topic most relevant to each cluster was identified as a "marker" topic, which was interpreted by gene enrichment analysis with MoAs then confirmed by compound and pathway associations mined from literature. To further validate the "marker" topics, we tested topic transferability from RNA-Seq to microarrays. The RNA-Seq based gene expression profile of a topic specifically associated with the peroxisome proliferator-activated receptor (PPAR) signaling pathway was used to query samples with similar expression profiles in two different microarray data sets, yielding an accuracy of about 85%. This proof-of-concept study demonstrates the applicability of topic modeling to discover functional modules in RNA-Seq data and suggests a valuable computational tool for leveraging information within TGx data in the RNA-Seq era.
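Treating each expression profile as a "document" of gene-word counts, the LDA step can be sketched with scikit-learn. The toy counts below fake two "functional modules" (disjoint blocks of enriched genes); all sizes and parameter choices are illustrative, not the study's settings:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Toy "documents": rows are samples, columns are per-gene read counts.
rng = np.random.default_rng(0)
counts = np.ones((10, 8), dtype=int)              # small background count
counts[:5, :4] += rng.poisson(20, size=(5, 4))    # module A: genes 0-3 enriched
counts[5:, 4:] += rng.poisson(20, size=(5, 4))    # module B: genes 4-7 enriched

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)                 # per-sample topic proportions
# Samples sharing a module load on the same topic; lda.components_ gives the
# per-topic gene weights, the analogue of the study's "marker" topics.
```

In the study's pipeline, the rows of `lda.components_` would then be passed to gene enrichment analysis to interpret each topic.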
Nonparametric Bayesian models for a spatial covariance.
Reich, Brian J; Fuentes, Montserrat
2012-01-01
A crucial step in the analysis of spatial data is to estimate the spatial correlation function that determines the relationship between a spatial process at two locations. The standard approach to selecting the appropriate correlation function is to use prior knowledge or exploratory analysis, such as a variogram analysis, to select the correct parametric correlation function. Rather than selecting a particular parametric correlation function, we treat the covariance function as an unknown function to be estimated from the data. We propose a flexible prior for the correlation function to provide robustness to the choice of correlation function. We specify the prior for the correlation function using spectral methods and the Dirichlet process prior, which is a common prior for an unknown distribution function. Our model does not require Gaussian data or spatial locations on a regular grid. The approach is demonstrated using a simulation study as well as an analysis of California air pollution data.
Automatic Hidden-Web Table Interpretation by Sibling Page Comparison
NASA Astrophysics Data System (ADS)
Tao, Cui; Embley, David W.
The longstanding problem of automatic table interpretation still eludes us. Its solution would not only be an aid to table processing applications such as large volume table conversion, but would also be an aid in solving related problems such as information extraction and semi-structured data management. In this paper, we offer a conceptual modeling solution for the common special case in which so-called sibling pages are available. The sibling pages we consider are pages on the hidden web, commonly generated from underlying databases. We compare them to identify and connect nonvarying components (category labels) and varying components (data values). We tested our solution using more than 2,000 tables in source pages from three different domains: car advertisements, molecular biology, and geopolitical information. Experimental results show that the system can successfully identify sibling tables, generate structure patterns, interpret tables using the generated patterns, and automatically adjust the structure patterns, if necessary, as it processes a sequence of hidden-web pages. For these activities, the system was able to achieve an overall F-measure of 94.5%.
DUTIR at TREC 2009: Chemical IR Track
2009-11-01
We set the Dirichlet prior empirically at 1,500 as recommended in [2]. For example, Topic 15 “Betaines for peripheral arterial disease” is...converted into the following Indri query: #combine( betaines for peripheral arterial disease ), which produces results rank-equivalent to a simple query
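For context, the Dirichlet prior μ referenced here is the smoothing parameter of the query-likelihood retrieval model that Indri implements: each term's document probability is shrunk toward the collection language model. A hedged sketch of the scoring rule over toy tokenized documents (not Indri's actual code; the function name is illustrative):

```python
import math
from collections import Counter

def dirichlet_ql_score(query, doc, collection, mu=1500):
    """Query-likelihood score with Dirichlet smoothing, in log space.

    p(w|d) = (tf(w,d) + mu * p(w|C)) / (|d| + mu), where p(w|C) is the
    collection language model; mu = 1500 matches the value used in the run.
    """
    tf = Counter(doc)
    cf = Counter(w for d in collection for w in d)
    total = sum(cf.values())
    score = 0.0
    for w in query:
        p_c = cf[w] / total
        if p_c == 0.0:
            continue  # term unseen in the whole collection: skip it here
        score += math.log((tf[w] + mu * p_c) / (len(doc) + mu))
    return score
```

Larger μ pulls all documents toward the collection model, which mainly helps short queries over short documents, hence the empirically tuned 1,500.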
Modifications to holographic entanglement entropy in warped CFT
NASA Astrophysics Data System (ADS)
Song, Wei; Wen, Qiang; Xu, Jianfei
2017-02-01
In [1] it was observed that asymptotic boundary conditions play an important role in the study of holographic entanglement beyond AdS/CFT. In particular, the Ryu-Takayanagi proposal must be modified for warped AdS3 (WAdS3) with Dirichlet boundary conditions. In this paper, we consider AdS3 and WAdS3 with Dirichlet-Neumann boundary conditions. The conjectured holographic duals are warped conformal field theories (WCFTs), featuring a Virasoro-Kac-Moody algebra. We provide a holographic calculation of the entanglement entropy and Rényi entropy using AdS3/WCFT and WAdS3/WCFT dualities. Our bulk results are consistent with the WCFT results derived by Castro-Hofman-Iqbal using the Rindler method. Comparing with [1], we explicitly show that the holographic entanglement entropy is indeed affected by boundary conditions. Both results differ from the Ryu-Takayanagi proposal, indicating new relations between spacetime geometry and quantum entanglement for holographic dualities beyond AdS/CFT.
Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation.
Chen, Chao; Zare, Alina; Trinh, Huy N; Omotara, Gbenga O; Cobb, James Tory; Lagaunne, Timotius A
2017-12-01
Topic models [e.g., probabilistic latent semantic analysis, latent Dirichlet allocation (LDA), and supervised LDA] have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership LDA (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery, where a visual word may be a mixture of multiple topics. Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations; a capability previous topic modeling methods do not have.
A generalized Poisson solver for first-principles device simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch; Brück, Sascha
2016-01-28
Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
Dirichlet Component Regression and its Applications to Psychiatric Data
Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel
2011-01-01
We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook’s distance, and a local jackknife influence metric. PMID:22058582
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Dr. Li; Cui, Xiaohui; Cemerlic, Alma
Ad hoc networks are very helpful in situations when no fixed network infrastructure is available, such as natural disasters and military conflicts. In such a network, all wireless nodes are equal peers simultaneously serving as both senders and routers for other nodes. Therefore, how to route packets through reliable paths becomes a fundamental problem when the behaviors of certain nodes deviate from wireless ad hoc routing protocols. We proposed a novel Dirichlet reputation model, based on Bayesian inference theory, which evaluates the reliability of each node in terms of packet delivery. Our system offers a way to predict and select a reliable path through a combination of first-hand observation and second-hand reputation reports. We also proposed a moving-window mechanism which helps to adjust the responsiveness of our system to changes in node behaviors. We integrated the Dirichlet reputation into the routing protocol of wireless ad hoc networks. Our extensive simulation indicates that our proposed reputation system can improve the goodput of the network and reduce negative impacts caused by misbehaving nodes.
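The core bookkeeping of such a reputation system is compact: each node keeps Dirichlet pseudo-counts over delivery outcomes, first- and second-hand evidence add to the counts, and a moving window discounts old evidence so the estimate can track behavior changes. A sketch with illustrative class and parameter names (the decay rule here, capping the total evidence, is just one way to realize a moving window, not necessarily the paper's):

```python
import numpy as np

class DirichletReputation:
    """Reputation over outcomes (0 = delivered, 1 = dropped) as Dirichlet counts."""

    def __init__(self, n_outcomes=2, prior=1.0, window=50.0):
        self.counts = np.full(n_outcomes, prior)  # Dirichlet pseudo-counts
        self.window = window

    def observe(self, outcome, weight=1.0):
        """Record a first-hand observation, or a down-weighted second-hand report."""
        self.counts[outcome] += weight
        # Moving-window style decay: cap total evidence so old behavior fades.
        total = self.counts.sum()
        if total > self.window:
            self.counts *= self.window / total

    def expected(self):
        """Posterior mean probability of each outcome (used for path selection)."""
        return self.counts / self.counts.sum()
```

A route's reliability can then be scored by multiplying the expected delivery probabilities of its hops, and second-hand reports are naturally discounted via a smaller `weight`.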
Positivity and Almost Positivity of Biharmonic Green's Functions under Dirichlet Boundary Conditions
NASA Astrophysics Data System (ADS)
Grunau, Hans-Christoph; Robert, Frédéric
2010-03-01
In general, for higher order elliptic equations and boundary value problems like the biharmonic equation and the linear clamped plate boundary value problem, neither a maximum principle nor a comparison principle, or, equivalently, a positivity preserving property, is available. The problem is rather involved since the clamped boundary conditions prevent the boundary value problem from being reasonably written as a system of second order boundary value problems. It is shown that, on the other hand, for bounded smooth domains Ω ⊂ ℝ^n, the negative part of the corresponding Green’s function is “small” when compared with its singular positive part, provided n ≥ 3. Moreover, the biharmonic Green’s function in balls B ⊂ ℝ^n under Dirichlet (that is, clamped) boundary conditions is known explicitly and is positive. It has been known for some time that positivity is preserved under small regular perturbations of the domain, if n = 2. In the present paper, such a stability result is proved for n ≥ 3.
New solutions to the constant-head test performed at a partially penetrating well
NASA Astrophysics Data System (ADS)
Chang, Y. C.; Yeh, H. D.
2009-05-01
The mathematical model describing the aquifer response to a constant-head test performed at a fully penetrating well can be easily solved by the conventional integral transform technique. In addition, the Dirichlet-type condition should be chosen as the boundary condition along the rim of wellbore for such a test well. However, the boundary condition for a test well with partial penetration must be considered as a mixed-type condition. Generally, the Dirichlet condition is prescribed along the well screen and the Neumann type no-flow condition is specified over the unscreened part of the test well. The model for such a mixed boundary problem in a confined aquifer system of infinite radial extent and finite vertical extent is solved by the dual series equations and perturbation method. This approach provides analytical results for the drawdown in the partially penetrating well and the well discharge along the screen. The semi-analytical solutions are particularly useful for the practical applications from the computational point of view.
Briggs, Andrew H; Ades, A E; Price, Martin J
2003-01-01
In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes using probabilistic sensitivity analysis, and a method is required to provide probability distributions over multiple branches that appropriately represent uncertainty while satisfying the requirement that mutually exclusive event probabilities should sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
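The suggestion can be sketched directly: add the Dirichlet prior to the observed transition counts and draw each row of the transition matrix once per probabilistic-sensitivity-analysis iteration. A zero count then still receives positive posterior mass. The toy counts and the uniform prior value below are illustrative, not from the paper:

```python
import numpy as np

def sample_transition_matrix(counts, alpha=1.0, rng=None):
    """Draw one Markov transition matrix: each row ~ Dirichlet(counts + alpha).

    With alpha > 0, transitions never observed (zero counts) still get
    positive probability, and each row sums to 1 by construction.
    """
    rng = rng or np.random.default_rng()
    return np.stack([rng.dirichlet(np.asarray(row) + alpha) for row in counts])

# Observed transitions between three health states; note the zero cells.
counts = [[90, 8, 2],
          [10, 70, 0],   # state 1 -> state 2 never observed
          [0,  0,  5]]
P = sample_transition_matrix(counts, rng=np.random.default_rng(42))
```

Repeating the draw inside the PSA loop propagates the joint uncertainty in all branch probabilities while every sampled row automatically sums to 1.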
Modeling Information Content Via Dirichlet-Multinomial Regression Analysis.
Ferrari, Alberto
2017-01-01
Shannon entropy is being increasingly used in biomedical research as an index of complexity and information content in sequences of symbols, e.g. languages, amino acid sequences, DNA methylation patterns and animal vocalizations. Yet, distributional properties of information entropy as a random variable have seldom been the object of study, leading researchers mainly to use linear models or simulation-based analytical approaches to assess differences in information content when entropy is measured repeatedly in different experimental conditions. Here a method to perform inference on entropy in such conditions is proposed. Building on results coming from studies in the field of Bayesian entropy estimation, a symmetric Dirichlet-multinomial regression model, able to deal efficiently with the issue of mean entropy estimation, is formulated. Through a simulation study the model is shown to outperform linear modeling in a vast range of scenarios and to have promising statistical properties. As a practical example, the method is applied to a data set coming from a real experiment on animal communication.
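The mean-entropy estimate at the heart of such Bayesian approaches has a closed form for Dirichlet posteriors (the Wolpert-Wolf identity). A sketch of the posterior mean of Shannon entropy under a symmetric Dirichlet(a) prior, with the paper's regression layer omitted (the function name and prior value are illustrative):

```python
import numpy as np
from scipy.special import digamma

def posterior_mean_entropy(counts, a=1.0):
    """Posterior mean of Shannon entropy (in nats) given symbol counts and a
    symmetric Dirichlet(a) prior, via the closed form for Dirichlet posteriors:

        E[H] = psi(A + 1) - sum_i (alpha_i / A) * psi(alpha_i + 1),

    where alpha_i = n_i + a and A = sum_i alpha_i (psi is the digamma function).
    """
    alpha = np.asarray(counts, dtype=float) + a
    A = alpha.sum()
    return digamma(A + 1) - np.sum((alpha / A) * digamma(alpha + 1))
```

Unlike the naive plug-in estimator, this posterior mean is well defined even with zero counts and is substantially less biased for sparse samples.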
NASA Astrophysics Data System (ADS)
Zhukovsky, K.; Oskolkov, D.
2018-03-01
A system of hyperbolic-type inhomogeneous differential equations (DE) is considered for non-Fourier heat transfer in thin films. Exact harmonic solutions to Guyer-Krumhansl-type heat equation and to the system of inhomogeneous DE are obtained in Cauchy- and Dirichlet-type conditions. The contribution of the ballistic-type heat transport, of the Cattaneo heat waves and of the Fourier heat diffusion is discussed and compared with each other in various conditions. The application of the study to the ballistic heat transport in thin films is performed. Rapid evolution of the ballistic quasi-temperature component in low-dimensional systems is elucidated and compared with slow evolution of its diffusive counterpart. The effect of the ballistic quasi-temperature component on the evolution of the complete quasi-temperature is explored. In this context, the influence of the Knudsen number and of Cauchy- and Dirichlet-type conditions on the evolution of the temperature distribution is explored. The comparative analysis of the obtained solutions is performed.
Exclusion Process with Slow Boundary
NASA Astrophysics Data System (ADS)
Baldasso, Rangel; Menezes, Otávio; Neumann, Adriana; Souza, Rafael R.
2017-06-01
We study the hydrodynamic and the hydrostatic behavior of the simple symmetric exclusion process with slow boundary. The term slow boundary means that particles can be born or die at the boundary sites, at a rate proportional to N^{-θ}, where θ > 0 and N is the scaling parameter. In the bulk, the particle exchange rate is equal to 1. In the hydrostatic scenario, we obtain three different linear profiles, depending on the value of the parameter θ; in the hydrodynamic scenario, we obtain that the time evolution of the spatial density of particles, in the diffusive scaling, is given by the weak solution of the heat equation, with boundary conditions that depend on θ. If θ ∈ (0, 1), we get Dirichlet boundary conditions (which is the same behavior as for θ = 0; see Farfán in Hydrostatics, statical and dynamical large deviations of boundary driven gradient symmetric exclusion processes, 2008); if θ = 1, we get Robin boundary conditions; and, if θ ∈ (1, ∞), we get Neumann boundary conditions.
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large N limit coalescents structure, leading either to a discrete-time Poisson-Dirichlet(α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward in time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson Point Process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
Posterior consistency in conditional distribution estimation
Pati, Debdeep; Dunson, David B.; Tokdar, Surya T.
2014-01-01
A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a challenging problem, which differs in some important ways from density and mean regression estimation problems. Defining various topologies on the space of conditional distributions, we provide sufficient conditions for posterior consistency focusing on a broad class of priors formulated as predictor-dependent mixtures of Gaussian kernels. This theory is illustrated by showing that the conditions are satisfied for a class of generalized stick-breaking process mixtures in which the stick-breaking lengths are monotone, differentiable functions of a continuous stochastic process. We also provide a set of sufficient conditions for the case where stick-breaking lengths are predictor independent, such as those arising from a fixed Dirichlet process prior. PMID:25067858
Automated airplane surface generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.E.; Cordero, Y.; Jones, W.
1996-12-31
An efficient methodology and software are presented for defining a class of airplane configurations. A small set of engineering design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. Wing, canard, and tail surface grids are generated by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage is described by an algebraic function with four design parameters. The computed surface grids are suitable for a wide range of Computational Fluid Dynamics simulations and configuration optimizations. Both batch and interactive software are discussed for applying the methodology.
Hidden Markov models and other machine learning approaches in computational molecular biology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldi, P.
1995-12-31
This tutorial was one of eight tutorials selected to be presented at the Third International Conference on Intelligent Systems for Molecular Biology, which was held in the United Kingdom from July 16 to 19, 1995. Computational tools are increasingly needed to process the massive amounts of data, to organize and classify sequences, to detect weak similarities, to separate coding from non-coding regions, and to reconstruct the underlying evolutionary history. The fundamental problem in machine learning is the same as in scientific reasoning in general, as well as statistical modeling: to come up with a good model for the data. In this tutorial four classes of models are reviewed: Hidden Markov models; artificial Neural Networks; Belief Networks; and Stochastic Grammars. When dealing with DNA and protein primary sequences, Hidden Markov models are among the most flexible and powerful tools for alignments and database searches. In this tutorial, attention is focused on the theory of Hidden Markov Models and how to apply them to problems in molecular biology.
Chemical Bonding: The Orthogonal Valence-Bond View
Sax, Alexander F.
2015-01-01
Chemical bonding is the stabilization of a molecular system by charge- and spin-reorganization processes in chemical reactions. These processes are said to be local, because the number of atoms involved is very small. With multi-configurational self-consistent field (MCSCF) wave functions, these processes can be calculated, but the local information is hidden by the delocalized molecular orbitals (MO) used to construct the wave functions. The transformation of such wave functions into valence bond (VB) wave functions, which are based on localized orbitals, reveals the hidden information; this transformation is called a VB reading of MCSCF wave functions. The two-electron VB wave functions describing the Lewis electron pair that connects two atoms are frequently called covalent or neutral, suggesting that these wave functions describe an electronic situation where two electrons are never located at the same atom; electronic situations where both electrons are located at the same atom, and the wave functions describing them, are called ionic. When the distance between two atoms decreases, however, every covalent VB wave function composed of non-orthogonal atomic orbitals changes its character from neutral to ionic. However, this change in the character of conventional VB wave functions is hidden by their mathematical form. Orthogonal VB wave functions composed of orthonormalized orbitals never change their character. When localized fragment orbitals are used instead of atomic orbitals, one can decide which local information is revealed and which remains hidden. In this paper, we analyze four chemical reactions by transforming the MCSCF wave functions into orthogonal VB wave functions; we show how the reactions are influenced by changing the atoms involved or by changing their local symmetry. Using orthogonal instead of non-orthogonal orbitals is not just a technical issue; it also changes the interpretation, revealing properties of wave functions that otherwise remain undetected. PMID:25906476
Giehr, Pascal; Kyriakopoulos, Charalampos; Ficz, Gabriella; Wolf, Verena; Walter, Jörn
2016-05-01
DNA methylation and demethylation are opposing processes that when in balance create stable patterns of epigenetic memory. The control of DNA methylation pattern formation by replication dependent and independent demethylation processes has been suggested to be influenced by Tet mediated oxidation of 5mC. Several alternative mechanisms have been proposed suggesting that 5hmC influences either replication dependent maintenance of DNA methylation or replication independent processes of active demethylation. Using high resolution hairpin oxidative bisulfite sequencing data, we precisely determine the amount of 5mC and 5hmC and model the contribution of 5hmC to processes of demethylation in mouse ESCs. We develop an extended hidden Markov model capable of accurately describing the regional contribution of 5hmC to demethylation dynamics. Our analysis shows that 5hmC has a strong impact on replication dependent demethylation, mainly by impairing methylation maintenance.
ERIC Educational Resources Information Center
Brilleslyper, Michael A.; Wolverton, Robert H.
2008-01-01
In this article we consider an example suitable for investigation in many mid and upper level undergraduate mathematics courses. Fourier series provide an excellent example of the differences between uniform and non-uniform convergence. We use Dirichlet's test to investigate the convergence of the Fourier series for a simple periodic saw tooth…
Linguistic Extensions of Topic Models
ERIC Educational Resources Information Center
Boyd-Graber, Jordan
2010-01-01
Topic models like latent Dirichlet allocation (LDA) provide a framework for analyzing large datasets where observations are collected into groups. Although topic modeling has been fruitfully applied to problems in social science, biology, and computer vision, it has been most widely used to model datasets where documents are modeled as exchangeable…
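The LDA framework the abstract refers to rests on a simple generative process: each document draws a topic mixture from a Dirichlet prior, and each word position draws a topic from that mixture and then a word from the chosen topic. A minimal sketch of that generative step (the two-topic, three-word vocabulary and all hyperparameters are invented for illustration):

```python
import random

def dirichlet(rng, alpha):
    """Sample from Dirichlet(alpha) via normalized Gamma draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(draws)
    return [d / total for d in draws]

def generate_document(rng, doc_len, topic_word, alpha):
    """One LDA generative step: draw a topic mixture, then the words."""
    theta = dirichlet(rng, alpha)  # per-document topic proportions
    words = []
    for _ in range(doc_len):
        z = rng.choices(range(len(theta)), weights=theta)[0]        # topic index
        w = rng.choices(range(len(topic_word[z])), weights=topic_word[z])[0]
        words.append(w)
    return theta, words

rng = random.Random(42)
topic_word = [[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]]  # 2 topics over a 3-word vocabulary
theta, doc = generate_document(rng, doc_len=20, topic_word=topic_word, alpha=[1.0, 1.0])
```

Inference in LDA runs this process in reverse, recovering plausible `theta` and `topic_word` values from observed documents.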
Computational study of peptide permeation through membrane: searching for hidden slow variables
NASA Astrophysics Data System (ADS)
Cardenas, Alfredo E.; Elber, Ron
2013-12-01
Atomically detailed molecular dynamics trajectories in conjunction with Milestoning are used to analyse the different contributions of coarse variables to the permeation process of a small peptide (N-acetyl-l-tryptophanamide, NATA) through a 1,2-dioleoyl-sn-glycero-3-phosphocholine membrane. The peptide reverses its overall orientation as it permeates through the biological bilayer. The large change in orientation is investigated explicitly but is shown to impact the free energy landscape and permeation time only moderately. Nevertheless, a significant difference in the permeation properties of the two halves of the membrane suggests the presence of other hidden slow variables. We speculate, based on calculation of the potential of mean force, that a conformational transition of NATA makes a significant contribution to these differences. Other candidates for hidden slow variables may include water permeation and collective motions of phospholipids.
STDP Installs in Winner-Take-All Circuits an Online Approximation to Hidden Markov Model Learning
Kappel, David; Nessler, Bernhard; Maass, Wolfgang
2014-01-01
In order to cross a street without being run over, we need to be able to extract very fast hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges in the presence of noise automatically through effects of STDP on connections between pyramidal cells in Winner-Take-All circuits with lateral excitation. In fact, one can show that these motifs endow cortical microcircuits with functional properties of a hidden Markov model, a generic model for solving such tasks through probabilistic inference. Whereas in engineering applications this model is adapted to specific tasks through offline learning, we show here that a major portion of the functionality of hidden Markov models arises already from online applications of STDP, without any supervision or rewards. We demonstrate the emergent computing capabilities of the model through several computer simulations. The full power of hidden Markov model learning can be attained through reward-gated STDP. This is due to the fact that these mechanisms enable a rejection sampling approximation to theoretically optimal learning. We investigate the possible performance gain that can be achieved with this more accurate learning method for an artificial grammar task. PMID:24675787
NASA Astrophysics Data System (ADS)
Krishnan, Chethan; Maheshwari, Shubham; Bala Subramanian, P. N.
2017-08-01
We write down a Robin boundary term for general relativity. The construction relies on the Neumann result of arXiv:1605.01603 in an essential way. This is unlike in mechanics and (polynomial) field theory, where two formulations of the Robin problem exist: one with Dirichlet as the natural limiting case, and another with Neumann.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manjunath, Naren; Samajdar, Rhine; Jain, Sudhir R., E-mail: srjain@barc.gov.in
Recently, the nodal domain counts of planar, integrable billiards with Dirichlet boundary conditions were shown to satisfy certain difference equations in Samajdar and Jain (2014). The exact solutions of these equations give the number of domains explicitly. For complete generality, we demonstrate this novel formulation for three additional separable systems and thus extend the statement to all integrable billiards.
A weighted anisotropic variant of the Caffarelli-Kohn-Nirenberg inequality and applications
NASA Astrophysics Data System (ADS)
Bahrouni, Anouar; Rădulescu, Vicenţiu D.; Repovš, Dušan D.
2018-04-01
We present a weighted version of the Caffarelli-Kohn-Nirenberg inequality in the framework of variable exponents. The combination of this inequality with a variant of the fountain theorem yields the existence of infinitely many solutions for a class of non-homogeneous problems with Dirichlet boundary condition.
The use of MACSYMA for solving elliptic boundary value problems
NASA Technical Reports Server (NTRS)
Thejll, Peter; Gilbert, Robert P.
1990-01-01
A boundary method is presented for the solution of elliptic boundary value problems. An approach based on the use of complete systems of solutions is emphasized. The discussion is limited to the Dirichlet problem, even though the present method can possibly be adapted to treat other boundary value problems.
Test Design Project: Studies in Test Adequacy. Annual Report.
ERIC Educational Resources Information Center
Wilcox, Rand R.
These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…
NASA Astrophysics Data System (ADS)
Chernyshov, A. D.
2018-05-01
The analytical solution of the nonlinear heat conduction problem for a curvilinear region is obtained with the use of the fast-expansion method together with the method of extension of boundaries and pointwise technique of computing Fourier coefficients.
Pig Data and Bayesian Inference on Multinomial Probabilities
ERIC Educational Resources Information Center
Kern, John C.
2006-01-01
Bayesian inference on multinomial probabilities is conducted based on data collected from the game Pass the Pigs[R]. Prior information on these probabilities is readily available from the instruction manual, and is easily incorporated in a Dirichlet prior. Posterior analysis of the scoring probabilities quantifies the discrepancy between empirical…
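The conjugate update underlying such an analysis is short: with a Dirichlet(α) prior on multinomial probabilities and observed counts n, the posterior is Dirichlet(α + n). A minimal sketch (the pseudo-counts and roll counts below are made up for illustration, not the actual Pass the Pigs data):

```python
def dirichlet_posterior(prior_alpha, counts):
    """Conjugate update: Dirichlet(alpha) prior + multinomial counts
    -> Dirichlet(alpha + counts) posterior."""
    return [a + n for a, n in zip(prior_alpha, counts)]

def posterior_mean(alpha):
    """Mean of a Dirichlet(alpha) distribution: alpha_i / sum(alpha)."""
    total = sum(alpha)
    return [a / total for a in alpha]

# Hypothetical outcomes for a 4-category scoring multinomial.
prior = [2.0, 2.0, 1.0, 1.0]   # prior pseudo-counts (e.g. from an instruction manual)
counts = [30, 22, 5, 3]        # observed rolls
post = dirichlet_posterior(prior, counts)
mean = posterior_mean(post)    # empirical frequencies shrunk toward the prior
```

The posterior mean interpolates between the prior mean and the empirical frequencies, with the prior pseudo-counts controlling the amount of shrinkage.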
Comment Data Mining to Estimate Student Performance Considering Consecutive Lessons
ERIC Educational Resources Information Center
Sorour, Shaymaa E.; Goda, Kazumasa; Mine, Tsunenori
2017-01-01
The purpose of this study is to examine different formats of comment data to predict student performance. Having students write comment data after every lesson can reflect students' learning attitudes, tendencies and learning activities involved with the lesson. In this research, Latent Dirichlet Allocation (LDA) and Probabilistic Latent Semantic…
Object permanence in domestic dogs (Canis lupus familiaris) and gray wolves (Canis lupus).
Fiset, Sylvain; Plourde, Vickie
2013-05-01
Recent evidence suggests that phylogenetic constraints exerted on dogs by the process of domestication have altered the ability of dogs to represent the physical world and the displacement of objects. In this study, invisible (Experiment 1) and visible (Experiment 2) displacement problems were administered to determine whether domestic dogs' and gray wolves' cognitive capacities to infer the position of a hidden object differ. The results revealed that adult dogs and wolves performed similarly in searching for disappearing objects: both species succeeded at the visible displacement tasks but failed the invisible displacement problems. We conclude that physical cognition for finding hidden objects in domestic dogs and gray wolves is alike and unrelated to the process of domestication.
An information hidden model holding cover distributions
NASA Astrophysics Data System (ADS)
Fu, Min; Cai, Chao; Dai, Zuxu
2018-03-01
The goal of steganography is to embed secret data into a cover so that no one apart from the sender and the intended recipients can find the secret data. Usually, the way the cover is changed is decided by a hiding function, and no existing model could be used to find an optimal function that greatly reduces the distortion the cover suffers. This paper considers the cover carrying the secret message as a Markov chain and, taking advantage of the deterministic relation between the initial distribution and the transition matrix of the Markov chain, uses the transition matrix as a constraint to decrease the statistical distortion the cover suffers in the process of information hiding. Furthermore, a hiding function is designed, and the transition matrix is presented as a mapping from the original cover to the stego cover. Experimental results show that the new model preserves consistent statistical characteristics between the original and stego covers.
An Immunization Strategy for Hidden Populations.
Chen, Saran; Lu, Xin
2017-06-12
Hidden populations, such as injecting drug users (IDUs), sex workers (SWs) and men who have sex with men (MSM), are considered at high risk of contracting and transmitting infectious diseases such as AIDS, gonorrhea and syphilis. However, public health interventions for such groups are hindered by strong privacy concerns and a lack of global information, which is a necessity for traditional strategies such as targeted immunization and acquaintance immunization. In this study, we introduce an innovative intervention strategy to be used in combination with a sampling approach that is widely used for hidden populations, respondent-driven sampling (RDS). The RDS strategy is implemented in two steps: first, RDS is used to estimate the average degree (personal network size) and the degree distribution of the target population from sample data; second, a cut-off threshold is calculated and used to screen the respondents to be immunized. Simulations on model networks and real-world networks reveal that the efficiency of the RDS strategy is close to that of the targeted strategy. As the new strategy can be implemented within the RDS sampling process, it provides a cost-efficient and feasible approach for disease intervention and control in hidden populations.
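The two-step strategy described above can be sketched as: estimate the degree distribution from respondents' reported personal network sizes, derive a cut-off, and immunize respondents at or above it. In this sketch, the cut-off rule (mean plus one standard deviation) and the toy data are illustrative stand-ins, not the paper's actual threshold computation:

```python
import math

def degree_cutoff(sample_degrees):
    """Step 1: estimate mean degree and spread from RDS respondents.
    Step 2: return an illustrative cut-off (mean + 1 SD)."""
    n = len(sample_degrees)
    mean = sum(sample_degrees) / n
    var = sum((d - mean) ** 2 for d in sample_degrees) / n
    return mean + math.sqrt(var)

def select_for_immunization(respondents):
    """respondents: dict of id -> reported degree. Screen against the cut-off."""
    cutoff = degree_cutoff(list(respondents.values()))
    return sorted(r for r, d in respondents.items() if d >= cutoff)

# Toy sample: most respondents have few contacts, two are high-degree hubs.
respondents = {"a": 2, "b": 3, "c": 2, "d": 15, "e": 4, "f": 20}
hubs = select_for_immunization(respondents)  # only the high-degree respondents
```

The point of thresholding on reported degree is that it approximates targeted (hub) immunization using only the local information that RDS already collects.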
Oxytocin enhances pupil dilation and sensitivity to 'hidden' emotional expressions.
Leknes, Siri; Wessberg, Johan; Ellingsen, Dan-Mikael; Chelnokova, Olga; Olausson, Håkan; Laeng, Bruno
2013-10-01
Sensing others' emotions through subtle facial expressions is a highly important social skill. We investigated the effects of intranasal oxytocin treatment on the evaluation of explicit and 'hidden' emotional expressions and related the results to individual differences in sensitivity to others' subtle expressions of anger and happiness. Forty healthy volunteers participated in this double-blind, placebo-controlled crossover study, which shows that a single dose of intranasal oxytocin (40 IU) enhanced or 'sharpened' evaluative processing of others' positive and negative facial expression for both explicit and hidden emotional information. Our results point to mechanisms that could underpin oxytocin's prosocial effects in humans. Importantly, individual differences in baseline emotional sensitivity predicted oxytocin's effects on the ability to sense differences between faces with hidden emotional information. Participants with low emotional sensitivity showed greater oxytocin-induced improvement. These participants also showed larger task-related pupil dilation, suggesting that they also allocated the most attentional resources to the task. Overall, oxytocin treatment enhanced stimulus-induced pupil dilation, consistent with oxytocin enhancement of attention towards socially relevant stimuli. Since pupil dilation can be associated with increased attractiveness and approach behaviour, this effect could also represent a mechanism by which oxytocin increases human affiliation.
Sneider, Jennifer Tropp; Sava, Simona; Rogowska, Jadwiga; Yurgelun-Todd, Deborah A
2011-10-01
The hippocampus plays a significant role in spatial memory processing, with sex differences being prominent on various spatial tasks. This study examined sex differences in healthy adults, using functional magnetic resonance imaging (fMRI) in areas implicated in spatial processing during navigation of a virtual analogue of the Morris water-maze. There were three conditions: learning, hidden, and visible control. There were no significant differences in performance measures. However, sex differences were found in regional brain activation during learning in the right hippocampus, right parahippocampal gyrus, and the cingulate cortex. During the hidden condition, the hippocampus, parahippocampal gyrus, and cingulate cortex were activated in both men and women. Additional brain areas involved in spatial processing may be recruited in women when learning information about the environment, by utilizing external cues (landmarks) more than do men, contributing to the observed sex differences in brain activation.
Tanabe, Soichi; Miyauchi, Eiji; Muneshige, Akemi; Mio, Kazuhiro; Sato, Chikara; Sato, Masahiko
2007-07-01
A PCR method to detect porcine DNA was developed for verifying the allergen labeling of foods and for identifying hidden pork ingredients in processed foods. The primer pair, F2/R1, was designed to detect the gene encoding porcine cytochrome b for the specific detection of pork with high sensitivity. The amplified DNA fragment (130 bp) was specifically detected from porcine DNA, while no amplification occurred with other species such as cattle, chicken, sheep, and horse. When the developed PCR method was used for investigating commercial food products, porcine DNA was clearly detected in those containing pork in the list of ingredients. In addition, 100 ppb of pork in heated gyoza (pork and vegetable dumpling) could be detected by this method. This method is rapid, specific and sensitive, making it applicable for detecting trace amounts of pork in processed foods.
Statistical patterns of visual search for hidden objects
Credidio, Heitor F.; Teixeira, Elisângela N.; Reis, Saulo D. S.; Moreira, André A.; Andrade Jr, José S.
2012-01-01
The movement of the eyes has been the subject of intensive research as a way to elucidate inner mechanisms of cognitive processes. A cognitive task that is rather frequent in our daily life is the visual search for hidden objects. Here we investigate through eye-tracking experiments the statistical properties associated with the search of target images embedded in a landscape of distractors. Specifically, our results show that the twofold process of eye movement, composed of sequences of fixations (small steps) intercalated by saccades (longer jumps), displays characteristic statistical signatures. While the saccadic jumps follow a log-normal distribution of distances, which is typical of multiplicative processes, the lengths of the smaller steps in the fixation trajectories are consistent with a power-law distribution. Moreover, the present analysis reveals a clear transition between a directional serial search to an isotropic random movement as the difficulty level of the searching task is increased. PMID:23226829
Exact solution of the hidden Markov processes.
Saakian, David B
2017-11-01
We write a master equation for the distributions related to hidden Markov processes (HMPs) and solve it using a functional equation. Thus the solution of HMPs is mapped exactly to the solution of the functional equation. For a general case the latter can be solved only numerically. We derive an exact expression for the entropy of HMPs. Our expression for the entropy is an alternative to the ones given before by the solution of integral equations. The exact solution is possible because actually the model can be considered as a generalized random walk on a one-dimensional strip. While we give the solution for the two second-order matrices, our solution can be easily generalized for the L values of the Markov process and M values of observables: We should be able to solve a system of L functional equations in the space of dimension M-1.
Analyzing Single-Molecule Protein Transportation Experiments via Hierarchical Hidden Markov Models
Chen, Yang; Shen, Kuang
2017-01-01
To maintain proper cellular functions, over 50% of proteins encoded in the genome need to be transported to cellular membranes. The molecular mechanism behind such a process, often referred to as protein targeting, is not well understood. Single-molecule experiments are designed to unveil the detailed mechanisms and reveal the functions of different molecular machineries involved in the process. The experimental data consist of hundreds of stochastic time traces from the fluorescence recordings of the experimental system. We introduce a Bayesian hierarchical model on top of hidden Markov models (HMMs) to analyze these data and use the statistical results to answer the biological questions. In addition to resolving the biological puzzles and delineating the regulating roles of different molecular complexes, our statistical results enable us to propose a more detailed mechanism for the late stages of the protein targeting process. PMID:28943680
Real-time computer treatment of THz passive device images with the high image quality
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2012-06-01
We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not tied to a single passive THz device: it can be applied to other such devices, and to active THz imaging systems as well. We applied the code to computer processing of images captured by four passive THz imaging devices manufactured by different companies; notably, processing images produced by different companies usually requires different spatial filters. The performance of the current version of the code exceeds one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing a single THz image produces about 20 images simultaneously, corresponding to the various spatial filters. The code allows the number of pixels in processed images to be increased without noticeable reduction of image quality, and its performance can be increased many times by using parallel image-processing algorithms. We develop original spatial filters that allow one to see objects smaller than 2 cm in imagery produced by passive THz devices that capture objects hidden under opaque clothes. For images with high noise, we develop an approach that suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate, and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and make it a very promising solution for security screening.
The Effect of Multigrid Parameters in a 3D Heat Diffusion Equation
NASA Astrophysics Data System (ADS)
Oliveira, F. De; Franco, S. R.; Pinto, M. A. Villela
2018-02-01
The aim of this paper is to reduce the CPU time necessary to solve the three-dimensional heat diffusion equation with Dirichlet boundary conditions. The finite difference method (FDM) is used to discretize the differential equations with a second-order accurate central difference scheme (CDS). The algebraic equation systems are solved using the lexicographic and red-black Gauss-Seidel methods, associated with the geometric multigrid method with a correction scheme (CS) and V-cycle. Comparisons are made between two types of restriction: injection and full weighting. Trilinear interpolation is used as the prolongation operator. This work studies the influence of the number of smoothing iterations (v), the number of mesh levels (L) and the number of unknowns (N) on the CPU time, as well as the complexity of the algorithm.
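A one-dimensional analogue of the smoother described above (lexicographic Gauss-Seidel on the centrally discretized diffusion equation with Dirichlet boundary values, shown here without the multigrid hierarchy) can be sketched as:

```python
def gauss_seidel_1d(f, left, right, sweeps):
    """Lexicographic Gauss-Seidel for -u'' = f on the unit interval,
    second-order central differences, Dirichlet values at both ends."""
    n = len(f)                 # number of interior points
    h = 1.0 / (n + 1)          # mesh spacing
    u = [0.0] * (n + 2)        # initial guess, with boundary values fixed
    u[0], u[-1] = left, right
    for _ in range(sweeps):
        for i in range(1, n + 1):            # lexicographic ordering
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i - 1])
    return u

# -u'' = 0 with u(0) = 0, u(1) = 1 has the exact solution u(x) = x.
n = 15
u = gauss_seidel_1d([0.0] * n, left=0.0, right=1.0, sweeps=2000)
```

Plain Gauss-Seidel needs many sweeps because low-frequency error decays slowly on a fine grid; the multigrid V-cycle in the paper accelerates exactly this by re-smoothing the error on coarser levels.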
Giri, Maria Grazia; Cavedon, Carlo; Mazzarotto, Renzo; Ferdeghini, Marco
2016-05-01
The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10-37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). 
In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This "calibration curve" was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giri, Maria Grazia, E-mail: mariagrazia.giri@ospedaleuniverona.it; Cavedon, Carlo; Mazzarotto, Renzo
Purpose: The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on {sup 18}F-fluorodeoxyglucose positron emission tomography ({sup 18}F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. Methods: The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracymore » was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10–37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Results: Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). 
In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This “calibration curve” was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. Conclusions: The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.
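The study's DPM was implemented in R; as a rough illustration of the same idea, scikit-learn's `BayesianGaussianMixture` with a (truncated) Dirichlet-process prior clusters voxel uptake values without fixing the number of components in advance. The intensity values and the concentration parameter below are invented for this sketch and are not taken from the paper:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic 1-D "uptake" values (arbitrary units): cool background vs. hot lesion
background = rng.normal(1.0, 0.2, size=(400, 1))
lesion = rng.normal(8.0, 0.8, size=(100, 1))
uptake = np.vstack([background, lesion])

# Truncated Dirichlet-process mixture: the stick-breaking prior switches off
# unneeded components, so no preliminary choice of cluster count is required
dpm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,   # assumed value, loosely analogous to the
    random_state=0,                   # single influential parameter in the paper
).fit(uptake)

labels = dpm.predict(uptake)
print(np.unique(labels))
```

Voxels assigned to the high-mean component would then form the candidate tumor contour; the paper's contribution is tying the influential prior parameter to the ROI uptake variance automatically.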
Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes
Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide
2017-01-01
Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889
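The forward filtering-backward sampling step mentioned above can be sketched for a plain explicit-duration HSMM, with i.i.d. Gaussian emissions standing in for the GPs; the class means, transition matrix, and duration prior are all assumed here rather than learned as in GP-HSMM:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)

# Toy series: three unit "actions" at two levels
x = np.concatenate([rng.normal(m, 0.1, 30) for m in (0.0, 2.0, 0.0)])
T = len(x)

K, Dmax = 2, 40                                # classes, max segment length
means = np.array([0.0, 2.0])                   # assumed emission means
logA = np.log(np.full((K, K), 1.0 / K))        # uniform class transitions
log_pdur = np.log(np.full(Dmax, 1.0 / Dmax))   # uniform duration prior

def seg_ll(seg, k):
    # log-likelihood of a whole segment under class k (a GP in the paper)
    return float(np.sum(-0.5 * ((seg - means[k]) / 0.1) ** 2))

# Forward filtering: alpha[t, k] = log p(x[:t], a class-k segment ends at t)
alpha = np.full((T + 1, K), -np.inf)
alpha[0] = -np.log(K)
for t in range(1, T + 1):
    for k in range(K):
        terms = [logsumexp(alpha[t - d] + logA[:, k]) + log_pdur[d - 1]
                 + seg_ll(x[t - d:t], k)
                 for d in range(1, min(Dmax, t) + 1)]
        alpha[t, k] = logsumexp(terms)

def sample(logp):
    p = np.exp(logp - logsumexp(logp))
    return rng.choice(len(p), p=p / p.sum())

# Backward sampling of (start, class, duration) triples from the end
segments, t = [], T
k = sample(alpha[T])
while t > 0:
    dur_lp = np.array([logsumexp(alpha[t - d] + logA[:, k]) + log_pdur[d - 1]
                       + seg_ll(x[t - d:t], k)
                       for d in range(1, min(Dmax, t) + 1)])
    d = sample(dur_lp) + 1
    segments.append((t - d, k, d))
    k = sample(alpha[t - d] + logA[:, k])
    t -= d
segments.reverse()
labels = np.concatenate([np.full(d, k) for _, k, d in segments])
print(segments)
```

Because the posterior may split a long same-class run into consecutive same-class segments, the per-timestep labels, not the exact boundaries, are the stable output of a single sample.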
The ``Folk Theorem'' on effective field theory: How does it fare in nuclear physics?
NASA Astrophysics Data System (ADS)
Rho, Mannque
2017-10-01
This is a brief history of what I consider as very important, some of them truly seminal, contributions made by young Korean nuclear theorists, mostly graduate students working on their PhD theses in the 1990s and early 2000s, to nuclear effective field theory, nowadays heralded as the first-principles approach to nuclear physics. The theoretical framework employed is an effective field theory anchored on a single scale-invariant hidden local symmetric Lagrangian constructed in the spirit of Weinberg's "Folk Theorem" on effective field theory. The problems addressed are the high-precision calculations of the thermal np capture, the solar pp fusion process, the solar hep process (John Bahcall's challenge to nuclear theorists), the quenching of gA in giant Gamow-Teller resonances, and the whopping enhancement of first-forbidden beta transitions relevant in astrophysical processes. Extending the strategy adventurously to a wild, uncharted domain in which a systematic implementation of the "theorem" is far from obvious, the same effective Lagrangian is applied to the structure of compact stars. A surprising, unexpected result on the properties of massive stars, totally different from what has been obtained to date in the literature, is predicted: the precocious onset of conformal sound velocity, together with a hint of the possible emergence in dense matter of hidden symmetries such as scale symmetry and hidden local symmetry.
Dissipative hidden sector dark matter
NASA Astrophysics Data System (ADS)
Foot, R.; Vagnozzi, S.
2015-01-01
A simple way of explaining dark matter without modifying known Standard Model physics is to require the existence of a hidden (dark) sector, which interacts with the visible one predominantly via gravity. We consider a hidden sector containing two stable particles charged under an unbroken U(1)′ gauge symmetry, hence featuring dissipative interactions. The massless gauge field associated with this symmetry, the dark photon, can interact via kinetic mixing with the ordinary photon. In fact, such an interaction of strength ε ~ 10⁻⁹ appears to be necessary in order to explain galactic structure. We calculate the effect of this new physics on big bang nucleosynthesis and its contribution to the relativistic energy density at hydrogen recombination. We then examine the process of dark recombination, during which neutral dark states are formed, which is important for large-scale structure formation. Galactic structure is considered next, focusing on spiral and irregular galaxies. For these galaxies we modeled the dark matter halo (at the current epoch) as a dissipative plasma of dark matter particles, where the energy lost due to dissipation is compensated by the energy produced from ordinary supernovae (the core-collapse energy is transferred to the hidden sector via kinetic mixing induced processes in the supernova core). We find that such a dynamical halo model can reproduce several observed features of disk galaxies, including the cored density profile and the Tully-Fisher relation. We also discuss how elliptical and dwarf spheroidal galaxies could fit into this picture. Finally, these analyses are combined to set bounds on the parameter space of our model, which can serve as a guideline for future experimental searches.
Social Information Processing and Emotional Understanding in Children with LD
ERIC Educational Resources Information Center
Bauminger, Nirit; Edelsztein, Hany Schorr; Morash, Janice
2005-01-01
The present study aimed to comprehensively examine social cognition processes in children with and without learning disabilities (LD), focusing on social information processing (SIP) and complex emotional understanding capabilities such as understanding complex, mixed, and hidden emotions. Participants were 50 children with LD (age range 9.4-12.7;…
Alanazi, Adwan; Elleithy, Khaled
2016-01-01
Successful transmission of online multimedia streams in wireless multimedia sensor networks (WMSNs) is a big challenge due to their limited bandwidth and power resources. The existing WSN protocols are not completely appropriate for multimedia communication. The effectiveness of WMSNs varies, and it depends on the correct location of its sensor nodes in the field. Thus, maximizing the multimedia coverage is the most important issue in the delivery of multimedia contents. The nodes in WMSNs are either static or mobile. Thus, the node connections change continuously due to the mobility in wireless multimedia communication that causes an additional energy consumption, and synchronization loss between neighboring nodes. In this paper, we introduce an Optimized Hidden Node Detection (OHND) paradigm. The OHND consists of three phases: hidden node detection, message exchange, and location detection. These three phases aim to maximize the multimedia node coverage, and improve energy efficiency, hidden node detection capacity, and packet delivery ratio. OHND helps multimedia sensor nodes to compute the directional coverage. Furthermore, an OHND is used to maintain a continuous node– continuous neighbor discovery process in order to handle the mobility of the nodes. We implement our proposed algorithms by using a network simulator (NS2). The simulation results demonstrate that nodes are capable of maintaining direct coverage and detecting hidden nodes in order to maximize coverage and multimedia node mobility. To evaluate the performance of our proposed algorithms, we compared our results with other known approaches. PMID:27618048
NASA Astrophysics Data System (ADS)
Its, Alexander; Its, Elizabeth
2018-04-01
We revisit the Helmholtz equation in a quarter-plane in the framework of the Riemann-Hilbert approach to linear boundary value problems suggested in late 1990s by A. Fokas. We show the role of the Sommerfeld radiation condition in Fokas' scheme.
Vectorized multigrid Poisson solver for the CDC CYBER 205
NASA Technical Reports Server (NTRS)
Barkai, D.; Brandt, M. A.
1984-01-01
The full multigrid (FMG) method is applied to the two dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
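The CYBER 205 is long gone, but fully vectorized multigrid survives naturally in array languages. A minimal NumPy V-cycle for the same model problem (2-D Poisson with homogeneous Dirichlet boundaries) might look like the sketch below; the smoothing counts and damping factor are illustrative choices, and this is a plain V-cycle rather than the paper's FMG schedule:

```python
import numpy as np

def smooth(u, f, h, iters=3, w=0.8):
    # Vectorized weighted-Jacobi sweeps for -Δu = f, u = 0 on the boundary
    for _ in range(iters):
        u[1:-1, 1:-1] = (1 - w) * u[1:-1, 1:-1] + w * 0.25 * (
            u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
            + h * h * f[1:-1, 1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1, 1:-1] = f[1:-1, 1:-1] + (
        u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
        - 4 * u[1:-1, 1:-1]) / (h * h)
    return r

def restrict(r):
    # Full-weighting restriction onto the coarse grid
    rc = r[::2, ::2].copy()
    rc[1:-1, 1:-1] = (4 * r[2:-2:2, 2:-2:2]
                      + 2 * (r[1:-3:2, 2:-2:2] + r[3:-1:2, 2:-2:2]
                             + r[2:-2:2, 1:-3:2] + r[2:-2:2, 3:-1:2])
                      + r[1:-3:2, 1:-3:2] + r[1:-3:2, 3:-1:2]
                      + r[3:-1:2, 1:-3:2] + r[3:-1:2, 3:-1:2]) / 16.0
    return rc

def v_cycle(u, f, h):
    if u.shape[0] <= 3:                       # coarsest grid: relax to convergence
        return smooth(u, f, h, iters=50)
    u = smooth(u, f, h)                       # pre-smoothing
    ec = v_cycle(np.zeros((u.shape[0] // 2 + 1,) * 2),
                 restrict(residual(u, f, h)), 2 * h)
    e = np.zeros_like(u)                      # bilinear prolongation of the correction
    e[::2, ::2] = ec
    e[1::2, ::2] = 0.5 * (e[:-1:2, ::2] + e[2::2, ::2])
    e[:, 1::2] = 0.5 * (e[:, :-1:2] + e[:, 2::2])
    return smooth(u + e, f, h)                # post-smoothing

n = 64
h = 1.0 / n
xx, yy = np.meshgrid(np.linspace(0, 1, n + 1), np.linspace(0, 1, n + 1),
                     indexing="ij")
u_true = np.sin(np.pi * xx) * np.sin(np.pi * yy)
f = 2 * np.pi ** 2 * u_true                   # so that -Δu_true = f
u = np.zeros_like(f)
for _ in range(15):
    u = v_cycle(u, f, h)
err = float(np.abs(u - u_true).max())
print(err)
```

Every smoothing, restriction, and prolongation step is a whole-array slice operation, which is exactly the property that made the method attractive on vector hardware.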
Three-dimensional analytical solutions of the atmospheric diffusion equation with multiple sources and height-dependent wind speed and eddy diffusivities are derived in a systematic fashion. For homogeneous Neumann (total reflection), Dirichlet (total adsorpti...
Functional Neuronal Processing of Human Body Odors
Lundström, Johan N.; Olsson, Mats J.
2013-01-01
Body odors carry informational cues of great importance for individuals across a wide range of species, and signals hidden within the body odor cocktail are known to regulate several key behaviors in animals. For a long time, the notion that humans may be among these species has been dismissed. We now know, however, that each human has a unique odor signature that carries information related to his or her genetic makeup, as well as information about personal environmental variables, such as diet and hygiene. Although a substantial number of studies have investigated the behavioral effects of body odors, only a handful have studied central processing. Recent studies have, however, demonstrated that the human brain responds to fear signals hidden within the body odor cocktail, is able to extract kin specific signals, and processes body odors differently than other perceptually similar odors. In this chapter, we provide an overview of the current knowledge of how the human brain processes body odors and the potential importance these signals have for us in everyday life. PMID:20831940
Boundary conditions and formation of pure spin currents in magnetic field
NASA Astrophysics Data System (ADS)
Eliashvili, Merab; Tsitsishvili, George
2017-09-01
Schrödinger equation for an electron confined to a two-dimensional strip is considered in the presence of a homogeneous orthogonal magnetic field. Since the system has edges, the eigenvalue problem is supplied with boundary conditions (BC) aimed at preventing the leakage of matter across the edges. In the case of spinless electrons the Dirichlet and Neumann BC are considered. The Dirichlet BC result in the existence of charge carrying edge states. For the Neumann BC each separate edge comprises two counterflow sub-currents which precisely cancel out each other provided the system is populated by electrons up to a certain Fermi level. Cancelation of the electric current is a good starting point for developing spin effects. With this in mind, we reconsider the problem for a spinning electron with Rashba coupling. The Neumann BC are replaced by Robin BC. Again, the two counterflow electric sub-currents cancel out each other for a separate edge, while the spin current survives, thus modeling what is known as a pure spin current: spin flow without charge flow.
Inverse scattering for an exterior Dirichlet problem
NASA Technical Reports Server (NTRS)
Hariharan, S. I.
1981-01-01
Scattering due to a metallic cylinder which is in the field of a wire carrying a periodic current is considered. The location and shape of the cylinder is obtained with a far field measurement in between the wire and the cylinder. The same analysis is applicable in acoustics in the situation that the cylinder is a soft wall body and the wire is a line source. The associated direct problem in this situation is an exterior Dirichlet problem for the Helmholtz equation in two dimensions. An improved low frequency estimate for the solution of this problem using integral equation methods is presented. The far field measurements are related to the solutions of boundary integral equations in the low frequency situation. These solutions are expressed in terms of mapping function which maps the exterior of the unknown curve onto the exterior of a unit disk. The coefficients of the Laurent expansion of the conformal transformations are related to the far field coefficients. The first far field coefficient leads to the calculation of the distance between the source and the cylinder.
Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation.
Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu
2017-07-01
In the field of pathology, whole slide image (WSI) has become the major carrier of visual and diagnostic information. Content-based image retrieval among WSIs can aid the diagnosis of an unknown pathological image by finding its similar regions in WSIs with diagnostic information. However, the huge size and complex content of WSI pose several challenges for retrieval. In this paper, we propose an unsupervised, accurate, and fast retrieval method for a breast histopathological image. Specifically, the method presents a local statistical feature of nuclei for morphology and distribution of nuclei, and employs the Gabor feature to describe the texture information. The latent Dirichlet allocation model is utilized for high-level semantic mining. Locality-sensitive hashing is used to speed up the search. Experiments on a WSI database with more than 8000 images from 15 types of breast histopathology demonstrate that our method achieves about 0.9 retrieval precision as well as promising efficiency. Based on the proposed framework, we are developing a search engine for an online digital slide browsing and retrieval platform, which can be applied in computer-aided diagnosis, pathology education, and WSI archiving and management.
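The pipeline above (local statistics and Gabor features → LDA topic proportions → fast similarity search) can be miniaturized with scikit-learn. In the sketch below, synthetic bag-of-visual-words counts stand in for the nuclei/Gabor features, and exact cosine search replaces locality-sensitive hashing, which would only matter at the scale of real WSI databases:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
V, n_topics = 50, 3
# Synthetic bag-of-visual-words counts for two "tissue types"
type_a = rng.dirichlet(np.ones(V) * 0.1)
type_b = rng.dirichlet(np.ones(V) * 0.1)
docs = np.vstack([rng.multinomial(200, type_a, size=30),
                  rng.multinomial(200, type_b, size=30)])

lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(docs)
theta = lda.transform(docs)            # per-image topic proportions

def retrieve(q, k=5):
    # Cosine similarity in topic space; LSH would approximate this search
    sims = theta @ theta[q] / (np.linalg.norm(theta, axis=1)
                               * np.linalg.norm(theta[q]))
    return np.argsort(-sims)[1:k + 1]  # drop the query itself

hits = retrieve(0)
print(hits)
```

Retrieval then returns images whose topic profiles, i.e. high-level semantic summaries of morphology and texture, are closest to the query's.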
The impact of the rate prior on Bayesian estimation of divergence times with multiple Loci.
Dos Reis, Mario; Zhu, Tianqi; Yang, Ziheng
2014-07-01
Bayesian methods provide a powerful way to estimate species divergence times by combining information from molecular sequences with information from the fossil record. With the explosive increase of genomic data, divergence time estimation increasingly uses data of multiple loci (genes or site partitions). Widely used computer programs to estimate divergence times use independent and identically distributed (i.i.d.) priors on the substitution rates for different loci. The i.i.d. prior is problematic. As the number of loci (L) increases, the prior variance of the average rate across all loci goes to zero at the rate 1/L. As a consequence, the rate prior dominates posterior time estimates when many loci are analyzed, and if the rate prior is misspecified, the estimated divergence times will converge to wrong values with very narrow credibility intervals. Here we develop a new prior on the locus rates based on the Dirichlet distribution that corrects the problematic behavior of the i.i.d. prior. We use computer simulation and real data analysis to highlight the differences between the old and new priors. For a dataset for six primate species, we show that with the old i.i.d. prior, if the prior rate is too high (or too low), the estimated divergence times are too young (or too old), outside the bounds imposed by the fossil calibrations. In contrast, with the new Dirichlet prior, posterior time estimates are insensitive to the rate prior and are compatible with the fossil calibrations. We re-analyzed a phylogenomic data set of 36 mammal species and show that using many fossil calibrations can alleviate the adverse impact of a misspecified rate prior to some extent. We recommend the use of the new Dirichlet prior in Bayesian divergence time estimation. [Bayesian inference, divergence time, relaxed clock, rate prior, partition analysis.]. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
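The 1/L collapse of the i.i.d. prior, and the Dirichlet fix, are easy to verify by simulation. In the sketch below the Gamma(2, 1/2) mean-rate prior and the symmetric Dirichlet concentration are arbitrary stand-ins, not the values used by any particular program:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 200_000

for L in (2, 10, 50):
    # i.i.d. prior: each locus rate ~ Gamma(2, 1/2), i.e. mean 1, variance 0.5
    iid = rng.gamma(2.0, 0.5, size=(n_sim, L))
    # Dirichlet-style prior: draw the mean rate mu once, then partition the
    # total L * mu across loci with a symmetric Dirichlet
    mu = rng.gamma(2.0, 0.5, size=n_sim)
    w = rng.dirichlet(np.ones(L) * 2.0, size=n_sim)
    dirich = mu[:, None] * L * w
    print(L, iid.mean(axis=1).var(), dirich.mean(axis=1).var())
```

Under the i.i.d. prior the variance of the average rate shrinks like 0.5/L, so with many loci the prior pins down the average rate and dominates the posterior; under the Dirichlet construction the average rate is exactly mu, so its prior variance stays at 0.5 regardless of L.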
Atmospheric effect in three-space scenario for the Stokes-Helmert method of geoid determination
NASA Astrophysics Data System (ADS)
Yang, H.; Tenzer, R.; Vanicek, P.; Santos, M.
2004-05-01
: According to the Stokes-Helmert method for the geoid determination by Vanicek and Martinec (1994) and Vanicek et al. (1999), the Helmert gravity anomalies are computed at the earth surface. To formulate the fundamental formula of physical geodesy, Helmert's gravity anomalies are then downward continued from the earth surface onto the geoid. This procedure, i.e., the inverse Dirichlet's boundary value problem, is realized by solving the Poisson integral equation. The above mentioned "classical" approach can be modified so that the inverse Dirichlet's boundary value problem is solved in the No Topography (NT) space (Vanicek et al., 2004) instead of in the Helmert (H) space. This technique has been introduced by Vanicek et al. (2003) and was used by Tenzer and Vanicek (2003) for the determination of the geoid in the region of the Canadian Rocky Mountains. According to this new approach, the gravity anomalies referred to the earth surface are first transformed into the NT-space. This transformation is realized by subtracting the gravitational attraction of topographical and atmospheric masses from the gravity anomalies at the earth surface. Since the NT-anomalies are harmonic above the geoid, the Dirichlet boundary value problem is solved in the NT-space instead of the Helmert space according to the standard formulation. After being obtained on the geoid, the NT-anomalies are transformed into the H-space to minimize the indirect effect on the geoidal heights. This step, i.e., the transformation from NT-space to H-space, is realized by adding the gravitational attraction of condensed topographical and condensed atmospheric masses to the NT-anomalies at the geoid. The effects of the atmosphere in the standard Stokes-Helmert method were intensively investigated by Sjöberg (1998 and 1999) and Novák (2000). In this presentation, the effect of the atmosphere in the three-space scenario for the Stokes-Helmert method is discussed and the numerical results over Canada are shown.
Key words: Atmosphere - Geoid - Gravity
A Dirichlet process model for classifying and forecasting epidemic curves
2014-01-01
Background A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. Methods The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997–2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). Results We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. 
Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods’ performance was comparable. Conclusions Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial. PMID:24405642
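A toy version of the matching step described above: score a partial ILI curve against historical or simulated templates under a CRP-style prior that reserves mass (the concentration α) for a previously unseen epidemic. The Gaussian curve shapes, noise level, and α below are invented for the sketch and are not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20
t = np.arange(T)

def curve(peak, height=100.0, width=4.0):
    # Idealized epidemic curve: Gaussian-shaped incidence around a peak week
    return height * np.exp(-0.5 * ((t - peak) / width) ** 2)

# "Historical" epidemic templates indexed by their known peak week
templates = {p: curve(p) for p in (6, 10, 14)}
alpha, sigma = 1.0, 10.0        # DP concentration and observation noise (assumed)

def classify(partial):
    """Posterior over templates plus a 'novel epidemic' option."""
    n = len(partial)
    logp = {}
    for peak, mu in templates.items():
        # Equal template pseudo-counts assumed, so only the likelihood differs
        logp[peak] = -0.5 * np.sum(((partial - mu[:n]) / sigma) ** 2)
    # New-cluster option: CRP weight alpha times a flat (zero-curve) baseline
    logp["new"] = np.log(alpha) - 0.5 * np.sum((partial / sigma) ** 2)
    m = max(logp.values())
    z = sum(np.exp(v - m) for v in logp.values())
    return {k: np.exp(v - m) / z for k, v in logp.items()}

obs = curve(10)[:8] + rng.normal(0, 5, 8)   # first 8 weeks of a peak-10 season
post = classify(obs)
print(max(post, key=post.get))
```

Matching the partial curve to the peak-10 template immediately yields a peak-time forecast, and the "new" option is what lets the unsupervised variant flag epidemics unlike any observed before.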
Hidden momentum of electrons, nuclei, atoms, and molecules
NASA Astrophysics Data System (ADS)
Cameron, Robert P.; Cotter, J. P.
2018-04-01
We consider the positions and velocities of electrons and spinning nuclei and demonstrate that these particles harbour hidden momentum when located in an electromagnetic field. This hidden momentum is present in all atoms and molecules, however it is ultimately canceled by the momentum of the electromagnetic field. We point out that an electron vortex in an electric field might harbour a comparatively large hidden momentum and recognize the phenomenon of hidden hidden momentum.
Influence of enzymatic hydrolysis on the allergenic reactivity of processed cashew and pistachio
USDA-ARS?s Scientific Manuscript database
Tree nuts constitute one of the main cause of fatal anaphylactic reactions due to food allergy upon direct ingestion and as ingredients (hidden allergens), and cashew and pistachio allergies are considered a serious health problem. Several previous studies have shown that thermal processing may modi...
A Hidden Cost of Happiness in Children
ERIC Educational Resources Information Center
Schnall, Simone; Jaswal, Vikram K.; Rowe, Christina
2008-01-01
Happiness is generally considered an emotion with only beneficial effects, particularly in childhood. However, there are some situations where the style of information processing triggered by happiness could be a liability. In particular, happiness seems to motivate a top-down processing style, which could impair performance when attention to…
Latent Image Processing Can Bolster the Value of Quizzes.
ERIC Educational Resources Information Center
Singer, David
1985-01-01
Latent image processing is a method which reveals hidden ink when marked with a special pen. Using multiple-choice items with commercially available latent image transfers can provide immediate feedback on take-home quizzes. Students benefitted from formative evaluation and were challenged to search for alternative solutions and explain unexpected…
Local recovery of the compressional and shear speeds from the hyperbolic DN map
NASA Astrophysics Data System (ADS)
Stefanov, Plamen; Uhlmann, Gunther; Vasy, Andras
2018-01-01
We study the isotropic elastic wave equation in a bounded domain with boundary. We show that local knowledge of the Dirichlet-to-Neumann map determines uniquely the speed of the p-wave locally if there is a strictly convex foliation with respect to it, and similarly for the s-wave speed.
The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples
ERIC Educational Resources Information Center
Avetisyan, Marianna; Fox, Jean-Paul
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…
Existence and uniqueness of steady state solutions of a nonlocal diffusive logistic equation
NASA Astrophysics Data System (ADS)
Sun, Linan; Shi, Junping; Wang, Yuwen
2013-08-01
In this paper, we consider a dynamical model of population biology which is of the classical Fisher type, but the competition interaction between individuals is nonlocal. The existence, uniqueness, and stability of the steady state solution of the nonlocal problem on a bounded interval with homogeneous Dirichlet boundary conditions are studied.
Using Dirichlet Priors to Improve Model Parameter Plausibility
ERIC Educational Resources Information Center
Rai, Dovan; Gong, Yue; Beck, Joseph E.
2009-01-01
Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…
Quantum field between moving mirrors: A three dimensional example
NASA Technical Reports Server (NTRS)
Hacyan, S.; Jauregui, Rocío; Villarreal, Carlos
1995-01-01
The scalar quantum field between uniformly moving plates in three-dimensional space is studied. Field equations for Dirichlet boundary conditions are solved exactly. Comparison of the resulting wavefunctions with their instantaneous static counterparts is performed via Bogolubov coefficients. Unlike the one-dimensional problem, 'particle' creation as well as squeezing may occur. The time-dependent Casimir energy is also evaluated.
Efficient free energy calculations by combining two complementary tempering sampling methods.
Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun
2017-01-14
Although energy barriers can be efficiently crossed in the reaction coordinate (RC) guided sampling, this type of method suffers from identification of the correct RCs or requirements of high dimensionality of the defined RCs for a given system. If only the approximate RCs with significant barriers are used in the simulations, hidden energy barriers with small to medium height would exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause the problem of insufficient sampling. To address the sampling in this so-called hidden barrier situation, here we propose an effective approach to combine temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with the integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD to three systems in the processes with hidden barriers has been examined. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times even if in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as the efficient and powerful tool for the investigation of a broad range of interesting cases.
A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings
Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun
2017-01-01
The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
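The multi-scale permutation entropy feature used in the pipeline above can be sketched in a few lines: coarse-grain the signal at several scales, then measure the Shannon entropy of the distribution of ordinal patterns. The embedding dimension and scales below are illustrative defaults, not the paper's settings:

```python
import math
from collections import Counter

def coarse_grain(x, scale):
    # Non-overlapping window averages (the multi-scale step).
    return [sum(x[i:i + scale]) / scale for i in range(0, len(x) - scale + 1, scale)]

def permutation_entropy(x, m=3):
    # Entropy of ordinal patterns of length m, normalized to [0, 1].
    patterns = Counter(tuple(sorted(range(m), key=lambda k: x[i + k]))
                       for i in range(len(x) - m + 1))
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(m))

def multiscale_pe(x, m=3, scales=(1, 2, 3)):
    return [permutation_entropy(coarse_grain(x, s), m) for s in scales]
```

A monotone signal contains a single ordinal pattern and scores 0, while an irregular vibration signal approaches 1, which is what makes the feature useful for separating operating conditions.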
Einstein-Gauss-Bonnet theory of gravity: The Gauss-Bonnet-Katz boundary term
NASA Astrophysics Data System (ADS)
Deruelle, Nathalie; Merino, Nelson; Olea, Rodrigo
2018-05-01
We propose a boundary term to the Einstein-Gauss-Bonnet action for gravity, which uses the Chern-Weil theorem plus a dimensional continuation process, such that the extremization of the full action yields the equations of motion when Dirichlet boundary conditions are imposed. When translated into tensorial language, this boundary term is the generalization to this theory of the Katz boundary term and vector for general relativity. The boundary term constructed in this paper allows one to deal with a general background and is not equivalent to the Gibbons-Hawking-Myers boundary term. However, we show that they coincide if one replaces the background of the Katz procedure by a product manifold. As a first application we show that this Einstein-Gauss-Bonnet-Katz action yields, without any extra ingredients, the expected mass of the Boulware-Deser black hole.
Bayesian hierarchical functional data analysis via contaminated informative priors.
Scarpa, Bruno; Dunson, David B
2009-09-01
A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Wei, E-mail: zhoux123@umn.edu
2013-06-15
We consider the value function of a stochastic optimal control of degenerate diffusion processes in a domain D. We study the smoothness of the value function, under the assumption of the non-degeneracy of the diffusion term along the normal to the boundary and an interior condition weaker than the non-degeneracy of the diffusion term. When the diffusion term, drift term, discount factor, running payoff and terminal payoff are all in the class C^{1,1}(D̄), the value function turns out to be the unique solution in the class C^{1,1}_loc(D) ∩ C^{0,1}(D̄) to the associated degenerate Bellman equation with Dirichlet boundary data. Our approach is probabilistic.
Holographic quantitative imaging of sample hidden by turbid medium or occluding objects
NASA Astrophysics Data System (ADS)
Bianco, V.; Miccio, L.; Merola, F.; Memmolo, P.; Gennari, O.; Paturzo, Melania; Netti, P. A.; Ferraro, P.
2015-03-01
Digital Holography (DH) numerical procedures have been developed to allow imaging through turbid media. A fluid is considered turbid when dispersed particles provoke strong light scattering, thus destroying image formation by any standard optical system. Here we show that sharp amplitude imaging and phase-contrast mapping of objects hidden behind a turbid medium and/or occluding objects are possible in harsh noise conditions and with a large field of view by Multi-Look DH microscopy. In particular, it will be shown that both amplitude imaging and phase-contrast mapping of cells hidden behind a flow of Red Blood Cells can be obtained. This allows, in a noninvasive way, the quantitative evaluation of living processes in Lab-on-Chip platforms where conventional microscopy techniques fail. The combination of this technique with endoscopic imaging can pave the way for holographic blood vessel inspection, e.g. to look for settled cholesterol plaques as well as blood clots for rapid diagnostics of blood diseases.
Monitoring volcano activity through Hidden Markov Model
NASA Astrophysics Data System (ADS)
Cassisi, C.; Montalto, P.; Prestifilippo, M.; Aliotta, M.; Cannata, A.; Patanè, D.
2013-12-01
During 2011-2013, Mt. Etna was mainly characterized by cyclic occurrences of lava fountains, totaling 38 episodes. During this time interval, the volcano's states (QUIET, PRE-FOUNTAIN, FOUNTAIN, POST-FOUNTAIN), whose automatic recognition is very useful for monitoring purposes, turned out to be strongly related to the trend of the RMS (Root Mean Square) of the seismic signal recorded by stations close to the summit area. Since the RMS time series behavior is considered stochastic, we can try to model the system generating its values, assuming it to be a Markov process, by using Hidden Markov models (HMMs). HMMs are a powerful tool for modeling any time-varying series. HMM analysis seeks to recover the sequence of hidden states from the observed emissions. In our framework, the observed emissions are characters generated by the SAX (Symbolic Aggregate approXimation) technique, which maps RMS time series values to discrete symbolic emissions. The experiments show how it is possible to infer volcano states by means of HMMs and SAX.
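The SAX-plus-HMM pipeline described in this abstract can be sketched as follows. The four-letter alphabet, Gaussian breakpoints, and the two-state model (collapsing the four volcano states into QUIET/FOUNTAIN) with its probabilities are illustrative assumptions, not the paper's parameters:

```python
import math

def sax(series, word_len, breakpoints=(-0.67, 0.0, 0.67), alphabet="abcd"):
    # z-normalize, piecewise-aggregate, then discretize against Gaussian breakpoints.
    mu = sum(series) / len(series)
    sd = math.sqrt(sum((v - mu) ** 2 for v in series) / len(series)) or 1.0
    z = [(v - mu) / sd for v in series]
    seg = len(z) // word_len
    paa = [sum(z[i * seg:(i + 1) * seg]) / seg for i in range(word_len)]
    return "".join(alphabet[sum(v > b for b in breakpoints)] for v in paa)

def viterbi(obs, states, start, trans, emit):
    # Most likely hidden state path, computed in the log domain.
    V = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            ptr[s] = best
            row[s] = V[-1][best] + math.log(trans[best][s]) + math.log(emit[s][o])
        V.append(row); back.append(ptr)
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

rms = [1.0] * 8 + [5.0] * 8            # toy RMS record: quiet, then fountain
word = sax(rms, word_len=4)            # -> "aadd"
states = ["QUIET", "FOUNTAIN"]
start = {"QUIET": 0.9, "FOUNTAIN": 0.1}
trans = {"QUIET": {"QUIET": 0.8, "FOUNTAIN": 0.2},
         "FOUNTAIN": {"QUIET": 0.2, "FOUNTAIN": 0.8}}
emit = {"QUIET":    {"a": 0.55, "b": 0.25, "c": 0.15, "d": 0.05},
        "FOUNTAIN": {"a": 0.05, "b": 0.15, "c": 0.25, "d": 0.55}}
path = viterbi(word, states, start, trans, emit)
```

The decoded path on the toy record switches from QUIET to FOUNTAIN exactly where the RMS level jumps.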
El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar
2014-11-01
Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site to be carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The Phasic triplet Markov chain, proposed in this paper, overcomes this difficulty by means of an auxiliary underlying process in accordance with the triplet Markov chains theory. Related Bayesian restoration techniques and parameters estimation procedures according to the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data.
Munakata, Y; McClelland, J L; Johnson, M H; Siegler, R S
1997-10-01
Infants seem sensitive to hidden objects in habituation tasks at 3.5 months but fail to retrieve hidden objects until 8 months. The authors first consider principle-based accounts of these successes and failures, in which early successes imply knowledge of principles and failures are attributed to ancillary deficits. One account is that infants younger than 8 months have the object permanence principle but lack means-ends abilities. To test this, 7-month-olds were trained on means-ends behaviors and were tested on retrieval of visible and occluded toys. Means-ends demands were the same, yet infants made more toy-guided retrievals in the visible case. The authors offer an adaptive process account in which knowledge is graded and embedded in specific behavioral processes. Simulation models that learn gradually to represent occluded objects show how this approach can account for success and failure in object permanence tasks without assuming principles and ancillary deficits.
Predicting the future trend of popularity by network diffusion.
Zeng, An; Yeung, Chi Ho
2016-06-01
Conventional approaches to predict the future popularity of products are mainly based on extrapolation of their current popularity, which overlooks the hidden microscopic information under the macroscopic trend. Here, we study diffusion processes on consumer-product and citation networks to exploit the hidden microscopic information and connect consumers to their potential purchase, publications to their potential citers to obtain a prediction for future item popularity. By using the data obtained from the largest online retailers including Netflix and Amazon as well as the American Physical Society citation networks, we found that our method outperforms the accurate short-term extrapolation and identifies the potentially popular items long before they become prominent.
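A minimal sketch of the kind of network diffusion the authors describe: one unit of resource placed on each item spreads to its adopters and back, so items reachable through many co-consumption paths accumulate the largest predicted scores. The toy graph and one-unit initial resource are illustrative assumptions:

```python
from collections import defaultdict

def diffusion_scores(edges):
    """Two-step mass diffusion on a bipartite consumer-product graph.
    edges: iterable of (user, item) pairs. Returns predicted item scores."""
    user_items, item_users = defaultdict(set), defaultdict(set)
    for u, i in edges:
        user_items[u].add(i)
        item_users[i].add(u)
    score = defaultdict(float)
    for i, users in item_users.items():
        for u in users:
            share = 1.0 / len(users)          # item spreads one unit to its adopters...
            for j in user_items[u]:
                score[j] += share / len(user_items[u])  # ...who redistribute it to their items
    return dict(score)

# Toy network: users A and B both consumed X; A also consumed Y.
edges = [("A", "X"), ("B", "X"), ("A", "Y")]
scores = diffusion_scores(edges)
```

The total resource is conserved (2.0 here, one unit per item), and the better-connected item X outranks Y, which is the microscopic signal the abstract says extrapolation of raw popularity misses.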
Learning-Testing Process in Classroom: An Empirical Simulation Model
ERIC Educational Resources Information Center
Buda, Rodolphe
2009-01-01
This paper presents an empirical micro-simulation model of the teaching and the testing process in the classroom (Programs and sample data are available--the actual names of pupils have been hidden). It is a non-econometric micro-simulation model describing informational behaviors of the pupils, based on the observation of the pupils'…
ERIC Educational Resources Information Center
Seiler, Gale; Abraham, Anjali
2009-01-01
Conscientization involves a recursive process of reflection and action toward individual and social transformation. Often this process takes shape through encounters in/with diverse and often conflicting discourses. The study of student and teacher discourses, or scripts and counterscripts, in science classrooms can reveal asymmetrical power…
Functional neuronal processing of human body odors.
Lundström, Johan N; Olsson, Mats J
2010-01-01
Body odors carry informational cues of great importance for individuals across a wide range of species, and signals hidden within the body odor cocktail are known to regulate several key behaviors in animals. For a long time, the notion that humans may be among these species has been dismissed. We now know, however, that each human has a unique odor signature that carries information related to his or her genetic makeup, as well as information about personal environmental variables, such as diet and hygiene. Although a substantial number of studies have investigated the behavioral effects of body odors, only a handful have studied central processing. Recent studies have, however, demonstrated that the human brain responds to fear signals hidden within the body odor cocktail, is able to extract kin specific signals, and processes body odors differently than other perceptually similar odors. In this chapter, we provide an overview of the current knowledge of how the human brain processes body odors and the potential importance these signals have for us in everyday life. Copyright © 2010 Elsevier Inc. All rights reserved.
Nielsen, J D; Dean, C B
2008-09-01
A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
Rotationally symmetric viscous gas flows
NASA Astrophysics Data System (ADS)
Weigant, W.; Plotnikov, P. I.
2017-03-01
The Dirichlet boundary value problem for the Navier-Stokes equations of a barotropic viscous compressible fluid is considered. The flow region and the data of the problem are assumed to be invariant under rotations about a fixed axis. The existence of rotationally symmetric weak solutions for all adiabatic exponents from the interval (γ*,∞) with a critical exponent γ* < 4/3 is proved.
Thermoelectric DC conductivities in hyperscaling violating Lifshitz theories
NASA Astrophysics Data System (ADS)
Cremonini, Sera; Cvetič, Mirjam; Papadimitriou, Ioannis
2018-04-01
We analytically compute the thermoelectric conductivities at zero frequency (DC) in the holographic dual of a four dimensional Einstein-Maxwell-Axion-Dilaton theory that admits a class of asymptotically hyperscaling violating Lifshitz backgrounds with a dynamical exponent z and hyperscaling violating parameter θ. We show that the heat current in the dual Lifshitz theory involves the energy flux, which is an irrelevant operator for z > 1. The linearized fluctuations relevant for computing the thermoelectric conductivities turn on a source for this irrelevant operator, leading to several novel and non-trivial aspects in the holographic renormalization procedure and the identification of the physical observables in the dual theory. Moreover, imposing Dirichlet or Neumann boundary conditions on the spatial components of one of the two Maxwell fields present leads to different thermoelectric conductivities. Dirichlet boundary conditions reproduce the thermoelectric DC conductivities obtained from the near horizon analysis of Donos and Gauntlett, while Neumann boundary conditions result in a new set of DC conductivities. We make preliminary analytical estimates for the temperature behavior of the thermoelectric matrix in appropriate regions of parameter space. In particular, at large temperatures we find that the only case which could lead to a linear resistivity ρ ∼ T corresponds to z = 4/3.
Repulsive Casimir effect from extra dimensions and Robin boundary conditions: From branes to pistons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elizalde, E.; Odintsov, S. D.; Institució Catalana de Recerca i Estudis Avançats
2009-03-15
We evaluate the Casimir energy and force for a massive scalar field with general curvature coupling parameter, subject to Robin boundary conditions on two codimension-one parallel plates, located on a (D+1)-dimensional background spacetime with an arbitrary internal space. The most general case of different Robin coefficients on the two separate plates is considered. Independently of the geometry of the internal space, the Casimir forces are seen to be attractive for the special cases of Dirichlet or Neumann boundary conditions on both plates, and repulsive for Dirichlet boundary conditions on one plate and Neumann boundary conditions on the other. For Robin boundary conditions, the Casimir forces can be either attractive or repulsive, depending on the Robin coefficients and the separation between the plates, which is both remarkable and useful. Indeed, we demonstrate the existence of an equilibrium point for the interplate distance, which is stabilized by the Casimir force, and show that stability is enhanced by the presence of the extra dimensions. Applications of these properties in braneworld models are discussed. Finally, the corresponding results are generalized to the geometry of a piston of arbitrary cross section.
Latent Dirichlet Allocation (LDA) for Sentiment Analysis Toward Tourism Review in Indonesia
NASA Astrophysics Data System (ADS)
Putri, IR; Kusumaningrum, R.
2017-01-01
The tourism industry is a foreign exchange sector with considerable potential for development in Indonesia. Compared to other Southeast Asian countries, such as Malaysia with 18 million tourists and Singapore with 20 million, Indonesia, the largest country in Southeast Asia, has failed to attract comparable tourist numbers, managing only 8.8 million foreign tourists in 2013, with arrivals tending to decrease each year. Apart from infrastructure problems, marketing and management also form obstacles to tourism growth. Stakeholders should carry out evaluation and self-analysis to respond to this problem and capture opportunities related to tourism satisfaction drawn from tourist reviews. Current technology for this problem relies only on subjective statistical data collected by random user voting or grading, so the results are not accountable. Thus, we propose sentiment analysis with a probabilistic topic model, the Latent Dirichlet Allocation (LDA) method, to extract general tendencies from tourist reviews as topics that can be classified into positive and negative sentiment.
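For readers unfamiliar with LDA, a minimal collapsed Gibbs sampler conveys the mechanics the abstract relies on: each word is repeatedly reassigned to a topic in proportion to document-topic and topic-word counts. The toy review vocabulary, hyperparameters, and two-topic setting are illustrative, not from the paper:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics=2, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Minimal collapsed Gibbs sampler for LDA; docs is a list of token lists."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})                       # vocabulary size
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]    # topic of each token
    ndk = [[0] * n_topics for _ in docs]                        # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]           # topic-word counts
    nk = [0] * n_topics                                         # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]; ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # full conditional: p(k) ∝ (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta)
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(n_topics)]
                r = rng.random() * sum(weights)
                k = 0
                while k < n_topics - 1 and r > weights[k]:
                    r -= weights[k]; k += 1
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk, nkw

# Illustrative "reviews": two clearly separated vocabularies.
docs = [["beach", "sunset", "beach"], ["dirty", "crowded", "dirty"],
        ["beach", "sunset"], ["crowded", "dirty"]]
ndk, nkw = lda_gibbs(docs)
```

On a corpus with clearly separated vocabularies, the two inferred topics typically align with positive and negative sentiment words, which is the classification step the abstract proposes.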
Synthesis and X-ray Crystallography of [Mg(H2O)6][AnO2(C2H5COO)3]2 (An = U, Np, or Pu).
Serezhkin, Viktor N; Grigoriev, Mikhail S; Abdulmyanov, Aleksey R; Fedoseev, Aleksandr M; Savchenkov, Anton V; Serezhkina, Larisa B
2016-08-01
Synthesis and X-ray crystallography of single crystals of [Mg(H2O)6][AnO2(C2H5COO)3]2, where An = U (I), Np (II), or Pu (III), are reported. Compounds I-III are isostructural and crystallize in the trigonal crystal system. The structures of I-III are built of hydrated magnesium cations [Mg(H2O)6](2+) and mononuclear [AnO2(C2H5COO)3](-) complexes, which belong to the AB(01)3 crystallochemical group of uranyl complexes (A = AnO2(2+), B(01) = C2H5COO(-)). Peculiarities of intermolecular interactions in the structures of [Mg(H2O)6][UO2(L)3]2 complexes depending on the carboxylate ion L (acetate, propionate, or n-butyrate) are investigated using the method of molecular Voronoi-Dirichlet polyhedra. Actinide contraction in the series of U(VI)-Np(VI)-Pu(VI) in compounds I-III is reflected in a decrease in the mean An═O bond lengths and in the volume and sphericity degree of Voronoi-Dirichlet polyhedra of An atoms.
Application of the perfectly matched layer in 2.5D marine controlled-source electromagnetic modeling
NASA Astrophysics Data System (ADS)
Li, Gang; Han, Bo
2017-09-01
For the traditional framework of EM modeling algorithms, the Dirichlet boundary condition is usually used, which assumes the field values are zero at the boundaries. This crude condition requires that the boundaries be sufficiently far away from the area of interest. Although cell sizes can become larger toward the boundaries, as the electromagnetic wave propagates diffusively, a large modeling area may still be necessary to mitigate boundary artifacts. In this paper, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretched Cartesian coordinates is successfully applied to 2.5D frequency-domain marine controlled-source electromagnetic (CSEM) field modeling. By using this PML boundary, one can restrict the modeling area to the target region. Only a few absorbing layers surrounding the computational area can effectively suppress the artificial boundary effect without losing numerical accuracy. A 2.5D marine CSEM modeling scheme with the CFS-PML is developed using a staggered finite-difference discretization. This modeling algorithm using the CFS-PML is highly accurate and offers savings in computational time and memory compared with the Dirichlet boundary. For 3D problems, these savings in computation time and memory should be even more significant.
The spectra of rectangular lattices of quantum waveguides
NASA Astrophysics Data System (ADS)
Nazarov, S. A.
2017-02-01
We obtain asymptotic formulae for the spectral segments of a thin (h ≪ 1) rectangular lattice of quantum waveguides which is described by a Dirichlet problem for the Laplacian. We establish that the structure of the spectrum of the lattice is incorrectly described by the commonly accepted quantum graph model with the traditional Kirchhoff conditions at the vertices. It turns out that the lengths of the spectral segments are infinitesimals of order O(e^{-δ/h}), δ > 0, and O(h) as h → +0, and gaps of width O(h^{-2}) and O(1) arise between them in the low-frequency and middle-frequency spectral ranges respectively. The first spectral segment is generated by the (unique) eigenvalue in the discrete spectrum of an infinite cross-shaped waveguide Θ. The absence of bounded solutions of the problem in Θ at the threshold frequency means that the correct model of the lattice is a graph with Dirichlet conditions at the vertices, which splits into two infinite subsets of identical edges (intervals). By using perturbations of finitely many joints, we construct any given number of discrete spectrum points of the lattice below the essential spectrum as well as inside the gaps.
NASA Astrophysics Data System (ADS)
Chang, Ya-Chi; Yeh, Hund-Der
2010-06-01
Constant-head pumping tests are commonly employed to determine aquifer parameters and can be performed in fully or partially penetrating wells. Generally, a Dirichlet condition is prescribed along the well screen and a Neumann-type no-flow condition is specified over the unscreened part of the test well. The mathematical model describing the aquifer response to a constant-head test performed in a fully penetrating well can easily be solved by conventional integral transform techniques under a uniform Dirichlet-type condition along the rim of the wellbore. However, the boundary condition for a test well with partial penetration must be treated as a mixed-type condition. This mixed boundary value problem, in a confined aquifer system of infinite radial extent and finite vertical extent, is solved by the Laplace and finite Fourier transforms in conjunction with the triple series equations method. This approach provides analytical results for the drawdown in a partially penetrating well for an arbitrary location of the well screen in an aquifer of finite thickness. The semi-analytical solutions are particularly useful for practical applications from a computational point of view.
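The mixed-type wellbore condition described above can be written out explicitly; the notation below is ours, with s the drawdown, r_w the well radius, s_w the prescribed constant head, and the screen occupying part of the vertical extent:

```latex
s(r_w, z, t) = s_w \quad \text{for } z \in \text{screen (Dirichlet)},
\qquad
\left.\frac{\partial s}{\partial r}\right|_{r = r_w} = 0 \quad \text{for } z \notin \text{screen (Neumann)},
```

together with a far-field condition s → 0 as r → ∞ and no-flow conditions at the top and bottom of the confined aquifer. It is the coexistence of Dirichlet and Neumann data on the same boundary r = r_w that makes the problem mixed-type and motivates the triple series equations method.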
Extending information retrieval methods to personalized genomic-based studies of disease.
Ye, Shuyun; Dawson, John A; Kendziorski, Christina
2014-01-01
Genomic-based studies of disease now involve diverse types of data collected on large groups of patients. A major challenge facing statistical scientists is how best to combine the data, extract important features, and comprehensively characterize the ways in which they affect an individual's disease course and likelihood of response to treatment. We have developed a survival-supervised latent Dirichlet allocation (survLDA) modeling framework to address these challenges. Latent Dirichlet allocation (LDA) models have proven extremely effective at identifying themes common across large collections of text, but applications to genomics have been limited. Our framework extends LDA to the genome by considering each patient as a "document" with "text" detailing his/her clinical events and genomic state. We then further extend the framework to allow for supervision by a time-to-event response. The model enables the efficient identification of collections of clinical and genomic features that co-occur within patient subgroups, and then characterizes each patient by those features. An application of survLDA to The Cancer Genome Atlas ovarian project identifies informative patient subgroups showing differential response to treatment, and validation in an independent cohort demonstrates the potential for patient-specific inference.
The Other Side of the Hidden Curriculum: Correspondence Theories and the Labor Process.
ERIC Educational Resources Information Center
Apple, Michael W.
1980-01-01
Discusses the inadequacy of correspondence as a theory of the relationship both among all social institutions and between the school and other powerful socioeconomic forces. Notes implications for educational action. (Author/MK)
Experimental entanglement distillation and 'hidden' non-locality.
Kwiat, P G; Barraza-Lopez, S; Stefanov, A; Gisin, N
2001-02-22
Entangled states are central to quantum information processing, including quantum teleportation, efficient quantum computation and quantum cryptography. In general, these applications work best with pure, maximally entangled quantum states. However, owing to dissipation and decoherence, practically available states are likely to be non-maximally entangled, partially mixed (that is, not pure), or both. To counter this problem, various schemes of entanglement distillation, state purification and concentration have been proposed. Here we demonstrate experimentally the distillation of maximally entangled states from non-maximally entangled inputs. Using partial polarizers, we perform a filtering process to maximize the entanglement of pure polarization-entangled photon pairs generated by spontaneous parametric down-conversion. We have also applied our methods to initial states that are partially mixed. After filtering, the distilled states demonstrate certain non-local correlations, as evidenced by their violation of a form of Bell's inequality. Because the initial states do not have this property, they can be said to possess 'hidden' non-locality.
2003-08-26
KENNEDY SPACE CENTER, FLA. - KSC Director James W. Kennedy receives Consul General of Japan Ko Kodaira and his family in his office in Headquarters Building during their visit to Kennedy Space Center (KSC). From left are Kennedy, Kodaira, his wife Marie (partially hidden), and his daughter Reiko. Kodaira is touring the facilities at KSC at the invitation of the local office of the National Space Development Agency of Japan (NASDA) to acquaint him with KSC's unique processing capabilities.
Multi-category micro-milling tool wear monitoring with continuous hidden Markov models
NASA Astrophysics Data System (ADS)
Zhu, Kunpeng; Wong, Yoke San; Hong, Geok Soon
2009-02-01
In-process monitoring of tool conditions is important in micro-machining due to the high precision requirement and high tool wear rate. Tool condition monitoring in micro-machining poses new challenges compared to conventional machining. In this paper, a multi-category classification approach is proposed for tool flank wear state identification in micro-milling. Continuous Hidden Markov models (HMMs) are adapted for modeling the tool wear process in micro-milling and for estimating the tool wear state given the cutting force features. For a noise-robust approach, the HMM outputs are passed through a median filter to suppress spurious transitions between tool states caused by the high noise level. A detailed study on the selection of HMM structures for tool condition monitoring (TCM) is presented. Case studies on tool state estimation in the micro-milling of pure copper and steel demonstrate the effectiveness and potential of these methods.
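The post-hoc smoothing step can be sketched as a sliding median over the decoded state sequence; isolated misclassifications are voted out by their neighbors. The window length is an illustrative choice:

```python
def median_filter_states(states, window=3):
    """Smooth a decoded integer state sequence with a sliding median to
    suppress isolated, noise-induced state flips (window is truncated at the edges)."""
    half = window // 2
    out = []
    for i in range(len(states)):
        lo, hi = max(0, i - half), min(len(states), i + half + 1)
        w = sorted(states[lo:hi])
        out.append(w[len(w) // 2])
    return out

# The lone spurious state 1 is removed; the genuine 0 -> 2 transition survives.
smoothed = median_filter_states([0, 0, 1, 0, 0, 2, 2, 2])
```

This keeps the monotone wear progression intact while filtering single-sample outliers, which is the role the noise-robust stage plays in the monitoring pipeline.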
Shao, Q; Rowe, R C; York, P
2007-06-01
Understanding of the cause-effect relationships between formulation ingredients, process conditions and product properties is essential for developing a quality product. However, the formulation knowledge is often hidden in experimental data and not easily interpretable. This study compares neurofuzzy logic and decision tree approaches in discovering hidden knowledge from an immediate release tablet formulation database relating formulation ingredients (silica aerogel, magnesium stearate, microcrystalline cellulose and sodium carboxymethylcellulose) and process variables (dwell time and compression force) to tablet properties (tensile strength, disintegration time, friability, capping and drug dissolution at various time intervals). Both approaches successfully generated useful knowledge in the form of either "if then" rules or decision trees. Although different strategies are employed by the two approaches in generating rules/trees, similar knowledge was discovered in most cases. However, as decision trees are not able to deal with continuous dependent variables, data discretisation procedures are generally required.
NASA Astrophysics Data System (ADS)
Yuan, Y.; Meng, Y.; Chen, Y. X.; Jiang, C.; Yue, A. Z.
2018-04-01
In this study, we proposed a method to map urban encroachment onto farmland using satellite image time series (SITS) based on the hierarchical hidden Markov model (HHMM). In this method, the farmland change process is decomposed into three hierarchical levels, i.e., the land cover level, the vegetation phenology level, and the SITS level. Then a three-level HHMM is constructed to model the multi-level semantic structure of farmland change process. Once the HHMM is established, a change from farmland to built-up could be detected by inferring the underlying state sequence that is most likely to generate the input time series. The performance of the method is evaluated on MODIS time series in Beijing. Results on both simulated and real datasets demonstrate that our method improves the change detection accuracy compared with the HMM-based method.
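One standard way to run inference in such a hierarchy is to flatten it into a single Markov chain over product states (land cover × phenology) and then apply ordinary HMM algorithms to the satellite image time series. The sketch below uses a hypothetical parameterization in which the sub-chain advances with probability `stay` and otherwise the top level switches and the sub-chain restarts; it illustrates the idea, not the paper's model:

```python
from itertools import product

def flatten_hhmm(top_states, sub_states, top_trans, sub_trans, stay=0.9):
    # Product-state chain: with probability `stay` the current top-level state's
    # sub-chain advances; otherwise the top level switches (top_trans rows must
    # exclude self-transitions and sum to 1) and the sub-chain re-enters its
    # first state. Rows of the flattened matrix then sum to 1.
    states = list(product(top_states, sub_states))
    trans = {}
    for (T, s) in states:
        row = {}
        for (T2, s2) in states:
            if T2 == T:
                row[(T2, s2)] = stay * sub_trans[s][s2]
            else:
                row[(T2, s2)] = (1 - stay) * top_trans[T][T2] * (s2 == sub_states[0])
        trans[(T, s)] = row
    return states, trans

# Illustrative two-level model: land cover (top) over vegetation phenology (sub).
top = ["farmland", "built-up"]
sub = ["green-up", "senescence"]
top_trans = {"farmland": {"farmland": 0.0, "built-up": 1.0},
             "built-up": {"built-up": 0.0, "farmland": 1.0}}
sub_trans = {"green-up": {"green-up": 0.5, "senescence": 0.5},
             "senescence": {"green-up": 0.5, "senescence": 0.5}}
states, trans = flatten_hhmm(top, sub, top_trans, sub_trans)
```

Once flattened, a farmland-to-built-up change is detected exactly as in a plain HMM, by inferring the most likely product-state sequence underlying the observed time series.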
Analysis of Accuracy and Epoch on Back-propagation BFGS Quasi-Newton
NASA Astrophysics Data System (ADS)
Silaban, Herlan; Zarlis, Muhammad; Sawaluddin
2017-12-01
Back-propagation is one of the learning algorithms for artificial neural networks that has been widely used to solve various problems, such as pattern recognition, prediction and classification. The back-propagation architecture affects the outcome of the learning process. BFGS Quasi-Newton is one of the functions that can be used to update the weights in back-propagation. This research tested several back-propagation architectures using classical back-propagation and back-propagation with BFGS. Seven architectures were tested on the glass dataset with various numbers of neurons: six architectures with 1 hidden layer and one architecture with 2 hidden layers. BP with BFGS improves the convergence of the learning process, with an average convergence improvement of 98.34%. BP with BFGS is most effective on architectures with smaller numbers of neurons, reducing the number of epochs by 94.37% while increasing accuracy by about 0.5%.
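A minimal sketch of the idea, assuming a tiny 2-4-1 network on XOR rather than the glass dataset used in the study: the flattened weights are handed to SciPy's BFGS optimizer in place of plain gradient-descent weight updates.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch (not the paper's setup): train a 2-4-1 network on XOR with BFGS.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])
shapes = [(2, 4), (4,), (4, 1), (1,)]          # W1, b1, W2, b2
sizes = [np.prod(s) for s in shapes]

def unpack(theta):
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(theta[i:i + n].reshape(s)); i += n
    return parts

def loss(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)                   # hidden layer
    out = (h @ W2 + b2).ravel()                # linear output
    return np.mean((out - y) ** 2)

rng = np.random.default_rng(0)
theta0 = rng.normal(scale=0.5, size=sum(sizes))
res = minimize(loss, theta0, method="BFGS")    # numerical gradients by default
print(f"{loss(theta0):.3f} -> {res.fun:.6f}")  # initial -> final loss
```

BFGS builds a quasi-Newton approximation of the Hessian, which is what gives it the faster convergence reported in the abstract compared with first-order updates.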
Religion, Spirituality, and the Hidden Curriculum: Medical Student and Faculty Reflections.
Balboni, Michael J; Bandini, Julia; Mitchell, Christine; Epstein-Peterson, Zachary D; Amobi, Ada; Cahill, Jonathan; Enzinger, Andrea C; Peteet, John; Balboni, Tracy
2015-10-01
Religion and spirituality play an important role in physicians' medical practice, but little research has examined their influence within the socialization of medical trainees and the hidden curriculum. The objective is to explore the role of religion and spirituality as they intersect with aspects of medicine's hidden curriculum. Semiscripted, one-on-one interviews and focus groups (n = 33 respondents) were conducted to assess Harvard Medical School student and faculty experiences of religion/spirituality and the professionalization process during medical training. Using grounded theory, theme extraction was performed with interdisciplinary input (medicine, sociology, and theology), yielding a high inter-rater reliability score (kappa = 0.75). Three domains emerged where religion and spirituality appear as a factor in medical training. First, religion/spirituality may present unique challenges and benefits in relation to the hidden curriculum. Religious/spiritual respondents more often reported to struggle with issues of personal identity, increased self-doubt, and perceived medical knowledge inadequacy. However, religious/spiritual participants less often described relationship conflicts within the medical team, work-life imbalance, and emotional stress arising from patient suffering. Second, religion/spirituality may influence coping strategies during encounters with patient suffering. Religious/spiritual trainees described using prayer, faith, and compassion as means for coping whereas nonreligious/nonspiritual trainees discussed compartmentalization and emotional repression. Third, levels of religion/spirituality appear to fluctuate in relation to medical training, with many trainees experiencing an increase in religiousness/spirituality during training. Religion/spirituality has a largely unstudied but possibly influential role in medical student socialization. Future study is needed to characterize its function within the hidden curriculum. 
Copyright © 2015 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas
2013-11-22
Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models, so one preprocessing can be used to run a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
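The forward algorithm that zipHMM accelerates can be sketched in its plain O(TK²) form (toy parameters; the library's substring-reuse preprocessing is not reproduced here): a scaled recursion whose log-scaling factors sum to the log-likelihood.

```python
import numpy as np

# Plain scaled forward recursion for the log-likelihood of an HMM.
def forward_loglik(pi, A, B, obs):
    alpha = pi * B[:, obs[0]]
    s = alpha.sum(); loglik = np.log(s); alpha = alpha / s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]          # propagate, then emit
        s = alpha.sum(); loglik += np.log(s); alpha = alpha / s
    return loglik

pi = np.array([0.5, 0.5])                      # initial distribution
A = np.array([[0.9, 0.1], [0.1, 0.9]])         # state transitions
B = np.array([[0.7, 0.3], [0.3, 0.7]])         # emission probabilities
obs = [0, 1, 0, 0, 1]
print(forward_loglik(pi, A, B, obs))
```

Scaling (dividing alpha by its sum each step) keeps the recursion numerically stable over the millions of observations mentioned in the abstract.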
Finding your next core business.
Zook, Chris
2007-04-01
How do you know when your core needs to change? And how do you determine what should replace it? From an in-depth study of 25 companies, the author, a strategy consultant, has discovered that it's possible to measure the vitality of a business's core. If it needs reinvention, he says, the best course is to mine hidden assets. Some of the 25 companies were in deep crisis when they began the process of redefining themselves. But, says Zook, management teams can learn to recognize early signs of erosion. He offers five diagnostic questions with which to evaluate the customers, key sources of differentiation, profit pools, capabilities, and organizational culture of your core business. The next step is strategic regeneration. In four-fifths of the companies Zook examined, a hidden asset was the centerpiece of the new strategy. He provides a map for identifying the hidden assets in your midst, which tend to fall into three categories: undervalued business platforms, untapped insights into customers, and underexploited capabilities. The Swedish company Dometic, for example, was manufacturing small absorption refrigerators for boats and RVs when it discovered a hidden asset: its understanding of, and access to, customers in the RV market. The company took advantage of a boom in that market to refocus on complete systems for live-in vehicles. The Danish company Novozymes, which produced relatively low-tech commodity enzymes such as those used in detergents, realized that its underutilized biochemical capability in genetic and protein engineering was a hidden asset and successfully refocused on creating bioengineered specialty enzymes. Your next core business is not likely to announce itself with fanfare. Use the author's tools to conduct an internal audit of possibilities and pinpoint your new focus.
Uncovering the cognitive processes underlying mental rotation: an eye-movement study.
Xue, Jiguo; Li, Chunyong; Quan, Cheng; Lu, Yiming; Yue, Jingwei; Zhang, Chenggang
2017-08-30
Mental rotation is an important paradigm for spatial ability. Mental-rotation tasks are assumed to involve five or three sequential cognitive-processing states, though this has not been demonstrated experimentally. Here, we investigated how processing states alternate during mental-rotation tasks. Inference was carried out using an advanced statistical modelling and data-driven approach - a discriminative hidden Markov model (dHMM) trained using eye-movement data obtained from an experiment consisting of two different strategies: (I) mentally rotate the right-side figure to be aligned with the left-side figure and (II) mentally rotate the left-side figure to be aligned with the right-side figure. Eye movements were found to contain the necessary information for determining the processing strategy, and the dHMM that best fit our data segmented the mental-rotation process into three hidden states, which we termed encoding and searching, comparison, and searching on a one-side pair. Additionally, we applied three classification methods - logistic regression, support vector machine and dHMM - of which the dHMM predicted the strategies with the highest accuracy (76.8%). Our study confirmed that there are differences in processing states between these two mental-rotation strategies, consistent with the previous suggestion that mental rotation is a discrete process accomplished in a piecemeal fashion.
3D variational brain tumor segmentation using Dirichlet priors on a clustered feature set.
Popuri, Karteek; Cobzas, Dana; Murtha, Albert; Jägersand, Martin
2012-07-01
Brain tumor segmentation is a required step before any radiation treatment or surgery. When performed manually, segmentation is time consuming and prone to human errors. Therefore, there have been significant efforts to automate the process. But, automatic tumor segmentation from MRI data is a particularly challenging task. Tumors have a large diversity in shape and appearance with intensities overlapping the normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. In our work, we propose an automatic brain tumor segmentation method that addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multidimensional feature set. Then, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this work is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned region statistics in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters from the normal brain region to be in the tumor region. This leads to a better disambiguation of the tumor from brain tissue. We evaluated the performance of our automatic segmentation method on 15 real MRI scans of brain tumor patients, with tumors that are inhomogeneous in appearance, small in size and in proximity to the major structures in the brain. Validation with the expert segmentation labels yielded encouraging results: Jaccard (58%), Precision (81%), Recall (67%), Hausdorff distance (24 mm). 
Using priors on the brain/tumor appearance, our proposed automatic 3D variational segmentation method was able to better disambiguate the tumor from the surrounding tissue.
Lauridsen, S M R; Norup, M S; Rossel, P J H
2007-12-01
Rationing healthcare is a difficult task, which includes preventing patients from accessing potentially beneficial treatments. Proponents of implicit rationing argue that politicians cannot resist pressure from strong patient groups for treatments and conclude that physicians should ration without informing patients or the public. The authors subdivide this specific programme of implicit rationing, or "hidden rationing", into local hidden rationing, unsophisticated global hidden rationing and sophisticated global hidden rationing. They evaluate the appropriateness of these methods of rationing from the perspectives of individual and political autonomy and conclude that local hidden rationing and unsophisticated global hidden rationing clearly violate patients' individual autonomy, that is, their right to participate in medical decision-making. While sophisticated global hidden rationing avoids this charge, the authors point out that it nonetheless violates the political autonomy of patients, that is, their right to engage in public affairs as citizens. A defence of any of the forms of hidden rationing is therefore considered to be incompatible with a defence of autonomy.
Jang, Hojin; Plis, Sergey M.; Calhoun, Vince D.; Lee, Jong-Hwan
2016-01-01
Feedforward deep neural networks (DNN), artificial neural networks with multiple hidden layers, have recently demonstrated a record-breaking performance in multiple areas of applications in computer vision and speech processing. Following the success, DNNs have been applied to neuroimaging modalities including functional/structural magnetic resonance imaging (MRI) and positron-emission tomography data. However, no study has explicitly applied DNNs to 3D whole-brain fMRI volumes and thereby extracted hidden volumetric representations of fMRI that are discriminative for a task performed as the fMRI volume was acquired. Our study applied fully connected feedforward DNN to fMRI volumes collected in four sensorimotor tasks (i.e., left-hand clenching, right-hand clenching, auditory attention, and visual stimulus) undertaken by 12 healthy participants. Using a leave-one-subject-out cross-validation scheme, a restricted Boltzmann machine-based deep belief network was pretrained and used to initialize weights of the DNN. The pretrained DNN was fine-tuned while systematically controlling weight-sparsity levels across hidden layers. Optimal weight-sparsity levels were determined from a minimum validation error rate of fMRI volume classification. Minimum error rates (mean ± standard deviation; %) of 6.9 (± 3.8) were obtained from the three-layer DNN with the sparsest condition of weights across the three hidden layers. These error rates were even lower than the error rates from the single-layer network (9.4 ± 4.6) and the two-layer network (7.4 ± 4.1). The estimated DNN weights showed spatial patterns that are remarkably task-specific, particularly in the higher layers. The output values of the third hidden layer represented distinct patterns/codes of the 3D whole-brain fMRI volume and encoded the information of the tasks as evaluated from representational similarity analysis. 
Our reported findings show the ability of the DNN to classify a single fMRI volume based on the extraction of hidden representations of fMRI volumes associated with tasks across multiple hidden layers. Our study may be beneficial to the automatic classification/diagnosis of neuropsychiatric and neurological diseases and prediction of disease severity and recovery in (pre-) clinical settings using fMRI volumes without requiring an estimation of activation patterns or ad hoc statistical evaluation. PMID:27079534
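The weight-sparsity idea can be illustrated with a deliberately simplified stand-in (synthetic data and a linear model rather than the paper's DBN-pretrained multi-layer DNN): adding an L1 penalty during training drives uninformative weights toward zero.

```python
import numpy as np

# Simplified stand-in for weight-sparsity control: L1-penalized gradient
# descent on synthetic data with only 3 informative inputs out of 20.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]                    # only 3 informative inputs
y = X @ w_true + 0.1 * rng.normal(size=200)

def train(l1, steps=2000, lr=0.01):
    w = np.zeros(20)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y) + l1 * np.sign(w)
        w -= lr * grad
    return w

dense, sparse = train(l1=0.0), train(l1=0.5)
print((np.abs(dense) < 0.01).sum(), "vs", (np.abs(sparse) < 0.01).sum())
```

The penalized fit zeroes out most uninformative weights while keeping the informative ones clearly nonzero, which is the behavior the systematic sparsity control in the study exploits layer by layer.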
The Hidden Curriculum as Emancipatory and Non-Emancipatory Tools.
ERIC Educational Resources Information Center
Kanpol, Barry
Moral values implied in school practices and policies constitute the "hidden curriculum." Because the hidden curriculum may promote certain moral values to students, teachers are partially responsible for the moral education of students. A component of the hidden curriculum, institutional political resistance, concerns teacher opposition to…
Education and Violation: Conceptualizing Power, Domination, and Agency in the Hidden Curriculum
ERIC Educational Resources Information Center
De Lissovoy, Noah
2012-01-01
This article offers a theory of a process of "violation" that connects macropolitical effects to the intimate terrain of subject production. I describe power, as violation, in terms of a simultaneous process of construction and destruction, which seeks its satisfaction in an injury to the very identities it is complicit in producing. Starting from…
NASA Astrophysics Data System (ADS)
Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea; Montalto, Placido; Patanè, Domenico; Privitera, Eugenio
2016-07-01
From January 2011 to December 2015, Mt. Etna was mainly characterized by a cyclic eruptive behavior with more than 40 lava fountains from the New South-East Crater. Using the RMS (Root Mean Square) of the seismic signal recorded by stations close to the summit area, an automatic recognition of the different states of volcanic activity (QUIET, PRE-FOUNTAIN, FOUNTAIN, POST-FOUNTAIN) has been applied for monitoring purposes. Since values of the RMS time series calculated on the seismic signal are generated by a stochastic process, we can model the system generating its sampled values, assumed to be a Markov process, using Hidden Markov Models (HMMs). HMM analysis seeks to recover the sequence of hidden states from the observations. In our framework, observations are characters generated by the Symbolic Aggregate approXimation (SAX) technique, which maps RMS time series values to symbols of a pre-defined alphabet. The main advantages of the proposed framework, based on HMMs and SAX, with respect to other automatic systems applied to seismic signals at Mt. Etna, are the use of multiple stations and static thresholds to characterize the volcano states well. Its application to a wide seismic dataset of Etna volcano shows that the volcano states can be inferred. The experimental results show that, in most of the cases, we detected lava fountains in advance.
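The SAX step, mapping time-series values to symbols of a pre-defined alphabet, can be sketched as follows (window and alphabet sizes are illustrative assumptions, not the study's settings): z-normalize the series, average over fixed windows (PAA), then assign each mean a letter using Gaussian-quantile breakpoints.

```python
import numpy as np
from scipy.stats import norm

# Minimal SAX symbolizer: z-normalize, piecewise-average, discretize.
def sax(series, word_len=8, alphabet="abcd"):
    x = (series - series.mean()) / series.std()
    paa = x.reshape(word_len, -1).mean(axis=1)               # piecewise means
    breaks = norm.ppf(np.linspace(0, 1, len(alphabet) + 1)[1:-1])
    return "".join(alphabet[np.searchsorted(breaks, v)] for v in paa)

t = np.linspace(0, 2 * np.pi, 64)
word = sax(np.sin(t))
print(word)    # one period of a sine maps to "cddcbaab"
```

The resulting character string is what the HMM in the paper treats as its observation sequence.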
Photoacoustic imaging of hidden dental caries by using a fiber-based probing system
NASA Astrophysics Data System (ADS)
Koyama, Takuya; Kakino, Satoko; Matsuura, Yuji
2017-04-01
A photoacoustic method to detect hidden dental caries is proposed. It was found that high-frequency ultrasonic waves are generated from the hidden carious part when laser light irradiates the occlusal surface of a model tooth. By mapping the intensity of these high-frequency components, photoacoustic images of hidden caries were successfully obtained. A photoacoustic imaging system using a bundle of hollow optical fibers was fabricated for clinical application, and a clear photoacoustic image of hidden caries was also obtained with this system.
Research of the multimodal brain-tumor segmentation algorithm
NASA Astrophysics Data System (ADS)
Lu, Yisu; Chen, Wufan
2015-12-01
It is well known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. A new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides the segmentation of single-modal brain tumor images, we developed the algorithm to segment multimodal brain tumor images using magnetic resonance (MR) multimodal features and to obtain the active tumor and edema at the same time. The proposed algorithm is evaluated and compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance.
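The property that makes the MDP attractive here, needing no fixed number of clusters, comes from the Dirichlet process predictive rule, sketched below as the Chinese restaurant process (the concentration parameter is an arbitrary illustrative choice): each new point joins an existing cluster with probability proportional to its size, or starts a new cluster with probability proportional to alpha.

```python
import numpy as np

# Chinese restaurant process: the Dirichlet process predictive rule.
def crp(n, alpha, rng):
    counts, labels = [], []                    # cluster sizes, assignments
    for _ in range(n):
        probs = np.array(counts + [alpha], float)
        probs /= probs.sum()
        k = int(rng.choice(len(probs), p=probs))
        if k == len(counts):                   # open a new cluster
            counts.append(0)
        counts[k] += 1
        labels.append(k)
    return labels

rng = np.random.default_rng(0)
labels = crp(100, alpha=1.0, rng=rng)
print(len(set(labels)))                        # typically ~log(100) clusters
```

The number of occupied clusters grows with the data rather than being set in advance, which is exactly what frees the segmentation from a cluster-count initialization.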
Bayesian Ensemble Trees (BET) for Clustering and Prediction in Heterogeneous Data
Duan, Leo L.; Clancy, John P.; Szczesniak, Rhonda D.
2016-01-01
We propose a novel “tree-averaging” model that utilizes the ensemble of classification and regression trees (CART). Each constituent tree is estimated with a subset of similar data. We treat this grouping of subsets as Bayesian Ensemble Trees (BET) and model them as a Dirichlet process. We show that BET determines the optimal number of trees by adapting to the data heterogeneity. Compared with the other ensemble methods, BET requires much fewer trees and shows equivalent prediction accuracy using weighted averaging. Moreover, each tree in BET provides variable selection criterion and interpretation for each subset. We developed an efficient estimating procedure with improved estimation strategies in both CART and mixture models. We demonstrate these advantages of BET with simulations and illustrate the approach with a real-world data example involving regression of lung function measurements obtained from patients with cystic fibrosis. Supplemental materials are available online. PMID:27524872
Behavior Based Social Dimensions Extraction for Multi-Label Classification
Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin
2016-01-01
Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method obtains satisfactory classification results compared with other state-of-the-art methods while using smaller social dimensions. PMID:27049849
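The LDA model invoked above can be fitted, on a toy corpus, with a minimal collapsed Gibbs sampler (vocabulary, topic count, and hyperparameters are illustrative assumptions, not the paper's settings); each document's inferred topic mixture plays the role of its latent "social dimensions".

```python
import numpy as np

# Minimal collapsed Gibbs sampler for LDA on a 4-document toy corpus.
rng = np.random.default_rng(0)
docs = [[0, 0, 1, 1], [0, 1, 1, 0], [2, 3, 3, 2], [3, 2, 2, 3]]  # word ids
K, V, a, b = 2, 4, 0.5, 0.5                     # topics, vocab size, priors

z = [[int(rng.integers(K)) for _ in d] for d in docs]             # topic labels
ndk = np.zeros((len(docs), K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1

for _ in range(200):                            # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]                         # remove current assignment
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            p = (ndk[d] + a) * (nkw[:, w] + b) / (nk + V * b)
            k = int(rng.choice(K, p=p / p.sum()))
            z[d][i] = k                         # resample and restore counts
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

theta = (ndk + a) / (ndk + a).sum(axis=1, keepdims=True)  # doc-topic mixtures
print(theta.round(2))
```

The rows of `theta` are per-document mixed-membership vectors; in the paper's setting the analogous vectors over communities serve as features for the downstream multi-label classifier.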
Multiclass Data Segmentation using Diffuse Interface Methods on Graphs
2014-01-01
[37] that performs interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph... continuous setting carry over to the discrete graph representation. For general data segmentation, Bresson et al. in [8] present rigorous convergence
NASA Technical Reports Server (NTRS)
Chiavassa, G.; Liandrat, J.
1996-01-01
We construct compactly supported wavelet bases satisfying homogeneous boundary conditions on the interval (0,1). The maximum features of multiresolution analysis on the line are retained, including polynomial approximation and tree algorithms. The case of H_0^1(0,1) is detailed, and numerical values, required for the implementation, are provided for the Neumann and Dirichlet boundary conditions.
ERIC Educational Resources Information Center
Kjeldsen, Tinne Hoff; Lützen, Jesper
2015-01-01
In this paper, we discuss the history of the concept of function and emphasize in particular how problems in physics have led to essential changes in its definition and application in mathematical practices. Euler defined a function as an analytic expression, whereas Dirichlet defined it as a variable that depends in an arbitrary manner on another…
The accurate solution of Poisson's equation by expansion in Chebyshev polynomials
NASA Technical Reports Server (NTRS)
Haidvogel, D. B.; Zang, T.
1979-01-01
A Chebyshev expansion technique is applied to Poisson's equation on a square with homogeneous Dirichlet boundary conditions. The spectral equations are solved in two ways - by alternating direction and by matrix diagonalization methods. Solutions are sought to both oscillatory and mildly singular problems. The accuracy and efficiency of the Chebyshev approach compare favorably with those of standard second- and fourth-order finite-difference methods.
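A 1D sketch of the approach (the paper treats the 2D square, with alternating-direction and matrix-diagonalization solvers not reproduced here): build the standard Chebyshev differentiation matrix, drop the boundary rows and columns to impose homogeneous Dirichlet conditions, and solve u'' = f directly.

```python
import numpy as np

# Chebyshev collocation solve of u'' = f on (-1, 1) with u(+-1) = 0.
def cheb(N):
    """Chebyshev points and the (N+1)x(N+1) differentiation matrix."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                 # negative row-sum trick
    return D, x

N = 16
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                        # interior points only (Dirichlet)
f = -np.pi**2 * np.sin(np.pi * x[1:-1])         # exact solution: sin(pi x)
u = np.linalg.solve(D2, f)
err = np.max(np.abs(u - np.sin(np.pi * x[1:-1])))
print(f"max error: {err:.2e}")                  # spectral accuracy at N = 16
```

Even at N = 16 the error is near machine precision for this smooth problem, illustrating the favorable accuracy-per-gridpoint the abstract reports over second- and fourth-order finite differences.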
Manifold Matching: Joint Optimization of Fidelity and Commensurability
2011-11-12
identified separately in p◦m, will be geometrically incommensurate (see Figure 7). Thus the null distribution of the test statistic will be inflated... into the objective function obviates the geometric incommensurability phenomenon. Thus we can establish that, for a range of Dirichlet product model... from the geometric incommensurability phenomenon. Then q p implies that cca suffers from the spurious correlation phenomenon with high probability
Post processing of optically recognized text via second order hidden Markov model
NASA Astrophysics Data System (ADS)
Poudel, Srijana
In this thesis, we describe a postprocessing system for Optical Character Recognition (OCR)-generated text. A second-order Hidden Markov Model (HMM) approach is used to detect and correct OCR-related errors. The second-order HMM was chosen so that the model keeps track of bigrams and can therefore represent the system more accurately. Based on experiments with training data of 159,733 characters and testing on 5,688 characters, the model corrected 43.38% of the errors with a precision of 75.34%. However, the precision value indicates that the model introduced some new errors, decreasing the net correction percentage to 26.4%.
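The intuition of exploiting character context can be sketched with a noisy-channel toy (all probabilities below are invented; the thesis uses a second-order HMM, i.e., two characters of context, whereas this sketch keeps a single character of context for brevity): rescore each OCR character by combining a language-model term with a confusion-model term.

```python
# Toy noisy-channel corrector for a single ambiguous OCR character.
bigram = {("t", "h"): 0.6, ("t", "n"): 0.05, ("c", "h"): 0.2, ("c", "n"): 0.1}
confusion = {("h", "n"): 0.3, ("h", "h"): 0.7, ("n", "n"): 0.9, ("n", "h"): 0.1}

def correct(prev, observed, candidates="hn"):
    """Pick the character maximizing P(char | prev) * P(observed | char)."""
    scores = {c: bigram.get((prev, c), 1e-6) * confusion.get((c, observed), 1e-6)
              for c in candidates}
    return max(scores, key=scores.get)

print(correct("t", "n"))   # context "t" favours "h": OCR read "tn", emit "th"
print(correct("c", "n"))   # context "c" keeps "n"
```

The full HMM generalizes this pointwise rescoring to whole sequences, decoding the jointly most probable corrected string rather than fixing each character independently.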
Reconstruction of pulse noisy images via stochastic resonance
Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan
2015-01-01
We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications. PMID:26067911
Hidden-charm Pentaquark Production at e + e - Colliders
NASA Astrophysics Data System (ADS)
Li, Shi-Yuan; Liu, Yan-Rui; Liu, Yu-Nan; Si, Zong-Guo; Zhang, Xiao-Feng
2018-03-01
We study one possible production mechanism for the hidden-charm pentaquark via color-octet cc̄ pair fragmentation in e+e− collisions. Pentaquark production at B-factory energy is dominated by e+e− → cc̄g → Pc + X, while at the Z0 pole energy several partonic processes play a significant role. Our results show that it is possible to search for the direct pentaquark production signal at e+e− colliders, which is important for understanding the properties of the pentaquark. Supported by National Natural Science Foundation of China under Grant Nos. 11775130, 11775132, 11635009, 11325525 and the Natural Science Foundation of Shandong Province under Grant No. ZR2017MA002
Eastern Sahara Geology from Orbital Radar: Potential Analog to Mars
NASA Technical Reports Server (NTRS)
Farr, T. G.; Paillou, P.; Heggy, E.
2004-01-01
Much of the surface of Mars has been intensely reworked by aeolian processes and key evidence about the history of the Martian environment seems to be hidden beneath a widespread layer of debris (paleo lakes and rivers, faults, impact craters). In the same way, the recent geological and hydrological history of the eastern Sahara is still mainly hidden under large regions of wind-blown sand which represent a possible terrestrial analog to Mars. The subsurface geology there is generally invisible to optical remote sensing techniques, but radar images obtained from the Shuttle Imaging Radar (SIR) missions were able to penetrate the superficial sand layer to reveal parts of paleohydrological networks in southern Egypt.
Multiscale hidden Markov models for photon-limited imaging
NASA Astrophysics Data System (ADS)
Nowak, Robert D.
1999-06-01
Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.
The tunneling effect for a class of difference operators
NASA Astrophysics Data System (ADS)
Klein, Markus; Rosenberger, Elke
We analyze a general class of self-adjoint difference operators H𝜀 = T𝜀 + V𝜀 on ℓ2((𝜀ℤ)d), where V𝜀 is a multi-well potential and 𝜀 is a small parameter. We give a coherent review of our results on tunneling, up to new sharp results on the level of complete asymptotic expansions (see [30-35]). Our emphasis is on general ideas and strategy, possibly of interest for a broader range of readers, and less on detailed mathematical proofs. The wells are decoupled by introducing certain Dirichlet operators on regions containing only one potential well. Then the eigenvalue problem for the Hamiltonian H𝜀 is treated as a small perturbation of these comparison problems. After constructing a Finslerian distance d induced by H𝜀, we show that Dirichlet eigenfunctions decay exponentially with a rate controlled by this distance to the well. It follows with microlocal techniques that the first n eigenvalues of H𝜀 converge to the first n eigenvalues of the direct sum of harmonic oscillators on ℝd located at the several wells. In a neighborhood of one well, we construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low-lying eigenvalues of H𝜀. These are obtained from eigenfunctions or quasimodes for the operator H𝜀, acting on L2(ℝd), via restriction to the lattice (𝜀ℤ)d. Tunneling is then described by a certain interaction matrix, similar to the analysis for the Schrödinger operator (see [22]); the remainder is exponentially small and roughly quadratic compared with the interaction matrix. We give weighted ℓ2-estimates for the difference of eigenfunctions of Dirichlet operators in neighborhoods of the different wells and the associated WKB-expansions at the wells. In the last step, we derive full asymptotic expansions for interactions between two “wells” (minima) of the potential energy, in particular for the discrete tunneling effect. Here we essentially use analysis on phase space, complexified in the momentum variable.
These results are as sharp as the classical results for the Schrödinger operator in [22].
Natural hidden antibodies reacting with DNA or cardiolipin bind to thymocytes and evoke their death.
Zamulaeva, I A; Lekakh, I V; Kiseleva, V I; Gabai, V L; Saenko, A S; Shevchenko, A S; Poverenny, A M
1997-08-18
Both free and hidden natural antibodies to DNA or cardiolipin were obtained from immunoglobulins of a normal donor. The free antibodies reacting with DNA or cardiolipin were isolated by means of affinity chromatography. Antibodies occurring in a hidden state were disengaged from the depleted immunoglobulins by ion-exchange chromatography and were then affinity-isolated on DNA or cardiolipin sorbents. We used flow cytometry to study the ability of free and hidden antibodies to bind to rat thymocytes. Simultaneously, plasma membrane integrity was tested by propidium iodide (PI) exclusion. The hidden antibodies reacted with 65.2 +/- 10.9% of the thymocytes and caused a fast plasma membrane disruption. Cells (28.7 +/- 7.1%) were stained with PI after incubation with the hidden antibodies for 1 h. The free antibodies bound to a very small fraction of the thymocytes and did not evoke death compared to controls without antibodies. The possible reason for the observed effects is a difference in the reactivity of the free and hidden antibodies toward phospholipids. While free antibodies reacted preferentially with phosphatidylcholine, hidden antibodies reacted with cardiolipin and phosphatidylserine.
Xu, Xiao; Jin, Tao; Wei, Zhijie; Wang, Jianmin
2017-01-01
Clinical pathways are widely used around the world for providing quality medical treatment and controlling healthcare cost. However, the expert-designed clinical pathways can hardly deal with the variances among hospitals and patients. It calls for more dynamic and adaptive process, which is derived from various clinical data. Topic-based clinical pathway mining is an effective approach to discover a concise process model. Through this approach, the latent topics found by latent Dirichlet allocation (LDA) represent the clinical goals. And process mining methods are used to extract the temporal relations between these topics. However, the topic quality is usually not desirable due to the low performance of the LDA in clinical data. In this paper, we incorporate topic assignment constraint and topic correlation limitation into the LDA to enhance the ability of discovering high-quality topics. Two real-world datasets are used to evaluate the proposed method. The results show that the topics discovered by our method are with higher coherence, informativeness, and coverage than the original LDA. These quality topics are suitable to represent the clinical goals. Also, we illustrate that our method is effective in generating a comprehensive topic-based clinical pathway model.
PMID:29065617
Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan
2015-02-01
In applications we may want to compare different document collections: they could have shared content but also different and unique aspects in particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities. For this we use hierarchical Bayesian nonparametric models. Moreover, we found it was important to properly model power-law phenomena in topic-word distributions and thus we used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge such as vocabulary variations in different collections into the model. To deal with the non-conjugate issue between model prior and likelihood in the TPYP, we thus propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC based algorithm achieves a dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections.
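The power-law behavior that motivates the full Pitman-Yor process over the Dirichlet process can be illustrated with its sequential (Chinese-restaurant style) predictive rule; this is a minimal generative sketch, not the paper's TPYP sampler.

```python
# Minimal sketch: sequential sampler for the two-parameter Pitman-Yor
# process PY(d, theta). With discount d > 0 the cluster-size distribution
# has a heavier (power-law) tail than the Dirichlet process case d = 0.
import random

def pitman_yor_clusters(n, d=0.5, theta=1.0, seed=0):
    """Sample cluster sizes for n customers from the PY predictive rule."""
    rng = random.Random(seed)
    sizes = []  # sizes[k] = number of customers at table k
    for i in range(n):  # i customers seated so far
        k = len(sizes)
        # P(open new table) = (theta + d*k) / (theta + i)
        if rng.random() < (theta + d * k) / (theta + i):
            sizes.append(1)
        else:
            # P(join table j) proportional to sizes[j] - d
            j = rng.choices(range(k), weights=[s - d for s in sizes])[0]
            sizes[j] += 1
    return sizes

sizes = pitman_yor_clusters(5000, d=0.5, theta=1.0)
print(len(sizes), "clusters; largest:", max(sizes))
```

In the topic-model setting the "tables" correspond to word types within a topic, which is what lets the model capture Zipf-like topic-word distributions.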
Fast, Cynthia D; Flesher, M Melissa; Nocera, Nathanial A; Fanselow, Michael S; Blaisdell, Aaron P
2016-06-01
Identifying statistical patterns between environmental stimuli enables organisms to respond adaptively when cues are later observed. However, stimuli are often obscured from detection, necessitating behavior under conditions of ambiguity. Considerable evidence indicates decisions under ambiguity rely on inference processes that draw on past experiences to generate predictions under novel conditions. Despite the high demand for this process and the observation that it deteriorates disproportionately with age, the underlying mechanisms remain unknown. We developed a rodent model of decision-making during ambiguity to examine features of experience that contribute to inference. Rats learned either a simple (positive patterning) or complex (negative patterning) instrumental discrimination between the illumination of one or two lights. During test, only one light was lit while the other relevant light was blocked from physical detection (covered by an opaque shield, rendering its status ambiguous). We found experience with the complex negative patterning discrimination was necessary for rats to behave sensitively to the ambiguous test situation. These rats behaved as if they inferred the presence of the hidden light, responding differently than when the light was explicitly absent (uncovered and unlit). Differential expression profiles of the immediate early gene cFos indicated hippocampal involvement in the inference process while localized microinfusions of the muscarinic antagonist, scopolamine, into the dorsal hippocampus caused rats to behave as if only one light was present. That is, blocking cholinergic modulation prevented the rat from inferring the presence of the hidden light. Collectively, these results suggest cholinergic modulation mediates recruitment of hippocampal processes related to past experiences and transfer of these processes to make decisions during ambiguous situations. 
Our results correspond with correlations observed between human brain function and inference abilities, suggesting our experiments may inform interventions to alleviate or prevent cognitive dysfunction. © 2015 Wiley Periodicals, Inc.
Whitcomb, Tiffany L
2014-01-01
The hidden curriculum is characterized by information that is tacitly conveyed to and among students about the cultural and moral environment in which they find themselves. Although the hidden curriculum is often defined as a distinct entity, tacit information is conveyed to students throughout all aspects of formal and informal curricula. This unconsciously communicated knowledge has been identified across a wide spectrum of educational environments and is known to have lasting and powerful impacts, both positive and negative. Recently, medical education research on the hidden curriculum of becoming a doctor has come to the forefront as institutions struggle with inconsistencies between formal and hidden curricula that hinder the practice of patient-centered medicine. Similarly, the complex ethical questions that arise during the practice and teaching of veterinary medicine have the potential to cause disagreement between what the institution sets out to teach and what is actually learned. However, the hidden curriculum remains largely unexplored for this field. Because the hidden curriculum is retained effectively by students, elucidating its underlying messages can be a key component of program refinement. A review of recent literature about the hidden curriculum in a variety of fields, including medical education, will be used to explore potential hidden curricula in veterinary medicine and draw attention to the need for further investigation.
The Hidden Technology: Dictation Systems.
ERIC Educational Resources Information Center
Barton, Kathy; And Others
This booklet provides business and office teachers with background information, supporting materials, recruiting techniques, and a suggested unit plan that integrates the concepts related to dictation systems into information processing curricula. An "Introduction" (Donna Everett) discusses the need for dictation skills. "Need for Dictation…
Buried Messages, Hidden Meanings: Speech Mannerisms Revisited.
ERIC Educational Resources Information Center
Morgan, Lewis B.
1988-01-01
Introduces counselors to 10 commonly used mannerisms of speech and the part that each mannerism plays in the communication process, especially in the counseling context. Offers suggestions on how to respond to these speech mannerisms in a straightforward and effective manner. (Author)
Inside School Spaces: Rethinking the Hidden Dimension.
ERIC Educational Resources Information Center
Sitton, Thad
1980-01-01
Considers the spatial arrangements of public schools as culturally derived characteristics that reflect particular traditional expectations in regard to the learning process and teacher student interactions. Discusses fixed spatial arrangements as well as the territorial manipulation of school space by students. (GC)
Condition Monitoring for Helicopter Data. Appendix A
NASA Technical Reports Server (NTRS)
Wen, Fang; Willett, Peter; Deb, Somnath
2000-01-01
In this paper the classical "Westland" set of empirical accelerometer helicopter data is analyzed with the aim of condition monitoring for diagnostic purposes. The goal is to determine features for failure events from these data, via a proprietary signal processing toolbox, and to weigh these according to a variety of classification algorithms. As regards signal processing, it appears that the autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in a relatively few measurements; it has also been found that augmentation of these by harmonic and other parameters can improve classification significantly. As regards classification, several techniques have been explored, among these restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers and decision trees. A problem with these approaches, and in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior on training data and is thus able to quantify probability of error in an exact manner, such that features may be discarded or coarsened appropriately.
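The AR-coefficient feature extraction described above can be sketched as follows; this is a generic least-squares AR fit on a synthetic vibration-like signal (Yule-Walker or Burg estimators are common alternatives), not the proprietary toolbox or the BDRA itself.

```python
# Sketch: autoregressive (AR) coefficients as a compact feature vector
# for one accelerometer channel, fitted by ordinary least squares.
import numpy as np

def ar_features(x, order=4):
    """Least-squares fit of x[t] ~ sum_k a[k] * x[t-k-1]; returns a."""
    X = np.column_stack(
        [x[order - k - 1 : len(x) - k - 1] for k in range(order)]
    )
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

rng = np.random.default_rng(0)
t = np.arange(2000)
x = np.sin(0.05 * t) + 0.1 * rng.standard_normal(2000)  # noisy "vibration"
coeffs = ar_features(x, order=4)
print(coeffs)
```

Each channel then contributes only `order` numbers to the classifier's feature vector, which is why the AR parameterization "encapsulates a great deal of information in relatively few measurements."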
NASA Technical Reports Server (NTRS)
Parse, Joseph B.; Wert, J. A.
1991-01-01
Inhomogeneities in the spatial distribution of second phase particles in engineering materials are known to affect certain mechanical properties. Progress in this area has been hampered by the lack of a convenient method for quantitative description of the spatial distribution of the second phase. This study intends to develop a broadly applicable method for the quantitative analysis and description of the spatial distribution of second phase particles. The method was designed to operate on a desktop computer. The Dirichlet tessellation technique (geometrical method for dividing an area containing an array of points into a set of polygons uniquely associated with the individual particles) was selected as the basis of an analysis technique implemented on a PC. This technique is being applied to the production of Al sheet by PM processing methods; vacuum hot pressing, forging, and rolling. The effect of varying hot working parameters on the spatial distribution of aluminum oxide particles in consolidated sheet is being studied. Changes in distributions of properties such as through-thickness near-neighbor distance correlate with hot-working reduction.
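The Dirichlet tessellation step can be sketched with SciPy's Voronoi routine (Dirichlet tessellation and Voronoi diagram are the same construction); the random particle centroids below are illustrative, not measured oxide-particle positions.

```python
# Sketch of the Dirichlet (Voronoi) tessellation: each particle centroid
# gets the polygon of points nearer to it than to any other centroid;
# polygon adjacency then yields near-neighbour distance statistics.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)
particles = rng.random((50, 2))        # synthetic particle centroids

vor = Voronoi(particles)
# Particles whose cells share a Voronoi edge are near neighbours.
pairs = vor.ridge_points               # (n_ridges, 2) index pairs
dists = np.linalg.norm(
    particles[pairs[:, 0]] - particles[pairs[:, 1]], axis=1
)
print("mean near-neighbour distance:", dists.mean())
```

Tracking how the distribution of such distances changes with hot-working reduction is the kind of correlation the study examines.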
A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*
Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.
2013-01-01
This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186
Heating up the Galaxy with hidden photons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubovsky, Sergei; Hernández-Chifflet, Guzmán, E-mail: dubovsky@nyu.edu, E-mail: ghc236@nyu.edu
2015-12-01
We elaborate on the dynamics of ionized interstellar medium in the presence of hidden photon dark matter. Our main focus is the ultra-light regime, where the hidden photon mass is smaller than the plasma frequency in the Milky Way. We point out that as a result of the Galactic plasma shielding direct detection of ultra-light photons in this mass range is especially challenging. However, we demonstrate that ultra-light hidden photon dark matter provides a powerful heating source for the ionized interstellar medium. This results in a strong bound on the kinetic mixing between hidden and regular photons all the way down to hidden photon masses of order 10^−20 eV.
Measures and Metrics of Information Processing in Complex Systems: A Rope of Sand
ERIC Educational Resources Information Center
James, Ryan Gregory
2013-01-01
How much information do natural systems store and process? In this work we attempt to answer this question in multiple ways. We first establish a mathematical framework where natural systems are represented by a canonical form of edge-labeled hidden Markov models called e-machines. Then, utilizing this framework, a variety of measures are defined and…
ERIC Educational Resources Information Center
Duran, Nicholas D.; Hall, Charles; McCarthy, Philip M.; McNamara, Danielle S.
2010-01-01
The words people use and the way they use them can reveal a great deal about their mental states when they attempt to deceive. The challenge for researchers is how to reliably distinguish the linguistic features that characterize these hidden states. In this study, we use a natural language processing tool called Coh-Metrix to evaluate deceptive…
ERIC Educational Resources Information Center
Brownlee, Jamie
2015-01-01
In Canada, universities are undergoing a process of corporatization where business interests, values and practices are assuming a more prominent place in higher education. A key feature of this process has been the changing composition of academic labor. While it is generally accepted that universities are relying more heavily on contract faculty,…
Detecting critical state before phase transition of complex systems by hidden Markov model
NASA Astrophysics Data System (ADS)
Liu, Rui; Chen, Pei; Li, Yongjun; Chen, Luonan
Identifying the critical state or pre-transition state just before the occurrence of a phase transition is a challenging task, because the state of the system may show little apparent change before this critical transition during the gradual parameter variations. Such dynamics of phase transition is generally composed of three stages, i.e., before-transition state, pre-transition state, and after-transition state, which can be considered as three different Markov processes. Thus, based on this dynamical feature, we present a novel computational method, i.e., hidden Markov model (HMM), to detect the switching point of the two Markov processes from the before-transition state (a stationary Markov process) to the pre-transition state (a time-varying Markov process), thereby identifying the pre-transition state or early-warning signals of the phase transition. To validate the effectiveness, we apply this method to detect the signals of the imminent phase transitions of complex systems based on the simulated datasets, and further identify the pre-transition states as well as their critical modules for three real datasets, i.e., the acute lung injury triggered by phosgene inhalation, MCF-7 human breast cancer caused by heregulin, and HCV-induced dysplasia and hepatocellular carcinoma.
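The core idea, flagging the switch from a stationary regime to a pre-transition regime via an HMM state path, can be sketched with a small Viterbi decoder; the two-state Gaussian model and hand-fixed parameters below are illustrative assumptions, not the paper's fitted model.

```python
# Sketch: a two-state Gaussian HMM whose Viterbi path flags the switch
# from a "before-transition" regime to a "pre-transition" regime.
# In practice the parameters would be estimated, not fixed by hand.
import numpy as np

def viterbi(obs, means, var, trans, init):
    """Most likely state path for 1-D Gaussian emissions (log domain)."""
    n, k = len(obs), len(means)
    logB = -0.5 * (obs[:, None] - means) ** 2 / var \
           - 0.5 * np.log(2 * np.pi * var)
    logA = np.log(trans)
    delta = np.log(init) + logB[0]
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        scores = delta[:, None] + logA      # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[t]
    path = [int(delta.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return np.array(path[::-1])

rng = np.random.default_rng(0)
obs = np.concatenate([rng.normal(0, 1, 100),   # stationary regime
                      rng.normal(3, 1, 50)])   # pre-transition regime
path = viterbi(obs, means=np.array([0.0, 3.0]), var=1.0,
               trans=np.array([[0.99, 0.01], [0.01, 0.99]]),
               init=np.array([0.5, 0.5]))
switch = int(np.argmax(path == 1))
print("estimated switch index:", switch)
```

The decoded switching point serves as the early-warning signal; the true change was planted at index 100 here.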
SU-E-J-191: Motion Prediction Using Extreme Learning Machine in Image Guided Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, J; Cao, R; Pei, X
Purpose: Real-time motion tracking is a critical issue in image guided radiotherapy due to the time latency caused by image processing and system response. It is necessary to predict the future position of the respiratory motion and the tumor location quickly and accurately. Methods: The prediction of respiratory position was done based on the positioning and tracking module in the ARTS-IGRT system, which was developed by the FDS Team (www.fds.org.cn). An approach based on the extreme learning machine (ELM) was adopted to predict the future respiratory position as well as the tumor's location by training on the past trajectories. For the training process, a feed-forward neural network with one single hidden layer was used for the learning. First, the number of hidden nodes was determined for the single layered feed forward network (SLFN). Then the input weights and hidden layer biases of the SLFN were randomly assigned to calculate the hidden neuron output matrix. Finally, the predicted movements were obtained by applying the output weights and compared with the actual movement. Breathing movement acquired from the external infrared markers was used to test the prediction accuracy, and implanted marker movement for prostate cancer was used to test the implementation of the tumor motion prediction. Results: The agreement between the predicted and actual motion was tested. Five volunteers with different breathing patterns were tested. The average prediction time was 0.281 s, and the standard deviation of prediction accuracy was 0.002 for the respiratory motion and 0.001 for the tumor motion. Conclusion: The extreme learning machine method can provide an accurate and fast prediction of the respiratory motion and the tumor location and therefore can meet the requirements of real-time tumor-tracking in image guided radiotherapy.
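The ELM fit described in the Methods (random input weights and biases, output weights solved by least squares) can be sketched as follows; the breathing-like sinusoid and the lag-window size are stand-in assumptions, not the ARTS-IGRT data or configuration.

```python
# Sketch of an extreme learning machine (ELM): random hidden layer,
# output weights by least squares. Data are a synthetic breathing trace.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=20):
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# One-step-ahead prediction from the last 5 samples of the trace.
t = np.arange(1200, dtype=float)
sig = np.sin(2 * np.pi * t / 40)                     # surrogate breathing
lag = 5
X = np.column_stack([sig[i : len(sig) - lag + i] for i in range(lag)])
y = sig[lag:]
W, b, beta = elm_fit(X[:1000], y[:1000])
err = np.abs(elm_predict(X[1000:], W, b, beta) - y[1000:]).max()
print("max one-step prediction error:", err)
```

Because only the output weights are trained (one linear solve), the fit is fast, which is the property the abstract's sub-second prediction time relies on.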
"It's Not Always What It Seems": Exploring the Hidden Curriculum within a Doctoral Program
ERIC Educational Resources Information Center
Foot, Rachel Elizabeth
2017-01-01
The purpose of this qualitative, naturalistic study was to explore the ways in which hidden curriculum might influence doctoral student success. Two questions guided the study: (a) How do doctoral students experience the hidden curriculum? (b) What forms of hidden curricula can be identified in a PhD program? Data were collected from twelve…
Hidden Farmworker Labor Camps in North Carolina: An Indicator of Structural Vulnerability
Summers, Phillip; Quandt, Sara A.; Talton, Jennifer W.; Galván, Leonardo
2015-01-01
Objectives. We used geographic information systems (GIS) to delineate whether farmworker labor camps were hidden and to determine whether hidden camps differed from visible camps in terms of physical and resident characteristics. Methods. We collected data using observation, interview, and public domain GIS data for 180 farmworker labor camps in east central North Carolina. A hidden camp was defined as one that was at least 0.15 miles from an all-weather road or located behind natural or manufactured objects. Hidden camps were compared with visible camps in terms of physical and resident characteristics. Results. More than one third (37.8%) of the farmworker labor camps were hidden. Hidden camps were significantly larger (42.7% vs 17.0% with 21 or more residents; P ≤ .001; and 29.4% vs 13.5% with 3 or more dwellings; P = .002) and were more likely to include barracks (50% vs 19.6%; P ≤ .001) than were visible camps. Conclusions. Poor housing conditions in farmworker labor camps often go unnoticed because they are hidden in the rural landscape, increasing farmworker vulnerability. Policies that promote greater community engagement with farmworker labor camp residents to reduce structural vulnerability should be considered. PMID:26469658
Biology and polymer physics at the single-molecule level.
Chu, Steven
2003-04-15
The ability to look at individual molecules has given us new insights into molecular processes. Examples of our recent work are given to illustrate how behaviour that may otherwise be hidden from view can be clearly seen in single-molecule experiments.
There's gold in them thar' databases.
Gillespie, G
2000-11-01
Some health care organizations are using sophisticated data mining applications to unearth hidden truths buried in their online clinical and financial information. But the lack of a standard clinical vocabulary and standard work processes is an obstacle CIOs must blast through to reach their treasure.
Sayers, Ken; Menzel, Charles R.
2012-01-01
Many models from foraging theory and movement ecology assume that resources are encountered randomly. If food locations, types and values are retained in memory, however, search time could be significantly reduced, with concurrent effects on biological fitness. Despite this, little is known about what specific characteristics of foods, particularly those relevant to profitability, nonhuman animals can remember. Building upon previous observations, we hypothesized that chimpanzees (Pan troglodytes), after observing foods being hidden in a large wooded test area they could not enter, and after long delays, would direct (through gesture and vocalization) experimentally naïve humans to the reward locations in an order that could be predicted beforehand by the spatial and physical characteristics of those items. In the main experiment, various quantities of almonds, both in and out of shells and sealed in transparent bags, were hidden in the test area. The chimpanzees later directed searchers to those items in a nonrandom order related to quantity, shell presence/absence, and the distance they were hidden from the subject. The recovery sequences were closely related to the actual e/h profitability of the foods. Predicted recovery orders, based on the energetic value of almonds and independently-measured, individual-specific expected pursuit and processing times, were closely related to observed recovery orders. We argue that the information nonhuman animals possess regarding their environment can be extensive, and that further comparative study is vital for incorporating realistic cognitive variables into models of foraging and movement. PMID:23226837
Multiclass Data Segmentation Using Diffuse Interface Methods on Graphs
2014-01-01
…interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph-based energy function that…over to the discrete graph representation. For general data segmentation, Bresson et al. in [8] present rigorous convergence results for two algorithms
1987-07-01
multinomial distribution as a magazine exposure model. J. of Marketing Research, 21, 100-106. Lehmann, E.L. (1983). Theory of Point Estimation. John Wiley and… Marketing Research, 21, 89-99.
Multispike solutions for the Brezis-Nirenberg problem in dimension three
NASA Astrophysics Data System (ADS)
Musso, Monica; Salazar, Dora
2018-06-01
We consider the problem Δu + λu + u^5 = 0, u > 0, in a smooth bounded domain Ω in R3, under zero Dirichlet boundary conditions. We obtain solutions to this problem exhibiting multiple bubbling behavior at k different points of the domain as λ tends to a special positive value λ0, which we characterize in terms of the Green function of −Δ − λ.
1985-05-01
…non-zero Dirichlet boundary conditions and/or general mixed type boundary conditions. Note that Neumann type boundary condition enters the problem by…human and various loading conditions for the definition of a generalized safety guideline of blast exposure. To model the response of a sheep torso…
Visibility of quantum graph spectrum from the vertices
NASA Astrophysics Data System (ADS)
Kühn, Christian; Rohleder, Jonathan
2018-03-01
We investigate the relation between the eigenvalues of the Laplacian with Kirchhoff vertex conditions on a finite metric graph and a corresponding Titchmarsh-Weyl function (a parameter-dependent Neumann-to-Dirichlet map). We give a complete description of all real resonances, including multiplicities, in terms of the edge lengths and the connectivity of the graph, and apply it to characterize all eigenvalues which are visible for the Titchmarsh-Weyl function.
A nonlinear ordinary differential equation associated with the quantum sojourn time
NASA Astrophysics Data System (ADS)
Benguria, Rafael D.; Duclos, Pierre; Fernández, Claudio; Sing-Long, Carlos
2010-11-01
We study a nonlinear ordinary differential equation on the half-line, with the Dirichlet boundary condition at the origin. This equation arises when studying the local maxima of the sojourn time for a free quantum particle whose states belong to an adequate subspace of the unit sphere of the corresponding Hilbert space. We establish several results concerning the existence and asymptotic behavior of the solutions.
Mappings of Least Dirichlet Energy and their Hopf Differentials
NASA Astrophysics Data System (ADS)
Iwaniec, Tadeusz; Onninen, Jani
2013-08-01
The paper is concerned with mappings h: X → Y (onto) between planar domains having least Dirichlet energy. The existence and uniqueness (up to a conformal change of variables in X) of the energy-minimal mappings is established within the class \overline{H}_2(X, Y) of strong limits of homeomorphisms in the Sobolev space W^{1,2}(X, Y), a result of considerable interest in the mathematical models of nonlinear elasticity. The inner variation of the independent variable in X leads to the Hopf differential h_z \overline{h_{\bar z}} dz ⊗ dz and its trajectories. For a pair of doubly connected domains, in which X has finite conformal modulus, we establish the following principle: a mapping h ∈ \overline{H}_2(X, Y) is energy-minimal if and only if its Hopf differential is analytic in X and real along ∂X. In general, the energy-minimal mappings may not be injective, in which case one observes the occurrence of slits in X (cognate with cracks). Slits are triggered by points of concavity of Y. They originate from ∂X and advance along vertical trajectories of the Hopf differential into X, where they eventually terminate, so no crosscuts are created.
Zipf exponent of trajectory distribution in the hidden Markov model
NASA Astrophysics Data System (ADS)
Bochkarev, V. V.; Lerner, E. Yu
2014-03-01
This paper is the first step of generalization of the previously obtained full classification of the asymptotic behavior of the probability for Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov models and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.
Singularities of Three-Layered Complex-Valued Neural Networks With Split Activation Function.
Kobayashi, Masaki
2018-05-01
There are three important concepts related to learning processes in neural networks: reducibility, nonminimality, and singularity. Although the definitions of these three concepts differ, they are equivalent in real-valued neural networks. This is also true of complex-valued neural networks (CVNNs) with hidden neurons not employing biases. The situation of CVNNs with hidden neurons employing biases, however, is very complicated. Exceptional reducibility was found, and it was shown that reducibility and nonminimality are not the same. Irreducibility consists of minimality and exceptional reducibility. The relationship between minimality and singularity has not yet been established. In this paper, we describe our surprising finding that minimality and singularity are independent. We also provide several examples based on exceptional reducibility.
Potential observation of the ϒ(6S)→ϒ(1³D_J)η transitions at Belle II
NASA Astrophysics Data System (ADS)
Huang, Qi; Xu, Hao; Liu, Xiang; Matsuki, Takayuki
2018-05-01
We investigate two-body hidden-bottom transitions of the ϒ(6S), which include the ϒ(6S)→ϒ(1³D_J)η (J = 1, 2, 3) decays. For estimating the branching ratios of these processes, we consider contributions from the triangle hadronic loops composed of S-wave B(s) and B(s)* mesons, which are a bridge connecting the ϒ(6S) and the final states. Our results show that the branching ratios of these decays can all reach 10⁻³. Given such considerable potential to observe these two-body hidden-bottom transitions of the ϒ(6S), we suggest that the forthcoming Belle II experiment explore them.
Targowski, Piotr; Iwanicka, Magdalena; Sylwestrzak, Marcin; Frosinini, Cecilia; Striova, Jana; Fontana, Raffaella
2018-06-18
Optical coherence tomography (OCT) was used for non-invasive examination of a well-known, yet complex, painting from the studio of Leonardo da Vinci in combination with routine imaging in various bands of electromagnetic radiation. In contrast with these techniques, OCT provides depth-resolved information. Three post-processing modalities were explored: cross-sectional views, maps of scattering from given depths, and their 3D models. Some hidden alterations of the painting owing to past restorations were traced: retouching and overpainting with their positioning within varnish layers as well as indications of a former transfer to canvas. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Uncovering the wisdom hidden between the lines: the Collaborative Reflexive Deliberative Approach.
Crabtree, Benjamin F; Miller, William L; Gunn, Jane M; Hogg, William E; Scott, Cathie M; Levesque, Jean-Frederic; Harris, Mark F; Chase, Sabrina M; Advocat, Jenny R; Halma, Lisa M; Russell, Grant M
2018-05-23
Meta-analysis and meta-synthesis have been developed to synthesize results across published studies; however, they are still largely grounded in what is already published, missing the tacit 'between the lines' knowledge generated during many research projects that is not intrinsic to the main objectives of studies. We aimed to develop a novel approach to expand and deepen meta-syntheses using researchers' experience, tacit knowledge and relevant unpublished materials. We established new collaborations among primary health care researchers from different contexts based on common interests in reforming primary care service delivery and a diversity of perspectives. Over 2 years, the team met face-to-face and via tele- and video-conferences to employ the Collaborative Reflexive Deliberative Approach (CRDA) to discuss and reflect on published and unpublished results from participants' studies to identify new patterns and insights. CRDA focuses on uncovering critical insights and interpretations hidden within multiple research contexts. For the process to work, careful attention must be paid to ensure sufficient diversity among participants while also having people who are able to collaborate effectively. Ensuring there are enough studies for contextual variation also matters. It is necessary to balance rigorous facilitation techniques with the creation of safe space for diverse contributions. The CRDA requires large commitments of investigator time, the expense of convening facilitated retreats, considerable coordination, and strong leadership. The process creates an environment where interactions among diverse participants can illuminate hidden information within the contexts of studies, effectively enhancing theory development and generating new research questions and strategies.
Organizational Culture in a Mexican School: Lessons for Reform.
ERIC Educational Resources Information Center
Davila, Anabella; Willower, Donald J.
1996-01-01
Discusses a study of a Mexican Roman Catholic high school's organizational culture, highlighting findings concerning school reform and improvement processes. The school stressed community and featured activities and values promoting student commitment to learning. Religion reinforced the school's "hidden curriculum" of good conduct and…
Cache-Cache Comparison for Supporting Meaningful Learning
ERIC Educational Resources Information Center
Wang, Jingyun; Fujino, Seiji
2015-01-01
The paper presents a meaningful discovery learning environment called "cache-cache comparison" for a personalized learning support system. The processing of seeking hidden relations or concepts in "cache-cache comparison" is intended to encourage learners to actively locate new knowledge in their knowledge framework and check…
Primal Leadership: The Hidden Driver of Great Performance.
ERIC Educational Resources Information Center
Goleman, Daniel; Boyatzis, Richard; McKee, Annie
2001-01-01
An extension of emotional intelligence research demonstrated that leaders' moods play a key role in organizational climate and effectiveness. A process for developing emotionally intelligent behaviors emerged: developing self-awareness, collecting 360 feedback, action planning, learning new habits, and cultivating a community of supporters. (SK)
Hidden Markov Models as a tool to measure pilot attention switching during simulated ILS approaches
DOT National Transportation Integrated Search
2003-04-14
The pilot's instrument scanning data contain information about not only the pilot's eye movements, but also the pilot's cognitive process during flight. However, it is often difficult to interpret the scanning data at the cognitive level because...
Ontology-Based Empirical Knowledge Verification for Professional Virtual Community
ERIC Educational Resources Information Center
Chen, Yuh-Jen
2011-01-01
A professional virtual community provides an interactive platform for enterprise experts to create and share their empirical knowledge cooperatively, and the platform contains a tremendous amount of hidden empirical knowledge that knowledge experts have preserved in the discussion process. Therefore, enterprise knowledge management highly…
Implications of hidden gauged U(1) model for B anomalies
NASA Astrophysics Data System (ADS)
Fuyuto, Kaori; Li, Hao-Lin; Yu, Jiang-Hao
2018-06-01
We propose a hidden gauged U(1)_H Z' model to explain deviations from the standard model (SM) values in lepton flavor universality known as the RK and RD anomalies. The Z' only interacts with the SM fermions via their mixing with vectorlike doublet fermions after the U(1)_H symmetry breaking, which leads to the b → s μ μ transition through the Z' at tree level. Moreover, introducing an additional mediator, an inert Higgs doublet, yields the b → c τ ν process via a charged scalar contribution at tree level. Using the flavio package, we scrutinize the adequate sizes of the Wilson coefficients relevant to these two processes by taking various flavor observables into account. It is found that significant mixing between the vectorlike and the second generation leptons is needed for the RK anomaly. A possible explanation of the RD anomaly can also be simultaneously addressed in a motivated situation, where a single scalar operator plays a dominant role, by the successful model parameters for the RK anomaly.
Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.
Selvaraj, Lokesh; Ganesan, Balakrishnan
2014-01-01
Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic algorithm based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks for the genetic algorithm process, and IP-HMM helps in doing the recognition. At this point, novelty is introduced through one of the genetic operations, crossover. The proposed speech recognition technique offers 97.14% accuracy.
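The vector-quantization stage of such a pipeline can be illustrated with a plain k-means (LBG-style) codebook trainer; this is a standard alternative to the GA-based codebook generation the paper proposes, and all names and data here are illustrative:

```python
import numpy as np

def train_codebook(features, k, iters=20, rng=None):
    """k-means (LBG-style) codebook for vector quantization: alternate
    nearest-codevector assignment and centroid updates."""
    rng = np.random.default_rng() if rng is None else rng
    codebook = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)          # nearest code vector per sample
        for j in range(k):
            if np.any(labels == j):            # keep empty cells unchanged
                codebook[j] = features[labels == j].mean(axis=0)
    return codebook, labels

# Two well-separated clusters: the trained codebook recovers their centers.
feats = np.vstack([np.zeros((50, 2)), np.full((50, 2), 10.0)])
codebook, labels = train_codebook(feats, k=2, rng=np.random.default_rng(0))
```

In a speech front end, `features` would be the MFCC (or other) vectors extracted per frame, and `labels` the discrete symbols fed to the HMM.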
A composite model for the 750 GeV diphoton excess
Harigaya, Keisuke; Nomura, Yasunori
2016-03-14
We study a simple model in which the recently reported 750 GeV diphoton excess arises from a composite pseudo Nambu-Goldstone boson — hidden pion — produced by gluon fusion and decaying into two photons. The model only introduces an extra hidden gauge group at the TeV scale with a vectorlike quark in the bifundamental representation of the hidden and standard model gauge groups. We calculate the masses of all the hidden pions and analyze their experimental signatures and constraints. We find that two colored hidden pions must be near the current experimental limits, and hence are probed in the near future. We study physics of would-be stable particles — the composite states that do not decay purely by the hidden and standard model gauge dynamics — in detail, including constraints from cosmology. We discuss possible theoretical structures above the TeV scale, e.g. conformal dynamics and supersymmetry, and their phenomenological implications. We also discuss an extension of the minimal model in which there is an extra hidden quark that is singlet under the standard model and has a mass smaller than the hidden dynamical scale. This provides two standard model singlet hidden pions that can both be viewed as diphoton/diboson resonances produced by gluon fusion. We discuss several scenarios in which these (and other) resonances can be used to explain various excesses seen in the LHC data.
I Learned More than I Taught: The Hidden Dimension of Learning in Intercultural Knowledge Transfer
ERIC Educational Resources Information Center
Chen, Fang; Bapuji, Hari; Dyck, Bruno; Wang, Xiaoyun
2012-01-01
Purpose: Although knowledge transfer is generally conceived as a two-way process in which knowledge is transferred to and from the knowledge source, research has tended to focus on the first part of the process and neglect the second part. This study aims to examine the feedback loop and how knowledge is transferred from the knowledge receiver to…
Radio for hidden-photon dark matter detection
Chaudhuri, Saptarshi; Graham, Peter W.; Irwin, Kent; ...
2015-10-08
We propose a resonant electromagnetic detector to search for hidden-photon dark matter over an extensive range of masses. Hidden-photon dark matter can be described as a weakly coupled “hidden electric field,” oscillating at a frequency fixed by the mass, and able to penetrate any shielding. At low frequencies (compared to the inverse size of the shielding), we find that the observable effect of the hidden photon inside any shielding is a real, oscillating magnetic field. We outline experimental setups designed to search for hidden-photon dark matter, using a tunable, resonant LC circuit designed to couple to this magnetic field. Our “straw man” setups take into consideration resonator design, readout architecture and noise estimates. At high frequencies, there is an upper limit to the useful size of a single resonator set by 1/ν. However, many resonators may be multiplexed within a hidden-photon coherence length to increase the sensitivity in this regime. Hidden-photon dark matter has an enormous range of possible frequencies, but current experiments search only over a few narrow pieces of that range. As a result, we find the potential sensitivity of our proposal is many orders of magnitude beyond current limits over an extensive range of frequencies, from 100 Hz up to 700 GHz and potentially higher.
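The tuning relation behind such a resonant LC search, f = 1/(2π√(LC)), is simple to sketch (ideal lossless components assumed; the function names are illustrative):

```python
import math

def lc_resonant_frequency(L, C):
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of an ideal LC circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def capacitance_for_frequency(L, f):
    """Capacitance that tunes an inductor L to resonate at frequency f."""
    return 1.0 / (L * (2.0 * math.pi * f) ** 2)

# A 1 mH inductor with a 1 nF capacitor resonates near 159 kHz.
f = lc_resonant_frequency(1e-3, 1e-9)
```

Sweeping the capacitance (or inductance) steps the resonance across the candidate hidden-photon mass range.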
NASA Technical Reports Server (NTRS)
Hague, D. S.; Vanderburg, J. D.
1977-01-01
A vehicle geometric definition based upon quadrilateral surface elements is used to produce realistic pictures of an aerospace vehicle. The PCSYS programs can be used to visually check geometric data input, monitor geometric perturbations, and visualize the complex spatial inter-relationships between the internal and external vehicle components. PCSYS has two major component programs. The first program, IMAGE, draws a complex aerospace vehicle pictorial representation based on either an approximate but rapid hidden line algorithm or without any hidden line algorithm. The second program, HIDDEN, draws a vehicle representation using an accurate but time consuming hidden line algorithm.
Liu, Chuchu; Lu, Xin
2018-01-05
Traditional survey methods are limited in the study of hidden populations due to hard-to-access properties, including lack of a sampling frame, sensitivity issues, reporting error, and small sample size. The rapid growth of online communities, whose members interact with one another via the Internet, has generated large amounts of data, offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information. In this study, we try to understand the multidimensional characteristics of a hidden population by analyzing the massive data generated in an online community. By elaborately designing crawlers, we retrieved a complete dataset from the "HIV bar," the largest bar related to HIV on the Baidu Tieba platform, covering all records from January 2005 to August 2016. Through natural language processing and social network analysis, we explored the psychology, behavior and demand of the online HIV population and examined the network community structure. In HIV communities, the average topic similarity among members is positively correlated with network efficiency (r = 0.70, p < 0.001), indicating that the closer the social distance between members of the community, the more similar their topics. The proportion of negative users in each community is around 60%, weakly correlated with community size (r = 0.25, p = 0.002). It is found that users suspecting initial HIV infection or first in contact with high-risk behaviors tend to seek help and advice on the social networking platform, rather than immediately going to a hospital for blood tests. Online communities have generated copious amounts of data offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information. It is recommended that support through online services for HIV/AIDS consultation and diagnosis be improved to avoid privacy concerns and social discrimination in China.
Phases of cannibal dark matter
NASA Astrophysics Data System (ADS)
Farina, Marco; Pappadopulo, Duccio; Ruderman, Joshua T.; Trevisan, Gabriele
2016-12-01
A hidden sector with a mass gap undergoes an epoch of cannibalism if number changing interactions are active when the temperature drops below the mass of the lightest hidden particle. During cannibalism, the hidden sector temperature decreases only logarithmically with the scale factor. We consider the possibility that dark matter resides in a hidden sector that underwent cannibalism, and has relic density set by the freeze-out of two-to-two annihilations. We identify three novel phases, depending on the behavior of the hidden sector when dark matter freezes out. During the cannibal phase, dark matter annihilations decouple while the hidden sector is cannibalizing. During the chemical phase, only two-to-two interactions are active and the total number of hidden particles is conserved. During the one way phase, the dark matter annihilation products decay out of equilibrium, suppressing the production of dark matter from inverse annihilations. We map out the distinct phenomenology of each phase, which includes a boosted dark matter annihilation rate, new relativistic degrees of freedom, warm dark matter, and observable distortions to the spectrum of the cosmic microwave background.
Out of Reach, Out of Mind? Infants' Comprehension of References to Hidden Inaccessible Objects.
Osina, Maria A; Saylor, Megan M; Ganea, Patricia A
2017-09-01
This study investigated the nature of infants' difficulty understanding references to hidden inaccessible objects. Twelve-month-old infants (N = 32) responded to the mention of objects by looking at, pointing at, or approaching them when the referents were visible or accessible, but not when they were hidden and inaccessible (Experiment I). Twelve-month-olds (N = 16) responded robustly when a container with the hidden referent was moved from a previously inaccessible position to an accessible position before the request, but failed to respond when the reverse occurred (Experiment II). This suggests that infants might be able to track the hidden object's dislocations and update its accessibility as it changes. Knowing the hidden object is currently inaccessible inhibits their responding. Older, 16-month-old (N = 17) infants' performance was not affected by object accessibility. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.
The hidden KPI registration accuracy.
Shorrosh, Paul
2011-09-01
Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually.
The Robust Beauty of Ordinary Information
ERIC Educational Resources Information Center
Katsikopoulos, Konstantinos V.; Schooler, Lael J.; Hertwig, Ralph
2010-01-01
Heuristics embodying limited information search and noncompensatory processing of information can yield robust performance relative to computationally more complex models. One criticism raised against heuristics is the argument that complexity is hidden in the calculation of the cue order used to make predictions. We discuss ways to order cues…
UTD at TREC 2014: Query Expansion for Clinical Decision Support
2014-11-01
Description: A 62-year-old man sees a neurologist for progressive memory loss and jerking movements of the lower extremities. Neurologic examination confirms... infiltration. Summary: 62-year-old man with progressive memory loss and involuntary leg movements. Brain MRI reveals cortical atrophy, and cortical... latent topics produced by the Latent Dirichlet Allocation (LDA) on the TREC-CDS corpus of scientific articles. The position of words "loss" and "memory"...
Nondestructive Testing and Target Identification
2016-12-21
Dirichlet obstacle coated by a thin layer of non-absorbing media, IMA J. Appl. Math., 80, 1063-1098 (2015). Abstract: We consider the transmission... F. Cakoni, I. De Teresa, H. Haddar and P. Monk, Nondestructive testing of the delaminated interface between two materials, SIAM J. Appl. Math., 76... then they form a discrete set. 22. F. Cakoni, D. Colton, S. Meng and P. Monk, Steklov eigenvalues in inverse scattering, SIAM J. Appl. Math., 76, 1737
Single-grid spectral collocation for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Canuto, Claudio; Maday, Yvon; Metivet, Brigitte
1988-01-01
The aim of the paper is to study a collocation spectral method to approximate the Navier-Stokes equations: only one grid is used, which is built from the nodes of a Gauss-Lobatto quadrature formula, either of Legendre or of Chebyshev type. The convergence is proven for the Stokes problem provided with inhomogeneous Dirichlet conditions, then thoroughly analyzed for the Navier-Stokes equations. The practical implementation algorithm is presented, together with numerical results.
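The Gauss-Lobatto grid of Legendre type mentioned above consists of the endpoints ±1 together with the roots of the derivative of the degree-N Legendre polynomial; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def gauss_lobatto_legendre_nodes(n):
    """The n+1 Gauss-Lobatto-Legendre nodes on [-1, 1]: the endpoints
    together with the roots of P_n', the derivative of the degree-n
    Legendre polynomial."""
    interior = np.polynomial.legendre.Legendre.basis(n).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

# Degree 4: the interior nodes are 0 and +/- sqrt(3/7).
nodes = gauss_lobatto_legendre_nodes(4)
```

A Chebyshev-type grid would instead use the closed-form nodes cos(kπ/n), k = 0, ..., n.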
The Smoothed Dirichlet Distribution: Understanding Cross-Entropy Ranking in Information Retrieval
2006-07-01
Unigram Language modeling is a successful probabilistic framework for Information Retrieval (IR) that uses... the Relevance model (RM), a state-of-the-art model for IR in the language modeling framework that uses the same cross-entropy as its ranking function... In addition, the SD based classifier provides more flexibility than RM in modeling documents owing to a consistent generative framework.
NASA Astrophysics Data System (ADS)
Birkel, C.; Paroli, R.; Spezia, L.; Tetzlaff, D.; Soulsby, C.
2012-12-01
In this paper we present a novel model framework using the class of Markov Switching Autoregressive Models (MSARMs) to examine catchments as complex stochastic systems that exhibit non-stationary, non-linear and non-Normal rainfall-runoff and solute dynamics. MSARMs are pairs of stochastic processes, one observed and one unobserved, or hidden. We model the unobserved process as a finite state Markov chain and assume that the observed process, given the hidden Markov chain, is conditionally autoregressive, which means that the current observation depends on its recent past (system memory). The model is fully embedded in a Bayesian analysis based on Markov Chain Monte Carlo (MCMC) algorithms for model selection and uncertainty assessment, whereby the autoregressive order and the dimension of the hidden Markov chain state-space are essentially self-selected. The hidden states of the Markov chain represent unobserved levels of variability in the observed process that may result from complex interactions of hydroclimatic variability on the one hand and catchment characteristics affecting water and solute storage on the other. To deal with non-stationarity, additional meteorological and hydrological time series along with a periodic component can be included in the MSARMs as covariates. This extension allows identification of potential underlying drivers of temporal rainfall-runoff and solute dynamics. We applied the MSAR model framework to streamflow and conservative tracer (deuterium and oxygen-18) time series from an intensively monitored 2.3 km2 experimental catchment in eastern Scotland. Statistical time series analysis, in the form of MSARMs, suggested that the streamflow and isotope tracer time series are not controlled by simple linear rules.
MSARMs showed that the dependence of current observations on past inputs observed by transport models often in form of the long-tailing of travel time and residence time distributions can be efficiently explained by non-stationarity either of the system input (climatic variability) and/or the complexity of catchment storage characteristics. The statistical model is also capable of reproducing short (event) and longer-term (inter-event) and wet and dry dynamical "hydrological states". These reflect the non-linear transport mechanisms of flow pathways induced by transient climatic and hydrological variables and modified by catchment characteristics. We conclude that MSARMs are a powerful tool to analyze the temporal dynamics of hydrological data, allowing for explicit integration of non-stationary, non-linear and non-Normal characteristics.
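The model class can be conveyed by a minimal two-regime Markov-switching AR(1) simulator; this is a toy sketch, not the authors' Bayesian MCMC implementation, and all parameter values are illustrative:

```python
import numpy as np

def simulate_msar(T, P, phi, sigma, rng):
    """Simulate a two-regime Markov-switching AR(1): a hidden Markov
    chain s_t (transition matrix P) picks the AR coefficient phi[s_t]
    and noise scale sigma[s_t] for y_t = phi[s_t]*y_{t-1} + sigma[s_t]*eps_t."""
    states = np.zeros(T, dtype=int)
    y = np.zeros(T)
    for t in range(1, T):
        states[t] = rng.choice(2, p=P[states[t - 1]])
        y[t] = phi[states[t]] * y[t - 1] + sigma[states[t]] * rng.standard_normal()
    return states, y

# Persistent regimes, e.g. a calm dry state (0) and a flashy wet state (1).
P = np.array([[0.95, 0.05], [0.10, 0.90]])
states, y = simulate_msar(500, P, phi=[0.3, 0.9], sigma=[0.2, 1.0],
                          rng=np.random.default_rng(0))
```

Inference then runs in the opposite direction: given only `y`, recover the regime path and the per-regime AR parameters.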
Modeling Driver Behavior near Intersections in Hidden Markov Model
Li, Juan; He, Qinglian; Zhou, Hang; Guan, Yunlin; Dai, Wei
2016-01-01
Intersections are one of the major locations where safety is a big concern to drivers. Inappropriate driver behaviors in response to frequent changes when approaching intersections often lead to intersection-related crashes or collisions. Thus to better understand driver behaviors at intersections, especially in the dilemma zone, a Hidden Markov Model (HMM) is utilized in this study. With the discrete data processing, the observed dynamic data of vehicles are used for the inference of the Hidden Markov Model. The Baum-Welch (B-W) estimation algorithm is applied to calculate the vehicle state transition probability matrix and the observation probability matrix. When combined with the Forward algorithm, the most likely state of the driver can be obtained. Thus the model can be used to measure the stability and risk of driver behavior. It is found that drivers’ behaviors in the dilemma zone are of lower stability and higher risk compared with those in other regions around intersections. In addition to the B-W estimation algorithm, the Viterbi Algorithm is utilized to predict the potential dangers of vehicles. The results can be applied to driving assistance systems to warn drivers to avoid possible accidents. PMID:28009838
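The decoding step the abstract assigns to the Viterbi algorithm can be sketched for a discrete-emission HMM (toy parameters, not the study's fitted model):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path of a discrete-emission HMM.
    pi: initial state probabilities, A: transition matrix,
    B: emission matrix (states x symbols). Works in log space."""
    T, N = len(obs), len(pi)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]   # best log-prob ending in each state
    back = np.zeros((T, N), dtype=int)     # backpointers
    for t in range(1, T):
        scores = delta[:, None] + logA     # scores[i, j]: come from i, land in j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):          # trace the backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: persistent two-state chain; symbol 0 favors state 0, symbol 1 state 1.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
path = viterbi([0, 0, 0, 1, 1, 1], pi, A, B)  # -> [0, 0, 0, 1, 1, 1]
```

In the driving application, the observations would be discretized vehicle dynamics and the decoded states the driver-behavior regimes.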
Efficient Text Encryption and Hiding with Double-Random Phase-Encoding
Sang, Jun; Ling, Shenggui; Alam, Mohammad S.
2012-01-01
In this paper, a double-random phase-encoding technique-based text encryption and hiding method is proposed. First, the secret text is transformed into a 2-dimensional array and the higher bits of the elements in the transformed array are used to store the bit stream of the secret text, while the lower bits are filled with specific values. Then, the transformed array is encoded with double-random phase-encoding technique. Finally, the encoded array is superimposed on an expanded host image to obtain the image embedded with hidden data. The performance of the proposed technique, including the hiding capacity, the recovery accuracy of the secret text, and the quality of the image embedded with hidden data, is tested via analytical modeling and test data stream. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data by properly selecting the method of transforming the secret text into an array and the superimposition coefficient. By using optical information processing techniques, the proposed method has been found to significantly improve the security of text information transmission, while ensuring hiding capacity at a prescribed level. PMID:23202003
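The bit-packing step described above, secret bits in the higher bits and fixed values in the lower bits, might look like the following sketch; the double-random phase-encoding of the resulting array is not shown, and the nibble-per-element layout is an illustrative choice:

```python
import numpy as np

def pack_text(secret, shape, fill=0b0101):
    """Store the bit stream of `secret` in the high nibble of each uint8
    element of a 2-D array; the low nibble holds the fixed pattern `fill`.
    The array must provide at least 4 bits per element for the text."""
    bits = np.unpackbits(np.frombuffer(secret.encode("ascii"), dtype=np.uint8))
    n = shape[0] * shape[1] * 4
    bits = np.pad(bits, (0, n - len(bits)))             # zero-pad to fill the array
    nibbles = np.packbits(bits.reshape(-1, 4), axis=1)  # each 4-bit row -> high nibble
    return (nibbles.ravel() | fill).astype(np.uint8).reshape(shape)

def unpack_text(arr, length):
    """Recover `length` ASCII characters from the high nibbles of `arr`."""
    high = (arr.ravel() >> 4).astype(np.uint8) << 4
    bits = np.unpackbits(high[:, None], axis=1)[:, :4].ravel()
    return np.packbits(bits[: length * 8]).tobytes().decode("ascii")

arr = pack_text("HMM", (4, 2))
recovered = unpack_text(arr, 3)
```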
Ferentinos, Konstantinos P
2005-09-01
Two neural network (NN) applications in the field of biological engineering are developed, designed and parameterized by an evolutionary method based on genetic algorithms. The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect or 'weak specification' representation was used for the encoding of NN topologies and training parameters into genes of the genetic algorithm (GA). Some a priori knowledge of the demands in network topology for specific application cases is required by this approach, so that the infinite search space of the problem is limited to some reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA also encoded the type of activation functions in both hidden and output nodes of the NN and the type of minimization algorithm that was used by the backpropagation algorithm for the training of the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach that is usually used for these tasks.
Cross-Domain Semi-Supervised Learning Using Feature Formulation.
Xingquan Zhu
2011-12-01
Semi-Supervised Learning (SSL) traditionally makes use of unlabeled samples by including them in the training set through an automated labeling process. Such a primitive Semi-Supervised Learning (pSSL) approach suffers from a number of disadvantages, including false labeling and an inability to utilize out-of-domain samples. In this paper, we propose a formative Semi-Supervised Learning (fSSL) framework which explores hidden features between labeled and unlabeled samples to achieve semi-supervised learning. fSSL assumes that both labeled and unlabeled samples are generated from some hidden concepts, with labeling information partially observable for some samples. The key to fSSL is to recover the hidden concepts and take them as new features to link labeled and unlabeled samples for semi-supervised learning. Because unlabeled samples are only used to generate new features, and are not explicitly included in the training set as in pSSL, fSSL overcomes the inherent disadvantages of traditional pSSL methods, especially for samples not within the same domain as the labeled instances. Experimental results and comparisons demonstrate that fSSL significantly outperforms pSSL-based methods for both within-domain and cross-domain semi-supervised learning.
Yang, Sejung; Lee, Byung-Uk
2015-01-01
In certain image acquisition processes, such as fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model, and adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models within a framework that neatly describes the spatial and interscale dependencies that are characteristic of the transform coefficients of natural images. In this paper, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm with cycle spinning and Wiener filtering for further improvement. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach. PMID:26352138
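The mixed Poisson-Gaussian noise model can be simulated directly. A minimal sketch (the peak photon count and read-noise level here are illustrative, not from the paper) showing why the noise is signal-dependent: the variance of the residual grows with the clean intensity.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_gaussian(image, peak=20.0, sigma=2.0):
    """Photon counts are Poisson with mean proportional to the clean
    intensity; additive Gaussian read noise is then superimposed."""
    counts = rng.poisson(image * peak)
    return (counts + rng.normal(0.0, sigma, image.shape)) / peak

clean = np.linspace(0.1, 1.0, 10_000).reshape(100, 100)
noisy = poisson_gaussian(clean)
residual = noisy - clean
# signal-dependent noise: the bright half of the image is noisier
dim_var = residual[:, :50].var()
bright_var = residual[:, 50:].var()
```

A denoiser that assumes signal-independent Gaussian noise would treat both halves identically, which is exactly the mismatch the paper's Poisson-Gaussian model addresses.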
Das, Raibatak; Cairo, Christopher W.; Coombs, Daniel
2009-01-01
The extraction of hidden information from complex trajectories is a continuing problem in single-particle and single-molecule experiments. Particle trajectories are the result of multiple phenomena, and new methods for revealing changes in molecular processes are needed. We have developed a practical technique that is capable of identifying multiple states of diffusion within experimental trajectories. We model single particle tracks for a membrane-associated protein interacting with a homogeneously distributed binding partner and show that, with certain simplifying assumptions, particle trajectories can be regarded as the outcome of a two-state hidden Markov model. Using simulated trajectories, we demonstrate that this model can be used to identify the key biophysical parameters for such a system, namely the diffusion coefficients of the underlying states, and the rates of transition between them. We use a stochastic optimization scheme to compute maximum likelihood estimates of these parameters. We have applied this analysis to single-particle trajectories of the integrin receptor lymphocyte function-associated antigen-1 (LFA-1) on live T cells. Our analysis reveals that the diffusion of LFA-1 is indeed approximately two-state, and is characterized by large changes in cytoskeletal interactions upon cellular activation. PMID:19893741
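The two-state idea can be sketched with a forward-algorithm likelihood over displacements. This is a minimal sketch with assumed diffusion coefficients and transition rates (not the paper's LFA-1 estimates): displacements are Gaussian with a state-dependent variance 2·D·dt, and the hidden state follows a Markov chain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-state model: slow and fast diffusion (values assumed)
D, dt = np.array([0.01, 0.5]), 0.1
stds = np.sqrt(2 * D * dt)                      # step std per state
A = np.array([[0.95, 0.05], [0.10, 0.90]])      # state transition matrix

# simulate a hidden-state trajectory and the observed 1D displacements
T = 500
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=A[states[t - 1]])
dx = rng.normal(0.0, stds[states])

def forward_loglik(dx, A, stds, p0=(0.5, 0.5)):
    """Forward algorithm: log p(dx) under the two-state HMM with
    zero-mean Gaussian emission densities for the displacements."""
    emis = np.exp(-dx[:, None] ** 2 / (2 * stds ** 2)) / (stds * np.sqrt(2 * np.pi))
    alpha = np.asarray(p0) * emis[0]
    ll = 0.0
    for t in range(1, len(dx)):
        c = alpha.sum()                          # scale to avoid underflow
        ll += np.log(c)
        alpha = (alpha / c) @ A * emis[t]
    return ll + np.log(alpha.sum())

ll_true = forward_loglik(dx, A, stds)
ll_swapped = forward_loglik(dx, A, stds[::-1])   # mislabeled diffusivities
```

Maximum likelihood estimation as in the paper would optimize this likelihood over D and the transition rates; here we only evaluate it, confirming that the generating parameters score higher than a mislabeled alternative.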
Learning and inference in a nonequilibrium Ising model with hidden nodes.
Dunn, Benjamin; Roudi, Yasser
2013-02-01
We study inference and reconstruction of couplings in a partially observed kinetic Ising model. With hidden spins, calculating the likelihood of a sequence of observed spin configurations requires performing a trace over the configurations of the hidden ones. This, as we show, can be represented as a path integral. Using this representation, we demonstrate that systematic approximate inference and learning rules can be derived using dynamical mean-field theory. Although naive mean-field theory leads to an unstable learning rule, taking into account Gaussian corrections allows learning the couplings involving hidden nodes. It also improves learning of the couplings between the observed nodes compared to when hidden nodes are ignored.
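For the fully observed case, the likelihood the paper starts from can be written down and evaluated directly; it is only when spins are hidden that the trace over their paths (the path integral) is needed. A minimal sketch with an assumed coupling scale and Glauber-type parallel updates:

```python
import numpy as np

rng = np.random.default_rng(6)

# Kinetic Ising model: spin i takes value +1 at time t+1 with
# probability sigmoid(2 h_i(t)), where h_i(t) = sum_j J_ij s_j(t).
# The coupling scale 1/sqrt(N) is an illustrative choice.
N, T = 10, 2000
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
s = np.empty((T, N))
s[0] = rng.choice([-1.0, 1.0], size=N)
for t in range(1, T):
    h = J @ s[t - 1]
    p_up = 1.0 / (1.0 + np.exp(-2.0 * h))
    s[t] = np.where(rng.random(N) < p_up, 1.0, -1.0)

def loglik(J, s):
    """Exact log-likelihood of a fully observed trajectory. With hidden
    spins this sum would become a trace over their configurations."""
    h = s[:-1] @ J.T                             # (T-1, N) local fields
    return float(np.sum(-np.log1p(np.exp(-2.0 * h * s[1:]))))

ll_true = loglik(J, s)
ll_zero = loglik(np.zeros((N, N)), s)            # no-coupling baseline
```

Learning rules (naive mean-field or with Gaussian corrections, as in the paper) amount to gradient ascent on this likelihood or its approximation.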
ERIC Educational Resources Information Center
Pihl, Ole
2015-01-01
How do architecture students experience the contradictions between the individual and the group at the Department of Architecture and Design of Aalborg University? The Problem-Based Learning model has been extensively applied to the department's degree programs in coherence with the Integrated Design Process, but is a group-based architecture and…
Addressing the hidden dimension in nursing education: promoting cultural competence.
Carter, Kimberly F; Xu, Yu
2007-01-01
The authors describe a cultural competence quality enhancement process to address the retention challenge of students who speak English as second language and international students as part of a school of nursing's continuous program quality improvement to achieve excellence. The process, strategies, outcomes, and evaluation of the training program are detailed within the given geographical, institutional, and curriculum context. Lessons and continuing challenges are also specified.
NASA Astrophysics Data System (ADS)
Rogotis, Savvas; Ioannidis, Dimosthenis; Tzovaras, Dimitrios; Likothanassis, Spiros
2015-04-01
The aim of this work is to present a novel approach for automatic recognition of suspicious activities in outdoor perimeter surveillance systems based on infrared video processing. Through the combination of size, speed and appearance-based features, such as Center-Symmetric Local Binary Patterns, short-term actions are identified and serve as input, along with user location, for modeling target activities using the theory of Hidden Conditional Random Fields. HCRFs are used to directly link a set of observations to the most appropriate activity label and thus to discriminate high-risk activities (e.g. trespassing) from zero-risk activities (e.g. loitering outside the perimeter). Experimental results demonstrate the effectiveness of our approach in identifying suspicious activities for video surveillance systems.
Interactive learning in 2×2 normal form games by neural network agents
NASA Astrophysics Data System (ADS)
Spiliopoulos, Leonidas
2012-11-01
This paper models the learning process of populations of randomly rematched tabula rasa neural network (NN) agents playing randomly generated 2×2 normal form games of all strategic classes. This approach has greater external validity than the existing models in the literature, each of which is usually applicable to narrow subsets of classes of games (often a single game) and/or to fixed matching protocols. The learning prowess of NNs with hidden layers was impressive as they learned to play unique pure strategy equilibria with near certainty, adhered to principles of dominance and iterated dominance, and exhibited a preference for risk-dominant equilibria. In contrast, perceptron NNs were found to perform significantly worse than hidden layer NN agents and human subjects in experimental studies.
The Hidden Meaning of Inner Speech.
ERIC Educational Resources Information Center
Pomper, Marlene M.
This paper is concerned with the inner speech process, its relationship to thought and behavior, and its theoretical and educational implications. The paper first defines inner speech as a bridge between thought and written or spoken language and traces its development. Second, it investigates competing theories surrounding the subject with an…
Smaller Places for Special People?
ERIC Educational Resources Information Center
Firlik, Russell
As school enrollments increase, schools will need to expand their facilities and playgrounds. School construction and expansion is a part of the "hidden curriculum" of schools and affects children's learning processes. When school expansion is combined with the move from half day to full day kindergarten and increasing the time children…
Collaborative Estimation in Distributed Sensor Networks
ERIC Educational Resources Information Center
Kar, Swarnendu
2013-01-01
Networks of smart ultra-portable devices are already indispensable in our lives, augmenting our senses and connecting our lives through real time processing and communication of sensory (e.g., audio, video, location) inputs. Though usually hidden from the user's sight, the engineering of these devices involves fierce tradeoffs between energy…
The Road to Oxbridge: Schools and Elite University Choices
ERIC Educational Resources Information Center
Donnelly, Michael
2014-01-01
This paper explores hidden messages sent out by schools about Oxbridge, using Basil Bernstein's concepts of classification and framing. Research in three case-study schools captured these messages from their everyday practices and processes, including their events and activities, sorting mechanisms, interactions and resources. Whilst all of the…
Life imitating art: depictions of the hidden curriculum in medical television programs.
Stanek, Agatha; Clarkin, Chantalle; Bould, M Dylan; Writer, Hilary; Doja, Asif
2015-09-26
The hidden curriculum represents influences occurring within the culture of medicine that indirectly alter medical professionals' interactions, beliefs and clinical practices throughout their training. One approach to increase medical student awareness of the hidden curriculum is to provide them with readily available examples of how it is enacted in medicine; as such, the purpose of this study was to examine depictions of the hidden curriculum in popular medical television programs. One full season each of ER, Grey's Anatomy and Scrubs was selected for review. A summative content analysis was performed to ascertain the presence of depictions of the hidden curriculum, as well as to record the type, frequency and quality of examples. A second reviewer also viewed a random selection of episodes from each series to establish coding reliability. The most prevalent themes across all television programs were: the hierarchical nature of medicine; challenges during transitional stages in medicine; the importance of role modeling; patient dehumanization; faking or overstating one's capabilities; unprofessionalism; the loss of idealism; and difficulties with work-life balance. The hidden curriculum is frequently depicted in popular medical television shows. These examples of the hidden curriculum could serve as a valuable teaching resource in undergraduate medical programs.
Uncovering hidden nodes in complex networks in the presence of noise
Su, Ri-Qi; Lai, Ying-Cheng; Wang, Xiao; Do, Younghae
2014-01-01
Ascertaining the existence of hidden objects in a complex system, objects that cannot be observed from the external world, not only is curiosity-driven but also has significant practical applications. Generally, uncovering a hidden node in a complex network requires successful identification of its neighboring nodes, but a challenge is to differentiate its effects from those of noise. We develop a completely data-driven, compressive-sensing based method to address this issue by utilizing complex weighted networks with continuous-time oscillatory or discrete-time evolutionary-game dynamics. For any node, compressive sensing enables accurate reconstruction of the dynamical equations and coupling functions, provided that time series from this node and all its neighbors are available. For a neighboring node of the hidden node, this condition cannot be met, resulting in abnormally large prediction errors that, counterintuitively, can be used to infer the existence of the hidden node. Based on the principle of differential signal, we demonstrate that, when strong noise is present, insofar as at least two neighboring nodes of the hidden node are subject to weak background noise only, unequivocal identification of the hidden node can be achieved. PMID:24487720
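The compressive-sensing step can be illustrated on a toy regression: a node's dynamics depend sparsely on a few neighbors, and an l1-regularized fit recovers them from fewer measurements than candidates. A minimal sketch (the linear model, sparsity pattern and ISTA solver are illustrative stand-ins for the paper's reconstruction of coupling functions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setting: one node's dynamics depend sparsely on N
# candidate neighbors; M < N noisy linear measurements are available.
N, M = 50, 30
true_w = np.zeros(N)
true_w[[3, 17, 41]] = [1.5, -2.0, 0.8]     # the actual neighbors
X = rng.normal(size=(M, N))                # regressors from time series
y = X @ true_w + 0.01 * rng.normal(size=M)

def ista(X, y, lam=0.1, n_iter=5000):
    """Iterative soft-thresholding (ISTA) for the l1-regularized least
    squares problem underlying compressive sensing."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = w - X.T @ (X @ w - y) / L      # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return w

w = ista(X, y)
top3 = set(int(i) for i in np.argsort(np.abs(w))[-3:])
```

In the paper's setting, the hidden node's presence is then inferred indirectly: for its neighbors this reconstruction fails with abnormally large prediction errors, which is the differential signal used for detection.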
2012-01-01
Background There has been little study of the role of the essay question in selection for medical school. The purpose of this study was to obtain a better understanding of how applicants approached the essay questions used in selection at our medical school in 2007. Methods The authors conducted a qualitative analysis of 210 essays written as part of the medical school admissions process, and developed a conceptual framework to describe the relationships, ideas and concepts observed in the data. Results Findings of this analysis were confirmed in interviews with applicants and assessors. Analysis revealed a tension between "genuine" and "expected" responses that we believe applicants experience when choosing how to answer questions in the admissions process. A theory named "What do they want me to say?" was developed to describe the ways in which applicants modulate their responses to conform to their expectations of the selection process; the elements of this theory were confirmed in interviews with applicants and assessors. Conclusions This work suggests the existence of a "hidden curriculum of admissions" and demonstrates that the process of selection has a strong influence on applicant response. This paper suggests ways that selection might be modified to address this effect. Studies such as this can help us to appreciate the unintended consequences of admissions processes and can identify ways to make the selection process more consistent, transparent and fair. PMID:22448658
A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2010-08-01
A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007; and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained for any n for d=1,2,4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q>0. Given the total walk length being equal to 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q=1. Simple analytical expressions are obtained for any d≥2 and n≥2 for the endpoint distributions of two families of walks whose q are integers or half-integers depending solely on d. These endpoint distributions have a simple geometrical interpretation. Expressed for a two-step planar walk with q=1, it means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection onto the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks, with a uniform distribution of the endpoint in the inside of a ball, are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ^3, two of two steps and two of three steps, and one walk of two steps in ℝ^4.
Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law, are finally discussed. Examples of unconstrained random walks, whose step lengths are gamma distributed, are more particularly considered.
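A Pearson-Dirichlet walk is easy to simulate, and the stated uniformity result can be checked numerically. The sketch below simulates planar walks with uniform orientations and Dirichlet(q, ..., q) step lengths summing to 1; for the three-step q=1 walk, uniformity in the unit disc implies E[r²] = 1/2, which the sample mean reproduces.

```python
import numpy as np

rng = np.random.default_rng(3)

def pearson_dirichlet_endpoints(n, q, size):
    """Endpoints of planar Pearson-Dirichlet walks: n steps with uniform
    orientations, lengths jointly Dirichlet(q, ..., q) (total length 1)."""
    lengths = rng.dirichlet([q] * n, size=size)                # (size, n)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(size, n))
    steps = lengths[..., None] * np.stack([np.cos(theta), np.sin(theta)], axis=-1)
    return steps.sum(axis=1)                                   # (size, 2)

# three-step planar walk with q=1: endpoint is uniform in the unit disc,
# so the endpoint radius r satisfies E[r^2] = 1/2
end = pearson_dirichlet_endpoints(n=3, q=1, size=20_000)
r = np.hypot(end[:, 0], end[:, 1])
mean_r2 = float((r ** 2).mean())
```

Since the step lengths sum to exactly 1, every endpoint lies inside the closed unit disc by the triangle inequality, which the simulation also confirms.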
Kim, Yee Suk; Lee, Sungin; Zong, Nansu; Kahng, Jimin
2017-01-01
The present study aimed to investigate differences in prognosis based on human papillomavirus (HPV) infection, persistent infection and genotype variations for patients exhibiting atypical squamous cells of undetermined significance (ASCUS) in their initial Papanicolaou (PAP) test results. A latent Dirichlet allocation (LDA)-based tool was developed that may offer a facilitated means of communication to be employed during patient-doctor consultations. The present study assessed 491 patients (139 HPV-positive and 352 HPV-negative cases) with a PAP test result of ASCUS with a follow-up period ≥2 years. Patients underwent PAP and HPV DNA chip tests between January 2006 and January 2009. The HPV-positive subjects were followed up with at least 2 instances of PAP and HPV DNA chip tests. The most common genotypes observed were HPV-16 (25.9%, 36/139), HPV-52 (14.4%, 20/139), HPV-58 (13.7%, 19/139), HPV-56 (11.5%, 16/139), HPV-51 (9.4%, 13/139) and HPV-18 (8.6%, 12/139). A total of 33.3% (12/36) patients positive for HPV-16 had cervical intraepithelial neoplasia (CIN)2 or a worse result, which was significantly higher than the prevalence of CIN2 of 1.8% (8/455) in patients negative for HPV-16 (P<0.001), while no significant association was identified for other genotypes in terms of genotype and clinical progress. There was a significant association between clearance and good prognosis (P<0.001). Persistent infection was higher in patients aged ≥51 years (38.7%) than in those aged ≤50 years (20.4%; P=0.036). Progression from persistent infection to CIN2 or worse (19/34, 55.9%) was higher than clearance (0/105, 0.0%; P<0.001). In the LDA analysis, symmetric Dirichlet priors α=0.1 and β=0.01 with k=5 or 10 clusters provided the most meaningful groupings. Statistical and LDA analyses produced consistent results regarding the association between persistent infection of HPV-16, old age and long infection period with a clinical progression of CIN2 or worse.
Therefore, LDA results may be presented as explanatory evidence during time-constrained patient-doctor consultations in order to deliver information regarding the patient's status. PMID:28587376
The Politics of the Hidden Curriculum
ERIC Educational Resources Information Center
Giroux, Henry A.
1977-01-01
Schools teach much more than the traditional curriculum. They also teach a "hidden curriculum"--those unstated norms, values, and beliefs promoting hierarchic and authoritarian social relations that are transmitted to students through the underlying educational structure. Discusses the effects of the "hidden curriculum" on the…
Birefringence and hidden photons
NASA Astrophysics Data System (ADS)
Arza, Ariel; Gamboa, J.
2018-05-01
We study a model where photons interact with hidden photons and millicharged particles through a kinetic mixing term. Particularly, we focus on vacuum birefringence effects and we find a bound for the millicharged parameter assuming that hidden photons are a piece of the local dark matter density.
A primer on the cost of quality for improvement of laboratory and pathology specimen processes.
Carlson, Richard O; Amirahmadi, Fazlollaah; Hernandez, James S
2012-09-01
In today's environment, many laboratories and pathology practices are challenged to maintain or increase their quality while simultaneously lowering their overall costs. The cost of improving specimen processes is related to quality, and we demonstrate that actual costs can be reduced by designing "quality at the source" into the processes. Various costs are hidden along the total testing process, and we suggest ways to identify opportunities to reduce cost by improving quality in laboratories and pathology practices through the use of Lean, Six Sigma, and industrial engineering.
Implications of the measured angular anisotropy at the hidden order transition of URu2Si2
NASA Astrophysics Data System (ADS)
Chandra, P.; Coleman, P.; Flint, R.; Trinh, J.; Ramirez, A. P.
2018-05-01
The heavy fermion compound URu2Si2 continues to attract great interest due to the long-unidentified nature of the hidden order that develops below 17.5 K. Here we discuss the implications of an angular survey of the linear and nonlinear susceptibility of URu2Si2 in the vicinity of the hidden order transition [1]. While the anisotropic nature of spin fluctuations and low-temperature quasiparticles was previously established, our recent results suggest that the order parameter itself has intrinsic Ising anisotropy, and that moreover this anisotropy extends far above the hidden order transition. Consistency checks and subsequent questions for future experimental and theoretical studies of hidden order are discussed.
Uncovering the wisdom hidden between the lines: the Collaborative Reflexive Deliberative Approach
Crabtree, Benjamin F; Miller, William L; Gunn, Jane M; Hogg, William E; Scott, Cathie M; Levesque, Jean-Frederic; Harris, Mark F; Chase, Sabrina M; Advocat, Jenny R; Halma, Lisa M; Russell, Grant M
2018-01-01
Abstract Background Meta-analysis and meta-synthesis have been developed to synthesize results across published studies; however, they are still largely grounded in what is already published, missing the tacit 'between the lines' knowledge generated during many research projects that is not intrinsic to the main objectives of studies. Objective To develop a novel approach to expand and deepen meta-syntheses using researchers' experience, tacit knowledge and relevant unpublished materials. Methods We established new collaborations among primary health care researchers from different contexts based on common interests in reforming primary care service delivery and a diversity of perspectives. Over 2 years, the team met face-to-face and via tele- and video-conferences to employ the Collaborative Reflexive Deliberative Approach (CRDA) to discuss and reflect on published and unpublished results from participants' studies to identify new patterns and insights. Results CRDA focuses on uncovering critical insights and interpretations hidden within multiple research contexts. For the process to work, careful attention must be paid to ensure sufficient diversity among participants while also having people who are able to collaborate effectively. Ensuring there are enough studies for contextual variation also matters. It is necessary to balance rigorous facilitation techniques with the creation of safe space for diverse contributions. Conclusions The CRDA requires large commitments of investigator time, the expense of convening facilitated retreats, considerable coordination, and strong leadership. The process creates an environment where interactions among diverse participants can illuminate hidden information within the contexts of studies, effectively enhancing theory development and generating new research questions and strategies. PMID:29069335
Global Binary Optimization on Graphs for Classification of High Dimensional Data
2014-09-01
Buades et al. in [10] introduce a new non-local means algorithm for image denoising and compare it to some of the best methods. In [28], Grady describes a random walk algorithm for image segmentation using the solution to a Dirichlet problem. Elmoataz et al. present generalizations of the graph Laplacian [19] for image denoising and manifold smoothing. Couprie et al. in [16] propose a parameterized graph-based energy function that unifies
1988-09-01
Institute for Physical Science and Technology, University of Maryland, College Park, MD 20742; B. Guo, Engineering Mechanics Research Corporation, Troy...OF THE FINITE ELEMENT METHOD by Ivo Babuska (Institute for Physical Science and Technology, University of Maryland, College Park, MD 20742) and B. Guo. Research partially supported by the National Science Foundation under Grant DMS-85-16191 during the stay at the Institute for Physical Science and
Lifshits Tails for Randomly Twisted Quantum Waveguides
NASA Astrophysics Data System (ADS)
Kirsch, Werner; Krejčiřík, David; Raikov, Georgi
2018-03-01
We consider the Dirichlet Laplacian H_γ on a 3D twisted waveguide with random Anderson-type twisting γ. We introduce the integrated density of states N_γ for the operator H_γ, and investigate the Lifshits tails of N_γ, i.e. the asymptotic behavior of N_γ(E) as E ↓ inf supp dN_γ. In particular, we study the dependence of the Lifshits exponent on the decay rate of the single-site twisting at infinity.
Evaluation of the path integral for flow through random porous media
NASA Astrophysics Data System (ADS)
Westbroek, Marise J. E.; Coche, Gil-Arnaud; King, Peter R.; Vvedensky, Dimitri D.
2018-04-01
We present a path integral formulation of Darcy's equation in one dimension with random permeability described by a correlated multivariate lognormal distribution. This path integral is evaluated with the Markov chain Monte Carlo method to obtain pressure distributions, which are shown to agree with the solutions of the corresponding stochastic differential equation for Dirichlet and Neumann boundary conditions. The extension of our approach to flow through random media in two and three dimensions is discussed.
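In one dimension the Darcy problem has a closed-form solution for a given permeability field, which is what any sampler's output can be checked against. A minimal sketch (grid size, correlation length and boundary values are illustrative): with no sources the flux q = -K p' is constant, so the pressure follows from the cumulative flow resistance.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1D Darcy flow d/dx( K(x) dp/dx ) = 0, Dirichlet BCs p(0)=1, p(1)=0.
# Constant flux q implies, on a grid of n cells,
#   p(x_i) = 1 - q * sum_{j<=i} dx / K_j,   q = 1 / sum_j dx / K_j.
n = 200
dx = 1.0 / n
# correlated lognormal permeability from a smoothed Gaussian field
g = rng.normal(size=n)
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
g = np.convolve(g, kernel / kernel.sum(), mode="same")
K = np.exp(g)

resistance = np.cumsum(dx / K)
q = 1.0 / resistance[-1]
p = 1.0 - q * resistance        # pressure at the right edge of each cell
```

The path-integral MCMC approach of the paper samples many such permeability realizations and accumulates the resulting pressure distributions; each individual draw reduces to exactly this deterministic solve.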
Using phrases and document metadata to improve topic modeling of clinical reports.
Speier, William; Ong, Michael K; Arnold, Corey W
2016-06-01
Probabilistic topic models provide an unsupervised method for analyzing unstructured text, and have the potential to be integrated into clinical automatic summarization systems. Clinical documents are accompanied by metadata from a patient's medical history and frequently contain multiword concepts that can be valuable for accurately interpreting the included text. While existing methods have attempted to address these problems individually, we present a unified model for free-text clinical documents that integrates contextual patient- and document-level data and discovers multiword concepts. In the proposed model, phrases are represented by chained n-grams, and a Dirichlet hyperparameter is weighted by both document-level and patient-level context. This method and three other latent Dirichlet allocation models were fit to a large collection of clinical reports. Examples of resulting topics demonstrate the results of the new model, and the quality of the representations is evaluated using empirical log likelihood. The proposed model was able to create informative prior probabilities based on patient and document information, and captured phrases that represented various clinical concepts. The representation using the proposed model had a significantly higher empirical log likelihood than the compared methods. Integrating document metadata and capturing phrases in clinical text greatly improves the topic representation of clinical documents. The resulting clinically informative topics may effectively serve as the basis for an automatic summarization system for clinical reports. Copyright © 2016 Elsevier Inc. All rights reserved.
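The vanilla LDA baseline that models like this one extend can be sketched with a collapsed Gibbs sampler. This is a toy numpy sketch (symmetric priors, a synthetic corpus, no phrases or metadata weighting, which are the paper's contributions):

```python
import numpy as np

rng = np.random.default_rng(5)

def lda_gibbs(docs, V, K, alpha=0.1, beta=0.01, n_iter=100):
    """Collapsed Gibbs sampler for vanilla LDA with symmetric Dirichlet
    priors: alpha on document-topic, beta on topic-word distributions."""
    D = len(docs)
    z = [rng.integers(K, size=len(d)) for d in docs]     # topic assignments
    ndk = np.zeros((D, K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional: (n_dk + a)(n_kw + b)/(n_k + V b)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = int(rng.choice(K, p=p / p.sum()))
                z[d][i] = k; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw

# toy corpus: two groups of documents over disjoint word ranges
docs = [rng.integers(0, 5, size=30) for _ in range(8)] + \
       [rng.integers(5, 10, size=30) for _ in range(8)]
ndk, nkw = lda_gibbs(docs, V=10, K=2)
n_tokens = sum(len(d) for d in docs)
```

The paper's model modifies the document-topic prior alpha using patient- and document-level context and replaces single tokens with chained n-grams; the count bookkeeping above stays structurally the same.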
Nonlocal Reformulations of Water and Internal Waves and Asymptotic Reductions
NASA Astrophysics Data System (ADS)
Ablowitz, Mark J.
2009-09-01
Nonlocal reformulations of the classical equations of water waves and two ideal fluids separated by a free interface, bounded above by either a rigid lid or a free surface, are obtained. The kinematic equations may be written in terms of integral equations with a free parameter. By expressing the pressure, or Bernoulli, equation in terms of the surface/interface variables, a closed system is obtained. An advantage of this formulation, referred to as the nonlocal spectral (NSP) formulation, is that the vertical component is eliminated, thus reducing the dimensionality and fixing the domain in which the equations are posed. The NSP equations and the Dirichlet-Neumann operators associated with the water wave or two-fluid equations can be related to each other and the Dirichlet-Neumann series can be obtained from the NSP equations. Important asymptotic reductions obtained from the two-fluid nonlocal system include the generalizations of the Benney-Luke and Kadomtsev-Petviashvili (KP) equations, referred to as intermediate-long wave (ILW) generalizations. These 2+1 dimensional equations possess lump type solutions. In the water wave problem high-order asymptotic series are obtained for two and three dimensional gravity-capillary solitary waves. In two dimensions, the first term in the asymptotic series is the well-known hyperbolic secant squared solution of the KdV equation; in three dimensions, the first term is the rational lump solution of the KP equation.
Bell's theorem and the problem of decidability between the views of Einstein and Bohr.
Hess, K; Philipp, W
2001-12-04
Einstein, Podolsky, and Rosen (EPR) have designed a gedanken experiment that suggested a theory that was more complete than quantum mechanics. The EPR design was later realized in various forms, with experimental results close to the quantum mechanical prediction. The experimental results by themselves have no bearing on the EPR claim that quantum mechanics must be incomplete nor on the existence of hidden parameters. However, the well known inequalities of Bell are based on the assumption that local hidden parameters exist and, when combined with conflicting experimental results, do appear to prove that local hidden parameters cannot exist. This fact leaves only instantaneous actions at a distance (called "spooky" by Einstein) to explain the experiments. The Bell inequalities are based on a mathematical model of the EPR experiments. They have no experimental confirmation, because they contradict the results of all EPR experiments. In addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions; for instance, he assumes that the hidden parameters are governed by a single probability measure independent of the analyzer settings. We argue that the mathematical model of Bell excludes a large set of local hidden variables and a large variety of probability densities. Our set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does permit derivation of the quantum result and is consistent with all known experiments.
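The tension between Bell's bound and the quantum prediction is a short computation. The sketch below evaluates the CHSH combination for the singlet-state correlation E(a, b) = -cos(a - b) at the standard maximizing angles, and checks by enumeration that every deterministic local-hidden-variable assignment respects the bound |S| ≤ 2.

```python
import itertools
import numpy as np

# Quantum prediction for the singlet state: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# analyzer settings that maximize the CHSH combination
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S_quantum = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))

# Any deterministic local assignment of outcomes +-1 to the four
# settings obeys |A1 B1 - A1 B2 + A2 B1 + A2 B2| <= 2 (CHSH bound).
best_lhv = max(
    abs(A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2)
    for A1, A2, B1, B2 in itertools.product([-1, 1], repeat=4)
)
```

The quantum value is 2√2 ≈ 2.83, exceeding the classical maximum of 2; the paper's argument is that this derivation presumes a single probability measure over the hidden parameters, independent of the analyzer settings.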
Greedy feature selection for glycan chromatography data with the generalized Dirichlet distribution
2013-01-01
Background Glycoproteins are involved in a diverse range of biochemical and biological processes. Changes in protein glycosylation are believed to occur in many diseases, particularly during cancer initiation and progression. The identification of biomarkers for human disease states is becoming increasingly important, as early detection is key to improving survival and recovery rates. To this end, the serum glycome has been proposed as a potential source of biomarkers for different types of cancers. High-throughput hydrophilic interaction liquid chromatography (HILIC) technology for glycan analysis allows for the detailed quantification of the glycan content in human serum. However, the experimental data from this analysis is compositional by nature. Compositional data are subject to a constant-sum constraint, which restricts the sample space to a simplex. Statistical analysis of glycan chromatography datasets should account for their unusual mathematical properties. As the volume of glycan HILIC data being produced increases, there is a considerable need for a framework to support appropriate statistical analysis. Proposed here is a methodology for feature selection in compositional data. The principal objective is to provide a template for the analysis of glycan chromatography data that may be used to identify potential glycan biomarkers. Results A greedy search algorithm, based on the generalized Dirichlet distribution, is carried out over the feature space to search for the set of “grouping variables” that best discriminate between known group structures in the data, modelling the compositional variables using beta distributions. The algorithm is applied to two glycan chromatography datasets. Statistical classification methods are used to test the ability of the selected features to differentiate between known groups in the data. Two well-known methods are used for comparison: correlation-based feature selection (CFS) and recursive partitioning (rpart). 
CFS is a feature selection method, while recursive partitioning is a learning tree algorithm that has been used for feature selection in the past. Conclusions The proposed feature selection method performs well for both glycan chromatography datasets. It is computationally slower, but results in a lower misclassification rate and a higher sensitivity rate than both correlation-based feature selection and the classification tree method. PMID:23651459
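The greedy search described in the abstract is generic: only the scoring criterion is specific to the beta/generalized Dirichlet model. A minimal sketch of such a forward search, with a pluggable `score` function standing in for the authors' beta-model criterion (the function name and interface are assumptions for illustration, not the paper's code):

```python
def greedy_select(features, score, max_size=None):
    """Generic greedy forward search: repeatedly add the feature that
    most improves score(subset), stopping when no addition helps.
    `score` is any subset -> float criterion, e.g. a beta-model
    log-likelihood ratio between known groups."""
    selected, best = [], float("-inf")
    while features and (max_size is None or len(selected) < max_size):
        gains = [(score(selected + [f]), f) for f in features]
        top, f = max(gains)
        if top <= best:          # no candidate improves the criterion
            break
        best, selected = top, selected + [f]
        features = [g for g in features if g != f]
    return selected

# toy criterion: prefer subsets whose sum is close to 7
picked = greedy_select([3, 4, 5], lambda s: -abs(sum(s) - 7))
# -> [5, 3]
```

The same skeleton covers CFS-style searches as well; only the merit function changes.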
Flexible link functions in nonparametric binary regression with Gaussian process priors.
Li, Dan; Wang, Xia; Lin, Lizhen; Dey, Dipak K
2016-09-01
In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction of future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular, to model the latent structure in a binary regression model, allowing a nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. © 2015, The International Biometric Society.
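The skewness flexibility comes from the shape parameter of the generalized extreme value (GEV) distribution. A minimal sketch of a GEV cdf used as a link function (the parameterization below is a standard one and is assumed for illustration; the paper's exact formulation may differ):

```python
import math

def gev_cdf(x, xi):
    """CDF of the generalized extreme value distribution with
    location 0 and scale 1; xi = 0 is the Gumbel case exp(-exp(-x))."""
    if xi == 0.0:
        return math.exp(-math.exp(-x))
    t = 1.0 + xi * x
    if t <= 0.0:                       # outside the support
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

def gev_link_prob(eta, xi):
    """Success probability under a GEV link: p = 1 - F(-eta).
    Varying xi changes the skewness of the response curve, unlike
    the fixed-skewness probit/logit links."""
    return 1.0 - gev_cdf(-eta, xi)

p = gev_link_prob(0.0, xi=0.0)   # Gumbel case: 1 - exp(-1)
```

In the model itself, `eta` would be the latent Gaussian-process regression function evaluated at the covariates.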
Flexible Link Functions in Nonparametric Binary Regression with Gaussian Process Priors
Li, Dan; Lin, Lizhen; Dey, Dipak K.
2015-01-01
Summary In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction in the future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. PMID:26686333
Boundary Regularity for the Porous Medium Equation
NASA Astrophysics Data System (ADS)
Björn, Anders; Björn, Jana; Gianazza, Ugo; Siljander, Juhana
2018-05-01
We study the boundary regularity of solutions to the porous medium equation u_t = Δu^m in the degenerate range m > 1. In particular, we show that in cylinders the Dirichlet problem with positive continuous boundary data on the parabolic boundary has a solution which attains the boundary values, provided that the spatial domain satisfies the elliptic Wiener criterion. This condition is known to be optimal, and it is a consequence of our main theorem which establishes a barrier characterization of regular boundary points for general—not necessarily cylindrical—domains in R^{n+1}. One of our fundamental tools is a new strict comparison principle between sub- and superparabolic functions, which makes it essential for us to study both nonstrict and strict Perron solutions to be able to develop a fruitful boundary regularity theory. Several other comparison principles and pasting lemmas are also obtained. In the process we obtain a rather complete picture of the relation between sub/superparabolic functions and weak sub/supersolutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plessis, Sylvain; Carrasco, Nathalie; Pernot, Pascal
Experimental data about branching ratios for the products of dissociative recombination of polyatomic ions are presently the only information source available to modelers of natural or laboratory chemical plasmas. Yet, because of limitations in the measurement techniques, data for many ions are incomplete. In particular, the repartition of hydrogen atoms among the fragments of hydrocarbon ions is often not available. A consequence is that proper implementation of dissociative recombination processes in chemical models is difficult, and many models ignore invaluable data. We propose a novel probabilistic approach based on Dirichlet-type distributions, enabling modelers to fully account for the available information. As an application, we consider the production rate of radicals through dissociative recombination in an ionospheric chemistry model of Titan, the largest moon of Saturn. We show how the complete scheme of dissociative recombination products derived with our method dramatically affects these rates in comparison with the simplistic H-loss mechanism implemented by default in all recent models.
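A set of branching ratios is a probability vector (nonnegative, summing to 1), which is exactly what a Dirichlet distribution generates. A minimal sketch of Dirichlet sampling via normalized gamma draws, using only the standard library (this illustrates the plain Dirichlet case, not the paper's full "Dirichlet-type" hierarchy for incomplete data):

```python
import random

def sample_dirichlet(alphas, rng=random):
    """Draw one sample from Dirichlet(alphas) by normalizing
    independent Gamma(alpha_i, 1) variates; the result is a
    probability vector, e.g. a set of branching ratios."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(g)
    return [x / total for x in g]

# three competing fragmentation channels with pseudo-counts 2, 1, 1
ratios = sample_dirichlet([2.0, 1.0, 1.0])
```

Propagating such samples through a chemistry model yields uncertainty bands on radical production rates, rather than a single point estimate.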
Fungi diversity from different depths and times in chicken manure waste static aerobic composting.
Gu, Wenjie; Lu, Yusheng; Tan, Zhiyuan; Xu, Peizhi; Xie, Kaizhi; Li, Xia; Sun, Lili
2017-09-01
The Dirichlet multinomial mixtures model was used to analyse Illumina sequencing data to reveal both temporal and spatial variations of the fungal community present in aerobic composting. Results showed that 670 operational taxonomic units (OTUs) were detected, and the dominant phylum was Ascomycota. There were four types of sample fungal communities during the composting process. Samples from the early composting stage were mainly grouped into type I, in which Saccharomycetales sp. was dominant. Fungal communities in the middle composting stage fell into types II and III, in which Sordariales sp. and Acremonium alcalophilum, and Saccharomycetales sp. and Scedosporium minutisporum, were the dominant OTUs respectively. Samples from the late composting stage were mainly grouped into type IV, in which Scedosporium minutisporum was the dominant OTU; Scedosporium minutisporum was significantly affected by depth (P<0.05). Results indicate that time and depth are both factors that influence fungal distribution and variation in chicken manure waste during static aerobic composting. Copyright © 2017. Published by Elsevier Ltd.
Jones, Michael N.
2017-01-01
A central goal of cognitive neuroscience is to decode human brain activity—that is, to infer mental processes from observed patterns of whole-brain activation. Previous decoding efforts have focused on classifying brain activity into a small set of discrete cognitive states. To attain maximal utility, a decoding framework must be open-ended, systematic, and context-sensitive—that is, capable of interpreting numerous brain states, presented in arbitrary combinations, in light of prior information. Here we take steps towards this objective by introducing a probabilistic decoding framework based on a novel topic model—Generalized Correspondence Latent Dirichlet Allocation—that learns latent topics from a database of over 11,000 published fMRI studies. The model produces highly interpretable, spatially-circumscribed topics that enable flexible decoding of whole-brain images. Importantly, the Bayesian nature of the model allows one to “seed” decoder priors with arbitrary images and text—enabling researchers, for the first time, to generate quantitative, context-sensitive interpretations of whole-brain patterns of brain activity. PMID:29059185
Andrei, Victor; Arandjelović, Ognjen
2016-12-01
The rapidly expanding corpus of medical research literature presents major challenges in the understanding of previous work, the extraction of maximum information from collected data, and the identification of promising research directions. We present a case for the use of advanced machine learning techniques as an aide in this task and introduce a novel methodology that is shown to be capable of extracting meaningful information from large longitudinal corpora and of tracking complex temporal changes within it. Our framework is based on (i) the discretization of time into epochs, (ii) epoch-wise topic discovery using a hierarchical Dirichlet process-based model, and (iii) a temporal similarity graph which allows for the modelling of complex topic changes. More specifically, this is the first work that discusses and distinguishes between two groups of particularly challenging topic evolution phenomena: topic splitting and speciation and topic convergence and merging, in addition to the more widely recognized emergence and disappearance and gradual evolution. The proposed framework is evaluated on a public medical literature corpus.
Rapid Airplane Parametric Input Design(RAPID)
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Bloor, Malcolm I. G.; Wilson, Michael J.; Thomas, Almuttil M.
2004-01-01
An efficient methodology is presented for defining a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. A small set of design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. The wing, tail, and canard components are manifested by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Grid sensitivity is obtained by applying the automatic differentiation precompiler ADIFOR to software for the grid generation. The computed surface grids, volume grids, and sensitivity derivatives are suitable for a wide range of Computational Fluid Dynamics simulation and configuration optimizations.
Plessis, Sylvain; Carrasco, Nathalie; Pernot, Pascal
2010-10-07
Experimental data about branching ratios for the products of dissociative recombination of polyatomic ions are presently the only information source available to modelers of natural or laboratory chemical plasmas. Yet, because of limitations in the measurement techniques, data for many ions are incomplete. In particular, the repartition of hydrogen atoms among the fragments of hydrocarbon ions is often not available. A consequence is that proper implementation of dissociative recombination processes in chemical models is difficult, and many models ignore invaluable data. We propose a novel probabilistic approach based on Dirichlet-type distributions, enabling modelers to fully account for the available information. As an application, we consider the production rate of radicals through dissociative recombination in an ionospheric chemistry model of Titan, the largest moon of Saturn. We show how the complete scheme of dissociative recombination products derived with our method dramatically affects these rates in comparison with the simplistic H-loss mechanism implemented by default in all recent models.
NASA Astrophysics Data System (ADS)
Wang, Yuan; Wu, Rongsheng
2001-12-01
Theoretical argumentation for the so-called suitable spatial condition is conducted with the aid of a homotopy framework to demonstrate that the proposed boundary condition guarantees that over-specification of the boundary conditions arising from an adjoint model on a limited area is no longer an issue, while preserving well-posedness and optimal character in the boundary setting. The ill-posedness of an over-specified spatial boundary condition is, in a sense, inevitable for an adjoint model, since data assimilation processes have to adapt to prescribed observations that used to be over-specified at the spatial boundaries of the modeling domain. In view of pragmatic implementation, the theoretical framework of our proposed condition for spatial boundaries can indeed be reduced to the hybrid formulation of a nudging filter, a radiation condition taking account of ambient forcing, together with a Dirichlet kind of boundary condition compatible with the observations prescribed in the data assimilation procedure. All of these treatments, no doubt, are very familiar to mesoscale modelers.
Das, Kiranmoy; Daniels, Michael J.
2014-01-01
Summary Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for the patients from different BMI groups (high, medium and low) at baseline. PMID:24400941
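The Dirichlet process component of models like this is commonly represented through its stick-breaking construction. A minimal truncated sketch of the standard scalar version (the paper's matrix stick-breaking process generalizes this; the truncation level here is an arbitrary illustrative choice):

```python
import random

def stick_breaking(alpha, n_atoms, rng=random):
    """Truncated stick-breaking weights for a Dirichlet process:
    V_k ~ Beta(1, alpha), w_k = V_k * prod_{j<k} (1 - V_j).
    Smaller alpha concentrates mass on fewer atoms (clusters)."""
    weights, remaining = [], 1.0
    for _ in range(n_atoms - 1):
        v = rng.betavariate(1.0, alpha)
        weights.append(remaining * v)
        remaining *= 1.0 - v
    weights.append(remaining)   # fold leftover mass into the last atom
    return weights

w = stick_breaking(alpha=1.0, n_atoms=20)
```

In a DP mixture of normals for random effects, each weight `w[k]` would be paired with a mean/covariance atom drawn from the base measure.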
Systematic identification of latent disease-gene associations from PubMed articles.
Zhang, Yuji; Shen, Feichen; Mojarad, Majid Rastegar; Li, Dingcheng; Liu, Sijia; Tao, Cui; Yu, Yue; Liu, Hongfang
2018-01-01
Recent scientific advances have accumulated a tremendous amount of biomedical knowledge providing novel insights into the relationship between molecular and cellular processes and diseases. Literature mining is one of the commonly used methods to retrieve and extract information from scientific publications for understanding these associations. However, due to large data volume and complicated associations with noises, the interpretability of such association data for semantic knowledge discovery is challenging. In this study, we describe an integrative computational framework aiming to expedite the discovery of latent disease mechanisms by dissecting 146,245 disease-gene associations from over 25 million of PubMed indexed articles. We take advantage of both Latent Dirichlet Allocation (LDA) modeling and network-based analysis for their capabilities of detecting latent associations and reducing noises for large volume data respectively. Our results demonstrate that (1) the LDA-based modeling is able to group similar diseases into disease topics; (2) the disease-specific association networks follow the scale-free network property; (3) certain subnetwork patterns were enriched in the disease-specific association networks; and (4) genes were enriched in topic-specific biological processes. Our approach offers promising opportunities for latent disease-gene knowledge discovery in biomedical research.
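LDA as used in this framework can be fit with a collapsed Gibbs sampler. A minimal, self-contained sketch (this is the textbook sampler, not the authors' implementation; hyperparameters and the toy corpus are illustrative assumptions):

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Minimal collapsed Gibbs sampler for Latent Dirichlet Allocation.
    docs: list of token lists. Returns per-document topic counts."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]
    ndk = [[0] * n_topics for _ in docs]                # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            k = z[di][wi]
            ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                k = z[di][wi]                    # remove current assignment
                ndk[di][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[di][t] + alpha) *
                           (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights)[0]
                z[di][wi] = k                    # resample and restore counts
                ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk

docs = [["gene", "disease", "gene"], ["topic", "model", "topic", "model"]]
doc_topic = lda_gibbs(docs, n_topics=2, iters=50)
```

At the scale of this study (25 million abstracts), an online or distributed variational implementation would be used instead, but the generative model is the same.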
Systematic identification of latent disease-gene associations from PubMed articles
Mojarad, Majid Rastegar; Li, Dingcheng; Liu, Sijia; Tao, Cui; Yu, Yue; Liu, Hongfang
2018-01-01
Recent scientific advances have accumulated a tremendous amount of biomedical knowledge providing novel insights into the relationship between molecular and cellular processes and diseases. Literature mining is one of the commonly used methods to retrieve and extract information from scientific publications for understanding these associations. However, due to large data volume and complicated associations with noises, the interpretability of such association data for semantic knowledge discovery is challenging. In this study, we describe an integrative computational framework aiming to expedite the discovery of latent disease mechanisms by dissecting 146,245 disease-gene associations from over 25 million of PubMed indexed articles. We take advantage of both Latent Dirichlet Allocation (LDA) modeling and network-based analysis for their capabilities of detecting latent associations and reducing noises for large volume data respectively. Our results demonstrate that (1) the LDA-based modeling is able to group similar diseases into disease topics; (2) the disease-specific association networks follow the scale-free network property; (3) certain subnetwork patterns were enriched in the disease-specific association networks; and (4) genes were enriched in topic-specific biological processes. Our approach offers promising opportunities for latent disease-gene knowledge discovery in biomedical research. PMID:29373609
Wireless Wearable Multisensory Suite and Real-Time Prediction of Obstructive Sleep Apnea Episodes.
Le, Trung Q; Cheng, Changqing; Sangasoongsong, Akkarapol; Wongdhamma, Woranat; Bukkapatnam, Satish T S
2013-01-01
Obstructive sleep apnea (OSA) is a common sleep disorder found in 24% of adult men and 9% of adult women. Although continuous positive airway pressure (CPAP) has emerged as a standard therapy for OSA, a majority of patients are not tolerant to this treatment, largely because of the uncomfortable nasal air delivery during their sleep. Recent advances in wireless communication and advanced ("big data") predictive analytics technologies offer radically new point-of-care treatment approaches for OSA episodes with unprecedented comfort and affordability. We introduce a Dirichlet process-based mixture Gaussian process (DPMG) model to predict the onset of sleep apnea episodes based on analyzing complex cardiorespiratory signals gathered from a custom-designed wireless wearable multisensory suite. Extensive testing with signals from the multisensory suite as well as PhysioNet's OSA database suggests that the accuracy of offline OSA classification is 88%, and accuracy for predicting an OSA episode 1-min ahead is 83% and 3-min ahead is 77%. Such accurate prediction of an impending OSA episode can be used to adaptively adjust CPAP airflow (toward improving the patient's adherence) or the torso posture (e.g., minor chin adjustments to maintain steady levels of the airflow).
The Hidden Curriculum in Distance Education: An Updated View.
ERIC Educational Resources Information Center
Anderson, Terry
2001-01-01
Addressing recent criticism of distance education, explores the distinctive hidden curriculum (supposed "real" agenda) of distance education, focusing on both its positive and negative expressions. Also offers an updated view of the hidden curriculum of traditional, campus-based education, grounded in an emerging worldwide context of broadening…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
... ENVIRONMENTAL PROTECTION AGENCY [FRL-9631-3] Notice of Proposed Settlement Agreement and Opportunity for Public Comment: Hidden Lane Landfill Superfund Site ACTION: Notice. SUMMARY: In accordance... (``DOJ'') on behalf of EPA, in connection with the Hidden Lane Landfill Superfund Site, Sterling, Loudoun...
Hidden Curriculum as One of Current Issue of Curriculum
ERIC Educational Resources Information Center
Alsubaie, Merfat Ayesh
2015-01-01
There are several issues in the education system, especially in the curriculum field that affect education. Hidden curriculum is one of current controversial curriculum issues. Many hidden curricular issues are the result of assumptions and expectations that are not formally communicated, established, or conveyed within the learning environment.…
Hidden Variable Theories and Quantum Nonlocality
ERIC Educational Resources Information Center
Boozer, A. D.
2009-01-01
We clarify the meaning of Bell's theorem and its implications for the construction of hidden variable theories by considering an example system consisting of two entangled spin-1/2 particles. Using this example, we present a simplified version of Bell's theorem and describe several hidden variable theories that agree with the predictions of…
Building Simple Hidden Markov Models. Classroom Notes
ERIC Educational Resources Information Center
Ching, Wai-Ki; Ng, Michael K.
2004-01-01
Hidden Markov models (HMMs) are widely used in bioinformatics, speech recognition and many other areas. This note presents HMMs via the framework of classical Markov chain models. A simple example is given to illustrate the model. An estimation method for the transition probabilities of the hidden states is also discussed.
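When the hidden-state sequence is observed in training data, the transition probabilities discussed in the note reduce to normalized bigram counts, which keeps the example at the level of a classical Markov chain. A minimal sketch (toy state sequence assumed for illustration):

```python
from collections import Counter, defaultdict

def estimate_transitions(states):
    """Estimate Markov transition probabilities from a sequence of
    hidden states by normalizing the counts of consecutive pairs."""
    counts = defaultdict(Counter)
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

probs = estimate_transitions(list("AABBAB"))
# from A we saw A once and B twice, so P(B|A) = 2/3
```

When the states are truly hidden, the same counts are replaced by expected counts and iterated, which is the Baum-Welch (EM) algorithm.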
Seuss's Butter Battle Book: Is There Hidden Harm?
ERIC Educational Resources Information Center
Van Cleaf, David W.; Martin, Rita J.
1986-01-01
Examines whether elementary school children relate to the "harmful hidden message" about nuclear war in Dr. Seuss's THE BUTTER BATTLE BOOK. After ascertaining the children's cognitive level, they participated in activities to find hidden meanings in stories, including Seuss's book. Students failed to identify the nuclear war message in…
Comment on 'All quantum observables in a hidden-variable model must commute simultaneously'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagata, Koji
Malley argued [Phys. Rev. A 69, 022118 (2004)] that all quantum observables in a hidden-variable model for quantum events must commute simultaneously. In this comment, we show that Malley's theorem is indeed valid under the hidden-variable theoretical assumptions introduced by Kochen and Specker. However, we give an example in which the local hidden-variable (LHV) model for quantum events preserves noncommutativity of quantum observables. It turns out that Malley's theorem is not related to the LHV model for quantum events, in general.
Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.
Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo
Unsupervised object discovery and localization is to discover some dominant object classes and localize all of object instances from a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge for the given image collection is exploited to facilitate object discovery. On the other hand, the topic models used in those methods suffer from the topic coherence issue-some inferred topics do not have clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in terms of the so-called must-links are exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined as that one must-link only constrains one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes, thus the must-links in our approach are semantic-specific , which allows to more efficiently exploit discriminative prior knowledge from Web images. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization. 
In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.
Dynamic Assessment of EFL Reading: Revealing Hidden Aspects at Different Proficiency Levels
ERIC Educational Resources Information Center
Ajideh, Parviz; Farrokhi, Farahman; Nourdad, Nava
2012-01-01
Dynamic assessment as a complementary approach to traditional static assessment emphasizes the learning process and accounts for the amount and nature of examiner investment. The present qualitative study analyzed interactions for 270 reading test items which were recorded and tape scripted. The reading ability of 9 EFL participants at three…
A Generative Approach to the Development of Hidden-Figure Items.
ERIC Educational Resources Information Center
Bejar, Issac I.; Yocom, Peter
This report explores an approach to item development and psychometric modeling that explicitly incorporates knowledge about the mental models used by examinees in the solution of items into a psychometric model that characterizes performance on a test, as well as incorporating that knowledge into the item development process. The paper focuses on…
When Leadership and Policy Making Collide: The Valley View Middle School Experience
ERIC Educational Resources Information Center
Hodge, Warren A.; Osborne-Lampkin, La'Tara
2014-01-01
This case demonstrates the multifaceted nature of the school uniform debate. It shows how conflicts and tensions between stakeholder groups develop and persist when policymakers and school leaders allow hidden agendas and communication barriers to subvert the decision- and policymaking processes. In particular, the case demonstrates what happens…
Moving beyond "Shut up and Learn"
ERIC Educational Resources Information Center
Watkins, Chris
2016-01-01
This article analyses the sort of classroom talk that leads to effective learning, and some of the forces which operate against such practices. It starts with an analysis of the classroom context and the dominant patterns of interaction. These cause processes of learning to be hidden. It then develops by an analysis of effective learning,…
The Neglected World of the Workplace Trainer
ERIC Educational Resources Information Center
Elshafie, Marwa
2014-01-01
The aim of this small-scale research is to explore the hidden world of the workplace trainer. Four trainers from a training institute in Qatar were interviewed and asked about their opinions of the employee as a learner, the trainer's work, and the role of quality compliance in the training process. After transcribing and analysing the…
The Hidden Picture: Administrators' Perspectives on Least Restrictive Environment
ERIC Educational Resources Information Center
Garner, Gina Marlene
2009-01-01
This study looks to better understand how administrators make a decision about a least restrictive environment placement recommendation. What decision processes do they engage in when merging information of individual and environment to create a working plan of access that will benefit all involved? It also seeks the factors that are primary in…
A Not-so-Hidden Curriculum: Using Auto/Biographies to Teach Educational History
ERIC Educational Resources Information Center
Bailey, Lucy E.
2015-01-01
Autobiography and biography are productive genres for exploring historical events and processes, even as such works have sometimes held a peripheral role in the "community" of history of education scholarship. This paper focuses on the pedagogical productivity and challenges of a recent graduate course the author offered in…
ERIC Educational Resources Information Center
Rubin, Alec
1976-01-01
Defines primal therapy as an approach to growth and change the goal of which is to rediscover the real self, the natural child. Relates this concept to primal theatre where an effort is made to express on stage what rarely occurs in life and what is usually hidden. Basic processes for primal theatre workshops are discussed. For availability see CS…
Processes Underlying Young Children's Spatial Orientation during Movement.
ERIC Educational Resources Information Center
Bremner, J. Gavin; And Others
1994-01-01
Tested children 18 months to 4 years for their ability to relocate a hidden object after self-produced movement around an array of 4 locations. Children encountered no specific difficulty in coordinating dimensions, or they solved the task without recourse to such a system. They also appeared to change strategy when the problem requires more…
Learning to Learn: A Hidden Dimension within Community Dance Practice
ERIC Educational Resources Information Center
Barr, Sherrie
2013-01-01
This article explores ways of learning experienced by university dance students participating in a community dance project. The students were unfamiliar with community-based practices and found themselves needing to remediate held attitudes about dance. How the students came to approach their learning within the dance-making process drew on…
Educational Reform: The Players and the Politics.
ERIC Educational Resources Information Center
Farkas, Steve
Substantive arguments on school reform may be disguising a hidden debate over the process and politics of such reform--a debate over who should be responsible for educating our youth, the parties responsible for the current difficulties, and how severe these difficulties are. This document attempts to identify the attitudes that drive this hidden…
Content Analysis as a Best Practice in Technical Communication Research
ERIC Educational Resources Information Center
Thayer, Alexander; Evans, Mary; McBride, Alicia; Queen, Matt; Spyridakis, Jan
2007-01-01
Content analysis is a powerful empirical method for analyzing text, a method that technical communicators can use on the job and in their research. Content analysis can expose hidden connections among concepts, reveal relationships among ideas that initially seem unconnected, and inform the decision-making processes associated with many technical…
ERIC Educational Resources Information Center
Chang, Ho-Jun
2009-01-01
This dissertation deals with the tense relation between the visibility of unauthorized economic practices and the invisibility of law in Zhongguancun (ZGC) Beijing, a Chinese information technology (IT) industry center dubbed "China's Silicon Valley." This dissertation ethnographically examines the double process of extra-legal/illegal…
Perspective: Sloppiness and emergent theories in physics, biology, and beyond.
Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P
2015-07-07
Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to the same mechanism: they emerge from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
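For reference, the information-geometric object at the center of this framework can be written explicitly. For a least-squares model with predictions f_m(θ) and measurement noise σ_m, the Fisher information metric takes the standard form (a generic statement, not notation specific to this paper):

```latex
g_{\mu\nu}(\theta) \;=\; \sum_m \frac{1}{\sigma_m^{2}}\,
\frac{\partial f_m(\theta)}{\partial \theta_\mu}\,
\frac{\partial f_m(\theta)}{\partial \theta_\nu}
```

Sloppiness corresponds to the eigenvalues of g spanning many orders of magnitude with roughly uniform logarithmic spacing, so that only a few stiff parameter combinations are well constrained by data.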
Glass transition and relaxation processes of nanocomposite polymer electrolytes.
Money, Benson K; Hariharan, K; Swenson, Jan
2012-07-05
This study focuses on the effect of δ-Al(2)O(3) nanofillers on the dc-conductivity, glass transition, and dielectric relaxations in the polymer electrolyte (PEO)(4):LiClO(4). The results show that there are three dielectric relaxation processes, α, β, and γ, in the systems, although the structural α-relaxation is hidden in the strong conductivity contribution and could therefore not be directly observed. However, by comparing an enhanced dc-conductivity, by approximately 2 orders of magnitude with 4 wt % δ-Al(2)O(3) added, with a decrease in calorimetric glass transition temperature, we are able to conclude that the dc-conductivity is directly coupled to the hidden α-relaxation, even in the presence of nanofillers (at least in the case of δ-Al(2)O(3) nanofillers at concentrations up to 4 wt %). This filler induced speeding up of the segmental polymer dynamics, i.e., the α-relaxation, can be explained by the nonattractive nature of the polymer-filler interactions, which enhance the "free volume" and mobility of polymer segments in the vicinity of filler surfaces.
Jung, Won-Mo; Park, In-Soo; Lee, Ye-Seul; Kim, Chang-Eop; Lee, Hyangsook; Hahm, Dae-Hyun; Park, Hi-Joon; Jang, Bo-Hyoung; Chae, Younbyoung
2018-04-12
Understanding how doctors diagnose and treat diseases is important for understanding the underlying principles of selecting appropriate acupoints. The pattern recognition process that pertains to symptoms and diseases and informs acupuncture treatment in a clinical setting was explored. A total of 232 clinical records were collected using a Charting Language program. The relationship between symptom information and selected acupoints was trained using an artificial neural network (ANN). A total of 11 hidden nodes with the highest average precision score were selected through tenfold cross-validation. Our ANN model could predict the selected acupoints based on symptom and disease information with an average precision score of 0.865 (precision, 0.911; recall, 0.811). This model is a useful tool for diagnostic classification or pattern recognition and for the prediction and modeling of acupuncture treatment based on clinical data obtained in a real-world setting. The relationship between symptoms and selected acupoints could be systematically characterized through knowledge discovery processes, such as pattern identification.
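The model-selection step described above can be sketched as follows. This is a minimal pure-Python illustration, not the authors' code; `train_and_score` is a hypothetical stand-in for fitting the ANN on one fold and returning the precision on the held-out fold.

```python
# Sketch: pick the hidden-layer size with the best mean precision over
# tenfold cross-validation. `train_and_score(h, train_idx, test_idx)` is a
# hypothetical callback supplied by the user.
from statistics import mean

def kfold_indices(n_samples, k=10):
    """Return (train, test) index lists for k roughly equal folds."""
    fold_size, folds = n_samples // k, []
    idx = list(range(n_samples))
    for i in range(k):
        start = i * fold_size
        end = start + fold_size if i < k - 1 else n_samples
        test = idx[start:end]
        train = idx[:start] + idx[end:]
        folds.append((train, test))
    return folds

def select_hidden_nodes(n_samples, candidates, train_and_score, k=10):
    """Return (best size, per-size mean CV precision)."""
    scores = {}
    for h in candidates:
        precisions = [train_and_score(h, tr, te)
                      for tr, te in kfold_indices(n_samples, k)]
        scores[h] = mean(precisions)
    return max(scores, key=scores.get), scores
```

With 232 records, the last fold simply absorbs the remainder so every record is tested exactly once.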
Markov Chain Monte Carlo in the Analysis of Single-Molecule Experimental Data
NASA Astrophysics Data System (ADS)
Kou, S. C.; Xie, X. Sunney; Liu, Jun S.
2003-11-01
This article provides a Bayesian analysis of the single-molecule fluorescence lifetime experiment designed to probe the conformational dynamics of a single DNA hairpin molecule. The DNA hairpin's conformational change is initially modeled as a two-state Markov chain, which is not observable and has to be indirectly inferred. The Brownian diffusion of the single molecule, in addition to the hidden Markov structure, further complicates the matter. We show that the analytical form of the likelihood function can be obtained in the simplest case and a Metropolis-Hastings algorithm can be designed to sample from the posterior distribution of the parameters of interest and to compute desired estimates. To cope with the molecular diffusion process and the potentially oscillating energy barrier between the two states of the DNA hairpin, we introduce a data augmentation technique to handle both the Brownian diffusion and the hidden Ornstein-Uhlenbeck process associated with the fluctuating energy barrier, and design a more sophisticated Metropolis-type algorithm. Our method not only increases the estimating resolution severalfold but also proves to be successful for model discrimination.
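As a toy illustration of the Metropolis-Hastings idea invoked here (a minimal sketch with a made-up two-state setting, not the paper's likelihood): sample the switching probability of a two-state chain from its posterior, given a hypothetical count of observed switches.

```python
# Random-walk Metropolis-Hastings for the switching probability p of a
# two-state Markov chain, given n_switch switches in n_total steps.
# With a flat prior the unnormalized posterior is p**n_switch * (1-p)**(n_total-n_switch).
import math, random

def log_post(p, n_switch, n_total):
    """Unnormalized log-posterior; -inf outside the open unit interval."""
    if not 0.0 < p < 1.0:
        return -math.inf
    return n_switch * math.log(p) + (n_total - n_switch) * math.log(1.0 - p)

def metropolis(n_switch, n_total, n_iter=20000, step=0.05, seed=1):
    rng = random.Random(seed)
    p, samples = 0.5, []
    for _ in range(n_iter):
        prop = p + rng.uniform(-step, step)   # symmetric proposal
        delta = log_post(prop, n_switch, n_total) - log_post(p, n_switch, n_total)
        if delta >= 0 or rng.random() < math.exp(delta):
            p = prop                          # accept the move
        samples.append(p)
    return samples

samples = metropolis(30, 100)                 # 30 switches observed in 100 steps
est = sum(samples[5000:]) / len(samples[5000:])   # posterior mean after burn-in
```

The posterior here is a Beta distribution, so the sampler's estimate can be checked against the closed form; the paper's setting replaces this toy likelihood with one involving diffusion and data augmentation.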
Increased taxon sampling reveals thousands of hidden orthologs in flatworms
2017-01-01
Gains and losses shape the gene complement of animal lineages and are a fundamental aspect of genomic evolution. Acquiring a comprehensive view of the evolution of gene repertoires is limited by the intrinsic limitations of common sequence similarity searches and available databases. Thus, a subset of the gene complement of an organism consists of hidden orthologs, i.e., those with no apparent homology to sequenced animal lineages—mistakenly considered new genes—but actually representing rapidly evolving orthologs or undetected paralogs. Here, we describe Leapfrog, a simple automated BLAST pipeline that leverages increased taxon sampling to overcome long evolutionary distances and identify putative hidden orthologs in large transcriptomic databases by transitive homology. As a case study, we used 35 transcriptomes of 29 flatworm lineages to recover 3427 putative hidden orthologs, some unidentified by OrthoFinder and HaMStR, two common orthogroup inference algorithms. Unexpectedly, we do not observe a correlation between the number of putative hidden orthologs in a lineage and its “average” evolutionary rate. Hidden orthologs do not show unusual sequence composition biases that might account for systematic errors in sequence similarity searches. Instead, gene duplication with divergence of one paralog and weak positive selection appear to underlie hidden orthology in Platyhelminthes. By using Leapfrog, we identify key centrosome-related genes and homeodomain classes previously reported as absent in free-living flatworms, e.g., planarians. Altogether, our findings demonstrate that hidden orthologs comprise a significant proportion of the gene repertoire in flatworms, qualifying the impact of gene losses and gains in gene complement evolution. PMID:28400424
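The transitive-homology step at the heart of Leapfrog can be sketched as follows; the hit tables here are hypothetical best-hit maps standing in for real BLAST output, and the identifiers are invented for illustration.

```python
# Sketch: a query with no direct hit to the reference proteome can still be
# linked to it through an intermediate, closely related transcriptome
# (the "bridge"), recovering a putative hidden ortholog.

def transitive_orthologs(query_to_bridge, bridge_to_ref, direct_hits):
    """Return query->reference links recoverable only via the bridge."""
    hidden = {}
    for query, bridge in query_to_bridge.items():
        ref = bridge_to_ref.get(bridge)
        if ref is not None and query not in direct_hits:
            hidden[query] = ref   # found only transitively, not directly
    return hidden

query_to_bridge = {"flatworm_g1": "bridge_t7", "flatworm_g2": "bridge_t9"}
bridge_to_ref   = {"bridge_t7": "human_ABC1", "bridge_t9": "human_XYZ2"}
direct_hits     = {"flatworm_g2": "human_XYZ2"}   # g2 was already found directly

print(transitive_orthologs(query_to_bridge, bridge_to_ref, direct_hits))
# {'flatworm_g1': 'human_ABC1'}
```

The real pipeline derives these maps from reciprocal BLAST searches across the 35 transcriptomes rather than from hand-built dictionaries.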
Hafferty, Frederic W; Martimianakis, Maria Athina
2017-11-07
In this Commentary, the authors explore the scoping review by Lawrence and colleagues by challenging their conclusion that with over 25 years' worth of "ambiguous and seemingly ubiquitous use" of the hidden curriculum construct in health professions education scholarship, it is time to either move to a more uniform definitional foundation or abandon the term altogether. The commentary authors counter these remedial propositions by foregrounding the importance of theoretical diversity and the conceptual richness afforded when the hidden curriculum construct is used as an entry point for studying the interstitial space between the formal and a range of other-than-formal domains of learning. Further, they document how tightly-delimited scoping strategies fail to capture the wealth of educational scholarship that operates within a hidden curriculum framework, including "hidden" hidden curriculum articles, studies that employ alternative constructs, and investigations that target important tacit socio-cultural influences on learners and faculty without formally deploying the term. They offer examples of how the hidden curriculum construct, while undergoing significant transformation in its application within the field of health professions education, has created the conceptual foundation for the application of a number of critical perspectives that make visible the field's political investments in particular forms of knowing and associated practices. Finally, the commentary authors invite readers to consider the methodological promise afforded by conceptual heterogeneity, particularly strands of scholarship that resituate the hidden curriculum concept within the magically expansive dance of social relationships, social learning, and social life that form the learning environments of health professions education.
FIMP dark matter freeze-in gauge mediation and hidden sector
NASA Astrophysics Data System (ADS)
Tsao, Kuo-Hsing
2018-07-01
We explore the dark matter freeze-in mechanism within the gauge mediation framework, which involves a hidden feebly interacting massive particle (FIMP) coupling feebly with the messenger fields while the messengers are still in the thermal bath. The FIMP is the fermionic component of the pseudo-moduli in a generic metastable supersymmetry (SUSY) breaking model and resides in the hidden sector. The relic abundance and the mass of the FIMP are determined by the SUSY breaking scale and the feeble coupling. The gravitino, which is the canonical dark matter candidate in the gauge mediation framework, contributes to the dark matter relic abundance along with the freeze-in of the FIMP. The hidden sector thus becomes two-component with both the FIMP and gravitino lodging in the SUSY breaking hidden sector. We point out that the ratio between the FIMP and the gravitino is determined by how SUSY breaking is communicated to the messengers. In particular, when the FIMP dominates the hidden sector, the gravitino becomes the minor contributor in the hidden sector. Meanwhile, the neutralino is assumed to be both the weakly interacting massive particle dark matter candidate in the freeze-out mechanism and the lightest observable SUSY particle. We further find that the neutralino gives a sub-leading contribution to the current dark matter relic density in the parameter space of our freeze-in gauge mediation model. Our result links the SUSY breaking scale in the gauge mediation framework with the FIMP freeze-in production rate, leading to a natural and predictive scenario for the studies of dark matter in the hidden sector.
NASA Astrophysics Data System (ADS)
Rogotis, Savvas; Palaskas, Christos; Ioannidis, Dimosthenis; Tzovaras, Dimitrios; Likothanassis, Spiros
2015-11-01
This work aims to present an extended framework for automatically recognizing suspicious activities in outdoor perimeter surveillance systems based on infrared video processing. By combining size-, speed-, and appearance-based features, like the local phase quantization and the histograms of oriented gradients, actions of small duration are recognized and used as input, along with spatial information, for modeling target activities using the theory of hidden conditional random fields (HCRFs). HCRFs are used to classify an observation sequence into the most appropriate activity label class, thus discriminating high-risk activities like trespassing from zero-risk activities, such as loitering outside the perimeter. The effectiveness of this approach is demonstrated with experimental results in various scenarios that represent suspicious activities in perimeter surveillance systems.
Hidden action or hidden strategy: China's control of its national oil companies
NASA Astrophysics Data System (ADS)
Humphrey, Charles
China's rapid economic growth has been accompanied by parallel growth in energy demand, particularly in demand for oil. Due to political and economic constraints on domestic reform, the CPC has focused on the international dimension through the creation of vertically integrated national oil companies. The foreign investments of these companies have become increasingly controversial due to the high levels of political and financial support afforded them by the CPC. I measure control by employing a model of institutional constraints on state-owned enterprises in conjunction with a managerial variant of Principal Agent theory well suited to political analyses. I conclude that the combination of institutional overlap, the process which led to the formation of the CNOCs as they currently exist and the current overseas activities of the CNOCs all demonstrate that the CPC is in control of the CNOCs.
Emergence of Leadership in Communication
Allahverdyan, Armen E.; Galstyan, Aram
2016-01-01
We study a neuro-inspired model that mimics a discussion (or information dissemination) process in a network of agents. During their interaction, agents redistribute activity and network weights, resulting in emergence of leader(s). The model is able to reproduce the basic scenarios of leadership known in nature and society: laissez-faire (irregular activity, weak leadership, sizable inter-follower interaction, autonomous sub-leaders); participative or democratic (strong leadership, but with feedback from followers); and autocratic (no feedback, one-way influence). Several pertinent aspects of these scenarios are found as well—e.g., hidden leadership (a hidden clique of agents driving the official autocratic leader), and successive leadership (two leaders influence followers by turns). We study how these scenarios emerge from inter-agent dynamics and how they depend on behavior rules of agents—in particular, on their inertia against state changes. PMID:27532484
Key ingredients needed when building large data processing systems for scientists
NASA Technical Reports Server (NTRS)
Miller, K. C.
2002-01-01
Why is building a large science software system so painful? Weren't teams of software engineers supposed to make life easier for scientists? Does it sometimes feel as if it would be easier to write the million lines of code in Fortran 77 yourself? The cause of this dissatisfaction is that many of the needs of the science customer remain hidden in discussions with software engineers until after a system has already been built. In fact, many of the hidden needs of the science customer conflict with stated needs and are therefore very difficult to meet unless they are addressed from the outset in a system's architectural requirements. What's missing is the consideration of a small set of key software properties in initial agreements about the requirements, the design and the cost of the system.
Detection of Objects Hidden in Highly Scattering Media Using Time-Gated Imaging Methods
NASA Technical Reports Server (NTRS)
Galland, Pierre A.; Wang, L.; Liang, X.; Ho, P. P.; Alfano, R. R.
2000-01-01
Non-intrusive and non-invasive optical imaging techniques have generated great interest among researchers for their potential applications to biological study, device characterization, surface defect detection, and jet fuel dynamics. A non-linear optical parametric amplification gate (NLOPG) has been used to detect back-scattered images of objects hidden in diluted Intralipid solutions. To directly detect objects hidden in highly scattering media, the diffusive component of light needs to be separated from the early-arriving ballistic and snake photons. In an optical imaging system, images are collected in a transmission or back-scattered geometry. The early-arriving photons in the transmission approach always carry direct information about the hidden object embedded in the turbid medium. In the back-scattered approach, the result is not so forthcoming: in the presence of a scattering host, the first-arriving photons are photons returned directly by the host material. In this presentation, NLOPG was applied to acquire time-resolved back-scattered images under the phase-matching condition. A time-gated amplified signal was obtained through this NLOPG process, with a system gain of approximately 100. The time gate was achieved through the phase-matching condition, under which only coherent photons retain their phase. As a result, the diffusive photons, which were the primary contributor to the background, were removed. With a large dynamic range and high resolution, time-gated early-light imaging has the potential to improve rocket/aircraft design by determining jet shape and particle sizes. Refinements to these techniques may enable drop-size measurements in the highly scattering, optically dense region of multi-element rocket injectors. Such measurements should greatly enhance the design of stable, higher-performing rocket engines.
Barratt, Monica J; Potter, Gary R; Wouters, Marije; Wilkins, Chris; Werse, Bernd; Perälä, Jussi; Pedersen, Michael Mulbjerg; Nguyen, Holly; Malm, Aili; Lenton, Simon; Korf, Dirk; Klein, Axel; Heyde, Julie; Hakkarainen, Pekka; Frank, Vibeke Asmussen; Decorte, Tom; Bouchard, Martin; Blok, Thomas
2015-03-01
Internet-mediated research methods are increasingly used to access hidden populations. The International Cannabis Cultivation Questionnaire (ICCQ) is an online survey designed to facilitate international comparisons into the relatively under-researched but increasingly significant phenomenon of domestic cannabis cultivation. The Global Cannabis Cultivation Research Consortium has used the ICCQ to survey over 6000 cannabis cultivators across 11 countries. In this paper, we describe and reflect upon our methodological approach, focusing on the digital and traditional recruitment methods used to access this hidden population and the challenges of working across multiple countries, cultures and languages. Descriptive statistics showing eligibility and completion rates and recruitment source by country of residence. Over three quarters of eligible respondents who were presented with the survey were included in the final sample of n=6528. English-speaking countries expended more effort to recruit participants than non-English-speaking countries. The most effective recruitment modes were cannabis websites/groups (33%), Facebook (14%) and news articles (11%). While respondents recruited through news articles were older, growing practice variables were strikingly similar between these main recruitment modes. Through this process, we learnt that there are trade-offs between hosting multiple surveys in each country vs. using one integrated database. We also found that although perceived anonymity is routinely assumed to be a benefit of using digital research methodologies, there are significant limits to research participant anonymity in the current era of mass digital surveillance, especially when the target group is particularly concerned about evading law enforcement. Finally, we list a number of specific recommendations for future researchers utilising Internet-mediated approaches to researching hidden populations. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Feist, S.; Maclachlan, J. C.; Reinhardt, E. G.; McNeill-Jewer, C.; Eyles, C.
2016-12-01
Hidden River Cave is part of a cave system hydrogeologically related to Mammoth Cave in Kentucky and is a multi-level active cave system with 25km of mapped passages. Upper levels experience flow during flood events and lower levels have continuously flowing water. Improper industrial and domestic waste disposal and a poor understanding of the local hydrogeology led to contamination of Hidden River Cave in the early 1940s. Previously used for hydroelectric power generation and as a source of potable water, the cave was closed to the public for almost 50 years. A new sewage treatment plant and remediation efforts since 1989 have improved the cave system's health. This project focuses on sedimentological studies in the Hidden River Cave system. Water and sediment transport in the cave are being investigated using sediment cores, surface sediment samples and water level data. An Itrax core scanner is used to analyze sediment cores for elemental concentrations, magnetic susceptibility, radiography, and high resolution photography. Horizons of metal concentrations in the core allow correlation of sedimentation events in the cave system. Thecamoebian (testate amoebae) microfossils identified in surface samples allow for further constraint of sediment sources, sedimentation rates, and paleoclimatic analysis. Dive recorders monitor water levels, providing data to further understand the movement of sediment through the cave system. A general time constraint on the sediment's age is based on the presence of microplastic in the surface samples and sediment cores, and data from radiocarbon and lead-210 dating. The integration of various sedimentological data allows for better understanding of sedimentation processes and their record of paleoenvironmental change in the cave system. Sediment studies and methodologies from this project can be applied to other karst systems, and have important applications for communities living on karst landscapes and their water management policies.
What Should We Do With a Hidden Curriculum When We Find One?
ERIC Educational Resources Information Center
Martin, Jane R.
1976-01-01
A hidden curriculum consists of those learning states of a setting that are either unintended, or intended but not openly acknowledged to the learners in the setting; it remains hidden unless the learners become aware of it. Consciousness-raising may be the best weapon of individuals who are subject to hidden curricula. (Author/MLF)
The Hidden Reason Behind Children's Misbehavior.
ERIC Educational Resources Information Center
Nystul, Michael S.
1986-01-01
Discusses hidden reason theory, based on the assumptions that: (1) the nature of people is positive; (2) a child's most basic psychological need is involvement; and (3) a child has four possible choices in life (good somebody, good nobody, bad somebody, or severely mentally ill). A three-step approach for implementing hidden reason theory is…
Student Teaching: A Hidden Wholeness
ERIC Educational Resources Information Center
Bowman, Richard F.
2007-01-01
Productive student teachers lead learning by emergently sensing and honoring the hidden wholeness of life in classrooms. That hidden wholeness mirrors seven contextual concerns which learners reflect upon in the everydayness of classroom life: What are we going to do in class today? What am I going to have to do in class? What counts in today's…
2008-03-01
…vivipara; hidden flower (Cryptantha crassisepala); hidden flower (Cryptantha fulvocanescens); James's hidden flower (Cryptantha jamesii); buffalo gourd …pumila; bigbract verbena (Verbena bracteata); banana yucca (Yucca baccata); soapweed yucca (Yucca glauca); Rocky Mountain zinnia (Zinnia grandiflora)
Hidden Agendas in Marriage: Affective and Longitudinal Dimensions.
ERIC Educational Resources Information Center
Krokoff, Lowell J.
1990-01-01
Examines how couples' discussions of troublesome problems reveal hidden agendas (issues not directly discussed or explored). Finds disgust and contempt are at the core of both love and respect agendas for husbands and wives. Finds that wives' more than husbands' hidden agendas are directly predictive of how negatively they argue at home. (SR)
Driving style recognition method using braking characteristics based on hidden Markov model
Wu, Chaozhong; Lyu, Nengchao; Huang, Zhen
2017-01-01
Given the advantages of the hidden Markov model in dealing with time-series data, and for the purpose of identifying driving style, three driving styles (aggressive, moderate, and mild) are modeled with hidden Markov models based on driver braking characteristics. Firstly, braking impulse and the maximum braking unit area of the vacuum booster within a certain time window are collected from braking operations, and general braking and emergency braking characteristics are extracted to code the braking behavior. Secondly, the coded braking observation sequences are used to set the initial parameters of the hidden Markov models, and one model per driving style is trained on the corresponding observation sequences. Thirdly, a new observation sequence is assigned to the driving style whose model yields the maximum log-likelihood given the observed parameters. The recognition accuracy of the algorithm is verified through experiments and through comparison with two common pattern recognition algorithms. The results showed that driving style discrimination based on the hidden Markov model algorithm can effectively identify driving style. PMID:28837580
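The classification step can be sketched as follows: a minimal pure-Python forward algorithm scores a coded braking sequence under per-style discrete HMMs, and the highest-scoring style wins. The two-state models and their parameters below are entirely hypothetical; the paper trains its models from braking data.

```python
# Sketch: classify a braking symbol sequence (0 = general braking,
# 1 = emergency braking) by maximum HMM log-likelihood.
import math

def forward_loglik(obs, start, trans, emit):
    """Scaled forward algorithm: log P(obs | discrete HMM)."""
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    loglik = 0.0
    for t in range(len(obs)):
        if t > 0:   # propagate one step, then rescale
            alpha = [emit[j][obs[t]] * sum(alpha[i] * trans[i][j]
                                           for i in range(n))
                     for j in range(n)]
        c = sum(alpha)
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
    return loglik

# Hypothetical (start, transition, emission) parameters per style.
MODELS = {
    "aggressive": ([0.5, 0.5], [[0.7, 0.3], [0.3, 0.7]], [[0.4, 0.6], [0.2, 0.8]]),
    "mild":       ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.95, 0.05], [0.8, 0.2]]),
}

def classify(obs):
    """Assign the style whose HMM gives the highest log-likelihood."""
    return max(MODELS, key=lambda s: forward_loglik(obs, *MODELS[s]))
```

A sequence dense in emergency-braking symbols scores far higher under the "aggressive" model's emission probabilities than under the "mild" model's.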
A possible loophole in the theorem of Bell.
Hess, K; Philipp, W
2001-12-04
The celebrated inequalities of Bell are based on the assumption that local hidden parameters exist. When combined with conflicting experimental results, these inequalities appear to prove that local hidden parameters cannot exist. This contradiction suggests to many that only instantaneous action at a distance can explain the Einstein, Podolsky, and Rosen type of experiments. We show that, in addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions that contribute to his being able to obtain the desired contradiction. For instance, Bell assumes that the hidden parameters do not depend on time and are governed by a single probability measure independent of the analyzer settings. We argue that the exclusion of time has neither a physical nor a mathematical basis but is based on Bell's translation of the concept of Einstein locality into the language of probability theory. Our additional set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does not permit Bell-type proofs to go forward.
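For reference, the standard local hidden-variable assumptions the abstract discusses, as usually stated (not the authors' extended space), are:

```latex
E(a,b) \;=\; \int_{\Lambda} A(a,\lambda)\,B(b,\lambda)\,\rho(\lambda)\,\mathrm{d}\lambda,
\qquad |A| \le 1,\; |B| \le 1,
```

where ρ(λ) is a single probability density independent of the analyzer settings a and b. Under these assumptions one obtains the CHSH bound

```latex
\bigl|E(a,b) - E(a,b')\bigr| + \bigl|E(a',b) + E(a',b')\bigr| \;\le\; 2 .
```

It is precisely the setting-independence and time-independence of ρ that the authors identify as tacit assumptions.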
Reputation and Competition in a Hidden Action Model
Fedele, Alessandro; Tedeschi, Piero
2014-01-01
The economics models of reputation and quality in markets can be classified in three categories. (i) Pure hidden action, where only one type of seller is present who can provide goods of different quality. (ii) Pure hidden information, where sellers of different types have no control over product quality. (iii) Mixed frameworks, which include both hidden action and hidden information. In this paper we develop a pure hidden action model of reputation and Bertrand competition, where consumers and firms interact repeatedly in a market with free entry. The price of the good produced by the firms is contractible, whilst the quality is noncontractible, hence it is promised by the firms when a contract is signed. Consumers infer future quality from all available information, i.e., both from what they know about past quality and from current prices. According to early contributions, competition should make reputation unable to induce the production of high-quality goods. We provide a simple solution to this problem by showing that high quality levels are sustained as an outcome of a stationary symmetric equilibrium. PMID:25329387
Engelhardt, Benjamin; Kschischo, Maik; Fröhlich, Holger
2017-06-01
Ordinary differential equations (ODEs) are a popular approach to quantitatively model molecular networks based on biological knowledge. However, such knowledge is typically limited. Wrongly modelled biological mechanisms as well as relevant external influence factors that are not included in the model are likely to manifest in major discrepancies between model predictions and experimental data. Finding the exact reasons for such observed discrepancies can be quite challenging in practice. In order to address this issue, we suggest a Bayesian approach to estimate hidden influences in ODE-based models. The method can distinguish between exogenous and endogenous hidden influences. Thus, we can detect wrongly specified as well as missed molecular interactions in the model. We demonstrate the performance of our Bayesian dynamic elastic-net with several ordinary differential equation models from the literature, such as human JAK-STAT signalling, information processing at the erythropoietin receptor, isomerization of liquid α-pinene, G protein cycling in yeast and UV-B triggered signalling in plants. Moreover, we investigate a set of commonly known network motifs and a gene-regulatory network. Altogether, our method supports the modeller in an algorithmic manner in identifying possible sources of errors in ODE-based models on the basis of experimental data. © 2017 The Author(s).
Photoacoustic imaging of hidden dental caries by using a bundle of hollow optical fibers
NASA Astrophysics Data System (ADS)
Koyama, Takuya; Kakino, Satoko; Matsuura, Yuji
2018-02-01
A photoacoustic imaging system using a bundle of hollow optical fibers to detect hidden dental caries is proposed. First, we fabricated a hidden caries model with a brown pigment simulating the common color of a caries lesion. It was found that high-frequency ultrasonic waves are generated from the hidden carious part when Nd:YAG laser light with a 532 nm wavelength irradiates the occlusal surface of the model tooth. Fourier analysis showed that the waveform from the carious part contains frequency components from approximately 0.5 to 1.2 MHz. A photoacoustic imaging system using a bundle of hollow optical fibers was then fabricated for clinical applications. From the intensity map of frequency components in the 0.5-1.2 MHz band, photoacoustic images of hidden caries in the simulated samples were successfully obtained.
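The band-limited spectral analysis described above can be sketched on a toy waveform; the 20 MHz sampling rate and the simulated 0.8 MHz burst are assumptions standing in for the measured photoacoustic signal, not the experimental setup:

```python
import numpy as np

# Illustrative sketch: fraction of spectral intensity falling in the
# 0.5-1.2 MHz band reported for carious regions. A Gaussian-windowed
# 0.8 MHz burst stands in for the measured photoacoustic waveform.
fs = 20e6                              # assumed 20 MHz sampling rate
t = np.arange(0, 50e-6, 1 / fs)
burst = np.sin(2 * np.pi * 0.8e6 * t) * np.exp(-((t - 10e-6) / 5e-6) ** 2)

spec = np.abs(np.fft.rfft(burst))
freqs = np.fft.rfftfreq(len(burst), 1 / fs)

# intensity within the 0.5-1.2 MHz band
band = (freqs >= 0.5e6) & (freqs <= 1.2e6)
band_fraction = spec[band].sum() / spec.sum()
print(round(band_fraction, 2))
```

Mapping this band fraction across scan positions yields an intensity map of the kind used to image the hidden lesions.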
On the LHC sensitivity for non-thermalised hidden sectors
NASA Astrophysics Data System (ADS)
Kahlhoefer, Felix
2018-04-01
We show under rather general assumptions that hidden sectors that never reach thermal equilibrium in the early Universe are also inaccessible for the LHC. In other words, any particle that can be produced at the LHC must either have been in thermal equilibrium with the Standard Model at some point or must be produced via the decays of another hidden sector particle that has been in thermal equilibrium. To reach this conclusion, we parametrise the cross section connecting the Standard Model to the hidden sector in a very general way and use methods from linear programming to calculate the largest possible number of LHC events compatible with the requirement of non-thermalisation. We find that even the HL-LHC cannot possibly produce more than a few events with energy above 10 GeV involving states from a non-thermalised hidden sector.
NASA Astrophysics Data System (ADS)
Idris, N. H.; Salim, N. A.; Othman, M. M.; Yasin, Z. M.
2018-03-01
This paper presents an Evolutionary Programming (EP) approach to optimizing the training parameters of an Artificial Neural Network (ANN) for predicting cascading collapse occurrence due to the effect of protection system hidden failures. The data were collected from simulations of a hidden-failure probability model based on historical data. The training parameters of a multilayer feedforward network with backpropagation were optimized with the objective of minimizing the Mean Square Error (MSE). The optimal training parameters, consisting of the momentum rate, the learning rate, and the numbers of neurons in the first and second hidden layers, are selected by the EP-ANN. The IEEE 14-bus system has been tested as a case study to validate the proposed technique. The results show reliable prediction performance, validated through the MSE and the correlation coefficient (R).
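A hedged sketch of an EP loop over the four training parameters (learning rate, momentum, neurons in the two hidden layers): for brevity, a synthetic MSE surface with an arbitrary optimum replaces actual backpropagation training, and the parameter ranges and mutation scales are assumptions:

```python
import numpy as np

# Sketch of evolutionary programming over ANN training parameters
# (learning rate, momentum, neurons in two hidden layers). A synthetic
# MSE surface replaces real network training; the mutate-evaluate-select
# loop is the illustrated technique.
rng = np.random.default_rng(1)

def mse(params):
    lr, mom, n1, n2 = params
    # hypothetical surface with optimum at lr=0.1, mom=0.9, n1=20, n2=10
    return ((lr - 0.1) ** 2 + (mom - 0.9) ** 2
            + 1e-4 * (n1 - 20) ** 2 + 1e-4 * (n2 - 10) ** 2)

lo, hi = [0.01, 0.1, 5, 5], [1.0, 1.0, 50, 50]      # assumed search bounds
pop = rng.uniform(lo, hi, size=(20, 4))
for _ in range(100):
    # Gaussian mutation of every parent (neuron counts rounded when used)
    children = pop + rng.standard_normal(pop.shape) * [0.05, 0.05, 2, 2]
    children = np.clip(children, lo, hi)
    # (mu + lambda) selection: keep the best 20 of parents plus children
    both = np.vstack([pop, children])
    pop = both[np.argsort([mse(p) for p in both])[:20]]

best = pop[0]
print(best)
```

Replacing `mse` with a function that trains the feedforward network on the hidden-failure data and returns its validation MSE recovers the paper's setup.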
Functional level-set derivative for a polymer self consistent field theory Hamiltonian
NASA Astrophysics Data System (ADS)
Ouaknin, Gaddiel; Laachi, Nabil; Bochkov, Daniil; Delaney, Kris; Fredrickson, Glenn H.; Gibou, Frederic
2017-09-01
We derive functional level-set derivatives for the Hamiltonian arising in self-consistent field theory, which are required to solve free boundary problems in the self-assembly of polymeric systems such as block copolymer melts. In particular, we consider Dirichlet, Neumann and Robin boundary conditions. We provide numerical examples that illustrate how these shape derivatives can be used to find equilibrium and metastable structures of block copolymer melts with a free surface in both two and three spatial dimensions.
Image Annotation and Topic Extraction Using Super-Word Latent Dirichlet Allocation
2013-09-01
...an image can be used to improve automated image annotation performance over existing generalized annotators. Second, image annotations can be used... the other variables. The first ratio in the sampling Equation 2.18 uses word frequency within a topic divided by total words in that topic, φ̂(w)_j. The second ratio divides the document's topic counts by the total words in that document, θ̂(d)_j. Both leave out the current assignment of z_i, and the results are used to randomly choose a new topic.
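The two ratios described in the excerpt are the components of the collapsed Gibbs sampling step for standard LDA. A minimal, self-contained sketch on a toy corpus (the hyperparameters α and β and the corpus itself are illustrative assumptions):

```python
import numpy as np

# Collapsed Gibbs sampling step for standard LDA: the product of
# phi-hat (word given topic) and theta-hat (topic given document),
# each computed with the current assignment z_i removed.
rng = np.random.default_rng(0)
K, V, alpha, beta = 3, 8, 0.1, 0.01          # assumed toy hyperparameters
docs = [[0, 1, 2, 1], [3, 4, 3, 5], [6, 7, 6, 0]]

z = [[rng.integers(K) for _ in d] for d in docs]        # topic assignments
nkw = np.zeros((K, V)); ndk = np.zeros((len(docs), K))  # count tables
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        nkw[z[d][i], w] += 1; ndk[d, z[d][i]] += 1

for _ in range(50):                          # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]                      # remove current assignment z_i
            nkw[k, w] -= 1; ndk[d, k] -= 1
            # first ratio: word frequency within each topic
            phi = (nkw[:, w] + beta) / (nkw.sum(axis=1) + V * beta)
            # second ratio: topic counts over total words in the document
            theta = (ndk[d] + alpha) / (ndk[d].sum() + K * alpha)
            p = phi * theta; p /= p.sum()
            k = rng.choice(K, p=p)           # randomly choose a new topic
            z[d][i] = k
            nkw[k, w] += 1; ndk[d, k] += 1

print(nkw.sum(), ndk.sum())   # token counts are conserved across sweeps
```

The super-word extension of the paper modifies what counts as a "word"; the sampling ratios themselves are unchanged.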
Time-Bound Analytic Tasks on Large Data Sets Through Dynamic Configuration of Workflows
2013-11-01
NASA Technical Reports Server (NTRS)
Gelinas, R. J.; Doss, S. K.; Vajk, J. P.; Djomehri, J.; Miller, K.
1983-01-01
The mathematical background of the moving finite element (MFE) method of Miller and Miller (1981) is discussed, taking into account a general system of partial differential equations (PDEs) and the amenability of the MFE method in two dimensions to code modularization and to semiautomatic user construction of numerous PDE systems for both Dirichlet and zero-Neumann boundary conditions. Test problem results are described, with attention to single square wave propagation and a solution of the heat equation.
On the Boussinesq-Burgers equations driven by dynamic boundary conditions
NASA Astrophysics Data System (ADS)
Zhu, Neng; Liu, Zhengrong; Zhao, Kun
2018-02-01
We study the qualitative behavior of the Boussinesq-Burgers equations on a finite interval subject to the Dirichlet type dynamic boundary conditions. Assuming H1 ×H2 initial data which are compatible with boundary conditions and utilizing energy methods, we show that under appropriate conditions on the dynamic boundary data, there exist unique global-in-time solutions to the initial-boundary value problem, and the solutions converge to the boundary data as time goes to infinity, regardless of the magnitude of the initial data.
Quasi-periodic solutions of nonlinear beam equation with prescribed frequencies
NASA Astrophysics Data System (ADS)
Chang, Jing; Gao, Yixian; Li, Yong
2015-05-01
Consider the one-dimensional nonlinear beam equation u_tt + u_xxxx + mu + u^3 = 0 under Dirichlet boundary conditions. We show that for all m > 0 outside a set of small Lebesgue measure, the above equation admits a family of small-amplitude quasi-periodic solutions with n-dimensional Diophantine frequencies. These Diophantine frequencies are small dilations of a prescribed Diophantine vector. The proofs are based on an infinite-dimensional Kolmogorov-Arnold-Moser iteration procedure and a partial Birkhoff normal form.
Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Ditkowski, Adi
1996-01-01
An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.
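The stability property can be checked numerically in a much simpler setting; the sketch below uses a second-order, one-dimensional Dirichlet differentiation matrix rather than the authors' 4th-order penalty construction, which it stands in for:

```python
import numpy as np

# Minimal illustration (second-order, 1-D, not the 4th-order penalty
# construction of the paper): a differentiation matrix for u_xx with
# Dirichlet boundaries whose symmetric part is negative definite. This
# property is what guarantees a bounded-error, asymptotically stable
# semi-discrete solution in time.
n = 50
h = 1.0 / (n + 1)
main = -2.0 * np.ones(n)
off = np.ones(n - 1)
D = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

sym = 0.5 * (D + D.T)                 # symmetric part of the matrix
eigs = np.linalg.eigvalsh(sym)
print(eigs.max())                     # strictly negative
```

Since every eigenvalue of the symmetric part is strictly negative, the energy of the semi-discrete solution u' = D u decays monotonically in time.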
Optimal decay rate for the wave equation on a square with constant damping on a strip
NASA Astrophysics Data System (ADS)
Stahn, Reinhard
2017-04-01
We consider the damped wave equation with Dirichlet boundary conditions on the unit square parametrized by Cartesian coordinates x and y. We assume the damping a to be strictly positive and constant for x<σ and zero for x>σ . We prove the exact t^{-4/3}-decay rate for the energy of classical solutions. Our main result (Theorem 1) answers question (1) of Anantharaman and Léautaud (Anal PDE 7(1):159-214, 2014, Section 2C).
Hidden charged dark matter and chiral dark radiation
NASA Astrophysics Data System (ADS)
Ko, P.; Nagata, Natsumi; Tang, Yong
2017-10-01
In the light of recent possible tensions in the Hubble constant H0 and the structure growth rate σ8 between the Planck and other measurements, we investigate a hidden-charged dark matter (DM) model where DM interacts with hidden chiral fermions, which are charged under the hidden SU(N) and U(1) gauge interactions. The symmetries in this model assure these fermions to be massless. The DM in this model, which is a Dirac fermion and singlet under the hidden SU(N), is also assumed to be charged under the U(1) gauge symmetry, through which it can interact with the chiral fermions. Below the confinement scale of SU(N), the hidden quark condensate spontaneously breaks the U(1) gauge symmetry such that there remains a discrete symmetry, which accounts for the stability of DM. This condensate also breaks a flavor symmetry in this model and Nambu-Goldstone bosons associated with this flavor symmetry appear below the confinement scale. The hidden U(1) gauge boson and hidden quarks/Nambu-Goldstone bosons are components of dark radiation (DR) above/below the confinement scale. These light fields increase the effective number of neutrinos by δNeff ≃ 0.59 above the confinement scale for N = 2, resolving the tension in the measurements of the Hubble constant by Planck and Hubble Space Telescope if the confinement scale is ≲1 eV. DM and DR continuously scatter with each other via the hidden U(1) gauge interaction, which suppresses the matter power spectrum and results in a smaller structure growth rate. The DM sector couples to the Standard Model sector through the exchange of a real singlet scalar mixing with the Higgs boson, which makes it possible to probe our model in DM direct detection experiments. Variants of this model are also discussed, which may offer alternative ways to investigate this scenario.
Hall, Deborah A; Guest, Hannah; Prendergast, Garreth; Plack, Christopher J; Francis, Susan T
2018-01-01
Background: Rodent studies indicate that noise exposure can cause permanent damage to synapses between inner hair cells and high-threshold auditory nerve fibers, without permanently altering threshold sensitivity. These demonstrations of what is commonly known as hidden hearing loss have been confirmed in several rodent species, but the implications for human hearing are unclear. Objective: Our Medical Research Council–funded program aims to address this unanswered question, by investigating functional consequences of the damage to the human peripheral and central auditory nervous system that results from cumulative lifetime noise exposure. Behavioral and neuroimaging techniques are being used in a series of parallel studies aimed at detecting hidden hearing loss in humans. The planned neuroimaging study aims to (1) identify central auditory biomarkers associated with hidden hearing loss; (2) investigate whether there are any additive contributions from tinnitus or diminished sound tolerance, which are often comorbid with hearing problems; and (3) explore the relation between subcortical functional magnetic resonance imaging (fMRI) measures and the auditory brainstem response (ABR). Methods: Individuals aged 25 to 40 years with pure tone hearing thresholds ≤20 dB hearing level over the range 500 Hz to 8 kHz and no contraindications for MRI or signs of ear disease will be recruited into the study. Lifetime noise exposure will be estimated using an in-depth structured interview. Auditory responses throughout the central auditory system will be recorded using ABR and fMRI. Analyses will focus predominantly on correlations between lifetime noise exposure and auditory response characteristics. Results: This paper reports the study protocol. The funding was awarded in July 2013. Enrollment for the study described in this protocol commenced in February 2017 and was completed in December 2017. Results are expected in 2018.
Conclusions: This challenging and comprehensive study will have the potential to impact diagnostic procedures for hidden hearing loss, enabling early identification of noise-induced auditory damage via the detection of changes in central auditory processing. Consequently, this will generate the opportunity to give personalized advice regarding provision of ear defense and monitoring of further damage, thus reducing the incidence of noise-induced hearing loss. PMID:29523503
Zhao, Zhibiao
2011-06-01
We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within this envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous-time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
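A hypothetical sketch of the specification idea on an AR(1) series: a nonparametric kernel estimate of the one-step transition density is compared with its parametric Gaussian counterpart on a few grid points. The bandwidth, series length, and grid are assumptions, and the simultaneous envelope of the paper is reduced here to pointwise absolute differences:

```python
import numpy as np

# Compare a kernel estimate of the transition density p(y_t | y_{t-1})
# of an AR(1) process with the parametric Gaussian transition density.
rng = np.random.default_rng(0)
rho, sigma = 0.6, 1.0
y = np.zeros(5000)
for t in range(1, len(y)):
    y[t] = rho * y[t - 1] + sigma * rng.standard_normal()

x_prev, x_next = y[:-1], y[1:]
h = 0.3                                    # assumed kernel bandwidth

def kde_transition(x0, y0):
    # Nadaraya-Watson style estimate of the conditional density
    wx = np.exp(-0.5 * ((x_prev - x0) / h) ** 2)
    ky = np.exp(-0.5 * ((x_next - y0) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return (wx * ky).sum() / wx.sum()

def parametric(x0, y0):
    # true transition density: y_t | y_{t-1}=x0 ~ N(rho*x0, sigma^2)
    return np.exp(-0.5 * ((y0 - rho * x0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

grid = [(-1.0, -0.6), (0.0, 0.0), (1.0, 0.6)]
errs = [abs(kde_transition(a, b) - parametric(a, b)) for a, b in grid]
print(max(errs))
```

When the parametric model is correctly specified, as here, the nonparametric estimate stays close to it everywhere; a misspecified model would exit the confidence envelope.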
State Space Model with hidden variables for reconstruction of gene regulatory networks.
Wu, Xi; Li, Peng; Wang, Nan; Gong, Ping; Perkins, Edward J; Deng, Youping; Zhang, Chaoyang
2011-01-01
State Space Model (SSM) is a relatively new approach to inferring gene regulatory networks. It requires less computational time than Dynamic Bayesian Networks (DBN). There are two types of variables in the linear SSM, observed variables and hidden variables. SSM uses an iterative method, namely Expectation-Maximization, to infer regulatory relationships from microarray datasets. The hidden variables cannot be directly observed from experiments. How to determine the number of hidden variables has a significant impact on the accuracy of network inference. In this study, we used SSM to infer gene regulatory networks (GRNs) from synthetic time series datasets, investigated Bayesian Information Criterion (BIC) and Principal Component Analysis (PCA) approaches to determining the number of hidden variables in SSM, and evaluated the performance of SSM in comparison with DBN. True GRNs and synthetic gene expression datasets were generated using GeneNetWeaver. Both DBN and linear SSM were used to infer GRNs from the synthetic datasets. The inferred networks were compared with the true networks. Our results show that inference precision varied with the number of hidden variables. For some regulatory networks, the inference precision of DBN was higher but SSM performed better in other cases. Although the overall performance of the two approaches is comparable, SSM is much faster and capable of inferring much larger networks than DBN. This study provides useful information in handling the hidden variables and improving the inference precision.
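The PCA approach to choosing the number of hidden variables can be sketched as follows; the synthetic data generator and the 95% explained-variance threshold are assumptions for illustration, not the paper's exact criterion:

```python
import numpy as np

# PCA heuristic for the number of hidden variables: generate
# expression-like data driven by 3 latent factors, then count the
# principal components needed to explain 95% of the variance.
rng = np.random.default_rng(0)
n_genes, n_samples, n_latent = 30, 100, 3

latent = rng.standard_normal((n_latent, n_samples))
loadings = rng.standard_normal((n_genes, n_latent))
data = loadings @ latent + 0.1 * rng.standard_normal((n_genes, n_samples))

# PCA via SVD of the centered data matrix
centered = data - data.mean(axis=1, keepdims=True)
s = np.linalg.svd(centered, compute_uv=False)
explained = np.cumsum(s**2) / np.sum(s**2)

# smallest number of components whose cumulative share reaches 95%
n_hidden = int(np.argmax(explained >= 0.95)) + 1
print(n_hidden)
```

The BIC alternative discussed in the paper would instead fit the SSM for a range of hidden dimensions and pick the one minimizing the criterion.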
1997-11-01
The goal of the ELF investigation is to improve our fundamental understanding of the effects of the flow environment on flame stability. The flame's stability refers to the position of its base and ultimately its continued existence. Combustion research focuses on understanding the important hidden processes of ignition, flame spread, and flame extinction. Understanding these processes will directly affect the efficiency of combustion operations in converting chemical energy to heat, and will create a more balanced ecology and healthy environment by reducing pollutants emitted during combustion.