Constructing Weyl group multiple Dirichlet series
NASA Astrophysics Data System (ADS)
Chinta, Gautam; Gunnells, Paul E.
2010-01-01
Let Φ be a reduced root system of rank r. A Weyl group multiple Dirichlet series for Φ is a Dirichlet series in r complex variables s_1, ..., s_r, initially converging for Re(s_i) sufficiently large, that has meromorphic continuation to C^r and satisfies functional equations under the transformations of C^r corresponding to the Weyl group of Φ. A heuristic definition of such a series was given by Brubaker, Bump, Chinta, Friedberg, and Hoffstein, and such series have been investigated in certain special cases by others. In this paper we generalize results of Chinta and Gunnells to construct Weyl group multiple Dirichlet series by a uniform method and show in all cases that they have the expected properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plotnikov, Mikhail G
2011-02-11
Multiple Walsh series (S) on the group G{sup m} are studied. It is proved that every at most countable set is a uniqueness set for series (S) under convergence over cubes. The recovery problem is solved for the coefficients of series (S) that converge outside countable sets or outside sets of Dirichlet type. A number of analogues of the de la Vallee Poussin theorem are established for series (S). Bibliography: 28 titles.
Spectral multigrid methods for elliptic equations 2
NASA Technical Reports Server (NTRS)
Zang, T. A.; Wong, Y. S.; Hussaini, M. Y.
1983-01-01
A detailed description of spectral multigrid methods is provided. This includes the interpolation and coarse-grid operators for both periodic and Dirichlet problems. The spectral methods for periodic problems use Fourier series and those for Dirichlet problems are based upon Chebyshev polynomials. An improved preconditioning for Dirichlet problems is given. Numerical examples and practical advice are included.
Generalized Riemann hypothesis and stochastic time series
NASA Astrophysics Data System (ADS)
Mussardo, Giuseppe; LeClair, André
2018-06-01
Using the Dirichlet theorem on the equidistribution of residue classes modulo q and the Lemke Oliver-Soundararajan conjecture on the distribution of pairs of residues of consecutive primes, we show that the domain of convergence of the infinite product of Dirichlet L-functions of non-principal characters can be extended from Re(s) > 1 down to Re(s) > 1/2, without encountering any zeros before reaching this critical line. The possibility of doing so can be traced back to a universal diffusive random-walk behavior of a series C_N over the primes, which underlies the convergence of the infinite product of the Dirichlet functions. The series C_N shares several features with stochastic time series, and controlling it requires addressing a problem similar to the single-Brownian-trajectory problem in statistical mechanics. In the case of the Dirichlet functions of non-principal characters, we show that this problem can be solved by a self-averaging procedure based on an ensemble of block variables computed on extended intervals of primes. These intervals, called inertial intervals, ensure the ergodicity and stationarity of the time series underlying the quantity C_N. The infinity of primes also ensures the absence of rare events, which would otherwise have been responsible for a scaling behavior different from the universal law of random walks.
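The diffusive random-walk behavior of character sums over primes mentioned above can be illustrated with a toy sketch (this is not the paper's exact series C_N, just the analogous sum of a non-principal character mod 4 over primes): the walk's excursions grow far more slowly than the number of primes, as a diffusive process would.

```python
def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, n + 1) if sieve[p]]

def chi4(p):
    """Non-principal Dirichlet character mod 4: +1 if p = 1 (mod 4), -1 if p = 3 (mod 4)."""
    return {1: 1, 3: -1}.get(p % 4, 0)

ps = primes_upto(100_000)
walk = sum(chi4(p) for p in ps)   # random-walk-like character sum over primes
```

With roughly 9,600 primes below 10^5, the net displacement of the walk is only a few tens, consistent with square-root (diffusive) rather than linear growth.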
Spectral decompositions of multiple time series: a Bayesian non-parametric approach.
Macaro, Christian; Prado, Raquel
2014-01-01
We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated with the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important, as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.
A Stochastic Diffusion Process for the Dirichlet Distribution
Bakosi, J.; Ristorcelli, J. R.
2013-03-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
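The univariate (beta-invariant) case can be sketched with a short Euler-Maruyama simulation. The drift/diffusion form dX = (b/2)(S − X) dt + √(κ X(1−X)) dW and the parameter values below are illustrative assumptions, not the paper's exact specification; the sample mean of the ensemble should settle near S.

```python
import math
import random

random.seed(42)

# Assumed SDE: dX = (b/2)(S - X) dt + sqrt(kappa * X * (1 - X)) dW,
# whose invariant is taken to be Beta(b*S/kappa, b*(1-S)/kappa).
b, kappa, S = 2.0, 1.0, 0.3      # hypothetical parameters
dt, steps, n = 0.01, 1000, 2000  # time step, steps per path, ensemble size

samples = []
for _ in range(n):
    x = 0.5
    for _ in range(steps):
        dw = random.gauss(0.0, math.sqrt(dt))
        x += 0.5 * b * (S - x) * dt + math.sqrt(max(kappa * x * (1 - x), 0.0)) * dw
        x = min(max(x, 0.0), 1.0)  # numerical guard; the exact process stays in [0, 1]
    samples.append(x)

mean = sum(samples) / n           # should be close to S = 0.3
```

The clipping line is a discretization guard only: the multiplicative noise coefficient vanishing at 0 and 1 is what bounds the exact process, as the abstract emphasizes.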
ERIC Educational Resources Information Center
Brilleslyper, Michael A.; Wolverton, Robert H.
2008-01-01
In this article we consider an example suitable for investigation in many mid- and upper-level undergraduate mathematics courses. Fourier series provide an excellent example of the differences between uniform and non-uniform convergence. We use Dirichlet's test to investigate the convergence of the Fourier series for a simple periodic sawtooth…
A Meinardus Theorem with Multiple Singularities
NASA Astrophysics Data System (ADS)
Granovsky, Boris L.; Stark, Dudley
2012-09-01
Meinardus proved a general theorem about the asymptotics of the number of weighted partitions, when the Dirichlet generating function for weights has a single pole on the positive real axis. Continuing (Granovsky et al., Adv. Appl. Math. 41:307-328, 2008), we derive asymptotics for the numbers of three basic types of decomposable combinatorial structures (or, equivalently, ideal gas models in statistical mechanics) of size n, when their Dirichlet generating functions have multiple simple poles on the positive real axis. Examples to which our theorem applies include ones related to vector partitions and quantum field theory. Our asymptotic formula for the number of weighted partitions disproves the belief accepted in the physics literature that the main term in the asymptotics is determined by the rightmost pole.
Hu, Weiming; Tian, Guodong; Kang, Yongxin; Yuan, Chunfeng; Maybank, Stephen
2017-09-25
In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics in an analogy with topic models for document processing), but have unique transition distributions. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. On combining the learnt sources and sinks, semantic motion regions, and the learnt sequence of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene.
Three-dimensional analytical solutions of the atmospheric diffusion equation with multiple sources and height-dependent wind speed and eddy diffusivities are derived in a systematic fashion. For homogeneous Neumann (total reflection), Dirichlet (total adsorpti...
Multiple Positive Solutions in the Second Order Autonomous Nonlinear Boundary Value Problems
NASA Astrophysics Data System (ADS)
Atslega, Svetlana; Sadyrbaev, Felix
2009-09-01
We construct second-order autonomous equations with an arbitrarily large number of positive solutions satisfying homogeneous Dirichlet boundary conditions. The phase plane approach and bifurcation of solutions are the main tools.
Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation.
Chen, Chao; Zare, Alina; Trinh, Huy N; Omotara, Gbenga O; Cobb, James Tory; Lagaunne, Timotius A
2017-12-01
Topic models [e.g., probabilistic latent semantic analysis, latent Dirichlet allocation (LDA), and supervised LDA] have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership LDA (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery, where a visual word may be a mixture of multiple topics. Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations; a capability previous topic modeling methods do not have.
Briggs, Andrew H; Ades, A E; Price, Martin J
2003-01-01
In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes the use of probabilistic sensitivity analysis, and a method is required to provide probability distributions over multiple branches that appropriately represent uncertainty while satisfying the requirement that mutually exclusive event probabilities sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
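A minimal sketch of the idea: draw one row of a Markov transition matrix from a Dirichlet posterior (observed counts plus a hypothetical uniform prior), using the standard Gamma-normalization construction. The draw always sums to 1, and a transition with an observed zero count still receives nonzero probability.

```python
import random

random.seed(1)

def dirichlet(alphas):
    """Sample a Dirichlet vector by normalizing independent Gamma draws."""
    g = [random.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

counts = [50, 10, 0]             # observed transitions out of one state; note the zero
prior = [1.0, 1.0, 1.0]          # hypothetical uniform Dirichlet prior

# One posterior draw of the branching probabilities for this chance node.
row = dirichlet([c + a for c, a in zip(counts, prior)])
```

Repeating the draw inside a probabilistic sensitivity analysis loop gives a fully probabilistic transition matrix row by row, with the sum-to-1 constraint holding by construction.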
NASA Astrophysics Data System (ADS)
Chang, Ya-Chi; Yeh, Hund-Der
2010-06-01
Constant-head pumping tests are usually employed to determine aquifer parameters, and they can be performed in fully or partially penetrating wells. Generally, the Dirichlet condition is prescribed along the well screen and the Neumann-type no-flow condition is specified over the unscreened part of the test well. The mathematical model describing the aquifer response to a constant-head test performed in a fully penetrating well can be easily solved by the conventional integral transform technique under the uniform Dirichlet-type condition along the rim of the wellbore. However, the boundary condition for a test well with partial penetration should be considered as a mixed-type condition. This mixed boundary value problem in a confined aquifer system of infinite radial extent and finite vertical extent is solved by the Laplace and finite Fourier transforms in conjunction with the triple series equations method. This approach provides analytical results for the drawdown in a partially penetrating well for an arbitrary location of the well screen in a finite-thickness aquifer. The semi-analytical solutions are particularly useful in practical applications from a computational point of view.
Nonlocal Reformulations of Water and Internal Waves and Asymptotic Reductions
NASA Astrophysics Data System (ADS)
Ablowitz, Mark J.
2009-09-01
Nonlocal reformulations of the classical equations of water waves and two ideal fluids separated by a free interface, bounded above by either a rigid lid or a free surface, are obtained. The kinematic equations may be written in terms of integral equations with a free parameter. By expressing the pressure, or Bernoulli, equation in terms of the surface/interface variables, a closed system is obtained. An advantage of this formulation, referred to as the nonlocal spectral (NSP) formulation, is that the vertical component is eliminated, thus reducing the dimensionality and fixing the domain in which the equations are posed. The NSP equations and the Dirichlet-Neumann operators associated with the water wave or two-fluid equations can be related to each other and the Dirichlet-Neumann series can be obtained from the NSP equations. Important asymptotic reductions obtained from the two-fluid nonlocal system include the generalizations of the Benney-Luke and Kadomtsev-Petviashvili (KP) equations, referred to as intermediate-long wave (ILW) generalizations. These 2+1 dimensional equations possess lump type solutions. In the water wave problem high-order asymptotic series are obtained for two and three dimensional gravity-capillary solitary waves. In two dimensions, the first term in the asymptotic series is the well-known hyperbolic secant squared solution of the KdV equation; in three dimensions, the first term is the rational lump solution of the KP equation.
Meta-analysis using Dirichlet process.
Muthukumarana, Saman; Tiwari, Ram C
2016-02-01
This article develops a Bayesian approach for meta-analysis using the Dirichlet process. The key aspect of the Dirichlet process in meta-analysis is the ability to assess evidence of statistical heterogeneity, or variation in the underlying effects across studies, while relaxing the distributional assumptions. We assume that the study effects are generated from a Dirichlet process. Under a Dirichlet process model, the study effect parameters have support on a discrete space and enable borrowing of information across studies while facilitating clustering among studies. We illustrate the proposed method by applying it to a dataset from the Program for International Student Assessment on 30 countries. Results from the data analysis, simulation studies, and the log pseudo-marginal likelihood model selection procedure indicate that the Dirichlet process model performs better than conventional alternative methods. © The Author(s) 2012.
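The discrete support and clustering behavior described above can be sketched with the Polya-urn view of the Dirichlet process: each new study effect either copies an earlier one (joining its cluster) or is a fresh draw from the base measure. The base measure and concentration below are illustrative assumptions.

```python
import random

random.seed(7)

def dp_sample(n, alpha, base):
    """Draw n values from a Dirichlet process via the Polya-urn scheme:
    a new draw equals an earlier one with probability proportional to its
    count, or is a fresh draw from the base measure with probability
    proportional to alpha."""
    draws = []
    for _ in range(n):
        if draws and random.random() < len(draws) / (len(draws) + alpha):
            draws.append(random.choice(draws))   # join an existing cluster
        else:
            draws.append(base())                 # open a new cluster
    return draws

# Hypothetical setup: 30 study effects, concentration 2, standard-normal base.
effects = dp_sample(30, alpha=2.0, base=lambda: random.gauss(0.0, 1.0))
clusters = len(set(effects))                     # far fewer clusters than studies
```

Because repeated values are drawn with positive probability, the 30 "studies" share a small number of distinct effect values, which is exactly the borrowing-of-information-with-clustering behavior the abstract describes.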
A stochastic diffusion process for Lochner's generalized Dirichlet distribution
Bakosi, J.; Ristorcelli, J. R.
2013-10-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner's generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed previously for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled, with a more general covariance matrix.
Quantum "violation" of Dirichlet boundary condition
NASA Astrophysics Data System (ADS)
Park, I. Y.
2017-02-01
Dirichlet boundary conditions have been widely used in general relativity. They seem at odds with the holographic property of gravity simply because a boundary configuration can be varying and dynamic instead of dying out as required by the conditions. In this work we report what should be a tension between the Dirichlet boundary conditions and quantum gravitational effects, and show that a quantum-corrected black hole solution of the 1PI action no longer obeys, in the naive manner one might expect, the Dirichlet boundary conditions imposed at the classical level. We attribute the 'violation' of the Dirichlet boundary conditions to a certain mechanism of information storage on the boundary.
Bayesian correlated clustering to integrate multiple datasets
Kirk, Paul; Griffin, Jim E.; Savage, Richard S.; Ghahramani, Zoubin; Wild, David L.
2012-01-01
Motivation: The integration of multiple datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct—but often complementary—information. We present a Bayesian method for the unsupervised integrative modelling of multiple datasets, which we refer to as MDI (Multiple Dataset Integration). MDI can integrate information from a wide range of different datasets and data types simultaneously (including the ability to model time series data explicitly using Gaussian processes). Each dataset is modelled using a Dirichlet-multinomial allocation (DMA) mixture model, with dependencies between these models captured through parameters that describe the agreement among the datasets. Results: Using a set of six artificially constructed time series datasets, we show that MDI is able to integrate a significant number of datasets simultaneously, and that it successfully captures the underlying structural similarity between the datasets. We also analyse a variety of real Saccharomyces cerevisiae datasets. In the two-dataset case, we show that MDI’s performance is comparable with the present state-of-the-art. We then move beyond the capabilities of current approaches and integrate gene expression, chromatin immunoprecipitation–chip and protein–protein interaction data, to identify a set of protein complexes for which genes are co-regulated during the cell cycle. Comparisons to other unsupervised data integration techniques—as well as to non-integrative approaches—demonstrate that MDI is competitive, while also providing information that would be difficult or impossible to extract using other methods. Availability: A Matlab implementation of MDI is available from http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/. Contact: D.L.Wild@warwick.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23047558
USING DIRICHLET TESSELLATION TO HELP ESTIMATE MICROBIAL BIOMASS CONCENTRATIONS
Dirichlet tessellation was applied to estimate microbial concentrations from microscope well slides. The use of microscopy/Dirichlet tessellation to quantify biomass was illustrated with two species of morphologically distinct cyanobacteria, and validated empirically by compariso...
Study on monostable and bistable reaction-diffusion equations by iteration of travelling wave maps
NASA Astrophysics Data System (ADS)
Yi, Taishan; Chen, Yuming
2017-12-01
In this paper, based on the iterative properties of travelling wave maps, we develop a new method to obtain spreading speeds and asymptotic propagation for monostable and bistable reaction-diffusion equations. Precisely, for Dirichlet problems of monostable reaction-diffusion equations on the half line, by making links between travelling wave maps and integral operators associated with the Dirichlet diffusion kernel (the latter is not invariant under translation), we obtain some iteration properties of the Dirichlet diffusion and some a priori estimates on nontrivial solutions of Dirichlet problems under the travelling wave transformation. We then provide the asymptotic behavior of nontrivial solutions in the space-time region for Dirichlet problems. These results enable us to develop a unified method to obtain heterogeneous steady states, travelling waves, spreading speeds, and asymptotic spreading behavior for Dirichlet problems of monostable reaction-diffusion equations on R+ as well as for monostable/bistable reaction-diffusion equations on R.
Feature extraction for document text using Latent Dirichlet Allocation
NASA Astrophysics Data System (ADS)
Prihatini, P. M.; Suryawan, I. K.; Mandia, IN
2018-01-01
Feature extraction is one of the stages in an information retrieval system, used to extract the distinctive feature values of a text document. Feature extraction can be done by several methods, one of which is Latent Dirichlet Allocation. However, research on text feature extraction using the Latent Dirichlet Allocation method is rarely found for Indonesian text. Therefore, in this research, text feature extraction is implemented for Indonesian text. The research method consists of data acquisition, text pre-processing, initialization, topic sampling and evaluation. The evaluation is done by comparing the Precision, Recall and F-Measure values of Latent Dirichlet Allocation against Term Frequency-Inverse Document Frequency with K-Means, which is commonly used for feature extraction. The evaluation results show that the Precision, Recall and F-Measure values of the Latent Dirichlet Allocation method are higher than those of the Term Frequency-Inverse Document Frequency with K-Means method. This shows that the Latent Dirichlet Allocation method is able to extract features and cluster Indonesian text better than the Term Frequency-Inverse Document Frequency with K-Means method.
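The three evaluation measures used to compare the two methods have the standard definitions below; the relevant/retrieved sets are made-up toy data, not the paper's corpus.

```python
def prf(relevant, retrieved):
    """Precision, recall and F-measure for a retrieved set against a relevant set."""
    tp = len(set(relevant) & set(retrieved))                  # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f

# Toy example: 4 relevant documents, 3 retrieved, 2 of them correct.
p, r, f = prf(relevant={1, 2, 3, 4}, retrieved={2, 3, 5})
# p = 2/3, r = 1/2, f = 4/7
```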
New solutions to the constant-head test performed at a partially penetrating well
NASA Astrophysics Data System (ADS)
Chang, Y. C.; Yeh, H. D.
2009-05-01
The mathematical model describing the aquifer response to a constant-head test performed at a fully penetrating well can be easily solved by the conventional integral transform technique. In addition, the Dirichlet-type condition should be chosen as the boundary condition along the rim of the wellbore for such a test well. However, the boundary condition for a test well with partial penetration must be considered as a mixed-type condition. Generally, the Dirichlet condition is prescribed along the well screen and the Neumann-type no-flow condition is specified over the unscreened part of the test well. The model for such a mixed boundary problem in a confined aquifer system of infinite radial extent and finite vertical extent is solved by the dual series equations and a perturbation method. This approach provides analytical results for the drawdown in the partially penetrating well and the well discharge along the screen. The semi-analytical solutions are particularly useful in practical applications from a computational point of view.
Quantum Gravitational Effects on the Boundary
NASA Astrophysics Data System (ADS)
James, F.; Park, I. Y.
2018-04-01
Quantum gravitational effects might hold the key to some of the outstanding problems in theoretical physics. We analyze the perturbative quantum effects on the boundary of a gravitational system and the Dirichlet boundary condition imposed at the classical level. Our analysis reveals that for a black hole solution there is a contradiction between the quantum effects and the Dirichlet boundary condition: the black hole solution of the one-particle-irreducible action no longer satisfies the Dirichlet boundary condition in the naive manner one might expect. The analysis also suggests that the tension between the Dirichlet boundary condition and loop effects is connected with a certain mechanism of information storage on the boundary.
An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.
Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei
2013-05-01
Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
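The incremental, no-retraining clustering idea can be sketched with a DP-means-style simplification (after Kulis and Jordan); this is a stand-in for, not the paper's, DPMM algorithm. A point farther than a penalty lam from every existing centroid opens a new cluster online; otherwise the matched centroid is updated as a running mean.

```python
import math

def incremental_cluster(points, lam):
    """Online DP-means-style clustering: new clusters appear automatically,
    and old data are never revisited."""
    centroids, counts, labels = [], [], []
    for p in points:
        if centroids:
            d, k = min((math.dist(p, c), i) for i, c in enumerate(centroids))
        else:
            d, k = float("inf"), -1
        if d > lam:                       # too far from everything: new cluster
            centroids.append(list(p))
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:                             # running-mean update of the match
            counts[k] += 1
            centroids[k] = [ci + (pi - ci) / counts[k]
                            for ci, pi in zip(centroids[k], p)]
            labels.append(k)
    return labels, centroids

# Toy 2-D "trajectory features": two tight groups and one outlier.
pts = [(0, 0), (0.1, 0.1), (5, 5), (5.1, 4.9), (10, 0)]
labels, cents = incremental_cluster(pts, lam=2.0)
# labels → [0, 0, 1, 1, 2]
```

As in the abstract, the number of clusters is not fixed in advance: arriving points that fit no existing cluster are added to the model without retraining on previous data.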
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mejri, Youssef, E-mail: josef-bizert@hotmail.fr; Dép. des Mathématiques, Faculté des Sciences de Bizerte, 7021 Jarzouna; Laboratoire de Modélisation Mathématique et Numérique dans les Sciences de l’Ingénieur, ENIT BP 37, Le Belvedere, 1002 Tunis
In this article, we study the boundary inverse problem of determining the aligned magnetic field appearing in the magnetic Schrödinger equation in a periodic quantum cylindrical waveguide, by knowledge of the Dirichlet-to-Neumann map. We prove a Hölder stability estimate with respect to the Dirichlet-to-Neumann map, by means of the geometrical optics solutions of the magnetic Schrödinger equation.
The impact of the rate prior on Bayesian estimation of divergence times with multiple Loci.
Dos Reis, Mario; Zhu, Tianqi; Yang, Ziheng
2014-07-01
Bayesian methods provide a powerful way to estimate species divergence times by combining information from molecular sequences with information from the fossil record. With the explosive increase of genomic data, divergence time estimation increasingly uses data of multiple loci (genes or site partitions). Widely used computer programs to estimate divergence times use independent and identically distributed (i.i.d.) priors on the substitution rates for different loci. The i.i.d. prior is problematic. As the number of loci (L) increases, the prior variance of the average rate across all loci goes to zero at the rate 1/L. As a consequence, the rate prior dominates posterior time estimates when many loci are analyzed, and if the rate prior is misspecified, the estimated divergence times will converge to wrong values with very narrow credibility intervals. Here we develop a new prior on the locus rates based on the Dirichlet distribution that corrects the problematic behavior of the i.i.d. prior. We use computer simulation and real data analysis to highlight the differences between the old and new priors. For a dataset for six primate species, we show that with the old i.i.d. prior, if the prior rate is too high (or too low), the estimated divergence times are too young (or too old), outside the bounds imposed by the fossil calibrations. In contrast, with the new Dirichlet prior, posterior time estimates are insensitive to the rate prior and are compatible with the fossil calibrations. We re-analyzed a phylogenomic data set of 36 mammal species and show that using many fossil calibrations can alleviate the adverse impact of a misspecified rate prior to some extent. We recommend the use of the new Dirichlet prior in Bayesian divergence time estimation. [Bayesian inference, divergence time, relaxed clock, rate prior, partition analysis.]. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
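The shrinking-variance problem with the i.i.d. rate prior, and how a Dirichlet-style prior avoids it, can be checked numerically. The Gamma(2, 0.5) base prior and the concentration below are illustrative assumptions: under the i.i.d. prior the variance of the average rate falls like 1/L, while drawing one mean rate and splitting it across loci with a Dirichlet vector keeps that variance fixed.

```python
import random
import statistics

random.seed(3)

def avg_rate_iid(L):
    # Old-style prior: i.i.d. Gamma(shape=2, scale=0.5) on each locus rate.
    return statistics.mean(random.gammavariate(2.0, 0.5) for _ in range(L))

def avg_rate_dirichlet(L):
    # New-style prior: one draw for the mean rate, a Dirichlet split across loci.
    mu = random.gammavariate(2.0, 0.5)
    g = [random.gammavariate(1.0, 1.0) for _ in range(L)]
    s = sum(g)
    rates = [L * mu * x / s for x in g]
    return sum(rates) / L                 # equals mu by construction

n = 4000
v_iid_2 = statistics.variance(avg_rate_iid(2) for _ in range(n))
v_iid_50 = statistics.variance(avg_rate_iid(50) for _ in range(n))
v_dir_50 = statistics.variance(avg_rate_dirichlet(50) for _ in range(n))
# v_iid_50 is ~25x smaller than v_iid_2 (the 1/L collapse the abstract warns
# about), while v_dir_50 stays at Var(mu) regardless of L.
```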
Multispike solutions for the Brezis-Nirenberg problem in dimension three
NASA Astrophysics Data System (ADS)
Musso, Monica; Salazar, Dora
2018-06-01
We consider the problem Δu + λu + u^5 = 0, u > 0, in a smooth bounded domain Ω in R^3, under zero Dirichlet boundary conditions. We obtain solutions to this problem exhibiting multiple bubbling behavior at k different points of the domain as λ tends to a special positive value λ_0, which we characterize in terms of the Green function of −Δ − λ.
Visibility of quantum graph spectrum from the vertices
NASA Astrophysics Data System (ADS)
Kühn, Christian; Rohleder, Jonathan
2018-03-01
We investigate the relation between the eigenvalues of the Laplacian with Kirchhoff vertex conditions on a finite metric graph and a corresponding Titchmarsh-Weyl function (a parameter-dependent Neumann-to-Dirichlet map). We give a complete description of all real resonances, including multiplicities, in terms of the edge lengths and the connectivity of the graph, and apply it to characterize all eigenvalues which are visible for the Titchmarsh-Weyl function.
Bounded solutions in a T-shaped waveguide and the spectral properties of the Dirichlet ladder
NASA Astrophysics Data System (ADS)
Nazarov, S. A.
2014-08-01
The Dirichlet problem is considered on the junction of thin quantum waveguides (of thickness h ≪ 1) in the shape of an infinite two-dimensional ladder. Passage to the limit as h → +0 is discussed. It is shown that the asymptotically correct transmission conditions at the nodes of the corresponding one-dimensional quantum graph are Dirichlet conditions rather than the conventional Kirchhoff transmission conditions. The result is obtained by analyzing bounded solutions of a problem in the T-shaped waveguide that exhibit the boundary layer phenomenon.
General stability of memory-type thermoelastic Timoshenko beam acting on shear force
NASA Astrophysics Data System (ADS)
Apalara, Tijani A.
2018-03-01
In this paper, we consider a linear thermoelastic Timoshenko system with memory effects in which the thermoelastic coupling acts on the shear force, under Neumann-Dirichlet-Dirichlet boundary conditions. The same system with fully Dirichlet boundary conditions was considered by Messaoudi and Fareh (Nonlinear Anal TMA 74(18):6895-6906, 2011; Acta Math Sci 33(1):23-40, 2013), who obtained a general stability result that depends on the speeds of wave propagation. In our case, we obtain a general stability result irrespective of the wave speeds of the system.
Synthesis and X-ray Crystallography of [Mg(H2O)6][AnO2(C2H5COO)3]2 (An = U, Np, or Pu).
Serezhkin, Viktor N; Grigoriev, Mikhail S; Abdulmyanov, Aleksey R; Fedoseev, Aleksandr M; Savchenkov, Anton V; Serezhkina, Larisa B
2016-08-01
Synthesis and X-ray crystallography of single crystals of [Mg(H2O)6][AnO2(C2H5COO)3]2, where An = U (I), Np (II), or Pu (III), are reported. Compounds I-III are isostructural and crystallize in the trigonal crystal system. The structures of I-III are built of hydrated magnesium cations [Mg(H2O)6](2+) and mononuclear [AnO2(C2H5COO)3](-) complexes, which belong to the AB(01)3 crystallochemical group of uranyl complexes (A = AnO2(2+), B(01) = C2H5COO(-)). Peculiarities of intermolecular interactions in the structures of [Mg(H2O)6][UO2(L)3]2 complexes depending on the carboxylate ion L (acetate, propionate, or n-butyrate) are investigated using the method of molecular Voronoi-Dirichlet polyhedra. Actinide contraction in the series U(VI)-Np(VI)-Pu(VI) in compounds I-III is reflected in a decrease in the mean An=O bond lengths and in the volume and sphericity degree of the Voronoi-Dirichlet polyhedra of the An atoms.
A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times
Heath, Tracy A.
2012-01-01
In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
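The clustering behavior induced by a Dirichlet process hyperprior can be sketched with the Chinese restaurant process representation, under which calibrated nodes share one rate parameter whenever they fall in the same cluster. The following Python sketch is illustrative only; the base measure and concentration value are hypothetical, not taken from the paper:

```python
import random

def crp_cluster_rates(n_nodes, alpha, base_sampler, rng):
    """Assign exponential-rate parameters to n_nodes calibrated nodes
    under a Dirichlet process prior, via the Chinese restaurant
    process: nodes in the same cluster share one rate drawn from G0."""
    tables, counts, rates = [], [], []
    for i in range(n_nodes):
        # join existing cluster j with prob counts[j]/(i + alpha),
        # open a new cluster with prob alpha/(i + alpha)
        r = rng.random() * (i + alpha)
        cum = 0.0
        for j, c in enumerate(counts):
            cum += c
            if r < cum:
                counts[j] += 1
                rates.append(tables[j])
                break
        else:
            new_rate = base_sampler(rng)   # draw from the base measure G0
            tables.append(new_rate)
            counts.append(1)
            rates.append(new_rate)
    return rates

rng = random.Random(1)
# hypothetical base measure G0: exponential with mean 10
rates = crp_cluster_rates(20, alpha=1.0,
                          base_sampler=lambda g: g.expovariate(0.1), rng=rng)
print(len(set(rates)), "distinct rate categories among", len(rates), "nodes")
```

Smaller values of alpha concentrate the nodes into fewer shared rate categories, which is exactly the clustering the hyperprior is meant to capture.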
Application of the perfectly matched layer in 3-D marine controlled-source electromagnetic modelling
NASA Astrophysics Data System (ADS)
Li, Gang; Li, Yuguo; Han, Bo; Liu, Zhan
2018-01-01
In this study, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretched Cartesian coordinates is successfully applied to 3-D frequency-domain marine controlled-source electromagnetic (CSEM) field modelling. The Dirichlet boundary, which is usually used within the traditional framework of EM modelling algorithms, assumes that the electric or magnetic field values are zero at the boundaries. This requires the boundaries to be sufficiently far away from the area of interest. To mitigate the boundary artefacts, a large modelling area may be necessary, even though cell sizes are allowed to grow toward the boundaries owing to the diffusive nature of electromagnetic wave propagation. Compared with the conventional Dirichlet boundary, the PML boundary is preferable because the modelling area can be restricted to the target region: only a few surrounding absorbing layers effectively suppress the artificial boundary effect without loss of numerical accuracy. Furthermore, for joint inversion of seismic and marine CSEM data, if we use the PML for CSEM field simulation instead of the conventional Dirichlet boundary, the modelling areas for these two different geophysical data sets collected from the same survey area can be made identical, which is convenient for joint-inversion grid matching. We apply the CFS-PML boundary to 3-D marine CSEM modelling using a staggered finite-difference discretization. Numerical tests indicate that the modelling algorithm using the CFS-PML shows good accuracy compared to the Dirichlet boundary. Furthermore, it offers savings in computational time and memory over the Dirichlet boundary. For the 3-D example in this study, the memory saving using the PML is nearly 42 per cent and the time saving is around 48 per cent compared to using the Dirichlet boundary.
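The key ingredient of a CFS-PML is the complex coordinate-stretching factor; a commonly used form is s(ω) = κ + σ/(α + iωε₀). A minimal sketch of this standard formula (the parameter values are illustrative, not those of the paper):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def cfs_pml_stretch(sigma, kappa, alpha, omega):
    """CFS-PML coordinate-stretching factor
    s(omega) = kappa + sigma / (alpha + 1j*omega*EPS0).
    The frequency shift alpha keeps s bounded as omega -> 0, which is
    what distinguishes the CFS form from the classical PML (alpha = 0)."""
    return kappa + sigma / (alpha + 1j * omega * EPS0)

# illustrative values (not taken from the paper), at a 0.5 Hz CSEM frequency
s = cfs_pml_stretch(sigma=1e-3, kappa=1.0, alpha=1e-8,
                    omega=2.0 * math.pi * 0.5)
print(s.real > 1.0, s.imag < 0.0)
```

The stretched derivative ∂/∂x is replaced by (1/s)∂/∂x inside the absorbing layers, which damps outgoing fields without reflecting them.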
A Dirichlet-Multinomial Bayes Classifier for Disease Diagnosis with Microbial Compositions.
Gao, Xiang; Lin, Huaiying; Dong, Qunfeng
2017-01-01
Dysbiosis of microbial communities is associated with various human diseases, raising the possibility of using microbial compositions as biomarkers for disease diagnosis. We have developed a Bayes classifier by modeling microbial compositions with Dirichlet-multinomial distributions, which are widely used to model multicategorical count data with extra variation. The parameters of the Dirichlet-multinomial distributions are estimated from training microbiome data sets based on maximum likelihood. The posterior probability of a microbiome sample belonging to a disease or healthy category is calculated based on Bayes' theorem, using the likelihood values computed from the estimated Dirichlet-multinomial distribution, as well as a prior probability estimated from the training microbiome data set or previously published information on disease prevalence. When tested on real-world microbiome data sets, our method, called DMBC (for Dirichlet-multinomial Bayes classifier), shows better classification accuracy than the only existing Bayesian microbiome classifier based on a Dirichlet-multinomial mixture model and the popular random forest method. The advantage of DMBC is its built-in automatic feature selection, capable of identifying a subset of microbial taxa with the best classification accuracy between different classes of samples based on cross-validation. This unique ability enables DMBC to maintain and even improve its accuracy at modeling species-level taxa. The R package for DMBC is freely available at https://github.com/qunfengdong/DMBC. IMPORTANCE By incorporating prior information on disease prevalence, Bayes classifiers have the potential to estimate disease probability better than other common machine-learning methods. Thus, it is important to develop Bayes classifiers specifically tailored for microbiome data. 
Our method shows higher classification accuracy than the only existing Bayesian classifier and the popular random forest method, and thus provides an alternative option for using microbial compositions for disease diagnosis.
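A minimal sketch of the classification rule described above, assuming estimated Dirichlet-multinomial parameters and log prior probabilities are given. The taxa counts and parameter values below are hypothetical; the real DMBC additionally estimates parameters by maximum likelihood and performs feature selection:

```python
from math import lgamma, log

def dm_log_likelihood(counts, alphas):
    """Dirichlet-multinomial log-likelihood of a taxa count vector
    (the multinomial coefficient, constant across classes, is dropped)."""
    A, N = sum(alphas), sum(counts)
    ll = lgamma(A) - lgamma(N + A)
    for n_k, a_k in zip(counts, alphas):
        ll += lgamma(n_k + a_k) - lgamma(a_k)
    return ll

def classify(sample, class_params, log_priors):
    """Bayes rule: choose the class maximizing log prior + log likelihood."""
    scores = {c: log_priors[c] + dm_log_likelihood(sample, a)
              for c, a in class_params.items()}
    return max(scores, key=scores.get)

# hypothetical parameters: 'healthy' favors taxon 0, 'disease' favors taxon 2,
# with a 90%/10% prior reflecting assumed disease prevalence
params = {"healthy": [8.0, 1.0, 1.0], "disease": [1.0, 1.0, 8.0]}
priors = {"healthy": log(0.9), "disease": log(0.1)}
print(classify([50, 5, 3], params, priors))  # counts dominated by taxon 0
```

Working in log space with `lgamma` avoids the overflow that direct evaluation of the gamma-function ratios would cause for realistic sequencing depths.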
Comment on "Exact solution of resonant modes in a rectangular resonator".
Gutiérrez-Vega, Julio C; Bandres, Miguel A
2006-08-15
We comment on the recent Letter by J. Wu and A. Liu [Opt. Lett. 31, 1720 (2006)] in which an exact scalar solution to the resonant modes and the resonant frequencies in a two-dimensional rectangular microcavity were presented. The analysis is incorrect because (a) the field solutions were imposed to satisfy simultaneously both Dirichlet and Neumann boundary conditions at the four sides of the rectangle, leading to an overdetermined problem, and (b) the modes in the cavity were expanded using an incorrect series ansatz, leading to an expression for the mode fields that does not satisfy the Helmholtz equation.
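The incompatibility can be checked directly: a product-of-sines mode satisfies the Helmholtz equation and the Dirichlet conditions on all four walls, but its normal derivative is generally nonzero there, so Neumann conditions cannot hold simultaneously. A numerical sketch (the cavity dimensions are arbitrary):

```python
import math

a, b = 1.0, 0.7          # arbitrary cavity dimensions
m, n = 2, 3
k2 = (m * math.pi / a) ** 2 + (n * math.pi / b) ** 2   # resonant k^2

def psi(x, y):
    return math.sin(m * math.pi * x / a) * math.sin(n * math.pi * y / b)

# finite-difference residual of psi_xx + psi_yy + k^2 psi at an interior point
h = 1e-4
x, y = 0.31, 0.12
lap = (psi(x + h, y) + psi(x - h, y) + psi(x, y + h) + psi(x, y - h)
       - 4 * psi(x, y)) / h ** 2
residual = lap + k2 * psi(x, y)

dirichlet_ok = abs(psi(0.0, y)) < 1e-12          # psi vanishes on the wall x = 0
dpsi_dx = (psi(h, y) - psi(0.0, y)) / h          # but its normal derivative does not
print(abs(residual), dirichlet_ok, dpsi_dx)
```

The residual is small (the mode solves Helmholtz) and psi vanishes on the wall, yet the normal derivative there is O(1); demanding it vanish as well overdetermines the problem, as the comment argues.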
Prior Design for Dependent Dirichlet Processes: An Application to Marathon Modeling
F. Pradier, Melanie; J. R. Ruiz, Francisco; Perez-Cruz, Fernando
2016-01-01
This paper presents a novel application of Bayesian nonparametrics (BNP) for marathon data modeling. We make use of two well-known BNP priors, the single-p dependent Dirichlet process and the hierarchical Dirichlet process, in order to address two different problems. First, we study the impact of age, gender and environment on the runners’ performance. We derive a fair grading method that allows direct comparison of runners regardless of their age and gender. Unlike current grading systems, our approach is based not only on top world records, but on the performances of all runners. The presented methodology for comparison of densities can be adopted in many other applications straightforwardly, providing an interesting perspective to build dependent Dirichlet processes. Second, we analyze the running patterns of the marathoners in time, obtaining information that can be valuable for training purposes. We also show that these running patterns can be used to predict finishing time given intermediate interval measurements. We apply our models to New York City, Boston and London marathons. PMID:26821155
NASA Astrophysics Data System (ADS)
Feehan, Paul M. N.
2017-09-01
We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with "mixed" boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the "degenerate" and "non-degenerate" boundaries touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite "slab". The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16]. 
Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].
NASA Astrophysics Data System (ADS)
Ahn, Chi Young; Jeon, Kiwan; Park, Won-Kwang
2015-06-01
This study analyzes the well-known MUltiple SIgnal Classification (MUSIC) algorithm to identify the unknown support of a thin penetrable electromagnetic inhomogeneity from scattered-field data collected in the so-called multi-static response matrix in limited-view inverse scattering problems. The mathematical theory of MUSIC has been partially established, e.g., in the full-view problem, for an unknown target of dielectric contrast or a perfectly conducting crack with the Dirichlet boundary condition (Transverse Magnetic, TM, polarization), and so on. Hence, we perform further analysis of the MUSIC-type imaging functional in order to explain some well-known but theoretically unexplained phenomena. For this purpose, we establish a relationship between the MUSIC imaging functional and an infinite series of Bessel functions of integer order of the first kind. This relationship is based on a rigorous asymptotic expansion formula in the presence of a thin inhomogeneity with a smooth supporting curve. Various numerical simulations are presented in order to support the identified structure of MUSIC. Although a priori information about the target is needed, we suggest a minimal condition on the range of incident and observation directions for applying MUSIC in the limited-view problem.
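One classical identity connecting plane-wave data to a series of integer-order Bessel functions of the first kind is the Jacobi-Anger expansion, e^{ix sin θ} = Σ_n J_n(x) e^{inθ}. The sketch below illustrates this background identity numerically using the integral representation of J_n; it is standard material, not the paper's derivation:

```python
import cmath
import math

def bessel_j(n, x, m=1000):
    """Integer-order Bessel function of the first kind via the integral
    representation J_n(x) = (1/pi) * int_0^pi cos(n*t - x*sin t) dt,
    evaluated with the composite midpoint rule."""
    h = math.pi / m
    return h / math.pi * sum(
        math.cos(n * ((k + 0.5) * h) - x * math.sin((k + 0.5) * h))
        for k in range(m))

def jacobi_anger(x, theta, nmax=30):
    """Partial sum of the Jacobi-Anger expansion
    e^{i x sin(theta)} = sum_n J_n(x) e^{i n theta}."""
    return sum(bessel_j(n, x) * cmath.exp(1j * n * theta)
               for n in range(-nmax, nmax + 1))

x, theta = 2.5, 0.7
exact = cmath.exp(1j * x * math.sin(theta))
approx = jacobi_anger(x, theta)
print(abs(exact - approx))  # truncation + quadrature error, very small
```

Because J_n(x) decays rapidly once |n| exceeds x, a modest truncation already reproduces the plane wave to high accuracy.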
NASA Astrophysics Data System (ADS)
Ben Amara, Jamel; Bouzidi, Hedi
2018-01-01
In this paper, we consider a linear hybrid system composed of two non-homogeneous rods connected by a point mass, with a Dirichlet boundary condition on the left end and a boundary control acting on the right end. We prove that this system is null controllable with Dirichlet or Neumann boundary controls. Our approach is mainly based on a detailed spectral analysis together with the moment method. In particular, we show that the associated spectral gap in both cases (Dirichlet or Neumann boundary controls) is positive without further conditions on the coefficients other than the regularities.
NASA Astrophysics Data System (ADS)
Grobbelaar-Van Dalsen, Marié
2015-02-01
In this article, we are concerned with the polynomial stabilization of a two-dimensional thermoelastic Mindlin-Timoshenko plate model with no mechanical damping. The model is subject to Dirichlet boundary conditions on the elastic as well as the thermal variables. The work complements our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 64:1305-1325, 2013) on the polynomial stabilization of a Mindlin-Timoshenko model in a radially symmetric domain under Dirichlet boundary conditions on the displacement and thermal variables and free boundary conditions on the shear angle variables. In particular, our aim is to investigate the effect of the Dirichlet boundary conditions on all the variables on the polynomial decay rate of the model. By once more applying a frequency domain method in which we make critical use of an inequality for the trace of Sobolev functions on the boundary of a bounded, open connected set we show that the decay is slower than in the model considered in the cited work. A comparison of our result with our polynomial decay result for a magnetoelastic Mindlin-Timoshenko model subject to Dirichlet boundary conditions on the elastic variables in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) also indicates a correlation between the robustness of the coupling between parabolic and hyperbolic dynamics and the polynomial decay rate in the two models.
On the Dirichlet's Box Principle
ERIC Educational Resources Information Center
Poon, Kin-Keung; Shiu, Wai-Chee
2008-01-01
In this note, we will focus on several applications of the Dirichlet box principle in Discrete Mathematics and number theory lessons. In addition, the main result is an innovative game on a triangular board developed by the authors. The game has been used in teaching and learning mathematics in Discrete Mathematics and some high schools in…
NASA Astrophysics Data System (ADS)
Dai, Guowei; Romero, Alfonso; Torres, Pedro J.
2018-06-01
We study the existence of spacelike graphs for the prescribed mean curvature equation in the Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime. By using a conformal change of variable, this problem is translated into an equivalent problem in the Lorentz-Minkowski spacetime. Then, by using Rabinowitz's global bifurcation method, we obtain the existence and multiplicity of positive solutions for this equation with 0-Dirichlet boundary condition on a ball. Moreover, the global structure of the positive solution set is studied.
Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave
2014-01-01
We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...
Uniform gradient estimates on manifolds with a boundary and applications
NASA Astrophysics Data System (ADS)
Cheng, Li-Juan; Thalmaier, Anton; Thompson, James
2018-04-01
We revisit the problem of obtaining uniform gradient estimates for Dirichlet and Neumann heat semigroups on Riemannian manifolds with boundary. As applications, we obtain isoperimetric inequalities, using Ledoux's argument, and uniform quantitative gradient estimates, firstly for C^2_b functions with boundary conditions and then for the unit spectral projection operators of Dirichlet and Neumann Laplacians.
Dirichlet to Neumann operator for Abelian Yang-Mills gauge fields
NASA Astrophysics Data System (ADS)
Díaz-Marín, Homero G.
We consider the Dirichlet to Neumann operator for Abelian Yang-Mills boundary conditions. The aim is to construct a complex structure for the symplectic space of boundary conditions of Euler-Lagrange solutions modulo gauge for space-time manifolds with smooth boundary. We thus prepare a suitable scenario for geometric quantization within the reduced symplectic space of boundary conditions of Abelian gauge fields.
A fast approach to designing airfoils from given pressure distribution in compressible flows
NASA Technical Reports Server (NTRS)
Daripa, Prabir
1987-01-01
A new inverse method for the aerodynamic design of airfoils is presented for subcritical flows. The pressure distribution in this method can be prescribed as a function of the arc length of the as-yet unknown body. This inverse problem is shown to be mathematically equivalent to solving a single nonlinear boundary value problem subject to known Dirichlet data on the boundary. The solution to this problem determines the airfoil, the freestream Mach number, and the upstream flow direction. The existence of a solution for a given pressure distribution is discussed. The method is easy to implement and extremely efficient. A series of results is presented for which comparisons are made with known airfoils.
The C^r dependence problem of eigenvalues of the Laplace operator on domains in the plane
NASA Astrophysics Data System (ADS)
Haddad, Julian; Montenegro, Marcos
2018-03-01
The C^r dependence problem of multiple Dirichlet eigenvalues on domains is discussed for elliptic operators by considering C^{r+1}-smooth one-parameter families of C^1 perturbations of domains in R^n. As applications of our main theorem (Theorem 1), we provide a fairly complete description of all eigenvalues of the Laplace operator on disks and squares in R^2 and also of its second eigenvalue on balls in R^n for any n ≥ 3. The central tool used in our proof is a degenerate implicit function theorem on Banach spaces (Theorem 2) of independent interest.
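For the Dirichlet Laplacian on the unit square the eigenvalues are explicit, λ_{m,n} = π²(m² + n²) with m, n ≥ 1, and multiplicity greater than one occurs whenever m² + n² admits several representations; this is the source of the multiple-eigenvalue phenomenon discussed above. A quick enumeration (an illustrative sketch, not the paper's method):

```python
import math

def dirichlet_eigenvalues_square(kmax):
    """Dirichlet eigenvalues of the negative Laplacian on the unit
    square, lambda_{m,n} = pi^2 (m^2 + n^2) with 1 <= m, n <= kmax,
    returned as sorted (eigenvalue, multiplicity) pairs; multiplicity
    > 1 occurs whenever m^2 + n^2 has several representations."""
    counts = {}
    for m in range(1, kmax + 1):
        for n in range(1, kmax + 1):
            s = m * m + n * n
            counts[s] = counts.get(s, 0) + 1
    return sorted((math.pi ** 2 * s, mult) for s, mult in counts.items())

for lam, mult in dirichlet_eigenvalues_square(4)[:5]:
    print(round(lam, 3), "multiplicity", mult)
```

The lowest eigenvalue 2π² is simple, while 5π² (from (m, n) = (1, 2) and (2, 1)) is already double; it is precisely such multiple eigenvalues that make smooth dependence on domain perturbations delicate.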
Theory of multicolor lattice gas - A cellular automaton Poisson solver
NASA Technical Reports Server (NTRS)
Chen, H.; Matthaeus, W. H.; Klein, L. W.
1990-01-01
In the present class of cellular automaton models, a quiescent hydrodynamic lattice gas carries multiple-valued passive labels termed 'colors'; lattice collisions change individual particle colors while preserving net color. Rigorous proofs of the multicolor lattice gases' essential features are rendered more tractable by an equivalent subparticle representation in which the color is represented by underlying two-state 'spins'. Schemes for the introduction of Dirichlet and Neumann boundary conditions are described, and two illustrative numerical test cases are used to verify the theory. The lattice gas model is equivalent to a Poisson equation solution.
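For reference, the continuum problem such an automaton targets can be solved by ordinary relaxation. The sketch below is plain Jacobi iteration for Laplace's equation with fixed Dirichlet boundary values; it illustrates the target problem only, not the lattice-gas scheme itself:

```python
def jacobi_laplace(nx, ny, top, bottom, left, right, iters=5000):
    """Jacobi relaxation for Laplace's equation on an nx-by-ny grid.
    Boundary rows and columns hold fixed (Dirichlet) values; each
    interior cell is repeatedly replaced by the average of its four
    neighbors until the field relaxes to the harmonic solution."""
    u = [[0.0] * nx for _ in range(ny)]
    for j in range(nx):
        u[0][j], u[-1][j] = top, bottom
    for i in range(ny):
        u[i][0], u[i][-1] = left, right
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                    + u[i][j - 1] + u[i][j + 1])
        u = new
    return u

u = jacobi_laplace(9, 9, top=1.0, bottom=0.0, left=0.0, right=0.0)
print(round(u[4][4], 3))  # center value; 0.25 by symmetry/superposition
```

With one side held at 1 and the others at 0, superposition of the four rotated problems forces the converged center value to be exactly one quarter, a convenient correctness check.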
NASA Astrophysics Data System (ADS)
Li, Dong; Guo, Shangjiang
Chemotaxis is an observed phenomenon in which a biological individual moves preferentially toward a relatively high concentration, which is contrary to the process of natural diffusion. In this paper, we study a reaction-diffusion model with chemotaxis and nonlocal delay effect under Dirichlet boundary condition by using Lyapunov-Schmidt reduction and the implicit function theorem. The existence, multiplicity, stability and Hopf bifurcation of spatially nonhomogeneous steady state solutions are investigated. Moreover, our results are illustrated by an application to the model with a logistic source, homogeneous kernel and one-dimensional spatial domain.
NASA Astrophysics Data System (ADS)
Liang, Hui; Chen, Xiaobo
2017-10-01
A novel multi-domain method based on an analytical control surface is proposed by combining the use of free-surface Green function and Rankine source function. A cylindrical control surface is introduced to subdivide the fluid domain into external and internal domains. Unlike the traditional domain decomposition strategy or multi-block method, the control surface here is not panelized, on which the velocity potential and normal velocity components are analytically expressed as a series of base functions composed of Laguerre function in vertical coordinate and Fourier series in the circumference. Free-surface Green function is applied in the external domain, and the boundary integral equation is constructed on the control surface in the sense of Galerkin collocation via integrating test functions orthogonal to base functions over the control surface. The external solution gives rise to the so-called Dirichlet-to-Neumann [DN2] and Neumann-to-Dirichlet [ND2] relations on the control surface. Irregular frequencies, which are only dependent on the radius of the control surface, are present in the external solution, and they are removed by extending the boundary integral equation to the interior free surface (circular disc) on which the null normal derivative of potential is imposed, and the dipole distribution is expressed as Fourier-Bessel expansion on the disc. In the internal domain, where the Rankine source function is adopted, new boundary integral equations are formulated. The point collocation is imposed over the body surface and free surface, while the collocation of the Galerkin type is applied on the control surface. The present method is valid in the computation of both linear and second-order mean drift wave loads. Furthermore, the second-order mean drift force based on the middle-field formulation can be calculated analytically by using the coefficients of the Fourier-Laguerre expansion.
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
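The sensitivity described above stems from the direct link between the precision parameter and the prior number of clusters: among n draws from a Dirichlet process with precision α, the expected number of distinct clusters is E[K_n] = Σ_{i=1}^{n} α/(α + i − 1). A short sketch of this standard fact (not the paper's prior construction):

```python
def expected_clusters(alpha, n):
    """Expected number of distinct clusters among n draws from a
    Dirichlet process with precision alpha:
    E[K_n] = sum_{i=1}^{n} alpha / (alpha + i - 1)."""
    return sum(alpha / (alpha + i) for i in range(n))

for alpha in (0.1, 1.0, 10.0):
    print(alpha, round(expected_clusters(alpha, 100), 2))
```

A hundredfold change in α moves the prior expected cluster count from roughly one to several dozen, which is why the choice of prior on α matters for inferences about clustering.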
Effective implementation of wavelet Galerkin method
NASA Astrophysics Data System (ADS)
Finěk, Václav; Šimunková, Martina
2012-11-01
It was proved by W. Dahmen et al. that an adaptive wavelet scheme is asymptotically optimal for a wide class of elliptic equations. This scheme approximates the solution u by a linear combination of N wavelets, and a benchmark for its performance is the best N-term approximation, which is obtained by retaining the N largest wavelet coefficients of the unknown solution. Moreover, the number of arithmetic operations needed to compute the approximate solution is proportional to N. The most time-consuming part of this scheme is the approximate matrix-vector multiplication. In this contribution, we introduce our implementation of the wavelet Galerkin method for the Poisson equation -Δu = f on the hypercube with homogeneous Dirichlet boundary conditions. In our implementation, we identify the nonzero elements of the stiffness matrix corresponding to the above problem and perform matrix-vector multiplication only with these nonzero elements.
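The Galerkin structure can be illustrated with a simpler basis than wavelets: for -u'' = f on (0, 1) with homogeneous Dirichlet conditions, sine functions diagonalize the stiffness matrix, so each coefficient is obtained independently. This sketch uses sines purely for illustration and is not the adaptive wavelet scheme of the paper:

```python
import math

def sine_galerkin_poisson(f, n_modes, xs, m=2000):
    """Galerkin solution of -u'' = f on (0,1) with u(0) = u(1) = 0 in
    the sine basis: the stiffness matrix is diagonal, so the n-th
    coefficient is u_n = f_n / (n*pi)^2, where
    f_n = 2 * int_0^1 f(x) sin(n*pi*x) dx (midpoint rule below)."""
    h = 1.0 / m
    mid = [(k + 0.5) * h for k in range(m)]
    coeffs = []
    for n in range(1, n_modes + 1):
        fn = 2.0 * h * sum(f(x) * math.sin(n * math.pi * x) for x in mid)
        coeffs.append(fn / (n * math.pi) ** 2)
    return [sum(c * math.sin((n + 1) * math.pi * x)
                for n, c in enumerate(coeffs)) for x in xs]

# f = pi^2 sin(pi*x) has exact solution u = sin(pi*x)
u = sine_galerkin_poisson(lambda x: math.pi ** 2 * math.sin(math.pi * x),
                          10, [0.25, 0.5])
print(u)  # close to [sin(pi/4), 1.0]
```

Wavelet bases give an almost-sparse rather than diagonal stiffness matrix, which is why identifying its nonzero entries, as the paper does, is the crucial implementation step.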
Automated airplane surface generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.E.; Cordero, Y.; Jones, W.
1996-12-31
An efficient methodology and software are presented for defining a class of airplane configurations. A small set of engineering design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. Wing, canard, and tail surface grids are generated by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage is described by an algebraic function with four design parameters. The computed surface grids are suitable for a wide range of Computational Fluid Dynamics simulations and configuration optimizations. Both batch and interactive software are discussed for applying the methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pupyshev, V.I.; Scherbinin, A.V.; Stepanov, N.F.
1997-11-01
The approach based on the multiplicative form of a trial wave function within the framework of the variational method, initially proposed by Kirkwood and Buckingham, is shown to be an effective analytical tool in the quantum mechanical study of atoms and molecules. As an example, an elementary proof is given of the fact that the ground state energy of a molecular system placed in a box with walls of finite height goes to the corresponding eigenvalue of the Dirichlet boundary value problem as the height of the walls grows to infinity. © 1997 American Institute of Physics.
Exact semi-separation of variables in waveguides with non-planar boundaries
NASA Astrophysics Data System (ADS)
Athanassoulis, G. A.; Papoutsellis, Ch. E.
2017-05-01
Series expansions of unknown fields Φ = ∑ φ_n Z_n in elongated waveguides are commonly used in acoustics, optics, geophysics, water waves and other applications, in the context of coupled-mode theories (CMTs). The transverse functions Z_n are determined by solving local Sturm-Liouville problems (reference waveguides). In most cases, the boundary conditions assigned to Z_n cannot be compatible with the physical boundary conditions of Φ, leading to slowly convergent series and rendering CMTs mild-slope approximations. In the present paper, the heuristic approach introduced in Athanassoulis & Belibassakis (1999 J. Fluid Mech. 389, 275-301) is generalized and justified. It is proved that an appropriately enhanced series expansion becomes an exact, rapidly convergent representation of the field Φ, valid for any smooth, non-planar boundaries and any smooth enough Φ. This series expansion can be differentiated termwise everywhere in the domain, including the boundaries, implementing an exact semi-separation of variables for non-separable domains. The efficiency of the method is illustrated by solving a boundary value problem for the Laplace equation and computing the corresponding Dirichlet-to-Neumann operator, involved in Hamiltonian equations for nonlinear water waves. The present method provides accurate results with only a few modes for quite general domains. Extensions to general waveguides are also discussed.
NASA Astrophysics Data System (ADS)
Alessandrini, Giovanni; de Hoop, Maarten V.; Gaburro, Romina
2017-12-01
We discuss the inverse problem of determining the, possibly anisotropic, conductivity of a body Ω ⊂ ℝ^n when the so-called Neumann-to-Dirichlet map is locally given on a non-empty curved portion Σ of the boundary ∂Ω. We prove that anisotropic conductivities that are a priori known to be piecewise-constant matrices on a given partition of Ω with curved interfaces can be uniquely determined in the interior from the knowledge of the local Neumann-to-Dirichlet map.
Sedghi, Aliasghar; Rezaei, Behrooz
2016-11-20
Using the Dirichlet-to-Neumann map method, we have calculated the photonic band structure of two-dimensional metallodielectric photonic crystals having the square and triangular lattices of circular metal rods in a dielectric background. We have selected the transverse electric mode of electromagnetic waves, and the resulting band structures showed the existence of photonic bandgap in these structures. We theoretically study the effect of background dielectric on the photonic bandgap.
Generalized species sampling priors with latent Beta reinforcements
Airoldi, Edoardo M.; Costa, Thiago; Bassetti, Federico; Leisen, Fabrizio; Guindani, Michele
2014-01-01
Many popular Bayesian nonparametric priors can be characterized in terms of exchangeable species sampling sequences. However, in some applications, exchangeability may not be appropriate. We introduce a novel and probabilistically coherent family of non-exchangeable species sampling sequences characterized by a tractable predictive probability function with weights driven by a sequence of independent Beta random variables. We compare their theoretical clustering properties with those of the Dirichlet process and the two-parameter Poisson-Dirichlet process. The proposed construction provides a complete characterization of the joint process, differently from existing work. We then propose the use of such a process as the prior distribution in a hierarchical Bayes modeling framework, and we describe a Markov Chain Monte Carlo sampler for posterior inference. We evaluate the performance of the prior and the robustness of the resulting inference in a simulation study, providing a comparison with popular Dirichlet process mixtures and Hidden Markov Models. Finally, we develop an application to the detection of chromosomal aberrations in breast cancer by leveraging array CGH data. PMID:25870462
NASA Astrophysics Data System (ADS)
Boyd, John P.; Amore, Paolo; Fernández, Francisco M.
2018-03-01
A "bent waveguide" in the sense used here is a small perturbation of a two-dimensional rectangular strip which is infinitely long in the down-channel direction and has a finite, constant width in the cross-channel coordinate. The goal is to calculate the smallest ("ground state") eigenvalue of the stationary Schrödinger equation, which here is a two-dimensional Helmholtz equation, ψ_xx + ψ_yy + Eψ = 0, where E is the eigenvalue and homogeneous Dirichlet boundary conditions are imposed on the walls of the waveguide. Perturbation theory gives a good description when the "bending strength" parameter ɛ is small, as described in our previous article (Amore et al., 2017) and other works cited therein. However, such series are asymptotic, and it is often impractical to calculate more than a handful of terms. It is therefore useful to develop numerical methods for the perturbed strip to cover intermediate ɛ, where the perturbation series may be inaccurate, and also to check the perturbation expansion when ɛ is small. The perturbation-induced change in eigenvalue, δ ≡ E(ɛ) - E(0), is O(ɛ²). We show that the computation becomes very challenging as ɛ → 0 because (i) the ground state eigenfunction varies on both O(1) and O(1/ɛ) length scales and (ii) high accuracy is needed to compute several correct digits in δ, which is itself small compared to the eigenvalue E. The multiple length scales are not geographically separate, but rather are inextricably commingled in the neighborhood of the boundary deformation. We show that coordinate mapping and immersed boundary strategies both reduce the computational domain to the uniform strip, allowing application of pseudospectral methods on tensor product grids with tensor product basis functions. We compared different basis sets; Chebyshev polynomials are best in the cross-channel direction. However, sine functions generate rather accurate analytical approximations with just a single basis function. 
In the down-channel coordinate, X ∈ (-∞, ∞), Fourier domain truncation using the change of coordinate X = sinh(Lt) is considerably more efficient than rational Chebyshev functions TB_n(X; L). All the spectral methods, however, yielded the required accuracy on a desktop computer.
Analysis of Classes of Singular Steady State Reaction Diffusion Equations
NASA Astrophysics Data System (ADS)
Son, Byungjae
We study positive radial solutions to classes of steady state reaction diffusion problems on the exterior of a ball with both Dirichlet and nonlinear boundary conditions. We study both Laplacian as well as p-Laplacian problems with reaction terms that are p-sublinear at infinity. We consider both positone and semipositone reaction terms and establish existence, multiplicity and uniqueness results. Our existence and multiplicity results are achieved by a method of sub-supersolutions, and uniqueness results via a combination of maximum principles, comparison principles, energy arguments and a priori estimates. Our results significantly enhance the literature on p-sublinear positone and semipositone problems. Finally, we provide exact bifurcation curves for several one-dimensional problems. In the autonomous case, we extend and analyze a quadrature method, and in the nonautonomous case, we employ shooting methods. We use numerical solvers in Mathematica to generate the bifurcation curves.
A New Family of Solvable Pearson-Dirichlet Random Walks
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2011-07-01
An n-step Pearson-Gamma random walk in ℝ^d starts at the origin and consists of n independent steps with gamma distributed lengths and uniform orientations. The gamma distribution of each step length has a shape parameter q > 0. Constrained random walks of n steps in ℝ^d are obtained from the latter walks by imposing that the sum of the step lengths is equal to a fixed value. Simple closed-form expressions were obtained in particular for the distribution of the endpoint of such constrained walks for any d ≥ d_0 and any n ≥ 2 when q is either q = d/2 - 1 (d_0 = 3) or q = d - 1 (d_0 = 2) (Le Caër in J. Stat. Phys. 140:728-751, 2010). When the total walk length is chosen, without loss of generality, to be equal to 1, then the constrained step lengths have a Dirichlet distribution whose parameters are all equal to q, and the associated walk is thus named a Pearson-Dirichlet random walk. The density of the endpoint position of an n-step planar walk of this type (n ≥ 2), with q = d = 2, was shown recently to be a weighted mixture of 1 + floor(n/2) endpoint densities of planar Pearson-Dirichlet walks with q = 1 (Beghin and Orsingher in Stochastics 82:201-229, 2010). The previous result is generalized to any walk space dimension and any number of steps n ≥ 2 when the parameter of the Pearson-Dirichlet random walk is q = d > 1. We rely on the connection between an unconstrained random walk and a constrained one, which both have the same n and the same q = d, to obtain a closed-form expression of the endpoint density. The latter is a weighted mixture of 1 + floor(n/2) densities with simple forms, equivalently expressed as a product of a power and a Gauss hypergeometric function. The weights are products of factors which depend on both d and n, and Bessel numbers independent of d.
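A Pearson-Dirichlet walk is straightforward to sample, which makes the closed-form densities above easy to verify empirically. A hedged Monte Carlo sketch (the function name and parameter values are ours, not the paper's): Dirichlet(q, …, q) step lengths are drawn by normalizing independent Gamma(q) variates, so the total length is exactly 1, and each step gets an independent uniform orientation (d = 2 shown; the paper covers general d):

```python
import math
import random

def pearson_dirichlet_endpoint(n, q, rng=random.Random(0)):
    # Dirichlet(q, ..., q) step lengths via normalized Gamma(q) draws;
    # the n lengths sum to 1, so the endpoint lies in the unit disk.
    g = [rng.gammavariate(q, 1.0) for _ in range(n)]
    s = sum(g)
    lengths = [x / s for x in g]
    # Independent uniform step orientations in the plane (d = 2).
    x = y = 0.0
    for ell in lengths:
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += ell * math.cos(theta)
        y += ell * math.sin(theta)
    return math.hypot(x, y)   # distance of the endpoint from the origin
```

A histogram of many such draws can be compared against the mixture-of-densities formula derived in the paper.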
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthias C. M. Troffaes; Gero Walter; Dana Kelly
In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model.
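The role of the learning parameter can be seen directly in the conjugate update. In the imprecise Dirichlet model each posterior expectation bound is a data-weighted compromise between the prior bound and the empirical fraction, with the learning parameter s acting as a prior sample size. A hedged illustration (function name and toy numbers are ours, not from the paper):

```python
def posterior_bounds(counts, lower, upper, s):
    # Imprecise-Dirichlet-style update: prior expectation of each
    # alpha-factor lies in [lower[k], upper[k]]; s is the learning
    # parameter (prior pseudo-count). Posterior bounds are
    #   (s * bound_k + n_k) / (s + n),  n = total observed count,
    # so small s lets the data dominate quickly, large s learns slowly.
    n = sum(counts)
    lo = [(s * l + c) / (s + n) for l, c in zip(lower, counts)]
    hi = [(s * u + c) / (s + n) for u, c in zip(upper, counts)]
    return lo, hi
```

For example, with counts [8, 1, 1] and s = 2, the interval for the first alpha-factor narrows toward the empirical fraction 0.8 while still reflecting the prior imprecision.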
NASA Astrophysics Data System (ADS)
Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.
2018-06-01
We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet Process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ˜10⁴-10⁵ Mpc³, corresponding to ˜10²-10³ potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ∝ ϱ_net^(-6). Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet Process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
Semiparametric Bayesian classification with longitudinal markers
De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter
2013-01-01
Summary We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871
Low frequency acoustic and electromagnetic scattering
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Maccamy, R. C.
1986-01-01
This paper deals with two classes of problems arising from acoustic and electromagnetic scattering in the low-frequency limit. The first class of problems involves solving the Helmholtz equation with Dirichlet boundary conditions on an arbitrary two-dimensional body, while the second is an interior-exterior interface problem with the Helmholtz equation in the exterior. Low-frequency analysis shows that there are two intermediate problems which solve the above problems accurately to O(k² log k), where k is the frequency. These solutions differ greatly from the zero-frequency approximations. For the Dirichlet problem, numerical examples are shown to verify the theoretical estimates.
The first eigenvalue of the p-Laplacian on quantum graphs
NASA Astrophysics Data System (ADS)
Del Pezzo, Leandro M.; Rossi, Julio D.
2016-12-01
We study the first eigenvalue of the p-Laplacian (with 1 < p < ∞) on quantum graphs.
Detecting Anisotropic Inclusions Through EIT
NASA Astrophysics Data System (ADS)
Cristina, Jan; Päivärinta, Lassi
2017-12-01
We study the evolution equation ∂_t u = -Λ_t u, where Λ_t is the Dirichlet-Neumann operator of a decreasing family of Riemannian manifolds with boundary Σ_t. We derive a lower bound for the solution of such an equation, and apply it to a quantitative density estimate for the restriction of harmonic functions on M = Σ_0 to the boundaries ∂Σ_t. Consequently we are able to derive a lower bound for the difference of the Dirichlet-Neumann maps in terms of the difference of a background metric g and an inclusion metric g + χ_Σ(h - g) on a manifold M.
Determinants and conformal anomalies of GJMS operators on spheres
NASA Astrophysics Data System (ADS)
Dowker, J. S.
2011-03-01
The conformal anomalies and functional determinants of the Branson-GJMS operators, P2k, on the d-dimensional sphere are evaluated in explicit terms for any d and k such that k <= d/2 (if d is even). The determinants are given in terms of multiple gamma functions and a rational multiplicative anomaly, which vanishes for odd d. Taking the mode system on the sphere as the union of Neumann and Dirichlet ones on the hemisphere is a basic part of the method and leads to a heuristic explanation of the non-existence of 'super-critical' operators, 2k > d for even d. Significant use is made of the Barnes zeta function. The results are given in terms of ratios of determinants of operators on a (d + 1)-dimensional bulk dual sphere. For odd dimensions, the log determinant is written in terms of multiple sine functions and agreement is found with holographic computations, yielding an integral over a Plancherel measure. The N-D determinant ratio is also found explicitly for even dimensions. Ehrhart polynomials are encountered.
NASA Astrophysics Data System (ADS)
Nakamura, Gen; Wang, Haibing
2017-05-01
Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what the input for the linear sampling method is. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. Finally, using a finite sequence of transient inputs over a time interval, we propose a new sampling method over the time interval based on a single measurement, which is more likely to be practical.
NASA Astrophysics Data System (ADS)
Reimer, Ashton S.; Cheviakov, Alexei F.
2013-03-01
A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of the large systems of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
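The distributed solver is written in Matlab; as a language-neutral illustration of the underlying finite-difference iteration, the following sketch (ours, not the optimized library code, and restricted to pure Dirichlet data on the unit square) runs Gauss-Seidel sweeps of the 5-point stencil against a manufactured polynomial solution for which the stencil is exact:

```python
def solve_poisson(n=20, sweeps=4000):
    # Gauss-Seidel for  u_xx + u_yy = f  on the unit square, u = 0 on the
    # boundary (homogeneous Dirichlet). Manufactured exact solution
    #   u(x, y) = x(1-x) y(1-y)  =>  f(x, y) = -2 [ y(1-y) + x(1-x) ],
    # a biquadratic, so the 5-point stencil has no discretization error
    # here and any residual error is purely from the iteration.
    h = 1.0 / n
    f = [[-2.0 * ((j * h) * (1 - j * h) + (i * h) * (1 - i * h))
          for j in range(n + 1)] for i in range(n + 1)]
    u = [[0.0] * (n + 1) for _ in range(n + 1)]
    for _ in range(sweeps):
        for i in range(1, n):
            for j in range(1, n):
                u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                  + u[i][j - 1] + u[i][j + 1]
                                  - h * h * f[i][j])
    return u
```

The library's mldivide/iterative options and mesh refinement replace this naive sweep, but the stencil and boundary handling are the same in spirit.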
Ko, Yi-An; Mukherjee, Bhramar; Smith, Jennifer A; Kardia, Sharon L R; Allison, Matthew; Diez Roux, Ana V
2016-11-01
There has been an increased interest in identifying gene-environment interaction (G × E) in the context of multiple environmental exposures. Most G × E studies analyze one exposure at a time, but we are exposed to multiple exposures in reality. Efficient analysis strategies for complex G × E with multiple environmental factors in a single model are still lacking. Using the data from the Multiethnic Study of Atherosclerosis, we illustrate a two-step approach for modeling G × E with multiple environmental factors. First, we utilize common clustering and classification strategies (e.g., k-means, latent class analysis, classification and regression trees, Bayesian clustering using Dirichlet Process) to define subgroups corresponding to distinct environmental exposure profiles. Second, we illustrate the use of an additive main effects and multiplicative interaction model, instead of the conventional saturated interaction model using product terms of factors, to study G × E with the data-driven exposure subgroups defined in the first step. We demonstrate useful analytical approaches to translate multiple environmental exposures into one summary class. These tools not only allow researchers to consider several environmental exposures in G × E analysis but also provide some insight into how genes modify the effect of a comprehensive exposure profile instead of examining effect modification for each exposure in isolation.
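Step one of the two-step approach can be carried out with any of the named clustering tools; a minimal k-means sketch (our own toy implementation, not the study's code) shows how several continuous exposure variables collapse into one categorical exposure-profile label that then enters the G × E model:

```python
import random

def kmeans(points, k, iters=50, rng=random.Random(0)):
    # Plain Lloyd's algorithm: assign each exposure vector to the nearest
    # center, then recompute centers as group means. The returned labels
    # are the "exposure profile" classes used in the second modeling step.
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[j].append(p)
        for j, g in enumerate(groups):
            if g:  # keep the old center if a group empties out
                centers[j] = tuple(sum(col) / len(g) for col in zip(*g))
    labels = [min(range(k),
                  key=lambda c: sum((a - b) ** 2
                                    for a, b in zip(p, centers[c])))
              for p in points]
    return labels, centers
```

In the second step, the single label per subject replaces the full set of exposure variables, which is what makes the additive main effects and multiplicative interaction model tractable.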
A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions
NASA Astrophysics Data System (ADS)
Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.
2014-01-01
We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of either for different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.
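The Weyl-formula check mentioned above can be reproduced exactly for a rectangle, where the Dirichlet spectrum is known in closed form as π²(m²/a² + n²/b²). A short sketch (ours, assuming a unit square) compares the eigenvalue counting function with the two-term Weyl estimate N(E) ≈ (A/4π)E − (L/4π)√E:

```python
import math

def weyl_check(E_max=2000.0, a=1.0, b=1.0):
    # Count Dirichlet eigenvalues pi^2 (m^2/a^2 + n^2/b^2) <= E_max
    # of the a x b rectangle, and compare with the two-term Weyl law.
    count = 0
    m = 1
    while math.pi ** 2 * m ** 2 / a ** 2 <= E_max:
        n = 1
        while math.pi ** 2 * (m ** 2 / a ** 2 + n ** 2 / b ** 2) <= E_max:
            count += 1
            n += 1
        m += 1
    area, perim = a * b, 2.0 * (a + b)
    weyl = (area / (4.0 * math.pi)) * E_max \
        - (perim / (4.0 * math.pi)) * math.sqrt(E_max)
    return count, weyl
```

Plotting count − weyl against E_max is exactly the kind of diagnostic that exposes a systematic discretization shift in a numerical eigensolver.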
On the exterior Dirichlet problem for Hessian quotient equations
NASA Astrophysics Data System (ADS)
Li, Dongsheng; Li, Zhisu
2018-06-01
In this paper, we establish the existence and uniqueness theorem for solutions of the exterior Dirichlet problem for Hessian quotient equations with prescribed asymptotic behavior at infinity. This extends the previous related results on the Monge-Ampère equations and on the Hessian equations, and rearranges them in a systematic way. Based on Perron's method, the main ingredient of this paper is to construct some appropriate subsolutions of the Hessian quotient equation, which is realized by introducing some new quantities about the elementary symmetric polynomials and using them to analyze the corresponding ordinary differential equation related to the generalized radially symmetric subsolutions of the original equation.
A three dimensional Dirichlet-to-Neumann map for surface waves over topography
NASA Astrophysics Data System (ADS)
Nachbin, Andre; Andrade, David
2016-11-01
We consider three dimensional surface water waves in the potential theory regime. The bottom topography can have a quite general profile. In the case of linear waves the Dirichlet-to-Neumann operator is formulated in a matrix decomposition form. Computational simulations illustrate the performance of the method. Two dimensional periodic bottom variations are considered in both the Bragg resonance regime as well as the rapidly varying (homogenized) regime. In the three-dimensional case we use the Luneburg lens-shaped submerged mound, which promotes the focusing of the underlying rays. FAPERJ Cientistas do Nosso Estado Grant 102917/2011 and ANP/PRH-32.
Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar
2005-11-04
The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.
Two-point correlation function for Dirichlet L-functions
NASA Astrophysics Data System (ADS)
Bogomolny, E.; Keating, J. P.
2013-03-01
The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy-Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question.
Sheng, Yin; Zhang, Hao; Zeng, Zhigang
2017-10-01
This paper is concerned with synchronization for a class of reaction-diffusion neural networks with Dirichlet boundary conditions and infinite discrete time-varying delays. By utilizing theories of partial differential equations, Green's formula, inequality techniques, and the concept of comparison, algebraic criteria are presented to guarantee master-slave synchronization of the underlying reaction-diffusion neural networks via a designed controller. Additionally, sufficient conditions on exponential synchronization of reaction-diffusion neural networks with finite time-varying delays are established. The proposed criteria herein enhance and generalize some published ones. Three numerical examples are presented to substantiate the validity and merits of the obtained theoretical results.
Modeling unobserved sources of heterogeneity in animal abundance using a Dirichlet process prior
Dorazio, R.M.; Mukherjee, B.; Zhang, L.; Ghosh, M.; Jelks, H.L.; Jordan, F.
2008-01-01
In surveys of natural populations of animals, a sampling protocol is often spatially replicated to collect a representative sample of the population. In these surveys, differences in abundance of animals among sample locations may induce spatial heterogeneity in the counts associated with a particular sampling protocol. For some species, the sources of heterogeneity in abundance may be unknown or unmeasurable, leading one to specify the variation in abundance among sample locations stochastically. However, choosing a parametric model for the distribution of unmeasured heterogeneity is potentially subject to error and can have profound effects on predictions of abundance at unsampled locations. In this article, we develop an alternative approach wherein a Dirichlet process prior is assumed for the distribution of latent abundances. This approach allows for uncertainty in model specification and for natural clustering in the distribution of abundances in a data-adaptive way. We apply this approach in an analysis of counts based on removal samples of an endangered fish species, the Okaloosa darter. Results of our data analysis and simulation studies suggest that our implementation of the Dirichlet process prior has several attractive features not shared by conventional, fully parametric alternatives. © 2008, The International Biometric Society.
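The data-adaptive clustering induced by a Dirichlet process prior is conveniently viewed through its Chinese-restaurant-process representation: each new sample location joins an existing abundance cluster with probability proportional to the cluster's size, or opens a new cluster with probability proportional to a concentration parameter. A small illustrative sketch (the function and parameter values are ours, not the article's model):

```python
import random

def crp_partition(n_sites, alpha, rng=random.Random(1)):
    # Chinese restaurant process draw from a Dirichlet process prior:
    # site i joins cluster k with probability size_k / (i + alpha),
    # or starts a new cluster with probability alpha / (i + alpha).
    assignments = []
    sizes = []
    for i in range(n_sites):
        r = rng.uniform(0.0, i + alpha)
        acc = 0.0
        for k, sz in enumerate(sizes):
            acc += sz
            if r < acc:
                assignments.append(k)
                sizes[k] += 1
                break
        else:
            assignments.append(len(sizes))  # open a new cluster
            sizes.append(1)
    return assignments
```

The number of occupied clusters grows only logarithmically with the number of sites, which is the sense in which the prior clusters abundances "in a data-adaptive way" without fixing the number of components in advance.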
NASA Astrophysics Data System (ADS)
Hong, Youngjoon; Nicholls, David P.
2017-09-01
The capability to rapidly and robustly simulate the scattering of linear waves by periodic, multiply layered media in two and three dimensions is crucial in many engineering applications. In this regard, we present a High-Order Perturbation of Surfaces method for linear wave scattering in a multiply layered periodic medium to find an accurate numerical solution of the governing Helmholtz equations. For this we truncate the bi-infinite computational domain to a finite one with artificial boundaries, above and below the structure, and enforce transparent boundary conditions there via Dirichlet-Neumann Operators. This is followed by a Transformed Field Expansion resulting in a Fourier collocation, Legendre-Galerkin, Taylor series method for solving the problem in a transformed set of coordinates. Assorted numerical simulations display the spectral convergence of the proposed algorithm.
Asymptotic stability of a nonlinear Korteweg-de Vries equation with critical lengths
NASA Astrophysics Data System (ADS)
Chu, Jixun; Coron, Jean-Michel; Shang, Peipei
2015-10-01
We study an initial-boundary-value problem of a nonlinear Korteweg-de Vries equation posed on the finite interval (0, 2kπ), where k is a positive integer. The whole system has a Dirichlet boundary condition at the left endpoint, and both Dirichlet and Neumann homogeneous boundary conditions at the right endpoint. It is known that the origin is not asymptotically stable for the linearized system around the origin. We prove that the origin is (locally) asymptotically stable for the nonlinear system if the integer k is such that the kernel of the linear Korteweg-de Vries stationary equation is of dimension 1. This is for example the case if k = 1.
Robust boundary treatment for open-channel flows in divergence-free incompressible SPH
NASA Astrophysics Data System (ADS)
Pahar, Gourabananda; Dhar, Anirban
2017-03-01
A robust Incompressible Smoothed Particle Hydrodynamics (ISPH) framework is developed to simulate specified inflow and outflow boundary conditions for open-channel flow. Being purely divergence-free, the framework offers a smooth and structured pressure distribution. An implicit treatment of the Pressure Poisson Equation and a Dirichlet boundary condition is applied on the free surface to minimize error in the velocity divergence. Beyond the inflow and outflow thresholds, multiple layers of dummy particles are created according to the specified boundary condition. The inflow boundary acts as a soluble wave-maker. Fluid particles beyond the outflow threshold are removed and replaced with dummy particles with the specified boundary velocity. The framework is validated against different cases of open-channel flow with different boundary conditions. The model can efficiently capture flow evolution and vortex generation for random geometry and variable boundary conditions.
Scalar Casimir densities and forces for parallel plates in cosmic string spacetime
NASA Astrophysics Data System (ADS)
Bezerra de Mello, E. R.; Saharian, A. A.; Abajyan, S. V.
2018-04-01
We analyze the Green function, the Casimir densities and forces associated with a massive scalar quantum field confined between two parallel plates in a higher-dimensional cosmic string spacetime. The plates are placed orthogonal to the string, and the field obeys the Robin boundary conditions on them. The boundary-induced contributions are explicitly extracted in the vacuum expectation values (VEVs) of the field squared and of the energy-momentum tensor for both the single-plate and two-plate geometries. The VEV of the energy-momentum tensor, in addition to the diagonal components, contains an off-diagonal component corresponding to the shear stress. The latter vanishes on the plates in the special cases of Dirichlet and Neumann boundary conditions. For points outside the string core the topological contributions in the VEVs are finite on the plates. Near the string the VEVs are dominated by the boundary-free part, whereas at large distances the boundary-induced contributions dominate. Due to the nonzero off-diagonal component of the vacuum energy-momentum tensor, in addition to the normal component, the Casimir forces have a nonzero component parallel to the boundary (shear force). Unlike the problem on the Minkowski bulk, the normal forces acting on the separate plates, in general, do not coincide if the corresponding Robin coefficients are different. Another difference is that in the presence of the cosmic string the Casimir forces for Dirichlet and Neumann boundary conditions differ. For Dirichlet boundary condition the normal Casimir force does not depend on the curvature coupling parameter. This is not the case for other boundary conditions. A new qualitative feature induced by the cosmic string is the appearance of the shear stress acting on the plates. The corresponding force is directed along the radial coordinate and vanishes for Dirichlet and Neumann boundary conditions.
Depending on the parameters of the problem, the radial component of the shear force can be either positive or negative.
pong: fast analysis and visualization of latent clusters in population genetic data.
Behr, Aaron A; Liu, Katherine Z; Liu-Fang, Gracie; Nakka, Priyanka; Ramachandran, Sohini
2016-09-15
A series of methods in population genetics use multilocus genotype data to assign individuals membership in latent clusters. These methods belong to a broad class of mixed-membership models, such as the latent Dirichlet allocation used to analyze text corpora. Inference from mixed-membership models can produce different output matrices when repeatedly applied to the same inputs, and the number of latent clusters is a parameter that is often varied in the analysis pipeline. For these reasons, quantifying, visualizing, and annotating the output from mixed-membership models are bottlenecks for investigators across multiple disciplines from ecology to text data mining. We introduce pong, a network-graphical approach for analyzing and visualizing membership in latent clusters with a native interactive D3.js visualization. pong leverages efficient algorithms for solving the Assignment Problem to dramatically reduce runtime while increasing accuracy compared with other methods that process output from mixed-membership models. We apply pong to 225,705 unlinked genome-wide single-nucleotide variants from 2426 unrelated individuals in the 1000 Genomes Project, and identify previously overlooked aspects of global human population structure. We show that pong outpaces current solutions by more than an order of magnitude in runtime while providing a customizable and interactive visualization of population structure that is more accurate than those produced by current tools. pong is freely available and can be installed using the Python package management system pip. pong's source code is available at https://github.com/abehr/pong. Contact: aaron_behr@alumni.brown.edu or sramachandran@brown.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
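The label-alignment problem that pong solves can be illustrated in miniature: cluster labels from independent runs of a mixed-membership model are arbitrary, so the columns of one membership matrix must be permuted to best match another before runs can be compared. A brute-force toy sketch for small K (ours, for illustration; pong itself uses efficient assignment-problem algorithms rather than enumerating permutations):

```python
from itertools import permutations

def align_labels(Q1, Q2):
    # Q1, Q2: membership matrices (one row per individual, one column per
    # cluster, rows sum to 1). Relabel Q2's clusters to best match Q1 by
    # maximizing total column overlap, searching all K! permutations.
    K = len(Q1[0])

    def overlap(perm):
        return sum(min(row1[k], row2[perm[k]])
                   for row1, row2 in zip(Q1, Q2) for k in range(K))

    best = max(permutations(range(K)), key=overlap)
    return [[row[best[k]] for k in range(K)] for row in Q2]
```

Replacing the K! search with a polynomial-time assignment solver (e.g. the Hungarian algorithm) is exactly the optimization that makes pong fast for realistic K.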
NASA Astrophysics Data System (ADS)
Ding, Xiao-Li; Nieto, Juan J.
2017-11-01
In this paper, we consider the analytical solutions of coupling fractional partial differential equations (FPDEs) with Dirichlet boundary conditions on a finite domain. Firstly, the method of successive approximations is used to obtain the analytical solutions of coupling multi-term time fractional ordinary differential equations. Then, the technique of spectral representation of the fractional Laplacian operator is used to convert the coupling FPDEs to the coupling multi-term time fractional ordinary differential equations. By applying the obtained analytical solutions to the resulting multi-term time fractional ordinary differential equations, the desired analytical solutions of the coupling FPDEs are given. Our results are applied to derive the analytical solutions of some special cases to demonstrate their applicability.
The Casimir effect for parallel plates revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawakami, N. A.; Nemes, M. C.; Wreszinski, Walter F.
2007-10-15
The Casimir effect for a massless scalar field with Dirichlet and periodic boundary conditions (bc's) on infinite parallel plates is revisited in the local quantum field theory (lqft) framework introduced by Kay [Phys. Rev. D 20, 3052 (1979)]. The model displays a number of more realistic features than the ones he treated. In addition to local observables, such as the energy density, we propose to consider intensive variables, such as the energy per unit area ε, as fundamental observables. Adopting this view, lqft rejects Dirichlet (the same result may be proved for Neumann or mixed) bc, and accepts periodic bc: in the former case ε diverges, in the latter it is finite, as is shown by an expression for the local energy density obtained from lqft through the use of the Poisson summation formula. Another way to see this uses methods from the Euler summation formula: in the proof of regularization independence of the energy per unit area, a regularization-dependent surface term arises upon use of Dirichlet bc, but not periodic bc. For the conformally invariant scalar quantum field, this surface term is absent due to the condition of zero trace of the energy-momentum tensor, as remarked by De Witt [Phys. Rep. 19, 295 (1975)]. The latter property does not hold in the application to the dark energy problem in cosmology, in which we argue that periodic bc might play a distinguished role.
Relating zeta functions of discrete and quantum graphs
NASA Astrophysics Data System (ADS)
Harrison, Jonathan; Weyand, Tracy
2018-02-01
We write the spectral zeta function of the Laplace operator on an equilateral metric graph in terms of the spectral zeta function of the normalized Laplace operator on the corresponding discrete graph. To do this, we apply a relation between the spectrum of the Laplacian on a discrete graph and that of the Laplacian on an equilateral metric graph. As a by-product, we determine how the multiplicity of eigenvalues of the quantum graph, that are also in the spectrum of the graph with Dirichlet conditions at the vertices, depends on the graph geometry. Finally we apply the result to calculate the vacuum energy and spectral determinant of a complete bipartite graph and compare our results with those for a star graph, a graph in which all vertices are connected to a central vertex by a single edge.
Nicholls, David P
2018-04-01
The faithful modelling of the propagation of linear waves in a layered, periodic structure is of paramount importance in many branches of the applied sciences. In this paper, we present a novel numerical algorithm for the simulation of such problems which is free of the artificial singularities present in related approaches. We advocate for a surface integral formulation which is phrased in terms of impedance-impedance operators that are immune to the Dirichlet eigenvalues which plague the Dirichlet-Neumann operators that appear in classical formulations. We demonstrate a high-order spectral algorithm to simulate these latter operators based upon a high-order perturbation of surfaces methodology which is rapid, robust and highly accurate. We demonstrate the validity and utility of our approach with a sequence of numerical simulations.
A three-dimensional Dirichlet-to-Neumann operator for water waves over topography
NASA Astrophysics Data System (ADS)
Andrade, D.; Nachbin, A.
2018-06-01
Surface water waves are considered propagating over highly variable non-smooth topographies. For this three dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed reducing the numerical modeling and evolution to the two dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care and a Galerkin method is provided accordingly. One dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large amplitude, rapidly varying topography. An alternative, conformal mapping based, method is used for benchmarking. A two dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three dimensional DtN operator.
Modification of Classical SPM for Slightly Rough Surface Scattering with Low Grazing Angle Incidence
NASA Astrophysics Data System (ADS)
Guo, Li-Xin; Wei, Guo-Hui; Kim, Cheyoung; Wu, Zhen-Sen
2005-11-01
Based on the impedance/admittance rough boundaries, the reflection coefficients and the scattering cross section with low grazing angle incidence are obtained for both VV and HH polarizations. The error of the classical perturbation method at grazing angle is overcome for the vertical polarization at a rough Neumann boundary of infinite extent. The derivation of the formulae and the numerical results show that the backscattering cross section depends on the grazing angle to the fourth power for both Neumann and Dirichlet boundary conditions with low grazing angle incidence. Our results reduce to those of the classical small perturbation method when the Neumann and Dirichlet boundary conditions are neglected. The project was supported by the National Natural Science Foundation of China under Grant No. 60101001 and the National Defense Foundation of China.
Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay
2013-12-01
Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.
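The semi-parametric approach above places a Dirichlet process prior on the random nuisance parameters. A minimal, stdlib-only sketch of the truncated stick-breaking construction of Dirichlet process weights (the concentration value α is illustrative, not taken from the paper):

```python
import random

def stick_breaking(alpha, num_atoms, rng=random):
    """Truncated stick-breaking construction of Dirichlet process weights:
    v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j)."""
    weights, remaining = [], 1.0
    for _ in range(num_atoms):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    return weights  # sums to 1 - remaining (truncation error)

random.seed(0)
w = stick_breaking(alpha=2.0, num_atoms=100)
# Larger alpha spreads the mass over more atoms; smaller alpha
# concentrates it on a few, which is what drives DP clustering.
```

Each weight multiplies a draw from the base measure; in the case-crossover setting the atoms would carry the nuisance parameters shared across subjects.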
Signatures of ecological processes in microbial community time series.
Faust, Karoline; Bauchinger, Franziska; Laroche, Béatrice; de Buyl, Sophie; Lahti, Leo; Washburne, Alex D; Gonze, Didier; Widder, Stefanie
2018-06-28
Growth rates, interactions between community members, stochasticity, and immigration are important drivers of microbial community dynamics. In sequencing data analysis, such as network construction and community model parameterization, we make implicit assumptions about the nature of these drivers and thereby restrict model outcome. Despite apparent risk of methodological bias, the validity of the assumptions is rarely tested, as comprehensive procedures are lacking. Here, we propose a classification scheme to determine the processes that gave rise to the observed time series and to enable better model selection. We implemented a three-step classification scheme in R that first determines whether dependence between successive time steps (temporal structure) is present in the time series and then assesses with a recently developed neutrality test whether interactions between species are required for the dynamics. If the first and second tests confirm the presence of temporal structure and interactions, then parameters for interaction models are estimated. To quantify the importance of temporal structure, we compute the noise-type profile of the community, which ranges from black in case of strong dependency to white in the absence of any dependency. We applied this scheme to simulated time series generated with the Dirichlet-multinomial (DM) distribution, Hubbell's neutral model, the generalized Lotka-Volterra model and its discrete variant (the Ricker model), and a self-organized instability model, as well as to human stool microbiota time series. The noise-type profiles for all but DM data clearly indicated distinctive structures. The neutrality test correctly classified all but DM and neutral time series as non-neutral. The procedure reliably identified time series for which interaction inference was suitable. 
Both tests were required, as we demonstrated that all structured time series, including those generated with the neutral model, achieved a moderate to high goodness of fit to the Ricker model. We present a fast and robust scheme to classify community structure and to assess the prevalence of interactions directly from microbial time series data. The procedure not only serves to determine ecological drivers of microbial dynamics, but also to guide selection of appropriate community models for prediction and follow-up analysis.
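The Dirichlet-multinomial benchmark used above can be simulated with the stdlib alone: each time point is an independent Dirichlet draw followed by multinomial counting, so the series carries no temporal structure by construction (the α values and sequencing depth below are illustrative).

```python
import random

def dirichlet_sample(alphas, rng=random):
    """Draw from Dirichlet(alphas) via normalized Gamma variates."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

def dm_time_series(alphas, depth, steps, rng=random):
    """Dirichlet-multinomial community time series: every time point is an
    independent Dirichlet draw plus multinomial counts, hence white noise."""
    series = []
    for _ in range(steps):
        p = dirichlet_sample(alphas, rng)
        counts = [0] * len(alphas)
        for _ in range(depth):  # naive multinomial via repeated draws
            u, acc = rng.random(), 0.0
            for i, pi in enumerate(p):
                acc += pi
                if u < acc:
                    counts[i] += 1
                    break
            else:
                counts[-1] += 1  # guard against floating-point round-off
        series.append(counts)
    return series

random.seed(1)
ts = dm_time_series([1.0, 2.0, 5.0], depth=100, steps=5)
```

Applied to such data, a temporal-structure test should report no dependence between successive time steps, matching the "white" noise-type profile the abstract describes for DM data.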
A class of renormalised meshless Laplacians for boundary value problems
NASA Astrophysics Data System (ADS)
Basic, Josip; Degiuli, Nastia; Ban, Dario
2018-02-01
A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three various derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained by the smoothed particle hydrodynamics method and the finite differences method on a regular grid. Finally, the strong form of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions are solved in two and three dimensions by making use of the introduced operators in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distribution and offer adequate accuracy for mesh and mesh-free numerical methods that require frequent movement of the grid or point cloud.
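The Taylor-series route to a discrete Laplacian on irregular points can be shown in its simplest form, the one-dimensional analogue of the operators above (the paper's operators are 2-D/3-D and renormalized; this three-point stencil is only the basic idea):

```python
def laplacian_1d(x, u):
    """Second derivative on a non-uniform 1-D point set from a three-point
    Taylor expansion; exact for quadratics.  Returns values at the
    interior points x[1:-1]."""
    out = []
    for i in range(1, len(x) - 1):
        h1 = x[i] - x[i - 1]  # left spacing
        h2 = x[i + 1] - x[i]  # right spacing
        num = h1 * u[i + 1] + h2 * u[i - 1] - (h1 + h2) * u[i]
        out.append(2.0 * num / (h1 * h2 * (h1 + h2)))
    return out

# Irregular points; u = x^2 has exact second derivative 2 everywhere.
xs = [0.0, 0.13, 0.4, 0.55, 1.0]
us = [x * x for x in xs]
print(laplacian_1d(xs, us))  # each entry ≈ 2.0
```

In higher dimensions the same idea becomes an overdetermined least-squares fit of the Taylor coefficients over a neighborhood, which is where the renormalization of the first-derivative terms enters.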
Bound states and propagating modes in quantum wires with sharp bends and/or constrictions
NASA Astrophysics Data System (ADS)
Razavy, M.
1997-06-01
A number of interesting problems of quantum wires with different geometries can be studied with the help of conformal mapping. These include crossed wires, twisting wires, conductors with constrictions, and wires with a bend. Here the Helmholtz equation with Dirichlet boundary condition on the surface of the wire is transformed to a Schrödinger-like equation with an energy-dependent nonseparable potential but with boundary conditions given on two straight lines. By expanding the wave function in terms of the Fourier series of one of the variables one obtains an infinite set of coupled ordinary differential equations. Only the propagating modes plus a few of the localized modes contribute significantly to the total wave function. Once the problem is solved, one can express the results in terms of the original variables using the inverse conformal mapping. As an example, the total wave function, the components of the current density, and the bound-state energy for a Γ-shaped quantum wire are calculated in detail.
Exact sum rules for inhomogeneous drums
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amore, Paolo, E-mail: paolo.amore@gmail.com
2013-09-15
We derive general expressions for the sum rules of the eigenvalues of drums of arbitrary shape and arbitrary density, obeying different boundary conditions. The formulas that we present are a generalization of the analogous formulas for one-dimensional inhomogeneous systems that we obtained in a previous paper. We also discuss the extension of these formulas to higher dimensions. We show that in the special case of a density depending only on one variable the sum rules of any integer order can be expressed in terms of a single series. As an application of our result we derive exact sum rules for the homogeneous circular annulus with different boundary conditions, for a homogeneous circular sector and for a radially inhomogeneous circular annulus with Dirichlet boundary conditions. Highlights: •We derive an explicit expression for the sum rules of inhomogeneous drums. •We discuss the extension to higher dimensions. •We discuss the special case of an inhomogeneity only along one direction.
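The simplest instance of such a spectral sum rule (a 1-D check, not one of the drum formulas from the paper) is the homogeneous string with Dirichlet ends: the first-order sum rule Z(1) = Σ 1/λₙ equals the trace of the Green's function.

```python
import math

# Eigenvalues of -u'' = lambda*u on (0,1) with u(0) = u(1) = 0 are
# lambda_n = (n*pi)^2, so the first-order sum rule is
#   Z(1) = sum_n 1/lambda_n = integral_0^1 G(x, x) dx,
# where G(x, y) = min(x, y) * (1 - max(x, y)) is the Dirichlet
# Green's function, whose diagonal integrates to 1/6.
Z1 = sum(1.0 / (n * math.pi) ** 2 for n in range(1, 100_001))

print(Z1)  # approaches 1/6 as more eigenvalues are included
```

Evaluating the sum rule via the Green's function sidesteps the eigenvalues entirely, which is what makes the drum formulas in the abstract useful when the spectrum is not known in closed form.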
Rapid Airplane Parametric Input Design(RAPID)
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Bloor, Malcolm I. G.; Wilson, Michael J.; Thomas, Almuttil M.
2004-01-01
An efficient methodology is presented for defining a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. A small set of design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. The wing, tail, and canard components are manifested by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Grid sensitivity is obtained by applying the automatic differentiation precompiler ADIFOR to software for the grid generation. The computed surface grids, volume grids, and sensitivity derivatives are suitable for a wide range of Computational Fluid Dynamics simulation and configuration optimizations.
An approach to get thermodynamic properties from speed of sound
NASA Astrophysics Data System (ADS)
Núñez, M. A.; Medina, L. A.
2017-01-01
An approach for estimating thermodynamic properties of gases from the speed of sound u is proposed. The square u², the compression factor Z and the molar heat capacity at constant volume C_V are connected by two coupled nonlinear partial differential equations. Previous approaches to solving this system differ in the conditions used on the range of temperature values [T_min, T_max]. In this work we propose the use of Dirichlet boundary conditions at T_min and T_max. The virial series of the compression factor Z = 1 + Bρ + Cρ² + … and other properties reduces the problem to the solution of a recursive set of linear ordinary differential equations for B, C, …. Analytic solutions of the B equation for argon are used to study the stability of our approach and of previous ones under perturbation errors of the input data. The results show that the approach yields B with a relative error bounded essentially by that of the boundary values, while the error of other approaches can be some orders of magnitude larger.
NASA Astrophysics Data System (ADS)
Yun, Ana; Shin, Jaemin; Li, Yibao; Lee, Seunggyu; Kim, Junseok
We numerically investigate periodic traveling wave solutions for a diffusive predator-prey system with landscape features. The landscape features are modeled through the homogeneous Dirichlet boundary condition which is imposed at the edge of the obstacle domain. To effectively treat the Dirichlet boundary condition, we employ a robust and accurate numerical technique by using a boundary control function. We also propose a robust algorithm for calculating the numerical periodicity of the traveling wave solution. In numerical experiments, we show that periodic traveling waves which move out and away from the obstacle are effectively generated. We explain the formation of the traveling waves by comparing the wavelengths. The spatial asynchrony has been shown in quantitative detail for various obstacles. Furthermore, we apply our numerical technique to the complicated real landscape features.
Hierarchical Dirichlet process model for gene expression clustering
2013-01-01
Clustering is an important data processing tool for interpreting microarray data and genomic network inference. In this article, we propose a clustering algorithm based on the hierarchical Dirichlet process (HDP). The HDP clustering introduces a hierarchical structure in the statistical model which captures the hierarchical features prevalent in biological data such as gene expression data. We develop a Gibbs sampling algorithm based on the Chinese restaurant metaphor for the HDP clustering. We apply the proposed HDP algorithm to both regulatory network segmentation and gene expression clustering. The HDP algorithm is shown to outperform several popular clustering algorithms by revealing the underlying hierarchical structure of the data. For the yeast cell cycle data, we compare the HDP result to the standard result and show that the HDP algorithm provides more information and reduces the unnecessary clustering fragments. PMID:23587447
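The Chinese restaurant metaphor underlying such Gibbs samplers can be sketched directly (a single-level process, not the full hierarchical sampler of the paper; n and α below are illustrative):

```python
import random

def chinese_restaurant_process(n, alpha, rng=random):
    """Seat n customers: a customer joins an existing table with probability
    proportional to its occupancy, or opens a new table with probability
    proportional to alpha.  Returns the table sizes (a partition of n)."""
    tables = []
    for customer in range(n):
        u = rng.random() * (customer + alpha)
        acc = 0.0
        for t, size in enumerate(tables):
            acc += size
            if u < acc:
                tables[t] += 1
                break
        else:
            tables.append(1)  # open a new table
    return tables

random.seed(42)
sizes = chinese_restaurant_process(500, alpha=3.0)
# The number of occupied tables grows like alpha * log(n) on average,
# giving the "rich get richer" cluster-size distribution used in HDP.
```

In the hierarchical version, each group (e.g. each experiment) runs its own restaurant whose dishes are drawn from a shared top-level restaurant, which is what lets clusters be shared across groups.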
Sound-turbulence interaction in transonic boundary layers
NASA Astrophysics Data System (ADS)
Lelostec, Ludovic; Scalo, Carlo; Lele, Sanjiva
2014-11-01
Acoustic wave scattering in a transonic boundary layer is investigated through a novel approach. Instead of simulating directly the interaction of an incoming oblique acoustic wave with a turbulent boundary layer, suitable Dirichlet conditions are imposed at the wall to reproduce only the reflected wave resulting from the interaction of the incident wave with the boundary layer. The method is first validated using the laminar boundary layer profiles in a parallel flow approximation. For this scattering problem an exact inviscid solution can be found in the frequency domain which requires numerical solution of an ODE. The Dirichlet conditions are imposed in a high-fidelity unstructured compressible flow solver for Large Eddy Simulation (LES), CharLESx. The acoustic field of the reflected wave is then solved and the interaction between the boundary layer and sound scattering can be studied.
Step scaling and the Yang-Mills gradient flow
NASA Astrophysics Data System (ADS)
Lüscher, Martin
2014-06-01
The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0 , T] and all fields satisfy Dirichlet boundary conditions at time 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.
Heat kernel for the elliptic system of linear elasticity with boundary conditions
NASA Astrophysics Data System (ADS)
Taylor, Justin; Kim, Seick; Brown, Russell
2014-10-01
We consider the elliptic system of linear elasticity with bounded measurable coefficients in a domain where the second Korn inequality holds. We construct the heat kernel of the system subject to Dirichlet, Neumann, or mixed boundary conditions under the assumption that weak solutions of the elliptic system are Hölder continuous in the interior. Moreover, we show that if weak solutions of the mixed problem are Hölder continuous up to the boundary, then the corresponding heat kernel has a Gaussian bound. In particular, if the domain is a two-dimensional Lipschitz domain satisfying a corkscrew or non-tangential accessibility condition on the set where the Dirichlet boundary condition is specified, then we show that the heat kernel has a Gaussian bound. As an application, we construct the Green's function for the elliptic mixed problem in such a domain.
Saint-Hilary, Gaelle; Cadour, Stephanie; Robert, Veronique; Gasparini, Mauro
2017-05-01
Quantitative methodologies have been proposed to support decision making in drug development and monitoring. In particular, multicriteria decision analysis (MCDA) and stochastic multicriteria acceptability analysis (SMAA) are useful tools to assess the benefit-risk ratio of medicines according to the performances of the treatments on several criteria, accounting for the preferences of the decision makers regarding the relative importance of these criteria. However, even in its probabilistic form, MCDA requires the exact elicitation of the weights of the criteria by the decision makers, which may be difficult to achieve in practice. SMAA allows for more flexibility and can be used with unknown or partially known preferences, but it is less popular due to its increased complexity and the high degree of uncertainty in its results. In this paper, we propose a simple model as a generalization of MCDA and SMAA, applying a Dirichlet distribution to the weights of the criteria and letting its parameters vary. This single model encompasses both MCDA and SMAA, and allows for a more extended exploration of the benefit-risk assessment of treatments. The precision of its results depends on the precision parameter of the Dirichlet distribution, which can be naturally interpreted as the strength of confidence of the decision makers in their elicitation of preferences. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
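The interpolation between MCDA and SMAA via the Dirichlet precision parameter can be sketched by Monte Carlo: draw criteria weights from Dirichlet(s·w₀) and count how often each treatment ranks first (the score table, base weights and precision below are hypothetical, not from the paper).

```python
import random

def dirichlet(alphas, rng=random):
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

def rank1_acceptability(scores, base_weights, precision, draws=20_000, rng=random):
    """Monte Carlo rank-1 acceptability indices with criteria weights drawn
    from Dirichlet(precision * base_weights).  Large precision concentrates
    the weights near base_weights (the MCDA limit); small precision
    approaches SMAA-style weight uncertainty."""
    alphas = [precision * w for w in base_weights]
    wins = [0] * len(scores)
    for _ in range(draws):
        w = dirichlet(alphas, rng)
        totals = [sum(wi * si for wi, si in zip(w, row)) for row in scores]
        wins[totals.index(max(totals))] += 1
    return [c / draws for c in wins]

# Hypothetical benefit-risk table: rows = treatments, columns = criteria,
# already scaled so that larger is better.
scores = [[0.8, 0.3, 0.6],
          [0.5, 0.7, 0.5],
          [0.4, 0.6, 0.9]]
random.seed(7)
acc = rank1_acceptability(scores, base_weights=[0.5, 0.3, 0.2], precision=10.0)
print(acc)  # fractions summing to 1 across treatments
```

Rerunning with a large precision (say 10⁴) collapses the acceptabilities toward the deterministic MCDA ranking under the base weights, which is the interpretation of precision as elicitation confidence.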
First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity
NASA Technical Reports Server (NTRS)
Cai, Z.; Manteuffel, T. A.; McCormick, S. F.
1996-01-01
Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H¹ product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity with estimates that are uniform in the Lame constants.
A resilient domain decomposition polynomial chaos solver for uncertain elliptic PDEs
NASA Astrophysics Data System (ADS)
Mycek, Paul; Contreras, Andres; Le Maître, Olivier; Sargsyan, Khachik; Rizzi, Francesco; Morris, Karla; Safta, Cosmin; Debusschere, Bert; Knio, Omar
2017-07-01
A resilient method is developed for the solution of uncertain elliptic PDEs on extreme scale platforms. The method is based on a hybrid domain decomposition, polynomial chaos (PC) framework that is designed to address soft faults. Specifically, parallel and independent solves of multiple deterministic local problems are used to define PC representations of local Dirichlet boundary-to-boundary maps that are used to reconstruct the global solution. A LAD-lasso type regression is developed for this purpose. The performance of the resulting algorithm is tested on an elliptic equation with an uncertain diffusivity field. Different test cases are considered in order to analyze the impacts of correlation structure of the uncertain diffusivity field, the stochastic resolution, as well as the probability of soft faults. In particular, the computations demonstrate that, provided sufficiently many samples are generated, the method effectively overcomes the occurrence of soft faults.
Electromagnetic MUSIC-type imaging of perfectly conducting, arc-like cracks at single frequency
NASA Astrophysics Data System (ADS)
Park, Won-Kwang; Lesselier, Dominique
2009-11-01
We propose a non-iterative MUSIC (MUltiple SIgnal Classification)-type algorithm for the time-harmonic electromagnetic imaging of one or more perfectly conducting, arc-like cracks found within a homogeneous space ℝ². The algorithm is based on a factorization of the Multi-Static Response (MSR) matrix collected in the far field at a single, nonzero frequency in either Transverse Magnetic (TM) mode (Dirichlet boundary condition) or Transverse Electric (TE) mode (Neumann boundary condition), followed by the calculation of a MUSIC cost functional expected to exhibit peaks along the crack curves at intervals of half a wavelength. Numerical experimentation from exact, noiseless and noisy data shows that this is indeed the case and that the proposed algorithm behaves in a robust manner, with better results in the TM mode than in the TE mode, for which one would have to estimate the normal to the crack to get optimal results.
Variational Problems with Long-Range Interaction
NASA Astrophysics Data System (ADS)
Soave, Nicola; Tavares, Hugo; Terracini, Susanna; Zilio, Alessandro
2018-06-01
We consider a class of variational problems for densities that repel each other at a distance. Typical examples are given by the Dirichlet functional and the Rayleigh functional D(u) = \sum_{i=1}^k \int_{Ω} |\nabla u_i|^2.
Latent Dirichlet Allocation (LDA) Model and kNN Algorithm to Classify Research Project Selection
NASA Astrophysics Data System (ADS)
Safi’ie, M. A.; Utami, E.; Fatta, H. A.
2018-03-01
Universitas Sebelas Maret has a teaching staff of more than 1500 people, and one of its tasks is to carry out research. On the other hand, the funding support for research and community service is limited, so submissions of research and community service (P2M) proposals need to be evaluated. At the selection stage, research proposal documents are collected as unstructured data, and the volume of stored data is very large. Text mining technology is required to extract the information contained in these documents; it is applied to gain knowledge from the documents by automating information extraction. In this article we use Latent Dirichlet Allocation (LDA) as a model in the feature extraction process, to obtain terms that represent the documents. We then use the k-Nearest Neighbour (kNN) algorithm to classify the documents based on these terms.
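The downstream kNN step can be sketched on topic-proportion vectors of the kind an LDA fit produces (the feature vectors and labels below are hypothetical; the LDA fit itself is assumed to have been done already):

```python
from collections import Counter
import math

def knn_classify(train, labels, query, k=3):
    """Majority-vote k-nearest-neighbour classification with Euclidean
    distance on feature vectors (here: LDA topic proportions)."""
    dists = sorted(
        (math.dist(vec, query), lab) for vec, lab in zip(train, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical LDA topic proportions for already-reviewed proposals.
train = [[0.8, 0.1, 0.1], [0.7, 0.2, 0.1],   # topic 0 heavy -> accepted
         [0.1, 0.1, 0.8], [0.2, 0.1, 0.7]]   # topic 2 heavy -> rejected
labels = ["accepted", "accepted", "rejected", "rejected"]

print(knn_classify(train, labels, [0.75, 0.15, 0.10]))  # "accepted"
```

Because LDA compresses each document to a short topic-proportion vector, the kNN distance computation stays cheap even when the raw proposal corpus is large.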
Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.
Yu, Kezi; Quirk, J Gerald; Djurić, Petar M
2017-01-01
In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one that represents healthy and another, non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of the performance of (FHR) recordings in a real-time setting.
Study of a mixed dispersal population dynamics model
Chugunova, Marina; Jadamba, Baasansuren; Kao, Chiu -Yen; ...
2016-08-27
In this study, we consider a mixed dispersal model with periodic and Dirichlet boundary conditions and its corresponding linear eigenvalue problem. This model describes the time evolution of a population which disperses both locally and non-locally. We investigate how long time dynamics depend on the parameter values. Furthermore, we study the minimization of the principal eigenvalue under the constraints that the resource function is bounded from above and below, and with a fixed total integral. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for the species to die out more slowly or survive more easily. Our numerical simulations indicate that the optimal favorable region tends to be a simply-connected domain. Numerous results are shown to demonstrate various scenarios of optimal favorable regions for periodic and Dirichlet boundary conditions.
NASA Astrophysics Data System (ADS)
Cardone, G.; Durante, T.; Nazarov, S. A.
2017-07-01
We consider the spectral Dirichlet problem for the Laplace operator in the plane Ω∘ with double-periodic perforation but also in the domain Ω• with a semi-infinite foreign inclusion so that the Floquet-Bloch technique and the Gelfand transform do not apply directly. We describe waves which are localized near the inclusion and propagate along it. We give a formulation of the problem with radiation conditions that provides a Fredholm operator of index zero. The main conclusion concerns the spectra σ∘ and σ• of the problems in Ω∘ and Ω•, namely we present a concrete geometry which supports the relation σ∘ ⫋σ• due to a new non-empty spectral band caused by the semi-infinite inclusion called an open waveguide in the double-periodic medium.
Dirichlet Component Regression and its Applications to Psychiatric Data.
Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel
2008-08-15
We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook's distance, and a local jackknife influence metric.
Unstable Mode Solutions to the Klein-Gordon Equation in Kerr-anti-de Sitter Spacetimes
NASA Astrophysics Data System (ADS)
Dold, Dominic
2017-03-01
For any cosmological constant Λ = -3/ℓ² < 0 and any α < 9/4, we find a Kerr-AdS spacetime (M, g_KAdS), in which the Klein-Gordon equation □_{g_KAdS}ψ + (α/ℓ²)ψ = 0 has an exponentially growing mode solution satisfying a Dirichlet boundary condition at infinity. The spacetime violates the Hawking-Reall bound r₊² > |a|ℓ. We obtain an analogous result for Neumann boundary conditions if 5/4 < α < 9/4. Moreover, in the Dirichlet case, one can prove that, for any Kerr-AdS spacetime violating the Hawking-Reall bound, there exists an open family of masses α such that the corresponding Klein-Gordon equation permits exponentially growing mode solutions. Our result adopts methods of Shlapentokh-Rothman developed in (Commun. Math. Phys. 329:859-891, 2014) and provides the first rigorous construction of a superradiant instability for negative cosmological constant.
NASA Astrophysics Data System (ADS)
Gross, Markus
2018-03-01
We consider a one-dimensional fluctuating interfacial profile governed by the Edwards–Wilkinson or the stochastic Mullins-Herring equation for periodic, standard Dirichlet and Dirichlet no-flux boundary conditions. The minimum action path of an interfacial fluctuation conditioned to reach a given maximum height M at a finite (first-passage) time T is calculated within the weak-noise approximation. Dynamic and static scaling functions for the profile shape are obtained in the transient and the equilibrium regime, i.e. for first-passage times T smaller or larger than the characteristic relaxation time, respectively. In both regimes, the profile approaches the maximum height M with a universal algebraic time dependence characterized solely by the dynamic exponent of the model. It is shown that, in the equilibrium regime, the spatial shape of the profile depends sensitively on boundary conditions and conservation laws, but it is essentially independent of them in the transient regime.
Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models
Yu, Kezi; Quirk, J. Gerald
2017-01-01
In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one that represents healthy and another, non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting. PMID:28953927
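Dirichlet-process mixtures like the HDP build on the Chinese restaurant process, a prior over partitions in which each new observation joins an existing cluster with probability proportional to its size or opens a new one with probability proportional to a concentration parameter. A minimal sketch of a CRP draw (an illustration of the underlying prior, not the authors' HDP or finite-capacity CRFC models):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Draw cluster sizes for n items from a Chinese restaurant process
    with concentration parameter alpha."""
    rng = random.Random(seed)
    counts = []                      # size of each cluster so far
    for _ in range(n):
        total = sum(counts) + alpha
        r = rng.uniform(0.0, total)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:              # join existing cluster k
                counts[k] += 1
                break
        else:                        # open a new cluster
            counts.append(1)
    return counts

sizes = crp_partition(200, alpha=2.0)
```

The number of clusters grows roughly as alpha·log(n), which is what lets such models infer the number of mixture components from data rather than fixing it in advance.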
Stereochemistry of silicon in oxygen-containing compounds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serezhkin, V. N., E-mail: Serezhkin@samsu.ru; Urusov, V. S.
2017-01-15
Specific stereochemical features of silicon in oxygen-containing compounds, including hybrid silicates in which all oxygen atoms of SiO_n groups (n = 4, 5, or 6) enter into the composition of organic anions or molecules, are described by characteristics of Voronoi-Dirichlet polyhedra. It is found that in rutile-like stishovite and post-stishovite phases with structures similar to those of CaCl2, α-PbO2, or pyrite FeS2, the volume of the Voronoi-Dirichlet polyhedra of silicon and oxygen atoms decreases linearly with pressure increasing to 268 GPa. Based on these results, the possibility of formation of new post-stishovite phases is shown, namely, the fluorite-like structure (transition predicted at ~400 GPa) and a body-centered cubic lattice with statistical arrangement of silicon and oxygen atoms (~900 GPa).
Multiple Indicator Stationary Time Series Models.
ERIC Educational Resources Information Center
Sivo, Stephen A.
2001-01-01
Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2018-05-01
In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields sufficiently accurately for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D-groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
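The splitting idea, removing the boundary-condition component from the snapshots before computing the POD basis so that the two subspaces stay orthogonal, can be sketched with plain linear algebra (a simplified stand-in for the paper's construction; the function name and shapes are illustrative):

```python
import numpy as np

def split_pod_basis(snapshots, bc_modes, tol=1e-10):
    """Split a POD basis into a boundary part and an interior part
    orthogonal to it (illustrative simplification).

    snapshots: (n, m) array, columns are model states
    bc_modes:  (n, k) array, columns span the boundary conditions
    """
    # Orthonormal basis for the boundary-condition subspace
    q_bc, _ = np.linalg.qr(bc_modes)
    # Remove the boundary component from every snapshot
    interior = snapshots - q_bc @ (q_bc.T @ snapshots)
    # Standard POD: truncated SVD of the remaining interior data
    u, s, _ = np.linalg.svd(interior, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))
    return q_bc, u[:, :rank]
```

In a reduced model built on the combined basis [q_bc, u], boundary values can be imposed exactly through the q_bc coefficients while the interior modes carry the remaining dynamics.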
Casimir interaction between spheres in ( D + 1)-dimensional Minkowski spacetime
NASA Astrophysics Data System (ADS)
Teo, L. P.
2014-05-01
We consider the Casimir interaction between two spheres in (D + 1)-dimensional Minkowski spacetime due to the vacuum fluctuations of scalar fields. We consider combinations of Dirichlet and Neumann boundary conditions. The TGTG formula for the Casimir interaction energy is derived. The computation of the T matrices of the two spheres is straightforward. To compute the two G matrices, known as translation matrices, which relate the hyper-spherical waves in two spherical coordinate frames that differ by a translation, we generalize the operator approach employed in [39]. The result is expressed in terms of an integral over Gegenbauer polynomials. In contrast to the D = 3 case, we do not re-express the integral in terms of 3j-symbols and hyper-spherical waves, which in principle can be done but does not simplify the formula. Using our expression for the Casimir interaction energy, we derive the large-separation and small-separation asymptotic expansions of the Casimir interaction energy. In the large separation regime, we find that the Casimir interaction energy is of order L^{-(2D-3)}, L^{-(2D-1)} and L^{-(2D+1)}, respectively, for Dirichlet-Dirichlet, Dirichlet-Neumann and Neumann-Neumann boundary conditions, where L is the center-to-center distance of the two spheres. In the small separation regime, we confirm that the leading term of the Casimir interaction agrees with the proximity force approximation, which is of order , where d is the distance between the two spheres. Another main result of this work is the analytic computation of the next-to-leading order term in the small separation asymptotic expansion. This term is computed using careful order analysis as well as a perturbation method. In the case where the radius of one of the spheres goes to infinity, we find that the results agree with those we derive for the sphere-plate configuration. When D = 3, we also recover previously known results. We find that when D is large, the ratio of the next-to-leading order term to the leading order term is linear in D, indicating a larger correction at higher dimensions. The methodologies employed in this work and the results obtained can be used to study the one-loop effective action of the system of two spherical objects in the universe.
Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot
Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki
2018-01-01
In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. 
The results support our theoretical outcomes. PMID:29872389
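The lazy greedy trick the authors justify exploits submodularity: a cached marginal gain can only shrink as the selected set grows, so it is an upper bound, and an element whose freshly recomputed gain still beats the best cached bound can be accepted without re-evaluating the rest. A generic sketch (illustrative, not the MHDP code; the coverage function at the end is a made-up example):

```python
import heapq

def lazy_greedy(ground, f, budget):
    """Select up to `budget` elements maximizing a monotone submodular
    set function f, re-evaluating marginal gains lazily."""
    selected = []
    # Initial marginal gains: f on singletons
    heap = [(-f({a}), a) for a in ground]
    heapq.heapify(heap)
    while heap and len(selected) < budget:
        _, a = heapq.heappop(heap)
        base = f(set(selected))
        gain = f(set(selected) | {a}) - base
        # Submodularity: cached gains only shrink, so if the fresh gain
        # still beats the best cached bound, `a` is the true argmax.
        if not heap or gain >= -heap[0][0] - 1e-12:
            selected.append(a)
        else:
            heapq.heappush(heap, (-gain, a))
    return selected

# Toy weighted-coverage objective (monotone submodular)
cover = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {5}}
coverage = lambda s: len(set().union(*(cover[x] for x in s))) if s else 0
picked = lazy_greedy(['a', 'b', 'c'], coverage, budget=2)
# picked == ['a', 'b']
```

The greedy solution carries the usual (1 − 1/e) approximation guarantee for monotone submodular maximization, which is what gives the action-selection criterion its theoretical justification.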
Boundary conditions in Chebyshev and Legendre methods
NASA Technical Reports Server (NTRS)
Canuto, C.
1984-01-01
Two different ways of treating non-Dirichlet boundary conditions in Chebyshev and Legendre collocation methods are discussed for second order differential problems. An error analysis is provided. The effect of preconditioning the corresponding spectral operators by finite difference matrices is also investigated.
A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION
We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...
Simon, Laurent; Ospina, Juan
2016-07-25
Three-dimensional solute transport was investigated for a spherical device with a release hole. The governing equation was derived using Fick's second law. A mixed Neumann-Dirichlet condition was imposed at the boundary to represent diffusion through a small region on the surface of the device. The cumulative percentage of drug released was calculated in the Laplace domain and represented by the first term of an infinite series of Legendre and modified Bessel functions of the first kind. Application of the Zakian algorithm yielded the time-domain closed-form expression. The first-order solution closely matched a numerical solution generated by Mathematica®. The proposed method allowed computation of the characteristic time. A larger surface pore resulted in a smaller effective time constant. The agreement between the numerical solution and the semi-analytical method improved noticeably as the size of the orifice increased. It took four time constants for the device to release approximately ninety-eight percent of its drug content. Copyright © 2016 Elsevier B.V. All rights reserved.
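The closing figure, roughly ninety-eight percent released after four time constants, is the familiar first-order rule f(t) = 1 − e^(−t/τ); the paper's full solution is a Legendre/Bessel series, of which this single exponential is only the leading term. A quick check:

```python
import math

# Fraction released after t = n effective time constants for a
# first-order release profile f(t) = 1 - exp(-t/tau).
for n_tau in (1, 2, 3, 4):
    frac = 1.0 - math.exp(-n_tau)
    print(f"{n_tau} time constants: {frac:.4f}")
# After four time constants the device has released about 98.2%.
```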
NASA Astrophysics Data System (ADS)
Diestra Cruz, Heberth Alexander
The Green's function integral technique is used to determine the conduction heat transfer temperature field in flat plates, circular plates, and solid spheres with saw-tooth heat generating sources. In all cases the boundary temperature is specified (Dirichlet condition) and the thermal conductivity is constant. The method of images is used to find the Green's function in infinite solids, semi-infinite solids, infinite quadrants, circular plates, and solid spheres. The saw-tooth heat generation source has been modeled using the Dirac delta function and the Heaviside step function. The use of Green's functions allows one to obtain the temperature distribution in the form of an integral, which avoids the convergence problems of infinite series. For the infinite solid and the sphere, the temperature distribution is three-dimensional, and in the cases of the semi-infinite solid, infinite quadrant and circular plate the distribution is two-dimensional. The method used in this work is superior to other methods because it obtains elegant analytical or quasi-analytical solutions to complex heat conduction problems with less computational effort and more accuracy than fully numerical methods.
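The method of images for a Dirichlet boundary is easiest to see in its simplest setting: for the 2D Laplacian on the upper half-plane, subtracting the free-space kernel of a mirror source at the reflected point forces the Green's function to vanish on y = 0. A toy version of the constructions used here (the function name is illustrative):

```python
import math

def green_half_plane(x, y, xs, ys):
    """2D Laplace Green's function for the upper half-plane y > 0 with
    a homogeneous Dirichlet condition on y = 0, built by the method of
    images: real source at (xs, ys), mirror sink at (xs, -ys)."""
    r2 = (x - xs) ** 2 + (y - ys) ** 2    # distance^2 to the source
    ri2 = (x - xs) ** 2 + (y + ys) ** 2   # distance^2 to the image
    return -(1.0 / (4.0 * math.pi)) * math.log(r2 / ri2)
```

On the boundary y = 0 the two distances coincide, so the logarithm vanishes identically, which is exactly the Dirichlet condition; integrating this kernel against a source term then yields the temperature field directly, with no series to truncate.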
NASA Astrophysics Data System (ADS)
López, O. E.; Guazzotto, L.
2017-03-01
The Grad-Shafranov-Bernoulli system of equations is a single fluid magnetohydrodynamical description of axisymmetric equilibria with mass flows. Using a variational perturbative approach [E. Hameiri, Phys. Plasmas 20, 024504 (2013)], analytic approximations for high-beta equilibria in circular, elliptical, and D-shaped cross sections in the high aspect ratio approximation are found, which include finite toroidal and poloidal flows. Assuming a polynomial dependence of the free functions on the poloidal flux, the equilibrium problem is reduced to an inhomogeneous Helmholtz partial differential equation (PDE) subject to homogeneous Dirichlet conditions. An application of the Green's function method leads to a closed form for the circular solution and to a series solution in terms of Mathieu functions for the elliptical case, which is valid for arbitrary elongations. To extend the elliptical solution to a D-shaped domain, a boundary perturbation in terms of the triangularity is used. A comparison with the code FLOW [L. Guazzotto et al., Phys. Plasmas 11(2), 604-614 (2004)] is presented for relevant scenarios.
NASA Astrophysics Data System (ADS)
Brown-Dymkoski, Eric; Kasimov, Nurlybek; Vasilyev, Oleg V.
2014-04-01
In order to introduce solid obstacles into flows, several different methods are used, including volume penalization methods which prescribe appropriate boundary conditions by applying local forcing to the constitutive equations. One well known method is Brinkman penalization, which models solid obstacles as porous media. While it has been adapted for compressible, incompressible, viscous and inviscid flows, it is limited in the types of boundary conditions that it imposes, as are most volume penalization methods. Typically, approaches are limited to Dirichlet boundary conditions. In this paper, Brinkman penalization is extended for generalized Neumann and Robin boundary conditions by introducing hyperbolic penalization terms with characteristics pointing inward on solid obstacles. This Characteristic-Based Volume Penalization (CBVP) method is a comprehensive approach to conditions on immersed boundaries, providing for homogeneous and inhomogeneous Dirichlet, Neumann, and Robin boundary conditions on hyperbolic and parabolic equations. This CBVP method can be used to impose boundary conditions for both integrated and non-integrated variables in a systematic manner that parallels the prescription of exact boundary conditions. Furthermore, the method does not depend upon a physical model, as with porous media approach for Brinkman penalization, and is therefore flexible for various physical regimes and general evolutionary equations. Here, the method is applied to scalar diffusion and to direct numerical simulation of compressible, viscous flows. With the Navier-Stokes equations, both homogeneous and inhomogeneous Neumann boundary conditions are demonstrated through external flow around an adiabatic and heated cylinder. Theoretical and numerical examination shows that the error from penalized Neumann and Robin boundary conditions can be rigorously controlled through an a priori penalization parameter η. 
The error on a transient boundary is found to converge as O(η), which is more favorable than the error convergence of the already established Dirichlet boundary condition.
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
A numerical technique for linear elliptic partial differential equations in polygonal domains.
Hashemzadeh, P; Fokas, A S; Smitheman, S A
2015-03-08
Integral representations for the solution of linear elliptic partial differential equations (PDEs) can be obtained using Green's theorem. However, these representations involve both the Dirichlet and the Neumann values on the boundary, and for a well-posed boundary-value problem (BVP) one of these functions is unknown. A new transform method for solving BVPs for linear and integrable nonlinear PDEs, usually referred to as the unified transform (or the Fokas transform), was introduced by the second author in the late nineties. For linear elliptic PDEs, this method can be considered as the analogue of the Green's function approach, but now formulated in the complex Fourier plane instead of the physical plane. It employs two global relations, also formulated in the Fourier plane, which couple the Dirichlet and the Neumann boundary values. These relations can be used to characterize the unknown boundary values in terms of the given boundary data, yielding an elegant approach for determining the Dirichlet-to-Neumann map. The numerical implementation of the unified transform can be considered as the counterpart in the Fourier plane of the well-known boundary integral method, which is formulated in the physical plane. For this implementation, one must choose (i) a suitable basis for expanding the unknown functions and (ii) an appropriate set of complex values, which we refer to as collocation points, at which to evaluate the global relations. Here, by employing a variety of examples, we present simple guidelines for how the above choices can be made. Furthermore, we provide concrete rules for choosing the collocation points so that the condition number of the matrix of the associated linear system remains low.
Simplification of multiple Fourier series - An example of algorithmic approach
NASA Technical Reports Server (NTRS)
Ng, E. W.
1981-01-01
This paper describes one example of multiple Fourier series which originate from a problem of spectral analysis of time series data. The example is exercised here with an algorithmic approach which can be generalized for other series manipulation on a computer. The generalized approach is presently pursued towards applications to a variety of multiple series and towards a general purpose algorithm for computer algebra implementation.
DUTIR at TREC 2009: Chemical IR Track
2009-11-01
We set the Dirichlet prior empirically at 1,500 as recommended in [2]. For example, Topic 15, “Betaines for peripheral arterial disease,” is converted into the following Indri query: #combine( betaines for peripheral arterial disease ), which produces results rank-equivalent to a simple query
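Dirichlet-prior smoothing interpolates document term frequencies with collection statistics, with μ (here 1,500) controlling the weight of the prior. A minimal query-likelihood scorer in this style (an illustration of the smoothing formula, not Indri's implementation; it assumes every query term occurs somewhere in the collection):

```python
import math
from collections import Counter

def dirichlet_score(query, doc, collection, mu=1500.0):
    """Log query-likelihood of `doc` with Dirichlet prior smoothing:
    p(t|d) = (c(t,d) + mu * p(t|C)) / (|d| + mu)."""
    doc_tf = Counter(doc)
    coll_tf = Counter(collection)
    coll_len = sum(coll_tf.values())
    score = 0.0
    for t in query:
        p_coll = coll_tf[t] / coll_len
        score += math.log((doc_tf[t] + mu * p_coll) / (len(doc) + mu))
    return score
```

With large μ, short documents are pulled strongly toward the collection language model, which is why the parameter is typically tuned per collection rather than fixed.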
Modifications to holographic entanglement entropy in warped CFT
NASA Astrophysics Data System (ADS)
Song, Wei; Wen, Qiang; Xu, Jianfei
2017-02-01
In [1] it was observed that asymptotic boundary conditions play an important role in the study of holographic entanglement beyond AdS/CFT. In particular, the Ryu-Takayanagi proposal must be modified for warped AdS3 (WAdS3) with Dirichlet boundary conditions. In this paper, we consider AdS3 and WAdS3 with Dirichlet-Neumann boundary conditions. The conjectured holographic duals are warped conformal field theories (WCFTs), featuring a Virasoro-Kac-Moody algebra. We provide a holographic calculation of the entanglement entropy and Rényi entropy using AdS3/WCFT and WAdS3/WCFT dualities. Our bulk results are consistent with the WCFT results derived by Castro-Hofman-Iqbal using the Rindler method. Comparing with [1], we explicitly show that the holographic entanglement entropy is indeed affected by boundary conditions. Both results differ from the Ryu-Takayanagi proposal, indicating new relations between spacetime geometry and quantum entanglement for holographic dualities beyond AdS/CFT.
A generalized Poisson solver for first-principles device simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch; Brück, Sascha
2016-01-28
Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet-type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework, giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density-dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
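The core of the scheme, preconditioning the variable-coefficient (generalized) Poisson operator with the constant-coefficient Laplacian inside a stationary iteration, can be illustrated in 1D with finite differences. This is a sketch under simplified assumptions: the paper works with plane waves and constraint-enforced Dirichlet subdomains, while here only homogeneous Dirichlet ends are kept and the preconditioner is inverted directly:

```python
import numpy as np

n, h = 64, 1.0 / 65

def fd_operator(coef_half):
    """Matrix of -d/dx(coef du/dx) with homogeneous Dirichlet ends;
    coef_half holds the coefficient at the n+1 half-grid points."""
    A = np.zeros((n, n))
    for i in range(n):
        cm, cp = coef_half[i], coef_half[i + 1]
        A[i, i] = (cm + cp) / h**2
        if i > 0:
            A[i, i - 1] = -cm / h**2
        if i < n - 1:
            A[i, i + 1] = -cp / h**2
    return A

x_half = (np.arange(n + 1) + 0.5) * h
eps = 1.0 + 0.5 * np.sin(2 * np.pi * x_half)   # smooth "dielectric"
A = fd_operator(eps)                 # generalized Poisson operator
L = fd_operator(np.ones(n + 1))      # standard Laplace preconditioner

f = np.ones(n)
u = np.zeros(n)
L_inv = np.linalg.inv(L)             # fine at this toy size
for _ in range(100):                 # stationary (Richardson) iteration
    u += L_inv @ (f - A @ u)
```

Convergence follows because the generalized eigenvalues of (A, L) lie between the minimum and maximum of the dielectric profile (here [0.5, 1.5]), so the iteration matrix I − L⁻¹A has spectral radius at most 0.5.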
NASA Astrophysics Data System (ADS)
Ciarlet, P.
1994-09-01
Hereafter, we describe and analyze, from both a theoretical and a numerical point of view, an iterative method for efficiently solving symmetric elliptic problems with possibly discontinuous coefficients. In the following, we use the Preconditioned Conjugate Gradient method to solve the symmetric positive definite linear systems which arise from the finite element discretization of the problems. We focus our interest on sparse and efficient preconditioners. In order to define the preconditioners, we perform two steps: first we reorder the unknowns and then we carry out a (modified) incomplete factorization of the original matrix. We study numerically and theoretically two preconditioners, the second corresponding to the one investigated by Brand and Heinemann [2]. We prove convergence results for the Poisson equation with either Dirichlet or periodic boundary conditions. For a mesh size h, Brand proved that the condition number of the preconditioned system is bounded by O(h^{-1/2}) for Dirichlet boundary conditions. By slightly modifying the preconditioning process, we prove that the condition number is bounded by O(h^{-1/3}).
Dirichlet Component Regression and its Applications to Psychiatric Data
Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel
2011-01-01
We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each, in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook’s distance, and a local jackknife influence metric. PMID:22058582
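In such a regression, each component's concentration parameter is typically linked to covariates through a log link, and the likelihood is the Dirichlet density over the observed composition. A minimal sketch of that likelihood (a hypothetical parameterization for illustration, not the authors' exact model):

```python
import math

def dirichlet_loglik(alpha, p):
    """Log-density of Dirichlet(alpha) at a composition p (sums to 1)."""
    return (math.lgamma(sum(alpha))
            - sum(math.lgamma(a) for a in alpha)
            + sum((a - 1.0) * math.log(x) for a, x in zip(alpha, p)))

def regression_alpha(beta, z):
    """Log-linear link: alpha_j = exp(beta_j . z) for covariate vector z."""
    return [math.exp(sum(b * v for b, v in zip(bj, z))) for bj in beta]
```

Fitting amounts to maximizing the sum of dirichlet_loglik(regression_alpha(beta, z_i), p_i) over patients, where p_i would be the normalized PANSS component shares for patient i.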
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Dr. Li; Cui, Xiaohui; Cemerlic, Alma
Ad hoc networks are very helpful in situations where no fixed network infrastructure is available, such as natural disasters and military conflicts. In such a network, all wireless nodes are equal peers simultaneously serving as both senders and routers for other nodes. Therefore, how to route packets through reliable paths becomes a fundamental problem when the behavior of certain nodes deviates from wireless ad hoc routing protocols. We propose a novel Dirichlet reputation model based on Bayesian inference theory which evaluates the reliability of each node in terms of packet delivery. Our system offers a way to predict and select a reliable path through a combination of first-hand observation and second-hand reputation reports. We also propose a moving-window mechanism which helps adjust the responsiveness of our system to changes in node behavior. We integrated the Dirichlet reputation model into the routing protocol of wireless ad hoc networks. Our extensive simulations indicate that the proposed reputation system can improve the good throughput of the network and reduce the negative impact caused by misbehaving nodes.
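The Bayesian machinery here is simple to sketch: per-node reputation is a Dirichlet posterior over delivery outcomes (a Beta distribution in the two-outcome case), and a moving window ages out old observations so the estimate tracks behavior changes. An illustrative toy version (class and parameter names are assumptions, not the paper's model):

```python
from collections import deque

class DirichletReputation:
    """Per-node reputation as a Dirichlet (here Beta) posterior over
    {delivered, dropped}, with a moving window over recent outcomes."""

    def __init__(self, window=50):
        self.obs = deque(maxlen=window)   # moving-window mechanism

    def record(self, delivered):
        """First-hand observation of one forwarding outcome."""
        self.obs.append(bool(delivered))

    def reliability(self):
        """Posterior-mean delivery probability under a uniform
        Dirichlet(1, 1) prior: (successes + 1) / (trials + 2)."""
        s = sum(self.obs)
        return (s + 1) / (len(self.obs) + 2)
```

Second-hand reports can be folded in by adding the reported success/failure counts to the posterior before taking the mean; the window then bounds how long stale reports influence path selection.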
Li, Xian-Ying; Hu, Shi-Min
2013-02-01
Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
Positivity and Almost Positivity of Biharmonic Green's Functions under Dirichlet Boundary Conditions
NASA Astrophysics Data System (ADS)
Grunau, Hans-Christoph; Robert, Frédéric
2010-03-01
In general, for higher order elliptic equations and boundary value problems like the biharmonic equation and the linear clamped plate boundary value problem, neither a maximum principle nor a comparison principle or, equivalently, a positivity preserving property is available. The problem is rather involved since the clamped boundary conditions prevent the boundary value problem from being reasonably written as a system of second order boundary value problems. It is shown that, on the other hand, for bounded smooth domains Ω ⊂ ℝ^n, the negative part of the corresponding Green’s function is “small” when compared with its singular positive part, provided n ≥ 3. Moreover, the biharmonic Green’s function in balls B ⊂ ℝ^n under Dirichlet (that is, clamped) boundary conditions is known explicitly and is positive. It has been known for some time that positivity is preserved under small regular perturbations of the domain, if n = 2. In the present paper, such a stability result is proved for n ≥ 3.
Modeling Information Content Via Dirichlet-Multinomial Regression Analysis.
Ferrari, Alberto
2017-01-01
Shannon entropy is being increasingly used in biomedical research as an index of complexity and information content in sequences of symbols, e.g. languages, amino acid sequences, DNA methylation patterns and animal vocalizations. Yet, the distributional properties of information entropy as a random variable have seldom been the object of study, leading researchers mainly to use linear models or simulation-based analytical approaches to assess differences in information content when entropy is measured repeatedly in different experimental conditions. Here a method to perform inference on entropy in such conditions is proposed. Building on results from studies in the field of Bayesian entropy estimation, a symmetric Dirichlet-multinomial regression model, able to deal efficiently with the issue of mean entropy estimation, is formulated. Through a simulation study the model is shown to outperform linear modeling in a vast range of scenarios and to have promising statistical properties. As a practical example, the method is applied to a data set coming from a real experiment on animal communication.
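One motivation for the Bayesian treatment is that the naive plug-in entropy estimator is biased downward for small samples. The contrast can be illustrated by comparing the plug-in estimate with the entropy of the posterior-mean distribution under a symmetric Dirichlet prior (a crude stand-in for full Bayesian entropy estimation, which averages entropy over the posterior rather than plugging in its mean):

```python
import math
from collections import Counter

def plugin_entropy(seq):
    """Naive (maximum-likelihood) Shannon entropy in bits."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def posterior_mean_entropy(seq, alphabet, a=1.0):
    """Entropy of the posterior-mean distribution under a symmetric
    Dirichlet(a) prior; unseen symbols receive smoothed mass."""
    counts = Counter(seq)
    denom = len(seq) + a * len(alphabet)
    probs = [(counts[s] + a) / denom for s in alphabet]
    return -sum(p * math.log2(p) for p in probs)
```

For a sequence of identical symbols the plug-in estimate is exactly zero, while the smoothed estimate stays positive, reflecting residual uncertainty about the true distribution.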
NASA Astrophysics Data System (ADS)
Zhukovsky, K.; Oskolkov, D.
2018-03-01
A system of hyperbolic-type inhomogeneous differential equations (DE) is considered for non-Fourier heat transfer in thin films. Exact harmonic solutions to the Guyer-Krumhansl-type heat equation and to the system of inhomogeneous DE are obtained under Cauchy- and Dirichlet-type conditions. The contributions of ballistic-type heat transport, of Cattaneo heat waves and of Fourier heat diffusion are discussed and compared with each other in various conditions. The study is applied to ballistic heat transport in thin films. Rapid evolution of the ballistic quasi-temperature component in low-dimensional systems is elucidated and compared with the slow evolution of its diffusive counterpart. The effect of the ballistic quasi-temperature component on the evolution of the complete quasi-temperature is explored. In this context, the influence of the Knudsen number and of Cauchy- and Dirichlet-type conditions on the evolution of the temperature distribution is examined. A comparative analysis of the obtained solutions is performed.
Memoized Online Variational Inference for Dirichlet Process Mixture Models
2014-06-27
breaking process [7], which places artificially large mass on the final component. It is more efficient and broadly applicable than an alternative truncation...models. In Uncertainty in Artificial Intelligence, 2008. [13] N. Le Roux, M. Schmidt, and F. Bach. A stochastic gradient method with an exponential
Linguistic Extensions of Topic Models
ERIC Educational Resources Information Center
Boyd-Graber, Jordan
2010-01-01
Topic models like latent Dirichlet allocation (LDA) provide a framework for analyzing large datasets where observations are collected into groups. Although topic modeling has been fruitfully applied to problems in social science, biology, and computer vision, it has been most widely used to model datasets where documents are modeled as exchangeable…
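The exchangeability assumption mentioned above can be made concrete by sketching LDA's generative process; the two-topic, three-word vocabulary below is a hypothetical illustration, not from the dissertation.

```python
import random

def sample_dirichlet(rng, alpha):
    # Standard gamma trick: normalize independent Gamma(a_k, 1) draws.
    draws = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(draws)
    return [d / s for d in draws]

def generate_doc(rng, phi, alpha, length):
    # LDA generative process for one document: draw a topic mixture theta,
    # then for each word position draw a topic z ~ theta and a word w ~ phi[z].
    theta = sample_dirichlet(rng, alpha)
    doc = []
    for _ in range(length):
        z = rng.choices(range(len(theta)), weights=theta)[0]
        w = rng.choices(range(len(phi[z])), weights=phi[z])[0]
        doc.append(w)
    return doc

rng = random.Random(0)
phi = [[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]]   # two topics over a 3-word vocabulary
doc = generate_doc(rng, phi, alpha=[0.5, 0.5], length=20)
print(len(doc))  # 20 word indices
```

Words within a document are exchangeable here because, given theta, each position is drawn i.i.d. — exactly the assumption the linguistic extensions in the work aim to relax.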
Scalable Topic Modeling: Online Learning, Diagnostics, and Recommendation
2017-03-01
Chinese restaurant processes. Journal of Machine Learning Research, 12:2461–2488, 2011. 15. L. Hannah, D. Blei and W. Powell. Dirichlet process mixtures of...34. S. Ghosh, A. Ungureanu, E. Sudderth, and D. Blei. A Spatial distance dependent Chinese restaurant process for image segmentation. In Neural
On functional determinants of matrix differential operators with multiple zero modes
NASA Astrophysics Data System (ADS)
Falco, G. M.; Fedorenko, Andrei A.; Gruzberg, Ilya A.
2017-12-01
We generalize the method of computing functional determinants with a single excluded zero eigenvalue developed by McKane and Tarlie to differential operators with multiple zero eigenvalues. We derive general formulas for such functional determinants of r × r matrix second order differential operators O with 0 < n ≤ 2r linearly independent zero modes. We separately discuss the cases of the homogeneous Dirichlet boundary conditions, when the number of zero modes cannot exceed r, and the case of twisted boundary conditions, including the periodic and anti-periodic ones, when the number of zero modes is bounded above by 2r. In all cases the determinants with excluded zero eigenvalues can be expressed only in terms of the n zero modes and other r-n or 2r-n (depending on the boundary conditions) solutions of the homogeneous equation O h = 0, in the spirit of the Gel'fand-Yaglom approach. In instanton calculations, the contribution of the zero modes is taken into account by introducing the so-called collective coordinates. We show that there is a remarkable cancellation of a factor (involving scalar products of zero modes) between the Jacobian of the transformation to the collective coordinates and the functional fluctuation determinant with excluded zero eigenvalues. This cancellation drastically simplifies instanton calculations when one uses our formulas.
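For the simplest scalar case (r = 1, no zero modes, Dirichlet boundary conditions), the Gel'fand-Yaglom idea reduces a determinant ratio to a single initial-value problem. The sketch below checks this numerically for the illustrative operator O = -d²/dx² + m² on [0, 1], where the ratio against the massless operator is known in closed form to be sinh(m)/m; it is a toy check, not the matrix-operator formulas of the paper.

```python
import math

def gelfand_yaglom_ratio(m, steps=2000):
    # Gel'fand-Yaglom: det(-d2/dx2 + m^2) / det(-d2/dx2) on [0,1] with
    # Dirichlet BCs equals y(1), where y'' = m^2 y, y(0) = 0, y'(0) = 1
    # (the massless initial-value solution has y(1) = 1).
    h = 1.0 / steps
    y, yp = 0.0, 1.0
    def f(a, b):
        return b, m * m * a          # (y, y')' = (y', m^2 y)
    for _ in range(steps):
        k1 = f(y, yp)                # classical RK4 integration
        k2 = f(y + 0.5 * h * k1[0], yp + 0.5 * h * k1[1])
        k3 = f(y + 0.5 * h * k2[0], yp + 0.5 * h * k2[1])
        k4 = f(y + h * k3[0], yp + h * k3[1])
        y += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        yp += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return y

ratio = gelfand_yaglom_ratio(2.0)
print(abs(ratio - math.sinh(2.0) / 2.0) < 1e-8)  # matches sinh(m)/m
```

The appeal of the method, which the paper extends to matrix operators with zero modes, is that no eigenvalue is ever computed: one boundary-value solve replaces an infinite product.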
Yeo, B T Thomas; Krienen, Fenna M; Chee, Michael W L; Buckner, Randy L
2014-03-01
The organization of the human cerebral cortex has recently been explored using techniques for parcellating the cortex into distinct functionally coupled networks. The divergent and convergent nature of cortico-cortical anatomic connections suggests the need to consider the possibility of regions belonging to multiple networks and hierarchies among networks. Here we applied the Latent Dirichlet Allocation (LDA) model and spatial independent component analysis (ICA) to solve for functionally coupled cerebral networks without assuming that cortical regions belong to a single network. Data analyzed included 1000 subjects from the Brain Genomics Superstruct Project (GSP) and 12 high quality individual subjects from the Human Connectome Project (HCP). The organization of the cerebral cortex was similar regardless of whether a winner-take-all approach or the more relaxed constraints of LDA (or ICA) were imposed. This suggests that large-scale networks may function as partially isolated modules. Several notable interactions among networks were uncovered by the LDA analysis. Many association regions belong to at least two networks, while somatomotor and early visual cortices are especially isolated. As examples of interaction, the precuneus, lateral temporal cortex, medial prefrontal cortex and posterior parietal cortex participate in multiple paralimbic networks that together comprise subsystems of the default network. In addition, regions at or near the frontal eye field and human lateral intraparietal area homologue participate in multiple hierarchically organized networks. These observations were replicated in both datasets and could be detected (and replicated) in individual subjects from the HCP. © 2013.
NASA Astrophysics Data System (ADS)
Krishnan, Chethan; Maheshwari, Shubham; Bala Subramanian, P. N.
2017-08-01
We write down a Robin boundary term for general relativity. The construction relies on the Neumann result of arXiv:1605.01603 in an essential way. This is unlike in mechanics and (polynomial) field theory, where two formulations of the Robin problem exist: one with Dirichlet as the natural limiting case, and another with Neumann.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manjunath, Naren; Samajdar, Rhine; Jain, Sudhir R., E-mail: srjain@barc.gov.in
Recently, the nodal domain counts of planar, integrable billiards with Dirichlet boundary conditions were shown to satisfy certain difference equations in Samajdar and Jain (2014). The exact solutions of these equations give the number of domains explicitly. For complete generality, we demonstrate this novel formulation for three additional separable systems and thus extend the statement to all integrable billiards.
A weighted anisotropic variant of the Caffarelli-Kohn-Nirenberg inequality and applications
NASA Astrophysics Data System (ADS)
Bahrouni, Anouar; Rădulescu, Vicenţiu D.; Repovš, Dušan D.
2018-04-01
We present a weighted version of the Caffarelli-Kohn-Nirenberg inequality in the framework of variable exponents. The combination of this inequality with a variant of the fountain theorem yields the existence of infinitely many solutions for a class of non-homogeneous problems with Dirichlet boundary condition.
The use of MACSYMA for solving elliptic boundary value problems
NASA Technical Reports Server (NTRS)
Thejll, Peter; Gilbert, Robert P.
1990-01-01
A boundary method is presented for the solution of elliptic boundary value problems. An approach based on the use of complete systems of solutions is emphasized. The discussion is limited to the Dirichlet problem, even though the present method can possibly be adapted to treat other boundary value problems.
Test Design Project: Studies in Test Adequacy. Annual Report.
ERIC Educational Resources Information Center
Wilcox, Rand R.
These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…
NASA Astrophysics Data System (ADS)
Chernyshov, A. D.
2018-05-01
The analytical solution of the nonlinear heat conduction problem for a curvilinear region is obtained with the use of the fast-expansion method together with the method of extension of boundaries and pointwise technique of computing Fourier coefficients.
Pig Data and Bayesian Inference on Multinomial Probabilities
ERIC Educational Resources Information Center
Kern, John C.
2006-01-01
Bayesian inference on multinomial probabilities is conducted based on data collected from the game Pass the Pigs[R]. Prior information on these probabilities is readily available from the instruction manual, and is easily incorporated in a Dirichlet prior. Posterior analysis of the scoring probabilities quantifies the discrepancy between empirical…
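The conjugacy used in such an analysis is simple enough to sketch: a Dirichlet prior plus multinomial counts gives a Dirichlet posterior by adding counts to the prior weights. The outcome labels, prior weights and counts below are hypothetical, not taken from the game manual.

```python
# Conjugate Dirichlet-multinomial update, sketched for pig-roll outcomes.
# Labels and numbers are illustrative only.
prior = {"side": 7.0, "razorback": 2.0, "trotter": 1.0}   # Dirichlet prior weights
observed = {"side": 60, "razorback": 25, "trotter": 15}   # hypothetical roll counts

# Posterior is Dirichlet with weights prior + counts; its mean is the
# normalized weight vector.
posterior = {k: prior[k] + observed[k] for k in prior}
total = sum(posterior.values())
post_mean = {k: v / total for k, v in posterior.items()}
print(round(post_mean["side"], 4))  # (7 + 60) / 110 = 0.6091
```

The posterior mean interpolates between the manual's prior rates and the empirical frequencies, with the data dominating as the number of rolls grows.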
Comment Data Mining to Estimate Student Performance Considering Consecutive Lessons
ERIC Educational Resources Information Center
Sorour, Shaymaa E.; Goda, Kazumasa; Mine, Tsunenori
2017-01-01
The purpose of this study is to examine different formats of comment data to predict student performance. Having students write comment data after every lesson can reflect students' learning attitudes, tendencies and learning activities involved with the lesson. In this research, Latent Dirichlet Allocation (LDA) and Probabilistic Latent Semantic…
NASA Astrophysics Data System (ADS)
Its, Alexander; Its, Elizabeth
2018-04-01
We revisit the Helmholtz equation in a quarter-plane in the framework of the Riemann-Hilbert approach to linear boundary value problems suggested in late 1990s by A. Fokas. We show the role of the Sommerfeld radiation condition in Fokas' scheme.
A Bayesian Semiparametric Item Response Model with Dirichlet Process Priors
ERIC Educational Resources Information Center
Miyazaki, Kei; Hoshino, Takahiro
2009-01-01
In Item Response Theory (IRT), item characteristic curves (ICCs) are illustrated through logistic models or normal ogive models, and the probability that examinees give the correct answer is usually a monotonically increasing function of their ability parameters. However, since only limited patterns of shapes can be obtained from logistic models…
Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers
ERIC Educational Resources Information Center
Anaya, Leticia H.
2011-01-01
In the Information Age, a proliferation of unstructured text electronic documents exists. Processing these documents by humans is a daunting task as humans have limited cognitive abilities for processing large volumes of documents that can often be extremely lengthy. To address this problem, text data computer algorithms are being developed.…
Vectorized multigrid Poisson solver for the CDC CYBER 205
NASA Technical Reports Server (NTRS)
Barkai, D.; Brandt, M. A.
1984-01-01
The full multigrid (FMG) method is applied to the two-dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
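The core multigrid idea (smooth, restrict the residual, correct from a coarser grid, smooth again) can be sketched in 1D with Dirichlet boundary conditions. This is an illustrative two-grid cycle, far simpler than the paper's vectorized 2D FMG solver.

```python
import math

def jacobi(u, f, h, sweeps, w=2.0/3.0):
    # Weighted-Jacobi smoothing for -u'' = f (3-point stencil, Dirichlet BCs).
    n = len(u) - 1
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
        u = new
    return u

def residual(u, f, h):
    n = len(u) - 1
    r = [0.0] * (n + 1)
    for i in range(1, n):
        r[i] = f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h*h)
    return r

def two_grid(u, f, h):
    # One cycle: pre-smooth, restrict the residual (full weighting),
    # approximately solve the coarse problem, interpolate the correction
    # back to the fine grid, post-smooth.
    u = jacobi(u, f, h, 3)
    r = residual(u, f, h)
    nc = (len(u) - 1) // 2
    rc = [0.0] * (nc + 1)
    for i in range(1, nc):
        rc[i] = 0.25*r[2*i-1] + 0.5*r[2*i] + 0.25*r[2*i+1]
    ec = jacobi([0.0] * (nc + 1), rc, 2*h, 200)   # "solve" coarse problem
    for i in range(1, nc + 1):                    # linear interpolation back
        u[2*i] += ec[i]
        u[2*i - 1] += 0.5 * (ec[i-1] + ec[i])
    return jacobi(u, f, h, 3)

# Model problem -u'' = pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0.
n, h = 32, 1.0 / 32
f = [math.pi**2 * math.sin(math.pi * i * h) for i in range(n + 1)]
u = [0.0] * (n + 1)
r0 = max(abs(v) for v in residual(u, f, h))
for _ in range(10):
    u = two_grid(u, f, h)
r1 = max(abs(v) for v in residual(u, f, h))
print(r1 < r0 / 1000)  # the cycle damps the residual by orders of magnitude
```

The smoother kills oscillatory error cheaply while the coarse-grid correction handles the smooth error the smoother barely touches; that division of labor is what makes multigrid convergence essentially independent of grid size.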
Using Dirichlet Processes for Modeling Heterogeneous Treatment Effects across Sites
ERIC Educational Resources Information Center
Miratrix, Luke; Feller, Avi; Pillai, Natesh; Pati, Debdeep
2016-01-01
Modeling the distribution of site level effects is an important problem, but it is also an incredibly difficult one. Current methods rely on distributional assumptions in multilevel models for estimation. There it is hoped that the partial pooling of site level estimates with overall estimates, designed to take into account individual variation as…
Nonparametric Bayesian predictive distributions for future order statistics
Richard A. Johnson; James W. Evans; David W. Green
1999-01-01
We derive the predictive distribution for a specified order statistic, determined from a future random sample, under a Dirichlet process prior. Two variants of the approach are treated and some limiting cases studied. A practical application to monitoring the strength of lumber is discussed including choices of prior expectation and comparisons made to a Bayesian...
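One ingredient of such predictive calculations admits a compact sketch: the Pólya-urn form of the DP posterior predictive for a single future observation, which mixes the prior base measure with the empirical distribution. The data values, base-measure guess and concentration parameter below are hypothetical, not the lumber data of the paper.

```python
# DP posterior predictive CDF (Polya urn): a new observation comes from the
# base measure G0 with probability alpha/(alpha + n) and from the empirical
# distribution of the n observed values otherwise.
def dp_predictive_cdf(t, data, alpha, g0_cdf):
    n = len(data)
    empirical = sum(1 for x in data if x <= t) / n
    w = alpha / (alpha + n)
    return w * g0_cdf(t) + (1 - w) * empirical

data = [4.0, 5.5, 6.0, 7.2]                      # hypothetical measurements
g0 = lambda t: min(max(t / 10.0, 0.0), 1.0)      # Uniform(0, 10) prior guess
pred = dp_predictive_cdf(6.0, data, alpha=2.0, g0_cdf=g0)
print(pred)  # (1/3)*0.6 + (2/3)*0.75 = 0.7
```

The paper's object of interest, the distribution of a future order statistic, layers the combinatorics of order statistics on top of this predictive mechanism.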
Boundary conditions and formation of pure spin currents in magnetic field
NASA Astrophysics Data System (ADS)
Eliashvili, Merab; Tsitsishvili, George
2017-09-01
The Schrödinger equation for an electron confined to a two-dimensional strip is considered in the presence of a homogeneous orthogonal magnetic field. Since the system has edges, the eigenvalue problem is supplemented by boundary conditions (BC) aimed at preventing the leakage of matter across the edges. In the case of spinless electrons the Dirichlet and Neumann BC are considered. The Dirichlet BC result in the existence of charge-carrying edge states. For the Neumann BC each separate edge comprises two counterflow sub-currents which precisely cancel each other out provided the system is populated by electrons up to a certain Fermi level. Cancellation of the electric current is a good starting point for developing spin effects. In this scope we reconsider the problem for a spinning electron with Rashba coupling. The Neumann BC are replaced by Robin BC. Again, the two counterflow electric sub-currents cancel each other out for a separate edge, while the spin current survives, thus modeling what is known as a pure spin current: spin flow without charge flow.
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples of moderate dimension, in which we compare our stochastic search with a Markov chain Monte Carlo algorithm. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
Chen, Yun; Yang, Hui
2016-01-01
In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require assumptions about the data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
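The mutual-information measure that drives the clustering can be sketched for discrete variables. This minimal version works from an explicit joint distribution; on real data one would use an estimator rather than a known joint.

```python
import math

def mutual_information(joint):
    # Mutual information (in nats) from a discrete joint distribution,
    # the pairwise measure of (possibly nonlinear) interdependence:
    # I(X;Y) = sum_xy p(x,y) log[ p(x,y) / (p(x) p(y)) ].
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pxy in enumerate(row):
            if pxy > 0:
                mi += pxy * math.log(pxy / (px[i] * py[j]))
    return mi

# Perfectly dependent binary variables share log(2) nats of information.
mi = mutual_information([[0.5, 0.0], [0.0, 0.5]])
print(round(mi, 6))  # 0.693147
```

Unlike correlation, this quantity is zero only under full independence, which is why it can pick up the nonlinear structure the abstract emphasizes.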
Inverse scattering for an exterior Dirichlet problem
NASA Technical Reports Server (NTRS)
Hariharan, S. I.
1981-01-01
Scattering due to a metallic cylinder which is in the field of a wire carrying a periodic current is considered. The location and shape of the cylinder are obtained from a far field measurement between the wire and the cylinder. The same analysis is applicable in acoustics, in the situation where the cylinder is a soft-wall body and the wire is a line source. The associated direct problem in this situation is an exterior Dirichlet problem for the Helmholtz equation in two dimensions. An improved low frequency estimate for the solution of this problem using integral equation methods is presented. The far field measurements are related to the solutions of boundary integral equations in the low frequency situation. These solutions are expressed in terms of a mapping function which maps the exterior of the unknown curve onto the exterior of a unit disk. The coefficients of the Laurent expansion of the conformal transformation are related to the far field coefficients. The first far field coefficient leads to the calculation of the distance between the source and the cylinder.
Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation.
Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu
2017-07-01
In the field of pathology, whole slide image (WSI) has become the major carrier of visual and diagnostic information. Content-based image retrieval among WSIs can aid the diagnosis of an unknown pathological image by finding its similar regions in WSIs with diagnostic information. However, the huge size and complex content of WSI pose several challenges for retrieval. In this paper, we propose an unsupervised, accurate, and fast retrieval method for a breast histopathological image. Specifically, the method presents a local statistical feature of nuclei for morphology and distribution of nuclei, and employs the Gabor feature to describe the texture information. The latent Dirichlet allocation model is utilized for high-level semantic mining. Locality-sensitive hashing is used to speed up the search. Experiments on a WSI database with more than 8000 images from 15 types of breast histopathology demonstrate that our method achieves about 0.9 retrieval precision as well as promising efficiency. Based on the proposed framework, we are developing a search engine for an online digital slide browsing and retrieval platform, which can be applied in computer-aided diagnosis, pathology education, and WSI archiving and management.
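The locality-sensitive hashing step used to speed up the search can be sketched with random hyperplanes for cosine similarity; the dimensions, bit count and vectors below are illustrative, not the paper's configuration.

```python
import random

def lsh_signature(vec, hyperplanes):
    # Random-hyperplane LSH: one bit per hyperplane, recording which side
    # of the hyperplane the vector falls on. Vectors at a small angle
    # agree on most bits, so signatures can bucket similar items quickly.
    return tuple(1 if sum(h_i * v_i for h_i, v_i in zip(h, vec)) >= 0 else 0
                 for h in hyperplanes)

rng = random.Random(42)
dim, n_bits = 8, 16
planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

v = [rng.random() for _ in range(dim)]
w = [x + 0.01 * rng.gauss(0, 1) for x in v]   # near-duplicate of v
sig_v = lsh_signature(v, planes)
sig_w = lsh_signature(w, planes)
matches = sum(a == b for a, b in zip(sig_v, sig_w))
print(matches, "of", n_bits, "bits agree")
```

In a retrieval pipeline the signature (or prefixes of it) indexes a hash table, so candidate neighbors are found without scanning the whole feature database.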
DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.
Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei
2018-01-01
Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifier (UMI). Despite the technology advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. Particularly, model-based approaches for clustering large-scale single cell transcriptomic data are still under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available on www.pitt.edu/∼wec47/singlecell.html. wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online.
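The Dirichlet-multinomial likelihood at the heart of such a mixture model is easy to sketch; this standalone function is illustrative and not the DIMM-SC implementation.

```python
from math import lgamma, exp

def dirichlet_multinomial_loglik(counts, alpha):
    # Log-likelihood of a count vector under a Dirichlet-multinomial:
    # log [ N! Gamma(A) / Gamma(N + A) * prod_k Gamma(n_k + a_k) / (n_k! Gamma(a_k)) ],
    # with N = sum(counts) and A = sum(alpha). In a mixture model, each
    # cluster carries its own alpha and cells are assigned by this score.
    n = sum(counts)
    a = sum(alpha)
    ll = lgamma(a) - lgamma(n + a) + lgamma(n + 1)
    for c, al in zip(counts, alpha):
        ll += lgamma(c + al) - lgamma(al) - lgamma(c + 1)
    return ll

# Sanity check: two categories, one trial, flat prior -> probability 1/2.
ll = dirichlet_multinomial_loglik([1, 0], [1.0, 1.0])
p = exp(ll)
print(round(p, 6))  # 0.5
```

Working in log space with `lgamma` keeps the computation stable for the large UMI totals typical of droplet data.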
NASA Astrophysics Data System (ADS)
Hill, Peter; Shanahan, Brendan; Dudson, Ben
2017-04-01
We present a technique for handling Dirichlet boundary conditions with the Flux Coordinate Independent (FCI) parallel derivative operator with arbitrary-shaped material geometry in general 3D magnetic fields. The FCI method constructs a finite difference scheme for ∇∥ by following field lines between poloidal planes and interpolating within planes. Doing so removes the need for field-aligned coordinate systems that suffer from singularities in the metric tensor at null points in the magnetic field (or equivalently, when q → ∞). One cost of this method is that as the field lines are not on the mesh, they may leave the domain at any point between neighbouring planes, complicating the application of boundary conditions. The Leg Value Fill (LVF) boundary condition scheme presented here involves an extrapolation/interpolation of the boundary value onto the field line end point. The usual finite difference scheme can then be used unmodified. We implement the LVF scheme in BOUT++ and use the Method of Manufactured Solutions to verify the implementation in a rectangular domain, and show that it does not modify the error scaling of the finite difference scheme. The use of LVF for arbitrary wall geometry is outlined. We also demonstrate the feasibility of using the FCI approach in non-axisymmetric configurations for a simple diffusion model in a "straight stellarator" magnetic field. A Gaussian blob diffuses along the field lines, tracing out flux surfaces. Dirichlet boundary conditions impose a last closed flux surface (LCFS) that confines the density. Including a poloidal limiter moves the LCFS to a smaller radius. The expected scaling of the numerical perpendicular diffusion, which is a consequence of the FCI method, in stellarator-like geometry is recovered. A novel technique for increasing the parallel resolution during post-processing, in order to reduce artefacts in visualisations, is described.
Atmospheric effect in three-space scenario for the Stokes-Helmert method of geoid determination
NASA Astrophysics Data System (ADS)
Yang, H.; Tenzer, R.; Vanicek, P.; Santos, M.
2004-05-01
According to the Stokes-Helmert method for the geoid determination by Vanicek and Martinec (1994) and Vanicek et al. (1999), the Helmert gravity anomalies are computed at the earth surface. To formulate the fundamental formula of physical geodesy, Helmert's gravity anomalies are then downward continued from the earth surface onto the geoid. This procedure, i.e., the inverse Dirichlet's boundary value problem, is realized by solving the Poisson integral equation. The above mentioned "classical" approach can be modified so that the inverse Dirichlet's boundary value problem is solved in the No Topography (NT) space (Vanicek et al., 2004) instead of in the Helmert (H) space. This technique has been introduced by Vanicek et al. (2003) and was used by Tenzer and Vanicek (2003) for the determination of the geoid in the region of the Canadian Rocky Mountains. According to this new approach, the gravity anomalies referred to the earth surface are first transformed into the NT-space. This transformation is realized by subtracting the gravitational attraction of topographical and atmospheric masses from the gravity anomalies at the earth surface. Since the NT-anomalies are harmonic above the geoid, the Dirichlet boundary value problem is solved in the NT-space instead of the Helmert space according to the standard formulation. After being obtained on the geoid, the NT-anomalies are transformed into the H-space to minimize the indirect effect on the geoidal heights. This step, i.e., transformation from NT-space to H-space, is realized by adding the gravitational attraction of condensed topographical and condensed atmospheric masses to the NT-anomalies at the geoid. The effects of atmosphere in the standard Stokes-Helmert method were intensively investigated by Sjöberg (1998 and 1999), and Novák (2000). In this presentation, the effect of the atmosphere in the three-space scenario for the Stokes-Helmert method is discussed and the numerical results over Canada are shown.
Key words: Atmosphere - Geoid - Gravity
Khan, Farman U; Qamar, Shamsul
2017-05-01
A set of analytical solutions are presented for a model describing the transport of a solute in a fixed-bed reactor of cylindrical geometry subjected to the first (Dirichlet) and third (Danckwerts) type inlet boundary conditions. Linear sorption kinetic process and first-order decay are considered. Cylindrical geometry allows the use of large columns to investigate dispersion, adsorption/desorption and reaction kinetic mechanisms. The finite Hankel and Laplace transform techniques are adopted to solve the model equations. For further analysis, statistical temporal moments are derived from the Laplace-transformed solutions. The developed analytical solutions are compared with the numerical solutions of high-resolution finite volume scheme. Different case studies are presented and discussed for a series of numerical values corresponding to a wide range of mass transfer and reaction kinetics. A good agreement was observed in the analytical and numerical concentration profiles and moments. The developed solutions are efficient tools for analyzing numerical algorithms, sensitivity analysis and simultaneous determination of the longitudinal and transverse dispersion coefficients from a laboratory-scale radial column experiment.
Active electromagnetic invisibility cloaking and radiation force cancellation
NASA Astrophysics Data System (ADS)
Mitri, F. G.
2018-03-01
This investigation shows that an active emitting electromagnetic (EM) Dirichlet source (i.e., with axial polarization of the electric field) in a homogeneous non-dissipative/non-absorptive medium placed near a perfectly conducting boundary can render total invisibility (i.e. zero extinction cross-section or efficiency) in addition to a radiation force cancellation on its surface. Based upon the Poynting theorem, the mathematical expression for the extinction, radiation and amplification cross-sections (or efficiencies) are derived using the partial-wave series expansion method in cylindrical coordinates. Moreover, the analysis is extended to compute the self-induced EM radiation force on the active source, resulting from the waves reflected by the boundary. The numerical results predict the generation of a zero extinction efficiency, achieving total invisibility, in addition to a radiation force cancellation which depend on the source size, the distance from the boundary and the associated EM mode order of the active source. Furthermore, an attractive EM pushing force on the active source directed toward the boundary or a repulsive pulling one pointing away from it can arise accordingly. The numerical predictions and computational results find potential applications in the design and development of EM cloaking devices, invisibility and stealth technologies.
The Value of Interrupted Time-Series Experiments for Community Intervention Research
Biglan, Anthony; Ary, Dennis; Wagenaar, Alexander C.
2015-01-01
Greater use of interrupted time-series experiments is advocated for community intervention research. Time-series designs enable the development of knowledge about the effects of community interventions and policies in circumstances in which randomized controlled trials are too expensive, premature, or simply impractical. The multiple baseline time-series design typically involves two or more communities that are repeatedly assessed, with the intervention introduced into one community at a time. It is particularly well suited to initial evaluations of community interventions and the refinement of those interventions. This paper describes the main features of multiple baseline designs and related repeated-measures time-series experiments, discusses the threats to internal validity in multiple baseline designs, and outlines techniques for statistical analyses of time-series data. Examples are given of the use of multiple baseline designs in evaluating community interventions and policy changes. PMID:11507793
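The multiple-baseline logic above rests on ordinary interrupted time-series regression. A minimal single-series sketch (hypothetical data and effect sizes, not from the paper) models the intervention as a level change plus a slope change:

```python
import numpy as np

# Hypothetical illustration: segmented regression for one interrupted
# time series. The intervention at time t0 enters the design matrix as
# a level-change indicator and a post-intervention slope term.
rng = np.random.default_rng(0)
n, t0 = 48, 24
t = np.arange(n)
post = (t >= t0).astype(float)
# Simulated monthly rates: baseline trend, a drop of 5 units at the
# intervention, and unit-variance noise.
y = 30.0 + 0.2 * t - 5.0 * post + rng.normal(0, 1.0, n)

# Design matrix: intercept, pre-existing trend, level change, slope change.
X = np.column_stack([np.ones(n), t, post, post * (t - t0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change = beta[2]
print(f"estimated level change: {level_change:.2f}")  # near the true -5
```

With several communities, the same model is fit per series with staggered intervention times t0, which is the essence of the multiple baseline design.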
Local recovery of the compressional and shear speeds from the hyperbolic DN map
NASA Astrophysics Data System (ADS)
Stefanov, Plamen; Uhlmann, Gunther; Vasy, Andras
2018-01-01
We study the isotropic elastic wave equation in a bounded domain with boundary. We show that local knowledge of the Dirichlet-to-Neumann map determines uniquely the speed of the p-wave locally if there is a strictly convex foliation with respect to it, and similarly for the s-wave speed.
The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples
ERIC Educational Resources Information Center
Avetisyan, Marianna; Fox, Jean-Paul
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…
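The idea behind randomized response can be illustrated with Warner's classic design (a simpler setting than the beta-binomial model the abstract generalizes; all numbers here are made up):

```python
import numpy as np

# Illustrative sketch of Warner's randomized response design. With
# probability p a respondent answers the sensitive question truthfully,
# otherwise answers its negation, so the observed "yes" probability is
# lam = p*pi + (1-p)*(1-pi), which can be inverted for pi.
rng = np.random.default_rng(1)
p, pi_true, n = 0.7, 0.2, 200_000

truthful = rng.random(n) < p           # which question each respondent got
sensitive = rng.random(n) < pi_true    # true (hidden) sensitive status
answers = np.where(truthful, sensitive, ~sensitive)

lam_hat = answers.mean()
pi_hat = (lam_hat - (1 - p)) / (2 * p - 1)
print(f"estimated sensitive rate: {pi_hat:.3f}")  # close to 0.2
```

The individual answers are masked, yet the population rate is recoverable, which is the property the Dirichlet-multinomial model extends to multivariate data and small samples.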
Existence and uniqueness of steady state solutions of a nonlocal diffusive logistic equation
NASA Astrophysics Data System (ADS)
Sun, Linan; Shi, Junping; Wang, Yuwen
2013-08-01
In this paper, we consider a dynamical model of population biology which is of the classical Fisher type, but the competition interaction between individuals is nonlocal. The existence, uniqueness, and stability of the steady state solution of the nonlocal problem on a bounded interval with homogeneous Dirichlet boundary conditions are studied.
Using Dirichlet Priors to Improve Model Parameter Plausibility
ERIC Educational Resources Information Center
Rai, Dovan; Gong, Yue; Beck, Joseph E.
2009-01-01
Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…
ERIC Educational Resources Information Center
Li, Dingcheng
2011-01-01
Coreference resolution (CR) and entity relation detection (ERD) aim at finding predefined relations between pairs of entities in text. CR focuses on resolving identity relations while ERD focuses on detecting non-identity relations. Both CR and ERD are important as they can potentially improve other natural language processing (NLP) related tasks…
Quantum field between moving mirrors: A three dimensional example
NASA Technical Reports Server (NTRS)
Hacyan, S.; Jauregui, Roco; Villarreal, Carlos
1995-01-01
The scalar quantum field between uniformly moving plates in three dimensional space is studied. Field equations for Dirichlet boundary conditions are solved exactly. Comparison of the resulting wavefunctions with their instantaneous static counterparts is performed via Bogolubov coefficients. Unlike the one dimensional problem, 'particle' creation as well as squeezing may occur. The time dependent Casimir energy is also evaluated.
Micklash. II, Kenneth James; Dutton, Justin James; Kaye, Steven
2014-06-03
An apparatus for testing of multiple material samples includes a gas delivery control system operatively connectable to the multiple material samples and configured to provide gas to the multiple material samples. Both a gas composition measurement device and pressure measurement devices are included in the apparatus. The apparatus includes multiple selectively openable and closable valves and a series of conduits configured to selectively connect the multiple material samples individually to the gas composition device and the pressure measurement devices by operation of the valves. A mixing system is selectively connectable to the series of conduits and is operable to cause forced mixing of the gas within the series of conduits to achieve a predetermined uniformity of gas composition within the series of conduits and passages.
UTOPIAN: user-driven topic modeling based on interactive nonnegative matrix factorization.
Choo, Jaegul; Lee, Changhyun; Reddy, Chandan K; Park, Haesun
2013-12-01
Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complexity of the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpora such as the InfoVis/VAST paper data set and product review data sets.
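A toy sketch of the NMF decomposition underlying this line of work (plain Lee-Seung multiplicative updates only; UTOPIAN's semi-supervised, interactive formulation is not reproduced here):

```python
import numpy as np

# Minimal NMF with Lee-Seung multiplicative updates on a toy
# document-term matrix with two block "topics". All data are synthetic.
rng = np.random.default_rng(0)
A = np.zeros((6, 8))                       # 6 docs x 8 terms
A[:3, :4] = np.outer([1.0, 2.0, 1.5], [1.0, 0.5, 2.0, 1.0])  # topic A docs
A[3:, 4:] = np.outer([2.0, 1.0, 1.0], [1.0, 2.0, 0.5, 1.0])  # topic B docs

k = 2
W = rng.random((6, k)) + 0.1               # document-topic weights
H = rng.random((k, 8)) + 0.1               # topic-term weights
for _ in range(500):
    H *= (W.T @ A) / (W.T @ W @ H + 1e-9)  # multiplicative updates keep
    W *= (A @ H.T) / (W @ H @ H.T + 1e-9)  # factors nonnegative

recon_err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
print(f"relative reconstruction error: {recon_err:.4f}")
```

Each row of H is an interpretable, nonnegative "topic"; user steering in UTOPIAN amounts to adding supervision terms to this factorization.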
Leveraging constraints and biotelemetry data to pinpoint repetitively used spatial features
Brost, Brian M.; Hooten, Mevin B.; Small, Robert J.
2016-01-01
Satellite telemetry devices collect valuable information concerning the sites visited by animals, including the location of central places like dens, nests, rookeries, or haul‐outs. Existing methods for estimating the location of central places from telemetry data require user‐specified thresholds and ignore common nuances like measurement error. We present a fully model‐based approach for locating central places from telemetry data that accounts for multiple sources of uncertainty and uses all of the available locational data. Our general framework consists of an observation model to account for large telemetry measurement error and animal movement, and a highly flexible mixture model specified using a Dirichlet process to identify the location of central places. We also quantify temporal patterns in central place use by incorporating ancillary behavioral data into the model; however, our framework is also suitable when no such behavioral data exist. We apply the model to a simulated data set as proof of concept. We then illustrate our framework by analyzing an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that exhibits fidelity to terrestrial haul‐out sites.
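The clustering layer of such a model can be approximated with an off-the-shelf truncated variational Dirichlet-process mixture; the sketch below (simulated coordinates, and no observation/movement layer, unlike the paper's full model) recovers two assumed haul-out sites:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical sketch: a truncated variational Dirichlet-process mixture
# used to locate "central places" from noisy 2-D telemetry fixes.
rng = np.random.default_rng(2)
haulouts = np.array([[0.0, 0.0], [10.0, 10.0]])   # assumed true sites
fixes = np.vstack([h + rng.normal(0, 0.5, size=(100, 2)) for h in haulouts])

dpmm = BayesianGaussianMixture(
    n_components=10,                               # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(fixes)

# Components retaining non-negligible weight approximate the sites;
# the DP prior lets the number of occupied components be data-driven.
active = dpmm.weights_ > 0.05
print("estimated site centers:\n", dpmm.means_[active])
```

The appeal over threshold-based methods mentioned in the abstract is that the number of central places need not be fixed in advance.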
A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*
Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.
2013-01-01
This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186
Rotationally symmetric viscous gas flows
NASA Astrophysics Data System (ADS)
Weigant, W.; Plotnikov, P. I.
2017-03-01
The Dirichlet boundary value problem for the Navier-Stokes equations of a barotropic viscous compressible fluid is considered. The flow region and the data of the problem are assumed to be invariant under rotations about a fixed axis. The existence of rotationally symmetric weak solutions for all adiabatic exponents from the interval (γ*,∞) with a critical exponent γ* < 4/3 is proved.
Thermoelectric DC conductivities in hyperscaling violating Lifshitz theories
NASA Astrophysics Data System (ADS)
Cremonini, Sera; Cvetič, Mirjam; Papadimitriou, Ioannis
2018-04-01
We analytically compute the thermoelectric conductivities at zero frequency (DC) in the holographic dual of a four dimensional Einstein-Maxwell-Axion-Dilaton theory that admits a class of asymptotically hyperscaling violating Lifshitz backgrounds with a dynamical exponent z and hyperscaling violating parameter θ. We show that the heat current in the dual Lifshitz theory involves the energy flux, which is an irrelevant operator for z > 1. The linearized fluctuations relevant for computing the thermoelectric conductivities turn on a source for this irrelevant operator, leading to several novel and non-trivial aspects in the holographic renormalization procedure and the identification of the physical observables in the dual theory. Moreover, imposing Dirichlet or Neumann boundary conditions on the spatial components of one of the two Maxwell fields present leads to different thermoelectric conductivities. Dirichlet boundary conditions reproduce the thermoelectric DC conductivities obtained from the near horizon analysis of Donos and Gauntlett, while Neumann boundary conditions result in a new set of DC conductivities. We make preliminary analytical estimates for the temperature behavior of the thermoelectric matrix in appropriate regions of parameter space. In particular, at large temperatures we find that the only case which could lead to a linear resistivity ρ ˜ T corresponds to z = 4 /3.
Repulsive Casimir effect from extra dimensions and Robin boundary conditions: From branes to pistons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elizalde, E.; Odintsov, S. D.; Institucio Catalana de Recerca i Estudis Avanccats
2009-03-15
We evaluate the Casimir energy and force for a massive scalar field with general curvature coupling parameter, subject to Robin boundary conditions on two codimension-one parallel plates, located on a (D+1)-dimensional background spacetime with an arbitrary internal space. The most general case of different Robin coefficients on the two separate plates is considered. Independently of the geometry of the internal space, the Casimir forces are seen to be attractive for the special cases of Dirichlet or Neumann boundary conditions on both plates and repulsive for Dirichlet boundary conditions on one plate and Neumann boundary conditions on the other. For Robin boundary conditions, the Casimir forces can be either attractive or repulsive, depending on the Robin coefficients and the separation between the plates, which is remarkable and useful. Indeed, we demonstrate the existence of an equilibrium point for the interplate distance, which is stabilized due to the Casimir force, and show that stability is enhanced by the presence of the extra dimensions. Applications of these properties in braneworld models are discussed. Finally, the corresponding results are generalized to the geometry of a piston of arbitrary cross section.
Latent Dirichlet Allocation (LDA) for Sentiment Analysis Toward Tourism Review in Indonesia
NASA Astrophysics Data System (ADS)
Putri, IR; Kusumaningrum, R.
2017-01-01
The tourism industry is a foreign exchange sector with considerable development potential in Indonesia. Compared to other Southeast Asian countries such as Malaysia, with 18 million tourists, and Singapore, with 20 million, Indonesia, the largest country in Southeast Asia, has failed to attract comparable tourist numbers: it attracted only 8.8 million foreign tourists in 2013, and the figure tends to decrease each year. Apart from infrastructure problems, marketing and management are also obstacles to tourism growth. Stakeholders should carry out evaluation and self-analysis to respond to this problem and capture opportunities related to tourism satisfaction in tourist reviews. Existing technology for this relies on subjective statistical data collected by random user voting or grading, so the results are not accountable. Thus, we propose sentiment analysis with a probabilistic topic model using the Latent Dirichlet Allocation (LDA) method, applied to extract the general tendency of tourist reviews as topics that can be classified into positive and negative sentiment.
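A minimal LDA sketch in this spirit (toy review snippets; the sentiment-classification step described above is not reproduced):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy illustration of LDA on short review-like snippets. LDA infers,
# for each document, a distribution over latent topics; downstream,
# topics can be mapped to positive/negative sentiment.
reviews = [
    "beautiful beach clean sand friendly staff",
    "beach sunset beautiful view relaxing",
    "dirty room bad service expensive hotel",
    "hotel room bad smell poor service",
]
counts = CountVectorizer().fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(counts)   # each row is a topic distribution

print(np.round(doc_topic, 2))
```

Each row of `doc_topic` sums to one, reflecting LDA's Dirichlet-distributed per-document topic proportions.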
Application of the perfectly matched layer in 2.5D marine controlled-source electromagnetic modeling
NASA Astrophysics Data System (ADS)
Li, Gang; Han, Bo
2017-09-01
For the traditional framework of EM modeling algorithms, the Dirichlet boundary condition is usually used, which assumes the field values are zero at the boundaries. This crude condition requires that the boundaries be sufficiently far away from the area of interest. Although cell sizes can become larger toward the boundaries, as the electromagnetic field propagates diffusively, a large modeling area may still be necessary to mitigate the boundary artifacts. In this paper, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 2.5D frequency-domain marine controlled-source electromagnetic (CSEM) field modeling. By using this PML boundary, one can restrict the modeling area of interest to the target region. Only a few absorbing layers surrounding the computational area can effectively suppress the artificial boundary effect without losing numerical accuracy. A 2.5D marine CSEM modeling scheme with the CFS-PML is developed by using a staggered finite-difference discretization. This modeling algorithm using the CFS-PML is highly accurate and offers advantages in computational time and memory over that using the Dirichlet boundary. For 3D problems, these savings in computation time and memory should be even more significant.
The spectra of rectangular lattices of quantum waveguides
NASA Astrophysics Data System (ADS)
Nazarov, S. A.
2017-02-01
We obtain asymptotic formulae for the spectral segments of a thin (h ≪ 1) rectangular lattice of quantum waveguides which is described by a Dirichlet problem for the Laplacian. We establish that the structure of the spectrum of the lattice is incorrectly described by the commonly accepted quantum graph model with the traditional Kirchhoff conditions at the vertices. It turns out that the lengths of the spectral segments are infinitesimals of order O(e^{-δ/h}), δ > 0, and O(h) as h → +0, and gaps of width O(h^{-2}) and O(1) arise between them in the low-frequency and middle-frequency spectral ranges respectively. The first spectral segment is generated by the (unique) eigenvalue in the discrete spectrum of an infinite cross-shaped waveguide Θ. The absence of bounded solutions of the problem in Θ at the threshold frequency means that the correct model of the lattice is a graph with Dirichlet conditions at the vertices which splits into two infinite subsets of identical edges (intervals). By using perturbations of finitely many joints, we construct any given number of discrete spectrum points of the lattice below the essential spectrum as well as inside the gaps.
Extending information retrieval methods to personalized genomic-based studies of disease.
Ye, Shuyun; Dawson, John A; Kendziorski, Christina
2014-01-01
Genomic-based studies of disease now involve diverse types of data collected on large groups of patients. A major challenge facing statistical scientists is how best to combine the data, extract important features, and comprehensively characterize the ways in which they affect an individual's disease course and likelihood of response to treatment. We have developed a survival-supervised latent Dirichlet allocation (survLDA) modeling framework to address these challenges. Latent Dirichlet allocation (LDA) models have proven extremely effective at identifying themes common across large collections of text, but applications to genomics have been limited. Our framework extends LDA to the genome by considering each patient as a "document" with "text" detailing his/her clinical events and genomic state. We then further extend the framework to allow for supervision by a time-to-event response. The model enables the efficient identification of collections of clinical and genomic features that co-occur within patient subgroups, and then characterizes each patient by those features. An application of survLDA to The Cancer Genome Atlas ovarian project identifies informative patient subgroups showing differential response to treatment, and validation in an independent cohort demonstrates the potential for patient-specific inference.
Diffusion Processes Satisfying a Conservation Law Constraint
Bakosi, J.; Ristorcelli, J. R.
2014-03-04
We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner's generalized Dirichlet processes.
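As a concrete instance of such constraints, a scalar "beta"-type diffusion has a diffusion coefficient that vanishes at the sample-space boundary {0, 1}, which keeps realizations inside [0, 1]. A hedged Euler-Maruyama sketch (parameter values are arbitrary; the clip is a numerical crutch for the finite time step, not part of the continuous model):

```python
import numpy as np

# Illustrative Euler-Maruyama simulation of the scalar diffusion
#   dx = (b/2)(S - x) dt + sqrt(k x (1 - x)) dW,
# whose diffusion term vanishes at x = 0 and x = 1 -- the kind of
# boundary-respecting, nonlinear diffusion the constraints require.
rng = np.random.default_rng(3)
b, S, k = 2.0, 0.3, 0.5
dt, nsteps, npaths = 1e-3, 4000, 2000

x = np.full(npaths, 0.5)
for _ in range(nsteps):
    drift = 0.5 * b * (S - x)
    diff = np.sqrt(np.clip(k * x * (1.0 - x), 0.0, None))
    x += drift * dt + diff * np.sqrt(dt) * rng.normal(size=npaths)
    x = np.clip(x, 0.0, 1.0)   # guard against discretization overshoot

print(f"sample mean: {x.mean():.3f} (drift target {S})")
```

The drift relaxes the ensemble mean toward S while the state-dependent noise leaves the unit interval invariant, mirroring the unit-sum construction in the multivariate case.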
A Multilevel Multiset Time-Series Model for Describing Complex Developmental Processes
Ma, Xin; Shen, Jianping
2017-01-01
The authors sought to develop an analytical platform where multiple sets of time series can be examined simultaneously. This multivariate platform capable of testing interaction effects among multiple sets of time series can be very useful in empirical research. The authors demonstrated that the multilevel framework can readily accommodate this analytical capacity. Given their intention to use the multilevel multiset time-series model to pursue complicated research purposes, their resulting model is relatively simple to specify, to run, and to interpret. These advantages make the adoption of their model relatively effortless as long as researchers have the basic knowledge and skills in working with multilevel growth modeling. With multiple potential extensions of their model, the establishment of this analytical platform for analysis of multiple sets of time series can inspire researchers to pursue far more advanced research designs to address complex developmental processes in reality. PMID:29881094
Time Series Model Identification by Estimating Information.
1982-11-01
A covariant multiple scattering series for elastic projectile-target scattering
NASA Technical Reports Server (NTRS)
Gross, Franz; Maung-Maung, Khin
1989-01-01
A covariant formulation of the multiple scattering series for the optical potential is presented. The case of a scalar nucleon interacting with a spin-zero, isospin-zero A-body target through meson exchange is considered. It is shown that a covariant equation for the projectile-target t-matrix can be obtained which sums the ladder and crossed ladder diagrams efficiently. From this equation, a multiple scattering series for the optical potential is derived, and it is shown that in the impulse approximation, the two-body t-matrix associated with the first-order optical potential is the one in which one particle is kept on mass-shell. The meaning of the various terms in the multiple scattering series is given. The construction of the first-order optical potential for elastic scattering calculations is described.
Products of multiple Fourier series with application to the multiblade transformation
NASA Technical Reports Server (NTRS)
Kunz, D. L.
1981-01-01
A relatively simple and systematic method for forming the products of multiple Fourier series using tensor-like operations is demonstrated. This symbolic multiplication can be performed for any arbitrary number of series, and the use of these operations to transform a set of linear differential equations with periodic coefficients from a rotating coordinate system to a nonrotating system is also demonstrated. It is shown that using Fourier operations to perform this transformation makes it easily understood, simple to apply, and generally applicable.
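The core fact exploited by such symbolic multiplication is that the product of two truncated complex Fourier series has coefficients given by the convolution of the two coefficient sequences; a small numerical check:

```python
import numpy as np

# Product of two truncated complex Fourier series: if f and g have
# coefficients a_k and b_k (k = -N..N), then f*g has coefficients
# c_m = sum_k a_k b_{m-k} (m = -2N..2N), i.e. the convolution a * b.
rng = np.random.default_rng(4)
N = 5
a = rng.normal(size=2 * N + 1)        # coefficients of f, index -N..N
b = rng.normal(size=2 * N + 1)        # coefficients of g, index -N..N
c = np.convolve(a, b)                 # coefficients of f*g, index -2N..2N

def ev(coef, t):
    """Evaluate a series sum_k coef_k e^{i k t} with symmetric index range."""
    n = (len(coef) - 1) // 2
    k = np.arange(-n, n + 1)
    return coef @ np.exp(1j * np.outer(k, t))

t = np.linspace(0, 2 * np.pi, 7, endpoint=False)
err = np.max(np.abs(ev(c, t) - ev(a, t) * ev(b, t)))
print(f"max mismatch: {err:.2e}")
```

The same coefficient-convolution rule, applied index-wise per variable, extends to multiple Fourier series, which is what the tensor-like bookkeeping organizes.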
Falcon: A Temporal Visual Analysis System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A.
2016-09-05
Flexible visual exploration of long, high-resolution time series from multiple sensor streams is a challenge in several domains. Falcon is a visual analytics approach that helps researchers acquire a deep understanding of patterns in log and imagery data. Falcon allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations with multiple levels of detail. These capabilities are applicable to the analysis of any quantitative time series.
Multiclass Data Segmentation using Diffuse Interface Methods on Graphs
2014-01-01
…[37] that performs interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph… continuous setting carry over to the discrete graph representation. For general data segmentation, Bresson et al. in [8] present rigorous convergence…
Sine-Gordon type field in spacetime of arbitrary dimension. II: Stochastic quantization
NASA Astrophysics Data System (ADS)
Kirillov, A. I.
1995-11-01
Using the theory of Dirichlet forms, we prove the existence of a distribution-valued diffusion process such that the Nelson measure of a field with a bounded interaction density is its invariant probability measure. A Langevin equation in mathematically correct form is formulated which is satisfied by the process. The drift term of the equation is interpreted as a renormalized Euclidean current operator.
NASA Technical Reports Server (NTRS)
Chiavassa, G.; Liandrat, J.
1996-01-01
We construct compactly supported wavelet bases satisfying homogeneous boundary conditions on the interval (0,1). The maximum features of multiresolution analysis on the line are retained, including polynomial approximation and tree algorithms. The case of H_0^1(0,1) is detailed, and numerical values, required for the implementation, are provided for the Neumann and Dirichlet boundary conditions.
ERIC Educational Resources Information Center
Kjeldsen, Tinne Hoff; Lützen, Jesper
2015-01-01
In this paper, we discuss the history of the concept of function and emphasize in particular how problems in physics have led to essential changes in its definition and application in mathematical practices. Euler defined a function as an analytic expression, whereas Dirichlet defined it as a variable that depends in an arbitrary manner on another…
The accurate solution of Poisson's equation by expansion in Chebyshev polynomials
NASA Technical Reports Server (NTRS)
Haidvogel, D. B.; Zang, T.
1979-01-01
A Chebyshev expansion technique is applied to Poisson's equation on a square with homogeneous Dirichlet boundary conditions. The spectral equations are solved in two ways - by alternating direction and by matrix diagonalization methods. Solutions are sought to both oscillatory and mildly singular problems. The accuracy and efficiency of the Chebyshev approach compare favorably with those of standard second- and fourth-order finite-difference methods.
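A one-dimensional analogue of this approach (a collocation, differentiation-matrix variant rather than the paper's expansion-and-diagonalization method) can be sketched in a few lines:

```python
import numpy as np

# Solve u'' = f on (-1, 1) with homogeneous Dirichlet conditions by
# Chebyshev collocation, using Trefethen's differentiation matrix.
def cheb(n):
    """Chebyshev differentiation matrix D and points x (n+1 nodes)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

n = 24
D, x = cheb(n)
D2 = D @ D
f = -np.pi**2 * np.sin(np.pi * x)   # exact solution is sin(pi x)

# Impose u(+-1) = 0 by restricting the system to interior nodes.
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(D2[1:-1, 1:-1], f[1:-1])

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max error: {err:.2e}")      # spectral (exponential) accuracy
```

For a smooth right-hand side the error decays exponentially in n, which is the accuracy advantage over second- and fourth-order finite differences noted in the abstract.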
Manifold Matching: Joint Optimization of Fidelity and Commensurability
2011-11-12
identified separately in p◦m, will be geometrically incommensurate (see Figure 7). Thus the null distribution of the test statistic will be inflated...into the objective function obviates the geometric incommensurability phenomenon. Thus we can es- tablish that, for a range of Dirichlet product model...from the geometric incommensu- rability phenomenon. Then q p implies that cca suffers from the spurious correlation phe- nomenon with high probability
The tunneling effect for a class of difference operators
NASA Astrophysics Data System (ADS)
Klein, Markus; Rosenberger, Elke
We analyze a general class of self-adjoint difference operators H𝜀 = T𝜀 + V𝜀 on ℓ2((𝜀ℤ)d), where V𝜀 is a multi-well potential and 𝜀 is a small parameter. We give a coherent review of our results on tunneling up to new sharp results on the level of complete asymptotic expansions (see [30-35]). Our emphasis is on general ideas and strategy, possibly of interest for a broader range of readers, and less on detailed mathematical proofs. The wells are decoupled by introducing certain Dirichlet operators on regions containing only one potential well. Then the eigenvalue problem for the Hamiltonian H𝜀 is treated as a small perturbation of these comparison problems. After constructing a Finslerian distance d induced by H𝜀, we show that Dirichlet eigenfunctions decay exponentially with a rate controlled by this distance to the well. It follows with microlocal techniques that the first n eigenvalues of H𝜀 converge to the first n eigenvalues of the direct sum of harmonic oscillators on ℝd located at several wells. In a neighborhood of one well, we construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low-lying eigenvalues of H𝜀. These are obtained from eigenfunctions or quasimodes for the operator H𝜀, acting on L2(ℝd), via restriction to the lattice (𝜀ℤ)d. Tunneling is then described by a certain interaction matrix, similar to the analysis for the Schrödinger operator (see [22]); the remainder is exponentially small and roughly quadratic compared with the interaction matrix. We give weighted ℓ2-estimates for the difference of eigenfunctions of Dirichlet operators in neighborhoods of the different wells and the associated WKB-expansions at the wells. In the last step, we derive full asymptotic expansions for interactions between two "wells" (minima) of the potential energy, in particular for the discrete tunneling effect. Here we essentially use analysis on phase space, complexified in the momentum variable.
These results are as sharp as the classical results for the Schrödinger operator in [22].
Peng, Hao; Yang, Yifan; Zhe, Shandian; Wang, Jian; Gribskov, Michael; Qi, Yuan
2017-01-01
Motivation: High-throughput mRNA sequencing (RNA-Seq) is a powerful tool for quantifying gene expression. Identification of transcript isoforms that are differentially expressed in different conditions, such as in patients and healthy subjects, can provide insights into the molecular basis of diseases. Current transcript quantification approaches, however, do not take advantage of the shared information in the biological replicates, potentially decreasing sensitivity and accuracy. Results: We present a novel hierarchical Bayesian model called Differentially Expressed Isoform detection from Multiple biological replicates (DEIsoM) for identifying differentially expressed (DE) isoforms from multiple biological replicates representing two conditions, e.g. multiple samples from healthy and diseased subjects. DEIsoM first estimates isoform expression within each condition by (1) capturing common patterns from sample replicates while allowing individual differences, and (2) modeling the uncertainty introduced by ambiguous read mapping in each replicate. Specifically, we introduce a Dirichlet prior distribution to capture the common expression pattern of replicates from the same condition, and treat the isoform expression of individual replicates as samples from this distribution. Ambiguous read mapping is modeled as a multinomial distribution, and ambiguous reads are assigned to the most probable isoform in each replicate. Additionally, DEIsoM couples an efficient variational inference and a post-analysis method to improve the accuracy and speed of identification of DE isoforms over alternative methods. Application of DEIsoM to an hepatocellular carcinoma (HCC) dataset identifies biologically relevant DE isoforms. The relevance of these genes/isoforms to HCC is supported by principal component analysis (PCA), read coverage visualization, and the biological literature.
Availability and implementation: The software is available at https://github.com/hao-peng/DEIsoM. Contact: pengh@alumni.purdue.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28595376
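The hierarchical structure DEIsoM describes (a condition-level Dirichlet prior over isoform proportions, replicate-level proportions drawn from it, and multinomial read counts given those proportions) can be illustrated with a minimal stdlib Python sketch. The gene with three isoforms, the concentration values, and the read depth below are all hypothetical choices for illustration, not parameters from the paper:

```python
import bisect
import itertools
import random

def sample_dirichlet(alpha, rng):
    """Draw a probability vector from Dirichlet(alpha) via normalized Gamma draws."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(g)
    return [x / total for x in g]

def sample_multinomial(n, probs, rng):
    """Distribute n reads over isoforms according to probs."""
    cum = list(itertools.accumulate(probs))
    counts = [0] * len(probs)
    for _ in range(n):
        idx = min(bisect.bisect_left(cum, rng.random()), len(probs) - 1)
        counts[idx] += 1
    return counts

rng = random.Random(0)
# Hypothetical three-isoform gene; alpha encodes the condition-level pattern.
alpha = [20.0, 5.0, 1.0]
condition_mean = sample_dirichlet(alpha, rng)
# Each replicate's proportions are a draw concentrated around the shared pattern.
replicates = [sample_dirichlet([a * 50.0 for a in condition_mean], rng)
              for _ in range(4)]
# Observed read counts per replicate are multinomial given those proportions.
counts = [sample_multinomial(1000, p, rng) for p in replicates]
```

This is the generative direction only; DEIsoM's contribution is the variational inversion of such a model, which the sketch does not attempt.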
NASA Astrophysics Data System (ADS)
Marshall, J. S.
2016-12-01
We analytically construct solutions for the mean first-passage time and splitting probabilities for the escape problem of a particle moving with continuous Brownian motion in a confining planar disc with an arbitrary distribution (i.e., of any number, size and spacing) of exit holes/absorbing sections along its boundary. The governing equations for these quantities are Poisson's equation with a (non-zero) constant forcing term and Laplace's equation, respectively, and both are subject to a mixture of homogeneous Neumann and Dirichlet boundary conditions. Our solutions are expressed as explicit closed formulae written in terms of a parameterising variable via a conformal map, using special transcendental functions that are defined in terms of an associated Schottky group. They are derived by exploiting recent results for a related problem of fluid mechanics that describes a unidirectional flow over "no-slip/no-shear" surfaces, as well as results from potential theory, all of which were themselves derived using the same theory of Schottky groups. They are exact up to the determination of a finite set of mapping parameters, which is performed numerically. Their evaluation also requires the numerical inversion of the parameterising conformal map. Computations for a series of illustrative examples are also presented.
NASA Astrophysics Data System (ADS)
Huang, Ching-Sheng; Yeh, Hund-Der
2016-11-01
This study introduces an analytical approach to estimate drawdown induced by well extraction in a heterogeneous confined aquifer with an irregular outer boundary. The aquifer domain is divided into a number of zones according to the zonation method for representing the spatial distribution of a hydraulic parameter field. The lateral boundary of the aquifer can be considered under the Dirichlet, Neumann or Robin condition at different parts of the boundary. Flow across the interface between two zones satisfies the continuities of drawdown and flux. Source points, each of which has an unknown volumetric rate representing the boundary effect on the drawdown, are allocated around the boundary of each zone. The solution for drawdown in each zone is expressed as a series in terms of the Theis equation with unknown volumetric rates from the source points. The rates are then determined from the aquifer boundary conditions and the continuity requirements. The drawdown estimated by the present approach agrees well with a finite element solution developed using the Mathematica function NDSolve. Compared with existing numerical approaches, the present approach has the merit of directly computing the drawdown at any given location and time, and therefore takes much less computing time to obtain the required results in engineering applications.
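The building block of the series solution above is the classical Theis equation, s = Q/(4πT)·W(u) with u = r²S/(4Tt), where W(u) is the well function (the exponential integral E1). A minimal stdlib Python sketch of that single-well term, with hypothetical parameter values, is below; the zoned superposition of many such source points is what the paper's method adds on top:

```python
import math

def well_function(u, terms=60):
    """Theis well function W(u) = E1(u) via the convergent series
    W(u) = -gamma - ln(u) + sum_{k>=1} (-1)^(k+1) u^k / (k * k!).
    Accurate for the small-to-moderate u typical of pumping tests."""
    euler_gamma = 0.5772156649015329
    s = -euler_gamma - math.log(u)
    term = 1.0                      # holds u^k / k!
    for k in range(1, terms + 1):
        term *= u / k
        s += ((-1) ** (k + 1)) * term / k
    return s

def theis_drawdown(Q, T, S, r, t):
    """Drawdown of a single well: s = Q/(4*pi*T) * W(r^2 S / (4 T t)).
    Multiple source points superpose linearly, as in the zoned approach above.
    Q: pumping rate, T: transmissivity, S: storativity, r: radius, t: time."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# Hypothetical values: Q = 100 m^3/d, T = 50 m^2/d, S = 1e-4, r = 10 m, t = 1 d.
s = theis_drawdown(100.0, 50.0, 1e-4, 10.0, 1.0)
```

For large u (say u > 30) the alternating series loses accuracy and a continued-fraction evaluation of E1 would be preferable.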
Multiclass Data Segmentation Using Diffuse Interface Methods on Graphs
2014-01-01
interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph...Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph-based energy function that...over to the discrete graph representation. For general data segmentation, Bresson et al. in [8] present rigorous convergence results for two algorithms
1987-07-01
multinomial distribution as a magazine exposure model. J. of Marketing Research. 21, 100-106. Lehmann, E.L. (1983). Theory of Point Estimation. John Wiley and... Marketing Research. 21, 89-99.
Ages of Records in Random Walks
NASA Astrophysics Data System (ADS)
Szabó, Réka; Vető, Bálint
2016-12-01
We consider random walks with continuous and symmetric step distributions. We prove universal asymptotics for the average proportion of the age of the kth longest lasting record for k = 1, 2, … and for the probability that the record of the kth longest age is broken at step n. Due to the relation to the Chinese restaurant process, the ranked sequence of proportions of ages converges to the Poisson-Dirichlet distribution.
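The ranked age proportions in the abstract can be observed empirically with a minimal stdlib Python simulation. The sketch below uses Gaussian steps (one admissible continuous symmetric distribution, chosen arbitrarily) and tracks the ages of the upper records of the walk; the walk length and seed are arbitrary:

```python
import random

def record_age_proportions(n_steps, rng):
    """Simulate a random walk with continuous symmetric steps and return the
    ranked (descending) proportions of the ages of its upper records."""
    pos = 0.0
    best = 0.0
    ages = []            # completed record ages
    current_age = 0
    for _ in range(n_steps):
        pos += rng.gauss(0.0, 1.0)
        current_age += 1
        if pos > best:   # a new record: close the age of the previous one
            best = pos
            ages.append(current_age)
            current_age = 0
    ages.append(current_age)   # age of the record still standing at step n
    total = sum(ages)          # equals n_steps by construction
    return sorted((a / total for a in ages), reverse=True)

rng = random.Random(1)
props = record_age_proportions(10000, rng)
```

Averaging `props[k]` over many independent walks would approximate the Poisson-Dirichlet limit the paper proves; a single run only gives one sample of the ranked sequence.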
1985-05-01
non-zero Dirichlet boundary conditions and/or general mixed type boundary conditions. Note that Neumann type boundary condition enters the problem by...human and various loading conditions for the definition of a generalized safety guideline of blast exposure. To model the response of a sheep torso
A nonlinear ordinary differential equation associated with the quantum sojourn time
NASA Astrophysics Data System (ADS)
Benguria, Rafael D.; Duclos, Pierre; Fernández, Claudio; Sing-Long, Carlos
2010-11-01
We study a nonlinear ordinary differential equation on the half-line, with the Dirichlet boundary condition at the origin. This equation arises when studying the local maxima of the sojourn time for a free quantum particle whose states belong to an adequate subspace of the unit sphere of the corresponding Hilbert space. We establish several results concerning the existence and asymptotic behavior of the solutions.
An improved approximate-Bayesian model-choice method for estimating shared evolutionary history
2014-01-01
Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate that the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
NASA Astrophysics Data System (ADS)
Smith, Keith; Ricaud, Benjamin; Shahid, Nauman; Rhodes, Stephen; Starr, John M.; Ibáñez, Augustin; Parra, Mario A.; Escudero, Javier; Vandergheynst, Pierre
2017-02-01
Visual short-term memory binding tasks are a promising early marker for Alzheimer’s disease (AD). To uncover functional deficits of AD in these tasks it is meaningful to first study unimpaired brain function. Electroencephalogram recordings were obtained from encoding and maintenance periods of tasks performed by healthy young volunteers. We probe the task’s transient physiological underpinnings by contrasting shape only (Shape) and shape-colour binding (Bind) conditions, displayed in the left and right sides of the screen, separately. Particularly, we introduce and implement a novel technique named Modular Dirichlet Energy (MDE) which allows robust and flexible analysis of the functional network with unprecedented temporal precision. We find that connectivity in the Bind condition is less integrated with the global network than in the Shape condition in occipital and frontal modules during the encoding period of the right screen condition. Using MDE we are able to discern driving effects in the occipital module between 100-140 ms, coinciding with the P100 visually evoked potential, followed by a driving effect in the frontal module between 140-180 ms, suggesting that the differences found constitute an information processing difference between these modules. This provides temporally precise information over a heterogeneous population in promising tasks for the detection of AD.
Lu, Yisu; Jiang, Jun; Yang, Wei; Feng, Qianjin; Chen, Wufan
2014-01-01
Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without the initialization of the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smooth constraint is proposed in this study. Besides the segmentation of single modal brain-tumor images, we developed the algorithm to segment multimodal brain-tumor images by the magnetic resonance (MR) multimodal features and obtain the active tumor and edema in the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrates very impressive performance and has a great potential for practical real-time clinical use.
Mappings of Least Dirichlet Energy and their Hopf Differentials
NASA Astrophysics Data System (ADS)
Iwaniec, Tadeusz; Onninen, Jani
2013-08-01
The paper is concerned with mappings h: X → Y (onto) between planar domains having least Dirichlet energy. The existence and uniqueness (up to a conformal change of variables in X) of the energy-minimal mappings is established within the class H̄₂(X, Y) of strong limits of homeomorphisms in the Sobolev space W^{1,2}(X, Y), a result of considerable interest in the mathematical models of nonlinear elasticity. The inner variation of the independent variable in X leads to the Hopf differential h_z h̄_z̄ dz ⊗ dz and its trajectories. For a pair of doubly connected domains, in which X has finite conformal modulus, we establish the following principle: A mapping h ∈ H̄₂(X, Y) is energy-minimal if and only if its Hopf differential is analytic in X and real along ∂X. In general, the energy-minimal mappings may not be injective, in which case one observes the occurrence of slits in X (cognate with cracks). Slits are triggered by points of concavity of Y. They originate from ∂X and advance along vertical trajectories of the Hopf differential toward X where they eventually terminate, so no crosscuts are created.
Negative Binomial Process Count and Mixture Modeling.
Zhou, Mingyuan; Carin, Lawrence
2015-02-01
The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
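The central marginalization in the abstract, that a gamma-mixed Poisson is marginally negative binomial, can be checked numerically with a minimal stdlib Python sketch. The parameter values and sample size below are arbitrary illustration choices:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplication method; adequate for the modest rates used here."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_negative_binomial(r, p, rng):
    """NB(r, p) as a gamma-mixed Poisson: lam ~ Gamma(shape r, scale p/(1-p)),
    then N | lam ~ Poisson(lam). Marginally N is NB with mean r*p/(1-p)."""
    lam = rng.gammavariate(r, p / (1.0 - p))
    return sample_poisson(lam, rng)

rng = random.Random(42)
r, p = 5.0, 0.5
draws = [sample_negative_binomial(r, p, rng) for _ in range(20000)]
mean = sum(draws) / len(draws)   # should be near r*p/(1-p) = 5
```

The NB *process* of the paper layers this construction over a random measure (a gamma process) rather than a single gamma variate; the sketch shows only the scalar kernel of that idea.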
Gross, Alexander; Murthy, Dhiraj
2014-10-01
This paper explores a variety of methods for applying the Latent Dirichlet Allocation (LDA) automated topic modeling algorithm to the modeling of the structure and behavior of virtual organizations found within modern social media and social networking environments. As the field of Big Data reveals, an increase in the scale of social data available presents new challenges which are not tackled by merely scaling up hardware and software. Rather, they necessitate new methods and, indeed, new areas of expertise. Natural language processing provides one such method. This paper applies LDA to the study of scientific virtual organizations whose members employ social technologies. Because of the vast data footprint in these virtual platforms, we found that natural language processing was needed to 'unlock' and render visible latent, previously unseen conversational connections across large textual corpora (spanning profiles, discussion threads, forums, and other social media incarnations). We introduce variants of LDA and ultimately make the argument that natural language processing is a critical interdisciplinary methodology to make better sense of social 'Big Data' and we were able to successfully model nested discussion topics from forums and blog posts using LDA. Importantly, we found that LDA can move us beyond the state-of-the-art in conventional Social Network Analysis techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
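For readers unfamiliar with how LDA is fitted, a compact collapsed Gibbs sampler can be written in pure Python. This is a generic textbook implementation, not the paper's pipeline; the toy corpus, hyperparameters, and iteration count are arbitrary:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, n_iter=100, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA over tokenized documents.
    Returns per-document topic counts and per-topic word counts."""
    rng = random.Random(seed)
    n_vocab = len({w for doc in docs for w in doc})
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]
    doc_topic = [[0] * n_topics for _ in docs]
    topic_word = [defaultdict(int) for _ in range(n_topics)]
    topic_total = [0] * n_topics
    for di, doc in enumerate(docs):
        for wi, w in enumerate(doc):
            t = z[di][wi]
            doc_topic[di][t] += 1
            topic_word[t][w] += 1
            topic_total[t] += 1
    for _ in range(n_iter):
        for di, doc in enumerate(docs):
            for wi, w in enumerate(doc):
                t = z[di][wi]
                doc_topic[di][t] -= 1
                topic_word[t][w] -= 1
                topic_total[t] -= 1
                # Collapsed conditional: p(t) ~ (n_dt + alpha)(n_tw + beta)/(n_t + V*beta)
                weights = [(doc_topic[di][k] + alpha)
                           * (topic_word[k][w] + beta)
                           / (topic_total[k] + n_vocab * beta)
                           for k in range(n_topics)]
                r = rng.random() * sum(weights)
                new_t = n_topics - 1
                acc = 0.0
                for k, wgt in enumerate(weights):
                    acc += wgt
                    if r < acc:
                        new_t = k
                        break
                z[di][wi] = new_t
                doc_topic[di][new_t] += 1
                topic_word[new_t][w] += 1
                topic_total[new_t] += 1
    return doc_topic, topic_word

docs = [["ball", "game", "score"], ["game", "score", "win"],
        ["cell", "gene", "dna"], ["gene", "dna", "protein"]]
doc_topic, topic_word = lda_gibbs(docs, n_topics=2)
```

At scale one would use an optimized library rather than this sketch, but the conditional in the inner loop is exactly the Dirichlet-multinomial structure LDA rests on.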
Reuter, Martin; Wolter, Franz-Erich; Shenton, Martha; Niethammer, Marc
2009-01-01
This paper proposes the use of the surface based Laplace-Beltrami and the volumetric Laplace eigenvalues and -functions as shape descriptors for the comparison and analysis of shapes. These spectral measures are isometry invariant and therefore allow for shape comparisons with minimal shape pre-processing. In particular, no registration, mapping, or remeshing is necessary. The discriminatory power of the 2D surface and 3D solid methods is demonstrated on a population of female caudate nuclei (a subcortical gray matter structure of the brain, involved in memory function, emotion processing, and learning) of normal control subjects and of subjects with schizotypal personality disorder. The behavior and properties of the Laplace-Beltrami eigenvalues and -functions are discussed extensively for both the Dirichlet and Neumann boundary condition showing advantages of the Neumann vs. the Dirichlet spectra in 3D. Furthermore, topological analyses employing the Morse-Smale complex (on the surfaces) and the Reeb graph (in the solids) are performed on selected eigenfunctions, yielding shape descriptors, that are capable of localizing geometric properties and detecting shape differences by indirectly registering topological features such as critical points, level sets and integral lines of the gradient field across subjects. The use of these topological features of the Laplace-Beltrami eigenfunctions in 2D and 3D for statistical shape analysis is novel. PMID:20161035
Materials Processing in Magnetic Fields
NASA Astrophysics Data System (ADS)
Schneider-Muntau, Hans J.; Wada, Hitoshi
The latest in lattice QCD -- Quark-gluon plasma physics -- String theory and exact results in quantum field theory -- The status of local supersymmetry -- Supersymmetry in nuclei -- Inflation, dark matter, dark energy -- How many dimensions are really compactified? -- Horizons -- Neutrino oscillations physics -- Fundamental constants and their possible time dependence -- Highlights from BNL: new phenomena at RHIC -- Highlights from BABAR -- Diffraction studied with a hard scale at HERA -- The large hadron collider: a status report -- Status of non-LHC experiments at CERN -- Highlights from Gran Sasso -- Fast automatic systems for nuclear emulsion scanning: technique and experiments -- Probing the QGP with charm at ALICE-LHC -- Magnetic screening length in hot QCD -- Non-supersymmetric deformation of the Klebanov-Strassler model and the related plane wave theory -- Holographic renormalization made simple: an example -- The KamLAND impact on neutrino oscillations -- Particle identification with the ALICE TOF detector at very high multiplicity -- Superpotentials of N = 1 SUSY gauge theories -- Measurement of the proton structure function F2 in QED Compton scattering at HERA -- Yang-Mills effective action at high temperature -- The time of flight (TOF) system of the ALICE experiment -- Almost product manifolds as the low energy geometry of Dirichlet branes.
Stochastic search, optimization and regression with energy applications
NASA Astrophysics Data System (ADS)
Hannah, Lauren A.
Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values (which depend on the selected portfolio) to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet Process mixtures of Generalized Linear Models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response.
We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and more generally that nonparametric estimation methods provide good solutions to otherwise intractable problems.
Compression based entropy estimation of heart rate variability on multiple time scales.
Baumert, Mathias; Voss, Andreas; Javorka, Michal
2013-01-01
Heart rate fluctuates beat by beat in a complex manner. The aim of this study was to develop a framework for entropy assessment of heart rate fluctuations on multiple time scales. We employed the Lempel-Ziv algorithm for lossless data compression to investigate the compressibility of RR interval time series on different time scales, using a coarse-graining procedure. We estimated the entropy of RR interval time series of 20 young and 20 old subjects and also investigated the compressibility of randomly shuffled surrogate RR time series. The original RR time series displayed significantly smaller compression entropy values than randomized RR interval data. The RR interval time series of older subjects showed significantly different entropy characteristics over multiple time scales than those of younger subjects. In conclusion, data compression may be a useful approach for multiscale entropy assessment of heart rate variability.
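The pipeline described (coarse-grain, compress, compare compressed size to raw size) is easy to sketch with the standard library. Note the hedge: the study uses a Lempel-Ziv compressor, while zlib's DEFLATE (an LZ77 variant plus Huffman coding) is used here as a convenient stand-in; the bin count and series below are arbitrary illustration choices, not RR data:

```python
import random
import zlib

def coarse_grain(series, scale):
    """Non-overlapping window averages: the standard multiscale coarse-graining."""
    return [sum(series[i:i + scale]) / scale
            for i in range(0, len(series) - scale + 1, scale)]

def compression_entropy(series, n_bins=64):
    """Compressed-to-raw size ratio of the quantized series. Lower values
    indicate a more regular (more compressible) signal."""
    lo, hi = min(series), max(series)
    width = (hi - lo) or 1.0
    symbols = bytes(min(n_bins - 1, int((x - lo) / width * n_bins))
                    for x in series)
    return len(zlib.compress(symbols, 9)) / len(symbols)

def multiscale_entropy(series, max_scale=5):
    """Compression entropy of the series coarse-grained at scales 1..max_scale."""
    return [compression_entropy(coarse_grain(series, s))
            for s in range(1, max_scale + 1)]

rng = random.Random(0)
irregular = [rng.random() for _ in range(600)]  # stand-in for shuffled surrogates
regular = [0.5] * 600                           # perfectly predictable series
```

As in the study's surrogate comparison, a shuffled (irregular) series should yield a larger compression-entropy value than a structured one at every scale.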
Testing and analysis of flat and curved panels with multiple cracks
DOT National Transportation Integrated Search
1994-08-01
An experimental and analytical investigation of multiple cracking in various types of test specimens is described in this paper. The testing phase is comprised of a flat unstiffened panel series and curved stiffened and unstiffened panel series. The ...
NASA Astrophysics Data System (ADS)
Zhang, Zhizheng; Wang, Tianze
2008-07-01
In this paper, we first give several operator identities involving the bivariate Rogers-Szegö polynomials. By applying the technique of parameter augmentation to the multiple q-binomial theorems given by Milne [S.C. Milne, Balanced summation theorems for U(n) basic hypergeometric series, Adv. Math. 131 (1997) 93-187], we obtain several new multiple q-series identities involving the bivariate Rogers-Szegö polynomials. These include multiple extensions of Mehler's formula and Rogers's formula. Our U(n+1) generalizations are quite natural as they are also a direct and immediate consequence of their (often classical) known one-variable cases and Milne's fundamental theorem for An or U(n+1) basic hypergeometric series in Theorem 1E49 of [S.C. Milne, An elementary proof of the Macdonald identities for , Adv. Math. 57 (1985) 34-70], as rewritten in Lemma 7.3 on p. 163 of [S.C. Milne, Balanced summation theorems for U(n) basic hypergeometric series, Adv. Math. 131 (1997) 93-187] or Corollary 4.4 on pp. 768-769 of [S.C. Milne, M. Schlosser, A new An extension of Ramanujan's summation with applications to multilateral An series, Rocky Mountain J. Math. 32 (2002) 759-792].
Miklius, Asta; Flower, M.F.J.; Huijsmans, J.P.P.; Mukasa, S.B.; Castillo, P.
1991-01-01
Taal lava series can be distinguished from each other by differences in major and trace element trends and trace element ratios, indicating multiple magmatic systems associated with discrete centers in time and space. On Volcano Island, contemporaneous lava series range from typically calc-alkaline to iron-enriched. Major and trace element variation in these series can be modelled by fractionation of similar assemblages, with early fractionation of titano-magnetite in less iron-enriched series. However, phase compositional and petrographic evidence of mineral-liquid disequilibrium suggests that magma mixing played an important role in the evolution of these series. -from Authors
UTD at TREC 2014: Query Expansion for Clinical Decision Support
2014-11-01
Description: A 62-year-old man sees a neurologist for progressive memory loss and jerking movements of the lower extremities. Neurologic examination confirms...infiltration. Summary: 62-year-old man with progressive memory loss and involuntary leg movements. Brain MRI reveals cortical atrophy, and cortical...latent topics produced by the Latent Dirichlet Allocation (LDA) on the TREC-CDS corpus of scientific articles. The position of words "loss" and "memory"...
Nondestructive Testing and Target Identification
2016-12-21
Dirichlet obstacle coated by a thin layer of non-absorbing media, IMA J. Appl. Math, 80, 1063-1098, (2015). Abstract: We consider the transmission...F. Cakoni, I. De Teresa, H. Haddar and P. Monk, Nondestructive testing of the delaminated interface between two materials, SIAM J. Appl. Math., 76...then they form a discrete set. 22. F. Cakoni, D. Colton, S. Meng and P. Monk, Steklov eigenvalues in inverse scattering, SIAM J. Appl. Math. 76, 1737
Single-grid spectral collocation for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Canuto, Claudio; Maday, Yvon; Metivet, Brigitte
1988-01-01
The aim of the paper is to study a collocation spectral method to approximate the Navier-Stokes equations: only one grid is used, which is built from the nodes of a Gauss-Lobatto quadrature formula, either of Legendre or of Chebyshev type. The convergence is proven for the Stokes problem provided with inhomogeneous Dirichlet conditions, then thoroughly analyzed for the Navier-Stokes equations. The practical implementation algorithm is presented, together with numerical results.
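The Chebyshev-type grid mentioned above is built from Gauss-Lobatto nodes, which include both endpoints; that inclusion is precisely what lets a collocation method impose inhomogeneous Dirichlet data directly at boundary points. A minimal stdlib Python sketch of the nodes and the matching quadrature weights (generic formulas, not the paper's full scheme):

```python
import math

def chebyshev_gauss_lobatto(n):
    """Chebyshev-Gauss-Lobatto nodes x_j = cos(pi*j/n), j = 0..n.
    Endpoints +1 and -1 are included, so Dirichlet data can be imposed
    directly at boundary collocation points; nodes cluster near the
    boundary with O(1/n^2) spacing."""
    return [math.cos(math.pi * j / n) for j in range(n + 1)]

def cgl_weights(n):
    """Quadrature weights for the Chebyshev weight 1/sqrt(1-x^2):
    pi/(2n) at the two endpoints, pi/n at interior nodes."""
    w = [math.pi / n] * (n + 1)
    w[0] = w[n] = math.pi / (2 * n)
    return w

nodes = chebyshev_gauss_lobatto(8)
weights = cgl_weights(8)
# The weights integrate 1/sqrt(1-x^2) exactly: their sum equals pi.
```

A Legendre-Gauss-Lobatto grid, the paper's other option, has the same endpoint-inclusion property but its nodes and weights have no closed form and are computed numerically.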
The Smoothed Dirichlet Distribution: Understanding Cross-Entropy Ranking in Information Retrieval
2006-07-01
reflect those of the sponsor. ABSTRACT: Unigram Language modeling is a successful probabilistic framework for Information Retrieval (IR) that uses...the Relevance model (RM), a state-of-the-art model for IR in the language modeling framework that uses the same cross-entropy as its ranking function...In addition, the SD-based classifier provides more flexibility than RM in modeling documents owing to a consistent generative framework. We
Augmenting Latent Dirichlet Allocation and Rank Threshold Detection with Ontologies
2010-03-01
Probabilistic Latent Semantic Indexing (PLSI) is an automated indexing information retrieval model [20]. It is based on a statistical latent class model which is...uses a statistical foundation that is more accurate in finding hidden semantic relationships [20]. The model uses factor analysis of count data, number...principle of statistical inference which asserts that all of the information in a sample is contained in the likelihood function [20]. The statistical
Unraveling multiple changes in complex climate time series using Bayesian inference
NASA Astrophysics Data System (ADS)
Berner, Nadine; Trauth, Martin H.; Holschneider, Matthias
2016-04-01
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of observations. Unraveling such transitions yields essential information for the understanding of the observed system. The precise detection and basic characterization of underlying changes is therefore of particular importance in environmental sciences. We present a kernel-based Bayesian inference approach to investigate direct as well as indirect climate observations for multiple generic transition events. In order to develop a diagnostic approach designed to capture a variety of natural processes, the basic statistical features of central tendency and dispersion are used to locally approximate a complex time series by a generic transition model. A Bayesian inversion approach is developed to robustly infer on the location and the generic patterns of such a transition. To systematically investigate time series for multiple changes occurring at different temporal scales, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the kernel inference results are composed into a proxy probability to a posterior distribution of multiple transitions. Thus, based on a generic transition model a probability expression is derived that is capable to indicate multiple changes within a complex time series. We discuss the method's performance by investigating direct and indirect climate observations. The approach is applied to environmental time series (about 100 a), from the weather station in Tuscaloosa, Alabama, and confirms documented instrumentation changes. Moreover, the approach is used to investigate a set of complex terrigenous dust records from the ODP sites 659, 721/722 and 967 interpreted as climate indicators of the African region of the Plio-Pleistocene period (about 5 Ma). 
The detailed inference unravels multiple transitions underlying the indirect climate observations coinciding with established global climate events.
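A drastically simplified scalar analogue of the change-point inference described above can be written in a few lines: a posterior over the location of a single mean shift under Gaussian noise with known variance and a flat prior over locations. This is not the paper's kernel-based multi-transition method; segment means are profiled out at their maximum-likelihood values rather than marginalized, and the synthetic series is an arbitrary illustration:

```python
import math

def change_point_posterior(series, sigma=1.0):
    """Posterior over a single mean-shift location: Gaussian noise with known
    sigma, flat prior over locations, segment means plugged in at their MLE
    (an approximation to full marginalization)."""
    n = len(series)
    log_like = []
    for c in range(1, n):                  # change between index c-1 and c
        ll = 0.0
        for seg in (series[:c], series[c:]):
            m = sum(seg) / len(seg)
            ll -= sum((x - m) ** 2 for x in seg) / (2.0 * sigma ** 2)
        log_like.append(ll)
    mx = max(log_like)                     # subtract max for numerical stability
    w = [math.exp(v - mx) for v in log_like]
    total = sum(w)
    return [v / total for v in w]          # entry i is the posterior for c = i + 1

series = [0.0] * 10 + [3.0] * 10           # synthetic step at index 10
posterior = change_point_posterior(series)
best_c = posterior.index(max(posterior)) + 1
```

Extending this to multiple transitions at different temporal scales is exactly the gap the kernel-based composition in the paper addresses.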
The Convergence Problems of Eigenfunction Expansions of Elliptic Differential Operators
NASA Astrophysics Data System (ADS)
Ahmedov, Anvarjon
2018-03-01
In the present research we investigate the problems concerning the almost everywhere convergence of multiple Fourier series summed over the elliptic levels in the classes of Liouville. The sufficient conditions for the almost everywhere convergence problems, which are most difficult problems in Harmonic analysis, are obtained. The methods of approximation by multiple Fourier series summed over elliptic curves are applied to obtain suitable estimations for the maximal operator of the spectral decompositions. Obtaining of such estimations involves very complicated calculations which depends on the functional structure of the classes of functions. The main idea on the proving the almost everywhere convergence of the eigenfunction expansions in the interpolation spaces is estimation of the maximal operator of the partial sums in the boundary classes and application of the interpolation Theorem of the family of linear operators. In the present work the maximal operator of the elliptic partial sums are estimated in the interpolation classes of Liouville and the almost everywhere convergence of the multiple Fourier series by elliptic summation methods are established. The considering multiple Fourier series as an eigenfunction expansions of the differential operators helps to translate the functional properties (for example smoothness) of the Liouville classes into Fourier coefficients of the functions which being expanded into such expansions. The sufficient conditions for convergence of the multiple Fourier series of functions from Liouville classes are obtained in terms of the smoothness and dimensions. Such results are highly effective in solving the boundary problems with periodic boundary conditions occurring in the spectral theory of differential operators. 
The investigation of multiple Fourier series by modern methods of harmonic analysis incorporates the wide use of techniques from functional analysis, mathematical physics, modern operator theory and spectral decomposition. A new method for the best approximation of square-integrable functions by multiple Fourier series summed over elliptic levels is established. Using the best approximation, the Lebesgue constant corresponding to the elliptic partial sums is estimated. The latter is applied to obtain an estimate for the maximal operator in the Liouville classes.
A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2010-08-01
A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007; and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained for any n for d=1,2,4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q>0. Given that the total walk length equals 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q=1. Simple analytical expressions are obtained for any d≥2 and n≥2 for the endpoint distributions of two families of walks whose values of q are integers or half-integers depending solely on d. These endpoint distributions have a simple geometrical interpretation. For a two-step planar walk with q=1, it means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection onto the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks, with a uniform distribution of the endpoint in the inside of a ball, are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ^3, two of two steps and two of three steps, and one walk of two steps in ℝ^4.
Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law are finally discussed. Examples of unconstrained random walks, whose step lengths are gamma distributed, are more particularly considered.
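The Pearson-Dirichlet walk described above is straightforward to simulate. The sketch below (a NumPy illustration with invented variable names, not the authors' code) samples endpoints of the two-step planar walk with q=1 and checks the geometric interpretation numerically: the endpoint radius has density r/√(1−r²), the law of the radius of a uniform point on the 3D unit sphere projected onto the equatorial disc, whose mean is π/4.

```python
import numpy as np

rng = np.random.default_rng(0)

def pearson_dirichlet_walk(n_steps, d, q, n_samples=100_000):
    """Sample endpoints of n-step walks in R^d whose step lengths are
    Dirichlet(q, ..., q) distributed (total length 1) and whose step
    orientations are independent and uniform."""
    # Step lengths: one Dirichlet sample per walk, all parameters equal to q
    lengths = rng.dirichlet([q] * n_steps, size=n_samples)
    # Uniform orientations: normalized Gaussian vectors
    dirs = rng.normal(size=(n_samples, n_steps, d))
    dirs /= np.linalg.norm(dirs, axis=2, keepdims=True)
    # Endpoint = sum of length-weighted unit steps
    return np.einsum('ns,nsd->nd', lengths, dirs)

# Two-step planar walk with q = 1 (uniform split of the unit total length)
endpoints = pearson_dirichlet_walk(n_steps=2, d=2, q=1)
# Mean endpoint radius; the sphere-projection law predicts pi/4
mean_radius = np.linalg.norm(endpoints, axis=1).mean()
```

A quick consistency check of the claimed identity: both laws give E[R²] = 2/3, and the Monte Carlo mean radius should match π/4 to a few decimal places.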
Kim, Yee Suk; Lee, Sungin; Zong, Nansu; Kahng, Jimin
2017-01-01
The present study aimed to investigate differences in prognosis based on human papillomavirus (HPV) infection, persistent infection and genotype variations for patients exhibiting atypical squamous cells of undetermined significance (ASCUS) in their initial Papanicolaou (PAP) test results. A latent Dirichlet allocation (LDA)-based tool was developed that may offer a facilitated means of communication to be employed during patient-doctor consultations. The present study assessed 491 patients (139 HPV-positive and 352 HPV-negative cases) with a PAP test result of ASCUS and a follow-up period ≥2 years. Patients underwent PAP and HPV DNA chip tests between January 2006 and January 2009. The HPV-positive subjects were followed up with at least 2 instances of PAP and HPV DNA chip tests. The most common genotypes observed were HPV-16 (25.9%, 36/139), HPV-52 (14.4%, 20/139), HPV-58 (13.7%, 19/139), HPV-56 (11.5%, 16/139), HPV-51 (9.4%, 13/139) and HPV-18 (8.6%, 12/139). A total of 33.3% (12/36) of patients positive for HPV-16 had cervical intraepithelial neoplasia (CIN)2 or a worse result, which was significantly higher than the prevalence of CIN2 of 1.8% (8/455) in patients negative for HPV-16 (P<0.001), while no significant association was identified for other genotypes in terms of genotype and clinical progress. There was a significant association between clearance and good prognosis (P<0.001). Persistent infection was more common in patients aged ≥51 years (38.7%) than in those aged ≤50 years (20.4%; P=0.036). Progression from persistent infection to CIN2 or worse (19/34, 55.9%) was more frequent than under clearance (0/105, 0.0%; P<0.001). In the LDA analysis, symmetric Dirichlet priors α=0.1 and β=0.01 with k=5 or 10 clusters provided the most meaningful groupings. Statistical and LDA analyses produced consistent results regarding the association between persistent infection of HPV-16, old age and long infection period with a clinical progression of CIN2 or worse.
Therefore, LDA results may be presented as explanatory evidence during time-constrained patient-doctor consultations in order to deliver information regarding the patient's status. PMID:28587376
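The LDA configuration reported above (symmetric priors α=0.1, β=0.01, k=5) can be reproduced with standard tooling. A minimal sketch using scikit-learn, with synthetic counts standing in for the study's coded patient records (the data here are invented; only the hyperparameter settings follow the abstract):

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(42)
# Synthetic document-term counts standing in for coded patient records
X = rng.poisson(1.0, size=(100, 50))

lda = LatentDirichletAllocation(
    n_components=5,         # k = 5, one of the cluster counts used in the study
    doc_topic_prior=0.1,    # symmetric Dirichlet prior alpha = 0.1
    topic_word_prior=0.01,  # symmetric Dirichlet prior beta = 0.01
    random_state=0,
)
doc_topics = lda.fit_transform(X)  # per-document topic distributions
```

Each row of `doc_topics` is a probability distribution over the k=5 topics, which is the grouping information such a tool would present during a consultation.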
NASA Astrophysics Data System (ADS)
Bloshanskiĭ, I. L.
1984-02-01
The precise geometry is determined for measurable sets in N-dimensional Euclidean space on which generalized localization almost everywhere holds for rectangularly summable multiple Fourier series. Bibliography: 14 titles.
Global Binary Optimization on Graphs for Classification of High Dimensional Data
2014-09-01
Buades et al. in [10] introduce a new non-local means algorithm for image denoising and compare it to some of the best methods. In [28], Grady describes a random walk algorithm for image segmentation using the solution to a Dirichlet problem. Elmoataz et al. present generalizations of the graph Laplacian [19] for image denoising and manifold smoothing. Couprie et al. in [16] propose a parameterized graph-based energy function that unifies
1988-09-01
Institute for Physical Science and Technology, University of Maryland, College Park, MD 20742; B. Guo, Engineering Mechanics Research Corporation, Troy... OF THE FINITE ELEMENT METHOD, by Ivo Babuska, Institute for Physical Science and Technology, University of Maryland, College Park, MD 20742, and B. Guo. Research partially supported by the National Science Foundation under Grant DMS-85-16191 during the stay at the Institute for Physical Science and
Lifshits Tails for Randomly Twisted Quantum Waveguides
NASA Astrophysics Data System (ADS)
Kirsch, Werner; Krejčiřík, David; Raikov, Georgi
2018-03-01
We consider the Dirichlet Laplacian H_γ on a 3D twisted waveguide with random Anderson-type twisting γ. We introduce the integrated density of states N_γ for the operator H_γ, and investigate the Lifshits tails of N_γ, i.e. the asymptotic behavior of N_γ(E) as E ↓ inf supp dN_γ. In particular, we study the dependence of the Lifshits exponent on the decay rate of the single-site twisting at infinity.
Evaluation of the path integral for flow through random porous media
NASA Astrophysics Data System (ADS)
Westbroek, Marise J. E.; Coche, Gil-Arnaud; King, Peter R.; Vvedensky, Dimitri D.
2018-04-01
We present a path integral formulation of Darcy's equation in one dimension with random permeability described by a correlated multivariate lognormal distribution. This path integral is evaluated with the Markov chain Monte Carlo method to obtain pressure distributions, which are shown to agree with the solutions of the corresponding stochastic differential equation for Dirichlet and Neumann boundary conditions. The extension of our approach to flow through random media in two and three dimensions is discussed.
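In one dimension the pressure statistics can also be checked directly, because Darcy flow with fixed-pressure (Dirichlet) boundaries admits a closed-form solution per permeability realization. The sketch below is an assumption-laden stand-in for the paper's path-integral/MCMC evaluation, not its actual method: it samples correlated lognormal permeability fields and Monte Carlo averages the resulting pressure profiles.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_real, corr_len = 50, 2000, 0.2
x = np.linspace(0.0, 1.0, n)

# Correlated Gaussian field via Cholesky of an exponential covariance
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
K = np.exp(L @ rng.normal(size=(n, n_real)))  # lognormal permeability fields

# In 1D the Darcy flux is constant, so with p(0) = 1 and p(1) = 0:
#   p(x) = 1 - (int_0^x dx'/K) / (int_0^1 dx'/K)
resistance = np.cumsum(1.0 / K, axis=0)
p = 1.0 - resistance / resistance[-1]

mean_p = p.mean(axis=1)  # Monte Carlo estimate of the mean pressure profile
```

The mean profile decreases monotonically from the inlet to the outlet boundary value, which is the qualitative behavior the stochastic-differential-equation solutions in the paper exhibit.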
Using phrases and document metadata to improve topic modeling of clinical reports.
Speier, William; Ong, Michael K; Arnold, Corey W
2016-06-01
Probabilistic topic models provide an unsupervised method for analyzing unstructured text and have the potential to be integrated into clinical automatic summarization systems. Clinical documents are accompanied by metadata from a patient's medical history and frequently contain multiword concepts that can be valuable for accurately interpreting the included text. While existing methods have attempted to address these problems individually, we present a unified model for free-text clinical documents that integrates contextual patient- and document-level data, and discovers multi-word concepts. In the proposed model, phrases are represented by chained n-grams and a Dirichlet hyper-parameter is weighted by both document-level and patient-level context. This method and three other latent Dirichlet allocation models were fit to a large collection of clinical reports. Examples of resulting topics demonstrate the results of the new model, and the quality of the representations is evaluated using empirical log likelihood. The proposed model was able to create informative prior probabilities based on patient and document information, and captured phrases that represented various clinical concepts. The representation using the proposed model had a significantly higher empirical log likelihood than the compared methods. Integrating document metadata and capturing phrases in clinical text greatly improves the topic representation of clinical documents. The resulting clinically informative topics may effectively serve as the basis for an automatic summarization system for clinical reports. Copyright © 2016 Elsevier Inc. All rights reserved.
AdOn HDP-HMM: An Adaptive Online Model for Segmentation and Classification of Sequential Data.
Bargi, Ava; Xu, Richard Yi Da; Piccardi, Massimo
2017-09-21
Recent years have witnessed an increasing need for the automated classification of sequential data, such as activities of daily living, social media interactions, financial series, and others. With the continuous flow of new data, it is critical to classify the observations on-the-fly and without being limited by a predetermined number of classes. In addition, a model should be able to update its parameters in response to a possible evolution in the distributions of the classes. This compelling problem, however, does not seem to have been adequately addressed in the literature, since most studies focus on offline classification over predefined class sets. In this paper, we present a principled solution for this problem based on an adaptive online system leveraging Markov switching models and hierarchical Dirichlet process priors. This adaptive online approach is capable of classifying the sequential data over an unlimited number of classes while meeting the memory and delay constraints typical of streaming contexts. Moreover, we introduce an adaptive "learning rate" that balances the extent to which the model retains its previous parameters or adapts to new observations. Experimental results on stationary and evolving synthetic data and two video data sets, TUM Assistive Kitchen and collated Weizmann, show a remarkable performance in terms of segmentation and classification, particularly for sequences from evolutionary distributions and/or those containing previously unseen classes.
Definition and properties of the Libera operator on mixed norm spaces.
Pavlovic, Miroslav
2014-01-01
We consider the action of the operator ℒg(z) = (1 − z)^(−1) ∫_z^1 g(ζ) dζ on a class of "mixed norm" spaces of analytic functions on the unit disk, X = H^(p,q)_(α,ν), defined by the requirement g ∈ X ⇔ r ↦ (1 − r)^α M_p(r, g^(ν)) ∈ L^q([0,1], dr/(1 − r)), where 1 ≤ p ≤ ∞, 0 < q ≤ ∞, α > 0, and ν is a nonnegative integer. This class contains Besov spaces, weighted Bergman spaces, Dirichlet type spaces, Hardy-Sobolev spaces, and so forth. The expression ℒg need not be defined for g analytic in the unit disk, even for g ∈ X. A sufficient, but not necessary, condition is that Σ_(n=0)^∞ |ĝ(n)|/(n + 1) < ∞. We identify the indices p, q, α, and ν for which 1° ℒ is well defined on X, 2° ℒ acts from X to X, 3° the implication g ∈ X ⇒ Σ_(n=0)^∞ |ĝ(n)|/(n + 1) < ∞ holds. Assertion 2° extends some known results, due to Siskakis and others, and contains some new ones. As an application of 3° we have a generalization of Bernstein's theorem on absolute convergence of power series that belong to a Hölder class.
Acoustic plane wave diffraction from a truncated semi-infinite cone in axial irradiation
NASA Astrophysics Data System (ADS)
Kuryliak, Dozyslav; Lysechko, Victor
2017-11-01
The diffraction of a plane acoustic wave by semi-infinite truncated soft and rigid cones in the case of axial incidence is solved. The problem is formulated as a boundary-value problem for the Helmholtz equation, with Dirichlet and Neumann boundary conditions, for the scattered velocity potential. The incident field is taken to be the total field of the semi-infinite cone, the expression for which is obtained by solving an auxiliary diffraction problem using the Kontorovich-Lebedev integral transformation. The diffracted field is sought as an expansion in series of the eigenfunctions of the Helmholtz equation for the subdomains, taking into account the edge condition. The diffraction problem is reduced to an infinite system of linear algebraic equations (ISLAE) using the mode-matching technique and the orthogonality properties of the Legendre functions. The method of analytical regularization is applied in order to extract the singular part of the ISLAE, invert it exactly, and reduce the problem to an ISLAE of the second kind, which is readily amenable to calculation. The numerical solution of this system relies on the reduction method, and its accuracy depends on the truncation order. The case of degeneration of the truncated semi-infinite cone into an aperture in an infinite plane is considered. Characteristic features of the diffracted field in the near and far zones are examined as functions of the cone's parameters.
Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.
Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo
Unsupervised object discovery and localization is to discover some dominant object classes and localize all of object instances from a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge for the given image collection is exploited to facilitate object discovery. On the other hand, the topic models used in those methods suffer from the topic coherence issue-some inferred topics do not have clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in terms of the so-called must-links are exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined as that one must-link only constrains one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes, thus the must-links in our approach are semantic-specific , which allows to more efficiently exploit discriminative prior knowledge from Web images. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization. 
In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.
Intensive (Daily) Behavior Therapy for School Refusal: A Multiple Baseline Case Series
ERIC Educational Resources Information Center
Tolin, David F.; Whiting, Sara; Maltby, Nicholas; Diefenbach, Gretchen J.; Lothstein, Mary Anne; Hardcastle, Surrey; Catalano, Amy; Gray, Krista
2009-01-01
The following multiple baseline case series examines school refusal behavior in 4 male adolescents. School refusal symptom presentation was ascertained utilizing a functional analysis from the School Refusal Assessment Scale (Kearney, 2002). For the majority of cases, treatment was conducted within a 15-session intensive format over a 3-week…
NASA Technical Reports Server (NTRS)
Van Dongen, H. P.; Olofsen, E.; VanHartevelt, J. H.; Kruyt, E. W.; Dinges, D. F. (Principal Investigator)
1999-01-01
Periodogram analysis of unequally spaced time-series, as part of many biological rhythm investigations, is complicated. The mathematical framework is scattered over the literature, and the interpretation of results is often debatable. In this paper, we show that the Lomb-Scargle method is the appropriate tool for periodogram analysis of unequally spaced data. A unique procedure of multiple period searching is derived, facilitating the assessment of the various rhythms that may be present in a time-series. All relevant mathematical and statistical aspects are considered in detail, and much attention is given to the correct interpretation of results. The use of the procedure is illustrated by examples, and problems that may be encountered are discussed. It is argued that, when following the procedure of multiple period searching, we can even benefit from the unequal spacing of a time-series in biological rhythm research.
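A minimal Lomb-Scargle computation for unequally spaced data is available in SciPy; the sketch below (sampling times, period, and noise level are invented for illustration) recovers a known rhythm from irregular samples:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
# Unequally spaced sampling times over ten cycles of a hypothetical rhythm
t = np.sort(rng.uniform(0.0, 10.0, 200))
true_period = 1.0
y = np.sin(2 * np.pi * t / true_period) + 0.3 * rng.normal(size=t.size)

# scipy's lombscargle expects angular frequencies
periods = np.linspace(0.5, 5.0, 2000)
omega = 2 * np.pi / periods
power = lombscargle(t, y - y.mean(), omega)

best_period = periods[np.argmax(power)]  # period at the periodogram peak
```

In a multiple-period search of the kind described above, one would remove the fitted component at `best_period` and rescan the residuals for further rhythms.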
NASA Astrophysics Data System (ADS)
Warren, Aaron R.
2009-11-01
Time-series designs are an alternative to pretest-posttest methods that are able to identify and measure the impacts of multiple educational interventions, even for small student populations. Here, we use an instrument employing standard multiple-choice conceptual questions to collect data from students at regular intervals. The questions are modified by asking students to distribute 100 Confidence Points among the options in order to indicate the perceived likelihood of each answer option being the correct one. Tracking the class-averaged ratings for each option produces a set of time-series. ARIMA (autoregressive integrated moving average) analysis is then used to test for, and measure, changes in each series. In particular, it is possible to discern which educational interventions produce significant changes in class performance. Cluster analysis can also identify groups of students whose ratings evolve in similar ways. A brief overview of our methods and an example are presented.
Low, Diana H P; Motakis, Efthymios
2013-10-01
Binding free energy calculations obtained through molecular dynamics simulations reflect intermolecular interaction states through a series of independent snapshots. Typically, the free energies of multiple simulated series (each with slightly different starting conditions) need to be estimated. Previous approaches carry out this task by moving averages at certain decorrelation times, assuming that the system comes from a single conformation description of binding events. Here, we discuss a more general approach that uses statistical modeling, wavelets denoising and hierarchical clustering to estimate the significance of multiple statistically distinct subpopulations, reflecting potential macrostates of the system. We present the deltaGseg R package that performs macrostate estimation from multiple replicated series and allows molecular biologists/chemists to gain physical insight into the molecular details that are not easily accessible by experimental techniques. deltaGseg is a Bioconductor R package available at http://bioconductor.org/packages/release/bioc/html/deltaGseg.html.
Functional level-set derivative for a polymer self consistent field theory Hamiltonian
NASA Astrophysics Data System (ADS)
Ouaknin, Gaddiel; Laachi, Nabil; Bochkov, Daniil; Delaney, Kris; Fredrickson, Glenn H.; Gibou, Frederic
2017-09-01
We derive functional level-set derivatives for the Hamiltonian arising in self-consistent field theory, which are required to solve free boundary problems in the self-assembly of polymeric systems such as block copolymer melts. In particular, we consider Dirichlet, Neumann and Robin boundary conditions. We provide numerical examples that illustrate how these shape derivatives can be used to find equilibrium and metastable structures of block copolymer melts with a free surface in both two and three spatial dimensions.
Thermodynamic Identities and Symmetry Breaking in Short-Range Spin Glasses
NASA Astrophysics Data System (ADS)
Arguin, L.-P.; Newman, C. M.; Stein, D. L.
2015-10-01
We present a technique to generate relations connecting pure state weights, overlaps, and correlation functions in short-range spin glasses. These are obtained directly from the unperturbed Hamiltonian and hold for general coupling distributions. All are satisfied in phases with simple thermodynamic structure, such as the droplet-scaling and chaotic pairs pictures. If instead nontrivial mixed-state pictures hold, the relations suggest that replica symmetry is broken as described by a Derrida-Ruelle cascade, with pure state weights distributed as a Poisson-Dirichlet process.
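The pure state weights of a single level of a Derrida-Ruelle cascade follow a Poisson-Dirichlet law, which can be sampled by stick-breaking. A small illustration (the parameter value and truncation are arbitrary choices, and the weights come out in size-biased rather than sorted order):

```python
import numpy as np

rng = np.random.default_rng(5)

def gem_weights(theta, n_atoms=1000):
    """Stick-breaking (GEM) sample approximating Poisson-Dirichlet PD(0, theta):
    w_k = beta_k * prod_{i<k} (1 - beta_i), with beta_i ~ Beta(1, theta)."""
    betas = rng.beta(1.0, theta, size=n_atoms)
    sticks = np.cumprod(1.0 - betas)
    return betas * np.concatenate(([1.0], sticks[:-1]))

w = gem_weights(theta=2.0)
# The weights are positive and sum to one up to the truncation error,
# which is exponentially small in the number of atoms
```

Relations of the kind derived in the paper constrain moments of exactly such weight sequences, so a sampler like this is a convenient way to check them numerically.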
Image Annotation and Topic Extraction Using Super-Word Latent Dirichlet Allocation
2013-09-01
an image can be used to improve automated image annotation performance over existing generalized annotators. Second, image annotations can be used... the other variables. The first ratio in the sampling Equation 2.18 uses word frequency by total words, φ̂^(w)_j. The second ratio divides word... topics by total words in that document, θ̂^(d)_j. Both leave out the current assignment of z_i, and the results are used to randomly choose a new topic
Time-Bound Analytic Tasks on Large Data Sets Through Dynamic Configuration of Workflows
2013-11-01
"Assessment and Efficient Retrieval of Semantic Workflows." Information Systems Journal, 2012. [2] Blei, D., Ng, A., and M. Jordan. "Latent Dirichlet... 25 (561-567), 2009. [5] Furlani, T. R., Jones, M. D., Gallo, S. M., Bruno, A. E., Lu, C., Ghadersohi, A., Gentner, R. J., Patra, A., DeLeon, R. L... Proceedings of the IEEE e-Science Conference, Oxford, UK, pages 244-351. 2009. [8] Gil, Y.; Deelman, E.; Ellisman, M. H.; Fahringer, T.; Fox, G.; Gannon, D
NASA Technical Reports Server (NTRS)
Gelinas, R. J.; Doss, S. K.; Vajk, J. P.; Djomehri, J.; Miller, K.
1983-01-01
The mathematical background regarding the moving finite element (MFE) method of Miller and Miller (1981) is discussed, taking into account a general system of partial differential equations (PDE) and the amenability of the MFE method in two dimensions to code modularization and to semiautomatic user-construction of numerous PDE systems for both Dirichlet and zero-Neumann boundary conditions. A description of test problem results is presented, giving attention to aspects of single square wave propagation, and a solution of the heat equation.
On the Boussinesq-Burgers equations driven by dynamic boundary conditions
NASA Astrophysics Data System (ADS)
Zhu, Neng; Liu, Zhengrong; Zhao, Kun
2018-02-01
We study the qualitative behavior of the Boussinesq-Burgers equations on a finite interval subject to the Dirichlet type dynamic boundary conditions. Assuming H1 ×H2 initial data which are compatible with boundary conditions and utilizing energy methods, we show that under appropriate conditions on the dynamic boundary data, there exist unique global-in-time solutions to the initial-boundary value problem, and the solutions converge to the boundary data as time goes to infinity, regardless of the magnitude of the initial data.
Quasi-periodic solutions of nonlinear beam equation with prescribed frequencies
NASA Astrophysics Data System (ADS)
Chang, Jing; Gao, Yixian; Li, Yong
2015-05-01
Consider the one dimensional nonlinear beam equation u_tt + u_xxxx + mu + u^3 = 0 under Dirichlet boundary conditions. We show that for every m > 0 outside a set of small Lebesgue measure, the above equation admits a family of small-amplitude quasi-periodic solutions with n-dimensional Diophantine frequencies. These Diophantine frequencies are a small dilation of a prescribed Diophantine vector. The proofs are based on an infinite dimensional Kolmogorov-Arnold-Moser iteration procedure and a partial Birkhoff normal form.
Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Ditkowski, Adi
1996-01-01
An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.
Optimal decay rate for the wave equation on a square with constant damping on a strip
NASA Astrophysics Data System (ADS)
Stahn, Reinhard
2017-04-01
We consider the damped wave equation with Dirichlet boundary conditions on the unit square parametrized by Cartesian coordinates x and y. We assume the damping a to be strictly positive and constant for x<σ and zero for x>σ . We prove the exact t^{-4/3}-decay rate for the energy of classical solutions. Our main result (Theorem 1) answers question (1) of Anantharaman and Léautaud (Anal PDE 7(1):159-214, 2014, Section 2C).
NASA Astrophysics Data System (ADS)
Menne, Matthew J.; Williams, Claude N., Jr.
2005-10-01
An evaluation of three hypothesis test statistics that are commonly used in the detection of undocumented changepoints is described. The goal of the evaluation was to determine whether the use of multiple tests could improve undocumented, artificial changepoint detection skill in climate series. The use of successive hypothesis testing is compared to optimal approaches, both of which are designed for situations in which multiple undocumented changepoints may be present. In addition, the importance of the form of the composite climate reference series is evaluated, particularly with regard to the impact of undocumented changepoints in the various component series that are used to calculate the composite.

In a comparison of single test changepoint detection skill, the composite reference series formulation is shown to be less important than the choice of the hypothesis test statistic, provided that the composite is calculated from serially complete and homogeneous component series. However, the evaluated composite series are not equally susceptible to the presence of changepoints in their components, which may be erroneously attributed to the target series. Moreover, a reference formulation that is based on averaging the first-difference component series is susceptible to random walks when the composition of the component series changes through time (e.g., values are missing), and its use is therefore not recommended. When more than one test is required to reject the null hypothesis of no changepoint, the number of detected changepoints is reduced proportionately less than the number of false alarms in a wide variety of Monte Carlo simulations. Consequently, a consensus of hypothesis tests appears to improve undocumented changepoint detection skill, especially when reference series homogeneity is violated.
A consensus of successive hypothesis tests using a semihierarchic splitting algorithm also compares favorably to optimal solutions, even when changepoints are not hierarchic.
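The single-test building block used in such comparisons is a scan of the two-sample t statistic over all candidate break positions. The sketch below (a generic illustration on a simulated series with an invented break location and size, not the study's test statistics) returns the position maximizing the statistic:

```python
import numpy as np

def max_t_changepoint(x, min_seg=5):
    """Scan candidate split points and return the one maximizing the
    two-sample t statistic -- a common statistic for detecting an
    undocumented mean shift at an unknown time."""
    x = np.asarray(x, float)
    n = len(x)
    best_t, best_k = 0.0, None
    for k in range(min_seg, n - min_seg):
        a, b = x[:k], x[k:]
        # Pooled variance of the two candidate segments
        s2 = ((len(a) - 1) * a.var(ddof=1)
              + (len(b) - 1) * b.var(ddof=1)) / (n - 2)
        t = abs(a.mean() - b.mean()) / np.sqrt(s2 * (1.0 / len(a) + 1.0 / len(b)))
        if t > best_t:
            best_t, best_k = t, k
    return best_k, best_t

rng = np.random.default_rng(13)
# Homogeneous noise with one artificial two-sigma break at index 60
series = rng.normal(0.0, 1.0, 100)
series[60:] += 2.0
k_hat, t_max = max_t_changepoint(series)
```

A semihierarchic splitting scheme of the kind evaluated above applies such a scan recursively to the segments on either side of each accepted break.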
Scale-dependent intrinsic entropies of complex time series.
Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E
2016-04-13
Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
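The MSE recipe — coarse-grain the series, then measure irregularity with sample entropy at a tolerance fixed from the original series — can be sketched compactly. The implementation below is a simplified assumption, not the authors' code; for white noise the resulting curve should decrease with scale, a standard sanity check:

```python
import numpy as np

def sample_entropy(x, m=2, tol=None, r=0.15):
    """SampEn(m, tol): negative log of the conditional probability that
    sequences close for m points remain close for m + 1 points."""
    x = np.asarray(x, float)
    if tol is None:
        tol = r * x.std()
    def match_pairs(mm):
        # Embed the series in mm dimensions and count close template pairs
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return ((d <= tol).sum() - len(emb)) / 2.0  # exclude self-matches
    return -np.log(match_pairs(m + 1) / match_pairs(m))

def coarse_grain(x, scale):
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(9)
noise = rng.normal(size=1200)
tol0 = 0.15 * noise.std()  # tolerance fixed at scale 1, as in classic MSE
mse = [sample_entropy(coarse_grain(noise, s), tol=tol0) for s in (1, 2, 4)]
# White noise loses irregularity under coarse-graining: mse should decrease
```

In the approach described above, empirical mode decomposition would detrend the signal before this entropy-versus-scale curve is computed.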
ERIC Educational Resources Information Center
Ngan, Chun-Kit
2013-01-01
Making decisions over multivariate time series is an important topic which has gained significant interest in the past decade. A time series is a sequence of data points which are measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain in which domain experts…
Pirani, Monica; Best, Nicky; Blangiardo, Marta; Liverani, Silvia; Atkinson, Richard W.; Fuller, Gary W.
2015-01-01
Background Airborne particles are a complex mix of organic and inorganic compounds, with a range of physical and chemical properties. Estimation of how simultaneous exposure to air particles affects the risk of adverse health response represents a challenge for scientific research and air quality management. In this paper, we present a Bayesian approach that can tackle this problem within the framework of time series analysis. Methods We used Dirichlet process mixture models to cluster time points with similar multipollutant and response profiles, while adjusting for seasonal cycles, trends and temporal components. Inference was carried out via Markov Chain Monte Carlo methods. We illustrated our approach using daily data of a range of particle metrics and respiratory mortality for London (UK) 2002–2005. To better quantify the average health impact of these particles, we measured the same set of metrics in 2012, and we computed and compared the posterior predictive distributions of mortality under the exposure scenario in 2012 vs 2005. Results The model resulted in a partition of the days into three clusters. We found a relative risk of 1.02 (95% credible intervals (CI): 1.00, 1.04) for respiratory mortality associated with days characterised by high posterior estimates of non-primary particles, especially nitrate and sulphate. We found a consistent reduction in the airborne particles in 2012 vs 2005 and the analysis of the posterior predictive distributions of respiratory mortality suggested an average annual decrease of − 3.5% (95% CI: − 0.12%, − 5.74%). Conclusions We proposed an effective approach that enabled the better understanding of hidden structures in multipollutant health effects within time series analysis. It allowed the identification of exposure metrics associated with respiratory mortality and provided a tool to assess the changes in health effects from various policies to control the ambient particle matter mixtures. PMID:25795926
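The day-clustering step can be approximated with a truncated Dirichlet process mixture, for which scikit-learn provides a variational implementation. The synthetic "pollutant profiles" below are invented, and this sketch omits the paper's seasonal and temporal adjustments and MCMC inference:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(11)
# Synthetic daily two-pollutant profiles drawn from three latent regimes
days = np.vstack([
    rng.normal([0.0, 0.0], 0.3, size=(100, 2)),
    rng.normal([3.0, 3.0], 0.3, size=(100, 2)),
    rng.normal([0.0, 3.0], 0.3, size=(100, 2)),
])

# Truncated Dirichlet-process mixture: surplus components get ~zero weight
dpm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.01,  # favors few occupied clusters
    max_iter=500,
    random_state=0,
).fit(days)

n_used = int((dpm.weights_ > 0.05).sum())  # effective number of clusters
```

The number of clusters is inferred rather than fixed, mirroring how the partition of days into three clusters emerged from the data in the analysis above.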
Analyzing the field of bioinformatics with the multi-faceted topic modeling technique.
Heo, Go Eun; Kang, Keun Young; Song, Min; Lee, Jeong-Hoon
2017-05-31
Bioinformatics is an interdisciplinary field at the intersection of molecular biology and computing technology. To characterize the field as a convergent domain, researchers have used bibliometrics, augmented with text-mining techniques for content analysis. In previous studies, Latent Dirichlet Allocation (LDA) was the most representative topic modeling technique for identifying the topic structure of subject areas. However, as opposed to revealing the topic structure in relation to metadata such as authors, publication date, and journals, LDA only displays the simple topic structure. In this paper, we adopt Tang et al.'s Author-Conference-Topic (ACT) model to study the field of bioinformatics from the perspective of keyphrases, authors, and journals. The ACT model is capable of incorporating the paper, author, and conference into the topic distribution simultaneously. To obtain more meaningful results, we use journals and keyphrases instead of conferences and bag-of-words. For analysis, we used PubMed to collect forty-six bioinformatics journals from the MEDLINE database. We conducted time series topic analysis over four periods from 1996 to 2015 to further examine the interdisciplinary nature of bioinformatics. We analyze the ACT model results in each period. Additionally, for further integrated analysis, we conduct a time series analysis among the top-ranked keyphrases, journals, and authors according to their frequency. We also examine the patterns in the top journals by simultaneously identifying the topical probability in each period, as well as the top authors and keyphrases. The results indicate that in recent years diversified topics have become more prevalent and convergent topics have become more clearly represented. The results of our analysis imply that over time the field of bioinformatics has become more interdisciplinary, with a steady increase in peripheral fields such as conceptual, mathematical, and systems biology.
These results are confirmed by integrated analysis of topic distribution as well as top ranked keyphrases, authors, and journals.
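The LDA baseline mentioned above (unlike the ACT model, which also ties authors and venues into the topic distribution) can be sketched as a tiny collapsed Gibbs sampler. This is a generic textbook implementation, not the tooling used in the study:

```python
import random

def lda_gibbs(docs, K, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA: docs is a list of token lists,
    K the number of topics. Returns vocab, topic-word dists phi,
    and doc-topic dists theta."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    wid = {w: i for i, w in enumerate(vocab)}
    ndk = [[0] * K for _ in docs]       # doc-topic counts
    nkw = [[0] * V for _ in range(K)]   # topic-word counts
    nk = [0] * K                        # topic totals
    z = []                              # topic assignment per token
    for d, doc in enumerate(docs):      # random initialization
        zs = []
        for w in doc:
            k = rng.randrange(K)
            zs.append(k)
            ndk[d][k] += 1; nkw[k][wid[w]] += 1; nk[k] += 1
        z.append(zs)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]             # remove token, resample its topic
                ndk[d][k] -= 1; nkw[k][wid[w]] -= 1; nk[k] -= 1
                weights = [(ndk[d][j] + alpha) * (nkw[j][wid[w]] + beta)
                           / (nk[j] + V * beta) for j in range(K)]
                r = rng.random() * sum(weights)
                k, acc = 0, weights[0]
                while acc < r:
                    k += 1; acc += weights[k]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][wid[w]] += 1; nk[k] += 1
    phi = [[(nkw[k][v] + beta) / (nk[k] + V * beta) for v in range(V)]
           for k in range(K)]
    theta = [[(ndk[d][k] + alpha) / (len(docs[d]) + K * alpha)
              for k in range(K)] for d in range(len(docs))]
    return vocab, phi, theta
```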
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneemann, Matthias; Carius, Reinhard; Rau, Uwe
2015-05-28
This paper studies the effective electrical size and carrier multiplication of breakdown sites in multi-crystalline silicon solar cells. The local series resistance limits the current of each breakdown site and thereby linearizes the current-voltage characteristic. This fact allows the effective electrical diameters to be estimated to be as low as 100 nm. Using a laser beam induced current (LBIC) measurement with a high spatial resolution, we find carrier multiplication factors on the order of 30 (Zener-type breakdown) and 100 (avalanche breakdown) as new lower limits. Hence, we prove that also the so-called Zener-type breakdown is followed by avalanche multiplication. We explain that previous measurements of the carrier multiplication using thermography yield results higher than unity only if the spatial defect density is high enough and the illumination intensity is lower than what was used for the LBIC method. The individual series resistances of the breakdown sites limit the current through these breakdown sites. Therefore, the measured multiplication factors depend on the applied voltage as well as on the injected photocurrent. Both dependencies are successfully simulated using a series-resistance-limited diode model.
Clinical time series prediction: Toward a hierarchical dynamical system framework.
Liu, Zitao; Hauskrecht, Milos
2015-09-01
Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
Wrong Answers on Multiple-Choice Achievement Tests: Blind Guesses or Systematic Choices?
ERIC Educational Resources Information Center
Powell, J. C.
A multi-faceted model for the selection of answers for multiple-choice tests was developed from the findings of a series of exploratory studies. This model implies that answer selection should be curvilinear. A series of models were tested for fit using the chi square procedure. Data were collected from 359 elementary school students ages 9-12.…
The Representation of Multiple Intelligences Types in the Top-Notch Series: A Textbook Evaluation
ERIC Educational Resources Information Center
Razmjoo, Seyyed Ayatollah; Jozaghi, Zahra
2010-01-01
This study aims at evaluating the Top-Notch series through a checklist devised by the researchers based on the elements of the Multiple Intelligences (MI) theory proposed by Gardner (1998). With the shift from teacher-centered classrooms to learner-centered ones, more and more research needs to be done in the realm of students' needs analysis. One…
Yu, Peng; Shaw, Chad A
2014-06-01
The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics including applications to metagenomics data, transcriptomics and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of ψ = 0. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood to solve the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method improves both the accuracy of log-likelihood evaluation and the runtime by several orders of magnitude, especially in high-count data situations that are common in deep sequencing data. Using real metagenomic data, our method achieves manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
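A direct log-gamma evaluation of the Dirichlet-multinomial log-likelihood (the conventional formulation whose instability near ψ = 0 the paper addresses; parameterized here by concentrations α rather than by ψ) can be sketched as:

```python
from math import lgamma

def dmn_logpmf(x, alpha):
    """Log-pmf of the Dirichlet-multinomial distribution for a count vector x
    with concentration parameters alpha:
    log C(n; x) + log B(x + alpha) - log B(alpha), via log-gamma terms."""
    n = sum(x)
    A = sum(alpha)
    out = lgamma(n + 1) + lgamma(A) - lgamma(n + A)
    for xi, ai in zip(x, alpha):
        out += lgamma(xi + ai) - lgamma(ai) - lgamma(xi + 1)
    return out
```

For K = 2 and alpha = (1, 1) the distribution is uniform over the n + 1 possible splits, which gives a quick sanity check on the formula.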
NASA Astrophysics Data System (ADS)
Bloshanskiĭ, I. L.
1986-04-01
The concept of weak generalized localization almost everywhere is introduced. For the multiple Fourier series of a function f, weak generalized localization almost everywhere holds on the set E (E an arbitrary set of positive measure, E ⊂ T^N = [−π, π]^N) if the condition f(x) ∈ L_p(T^N), p ≥ 1, f = 0 on E implies that the indicated series converges almost everywhere on some subset E_1 ⊂ E of positive measure. For a large class of sets {E}, E ⊂ T^N, a number of propositions are proved showing that weak localization of rectangular sums holds on the set E in the classes L_p, p ≥ 1, if and only if the set E has certain specific properties. In the course of the proof the precise geometry and structure of the subset E_1 of E on which the multiple Fourier series converges almost everywhere to zero are determined. Bibliography: 13 titles.
Stacked Transformer for Driver Gain and Receive Signal Splitting
NASA Technical Reports Server (NTRS)
Driscoll, Kevin R.
2013-01-01
In a high-speed signal transmission system that uses transformer coupling, there is a need to provide increased transmitted signal strength without adding active components. This invention uses additional transformers to achieve the needed gain. The prior art uses stronger drivers (which require an IC redesign and a higher power supply voltage), or the addition of another active component (which can decrease reliability, increase power consumption, reduce the beneficial effect of serializer/deserializer preemphasis or deemphasis, and/or interfere with fault containment mechanisms), or uses a different transformer winding ratio (which requires redesign of the transformer and may not be feasible with high-speed signals that require a 1:1 winding ratio). This invention achieves the required gain by connecting the secondaries of multiple transformers in series. The primaries of these transformers are currently either connected in parallel or are connected to multiple drivers. There is also a need to split a receive signal to multiple destinations with minimal signal loss. Additional transformers can achieve the split. The prior art uses impedance-matching series resistors that cause a loss of signal. Instead of causing a loss, most instantiations of this invention would actually provide gain. Multiple transformers are used instead of multiple windings on a single transformer because multiple windings on the same transformer would require a redesign of the transformer, and may not be feasible with high-speed transformers that usually require a bifilar winding with a 1:1 ratio. This invention creates the split by connecting the primaries of multiple transformers in series. The secondary of each transformer is connected to one of the intended destinations without the use of impedance-matching series resistors.
The detection of local irreversibility in time series based on segmentation
NASA Astrophysics Data System (ADS)
Teng, Yue; Shang, Pengjian
2018-06-01
We propose a strategy for the detection of local irreversibility in stationary time series based on multiple scales. The detection helps evaluate the displacement of irreversibility toward local skewness. By means of this method, we can effectively discuss the local irreversible fluctuations of time series as the scale changes. The method was applied to simulated nonlinear signals generated by the ARFIMA process and the logistic map to show how the irreversibility functions react as the scale increases. The method was also applied to financial market series, i.e., American, Chinese and European markets. The local irreversibility for different markets demonstrates distinct characteristics. Simulations and real data support the need to explore local irreversibility.
The forces on a single interacting Bose-Einstein condensate
NASA Astrophysics Data System (ADS)
Thu, Nguyen Van
2018-04-01
Using the double parabola approximation for a single Bose-Einstein condensate confined between double slabs, we prove that in the grand canonical ensemble (GCE) the ground state with the Robin boundary condition (BC) is favored, whereas in the canonical ensemble (CE) the system passes from the ground state with the Robin BC to the one with the Dirichlet BC in the small-L region (and vice versa in the large-L region), and the phase transition in the space of ground states is of first order. The surface tension force and the Casimir force are also considered in detail in both the CE and the GCE.
NASA Astrophysics Data System (ADS)
Cuahutenango-Barro, B.; Taneco-Hernández, M. A.; Gómez-Aguilar, J. F.
2017-12-01
Analytical solutions of the wave equation with bi-fractional-order and frictional memory kernel of Mittag-Leffler type are obtained via Caputo-Fabrizio fractional derivative in the Liouville-Caputo sense. Through the method of separation of variables and Laplace transform method we derive closed-form solutions and establish fundamental solutions. Special cases with homogeneous Dirichlet boundary conditions and nonhomogeneous initial conditions, as well as for the external force are considered. Numerical simulations of the special solutions were done and novel behaviors are obtained.
Hoppe, Fred M
2008-06-01
We show that the formula of Faà di Bruno for the derivative of a composite function gives, in special cases, the sampling distributions in population genetics that are due to Ewens and to Pitman. The composite function is the same in each case. Other sampling distributions also arise in this way, such as those arising from Dirichlet, multivariate hypergeometric, and multinomial models, special cases of which correspond to Bose-Einstein, Fermi-Dirac, and Maxwell-Boltzmann distributions in physics. Connections are made to compound sampling models.
Vacuum Energy Induced by an Impenetrable Flux Tube of Finite Radius
NASA Astrophysics Data System (ADS)
Gorkavenko, V. M.; Sitenko, Yu. A.; Stepanov, O. B.
2011-06-01
We consider the effect of the magnetic field background in the form of a tube of the finite transverse size on the vacuum of the quantized charged massive scalar field which is subject to the Dirichlet boundary condition at the edge of the tube. The vacuum energy is induced, being periodic in the value of the magnetic flux enclosed in the tube. The dependence of the vacuum energy density on the distance from the tube and on the coupling to the space-time curvature scalar is comprehensively analyzed.
Conference on Ordinary and Partial Differential Equations, 29 March to 2 April 1982.
1982-04-02
Abstr. Boundary value problems for elliptic and parabolic equations in domains with corners. The paper concerns initial-Dirichlet and initial-mixed boundary value problems for parabolic equations a_{ij}(x,t) u_{x_i x_j} + a_i(x,t) u_{x_i} + a(x,t) u − u_t = f(x,t), x = (x_1, …, x_n), n ≥ 2. We consider the case of … moment II. Though it is well known that the electron possesses an anomalous magnetic moment, this term has not been considered so far in the mathematical …
Introduction to Real Orthogonal Polynomials
1992-06-01
uses Green's functions. As motivation, consider the Dirichlet problem for the unit circle in the plane, which involves finding a harmonic function u(r, …) … [garbled q-series orthogonality relations for the polynomials p_n(·; a, b; q)] … motivation and justification for continued study of the intrinsic structure of orthogonal polynomials.
On the existence of mosaic-skeleton approximations for discrete analogues of integral operators
NASA Astrophysics Data System (ADS)
Kashirin, A. A.; Taltykina, M. Yu.
2017-09-01
Exterior three-dimensional Dirichlet problems for the Laplace and Helmholtz equations are considered. By applying methods of potential theory, they are reduced to equivalent Fredholm boundary integral equations of the first kind, for which discrete analogues, i.e., systems of linear algebraic equations (SLAEs) are constructed. The existence of mosaic-skeleton approximations for the matrices of the indicated systems is proved. These approximations make it possible to reduce the computational complexity of an iterative solution of the SLAEs. Numerical experiments estimating the capabilities of the proposed approach are described.
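The mosaic-skeleton idea is to approximate matrix blocks by a few of their own rows and columns, which is what reduces the cost of the iterative SLAE solve. A toy rank-1 cross (skeleton) approximation, exact for rank-1 blocks, shows the principle (not the paper's construction):

```python
def cross_rank1(A, i0, j0):
    """Rank-1 skeleton (cross) approximation of a matrix A (list of lists),
    built from row i0 and column j0: A ~ col * row / pivot.
    Exact whenever A itself has rank 1 and the pivot A[i0][j0] is nonzero."""
    p = A[i0][j0]
    return [[A[i][j0] * A[i0][j] / p for j in range(len(A[0]))]
            for i in range(len(A))]
```

Practical mosaic-skeleton codes choose pivots adaptively and use several crosses per block; this single-cross version only illustrates why storing one row and one column can reproduce a whole low-rank block.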
The Theory and Practice of the h-p Version of Finite Element Method.
1987-04-01
The problem is to find the finite element solution u … The case of non-homogeneous Dirichlet data was studied by Babuška and Guo. The h-p version is implemented in the commercial code PROBE by Noetic Tech., St. Louis; see [27, 28]. [garbled estimate (2.3)] The commercial program FIESTA … collaboration with government agencies such as the National Bureau of Standards. … To be an international center of study and research for foreign …
Xue, Huan; Hu, Yuantai; Wang, Qing-Ming
2008-09-01
This paper presents a novel approach for designing broadband piezoelectric harvesters by integrating multiple piezoelectric bimorphs (PBs) with different aspect ratios into a system. The effect of 2 connecting patterns among PBs, in series and in parallel, on improving energy harvesting performance is discussed. It is found for multifrequency spectra ambient vibrations: 1) the operating frequency band (OFB) of a harvesting structure can be widened by connecting multiple PBs with different aspect ratios in series; 2) the OFB of a harvesting structure can be shifted to the dominant frequency domain of the ambient vibrations by increasing or decreasing the number of PBs in parallel. Numerical results show that the OFB of the piezoelectric energy harvesting devices can be tailored by the connection patterns (i.e., in series and in parallel) among PBs.
Krafty, Robert T; Rosen, Ori; Stoffer, David S; Buysse, Daniel J; Hall, Martica H
2017-01-01
This article considers the problem of analyzing associations between power spectra of multiple time series and cross-sectional outcomes when data are observed from multiple subjects. The motivating application comes from sleep medicine, where researchers are able to non-invasively record physiological time series signals during sleep. The frequency patterns of these signals, which can be quantified through the power spectrum, contain interpretable information about biological processes. An important problem in sleep research is drawing connections between power spectra of time series signals and clinical characteristics; these connections are key to understanding biological pathways through which sleep affects, and can be treated to improve, health. Such analyses are challenging as they must overcome the complicated structure of a power spectrum from multiple time series as a complex positive-definite matrix-valued function. This article proposes a new approach to such analyses based on a tensor-product spline model of Cholesky components of outcome-dependent power spectra. The approach exibly models power spectra as nonparametric functions of frequency and outcome while preserving geometric constraints. Formulated in a fully Bayesian framework, a Whittle likelihood based Markov chain Monte Carlo (MCMC) algorithm is developed for automated model fitting and for conducting inference on associations between outcomes and spectral measures. The method is used to analyze data from a study of sleep in older adults and uncovers new insights into how stress and arousal are connected to the amount of time one spends in bed.
Joint Seasonal ARMA Approach for Modeling of Load Forecast Errors in Planning Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Samaan, Nader A.; Makarov, Yuri V.
2014-04-14
To make informed and robust decisions in the probabilistic power system operation and planning process, it is critical to conduct multiple simulations of the generated combinations of wind and load parameters and their forecast errors to handle the variability and uncertainty of these time series. In order for the simulation results to be trustworthy, the simulated series must preserve the salient statistical characteristics of the real series. In this paper, we analyze day-ahead load forecast error data from multiple balancing authority (BA) locations and characterize statistical properties such as mean, standard deviation, autocorrelation, correlation between series, time-of-day bias, and time-of-day autocorrelation. We then construct and validate a seasonal autoregressive moving average (ARMA) model to model these characteristics, and use the model to jointly simulate day-ahead load forecast error series for all BAs.
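As a minimal stand-in for the seasonal ARMA model of forecast errors, an AR(1) simulator that preserves a target mean, marginal standard deviation, and lag-1 autocorrelation can be sketched as (illustrative only; the paper's model also captures seasonality and cross-series correlation):

```python
import random

def simulate_ar1(n, mu, sigma, phi, seed=0):
    """Simulate a stationary AR(1) series of length n with mean mu,
    marginal standard deviation sigma and lag-1 autocorrelation phi.
    The innovation std is scaled so the marginal variance comes out right."""
    rng = random.Random(seed)
    eps_sd = sigma * (1 - phi * phi) ** 0.5  # innovation std for stationarity
    x = [mu]                                 # start at the mean
    for _ in range(n - 1):
        x.append(mu + phi * (x[-1] - mu) + rng.gauss(0, eps_sd))
    return x
```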
Forecasting the Relative and Cumulative Effects of Multiple Stressors on At-risk Populations
2011-08-01
Vitals (observed vital rates), Movement, Ranges, Barriers (barrier interactions), Stochasticity (a time series of stochasticity indices) … Simulation Viewer are themselves stochastic; they can change each time it is run. Analysis: if multiple Census events are present in the life … 30-year period. A monthly time series was generated for the 20th century using monthly anomalies for temperature, precipitation, and percent
Parsing Flowcharts and Series-Parallel Graphs
1978-11-01
descriptions of the graph. This possible multiplicity is undesirable in most practical applications, a fact that makes particularly useful reduction … to parse TT networks, some of the features that make this parsing method useful in other cases are more naturally introduced in the context of this … as Figure 4.5 shows. This multiplicity is due to the associativity of consecutive Two Terminal Series and Two Terminal Parallel compositions. In spite
A note on an attempt at more efficient Poisson series evaluation. [for lunar libration
NASA Technical Reports Server (NTRS)
Shelus, P. J.; Jefferys, W. H., III
1975-01-01
A substantial reduction has been achieved in the time necessary to compute lunar libration series. The method involves eliminating many of the trigonometric function calls by a suitable transformation and applying a short SNOBOL processor to the FORTRAN coding of the transformed series, which obviates many of the multiplication operations during the course of series evaluation. It is possible to accomplish similar results quite easily with other Poisson series.
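The trick of eliminating trigonometric function calls during series evaluation can be illustrated with an angle-addition recurrence: only one cos/sin pair is ever computed, and each further harmonic costs a few multiplications (a sketch of the general idea, not the authors' SNOBOL/FORTRAN transformation):

```python
from math import cos, sin

def cosine_series(coeffs, x):
    """Evaluate sum_k coeffs[k] * cos(k*x) with a single cos/sin call,
    using cos((k+1)x) = cos(kx)cos(x) - sin(kx)sin(x) and
    sin((k+1)x) = sin(kx)cos(x) + cos(kx)sin(x)."""
    c1, s1 = cos(x), sin(x)
    ck, sk = 1.0, 0.0  # cos(0*x), sin(0*x)
    total = 0.0
    for a in coeffs:
        total += a * ck
        ck, sk = ck * c1 - sk * s1, sk * c1 + ck * s1
    return total
```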
NASA Astrophysics Data System (ADS)
Almog, Assaf; Garlaschelli, Diego
2014-09-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
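The binary projection of a time series discussed above is simply the sign pattern of its increments, which can be extracted as:

```python
def binary_signature(x):
    """Binary projection of a time series: +1 for an increase between
    consecutive observations, -1 otherwise (the sign of each increment)."""
    return [1 if b > a else -1 for a, b in zip(x, x[1:])]
```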
Intermediate energy proton-deuteron elastic scattering
NASA Technical Reports Server (NTRS)
Wilson, J. W.
1973-01-01
A fully symmetrized multiple scattering series is considered for the description of proton-deuteron elastic scattering. An off-shell continuation of the experimentally known two-body amplitudes that retains the exchange symmetries required for the calculation is presented. The one-boson-exchange terms of the two-body amplitudes are evaluated exactly in this off-shell prescription. The first two terms of the multiple scattering series are calculated explicitly, whereas multiple scattering effects are obtained as minimum variance estimates from the 146-MeV data of Postma and Wilson. The multiple scattering corrections indeed consist of low-order partial waves, as suggested by Sloan on the basis of model studies with separable interactions. The Hamada-Johnston wave function is shown to be consistent with the data for internucleon distances greater than about 0.84 fm.
Wavelet regression model in forecasting crude oil price
NASA Astrophysics Data System (ADS)
Hamid, Mohd Helmie; Shabri, Ani
2017-05-01
This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-series at different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series has been used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedasticity (GARCH) models using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, it appears that the WMLR model performs better than the other forecasting techniques tested in this study.
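The DWT step of the WMLR model splits a series into approximation and detail sub-series. A one-level Haar transform with its exact inverse is the simplest concrete instance (the study likely used a richer wavelet; this is only a sketch of the decomposition step):

```python
def haar_dwt(x):
    """One level of the Haar discrete wavelet transform (even-length input).
    Returns (approximation, detail): scaled pairwise sums and differences."""
    s = 2 ** -0.5
    approx = [(x[2 * i] + x[2 * i + 1]) * s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) * s for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: perfectly reconstructs the original series."""
    s = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) * s)
        out.append((a - d) * s)
    return out
```

In a WMLR-style pipeline, the approximation and detail coefficients (over several levels) would then serve as predictors in an ordinary multiple linear regression.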
Frazer, LilyAnn Novak; O'Keefe, Raymond T
2007-09-01
The availability of Saccharomyces cerevisiae yeast strains with multiple auxotrophic markers allows the stable introduction and selection of more than one yeast shuttle vector containing marker genes that complement the auxotrophic markers. In certain experimental situations there is a need to recover more than one shuttle vector from yeast. To facilitate the recovery and identification of multiple plasmids from S. cerevisiae, we have constructed a series of plasmids based on the pRS series of yeast shuttle vectors. Bacterial antibiotic resistance genes to chloramphenicol, kanamycin and zeocin have been combined with the yeast centromere sequence (CEN6), the autonomously replicating sequence (ARSH4) and one of the four yeast selectable marker genes (HIS3, TRP1, LEU2 or URA3) from the pRS series of vectors. The 12 plasmids produced differ in antibiotic resistance and yeast marker gene within the backbone of the multipurpose plasmid pBluescript II. The newly constructed vectors show similar mitotic stability to the original pRS vectors. In combination with the ampicillin-resistant pRS series of yeast shuttle vectors, these plasmids now allow the recovery and identification in bacteria of up to four different vectors from S. cerevisiae. Copyright (c) 2007 John Wiley & Sons, Ltd.
Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.
Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry
2016-09-01
Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms, they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and it is compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway and it is shown that the multi-compartment model fits the experimental data better. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software konstantinos.koutroumpas@ecp.fr. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
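As a toy illustration of the kernel idea (not the authors' implementation, and with a trivial Gaussian "model" standing in for the Wnt ODE system), scikit-learn's truncated Dirichlet process mixture can serve as the ABC-SMC transition kernel:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    # Toy stand-in for the Wnt ODE system: data ~ N(theta, 1).
    return rng.normal(theta, 1.0, size=n)

observed = simulate(2.0)

def abc_smc_dpm(epsilons=(1.0, 0.5, 0.2), n_particles=300):
    """ABC-SMC sketch: at each generation, fit a truncated Dirichlet
    process mixture to the current particles and use it as the
    transition kernel proposing the next generation."""
    particles = rng.uniform(-5, 5, size=(n_particles, 1))  # prior draws
    for eps in epsilons:
        dpm = BayesianGaussianMixture(
            n_components=5,
            weight_concentration_prior_type="dirichlet_process",
            random_state=0).fit(particles)
        accepted = []
        while len(accepted) < n_particles:
            theta = float(dpm.sample(1)[0][0, 0])  # propose from the DPM
            if not -5.0 <= theta <= 5.0:           # stay in prior support
                continue
            dist = abs(simulate(theta).mean() - observed.mean())
            if dist < eps:                         # ABC acceptance step
                accepted.append([theta])
        particles = np.array(accepted)
    return particles

posterior = abc_smc_dpm()
```

Here the summary statistic is just the sample mean and the prior is uniform on [-5, 5]; the importance-weighting correction for the kernel, present in full ABC-SMC, is omitted for brevity.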
Spradley, Jackson P; Pampush, James D; Morse, Paul E; Kay, Richard F
2017-05-01
Dirichlet normal energy (DNE) is a metric of surface topography that has been used to evaluate the relationship between the surface complexity of primate cheek teeth and dietary categories. This study examines the effects of different 3D mesh retriangulation protocols on DNE. We examine how different protocols influence the DNE of a simple geometric shape (a hemisphere) to gain a more thorough understanding than can be achieved by investigating a complex biological surface such as a tooth crown. We calculate DNE on 3D surface meshes of hemispheres and on primate molars subjected to various retriangulation protocols, including smoothing algorithms, smoothing amounts, target face counts, and criteria for boundary face exclusion. Software used includes R, MorphoTester, Avizo, and MeshLab. DNE was calculated using the R package "molaR." In all cases, smoothing as performed in Avizo sharply decreases DNE initially, after which DNE becomes stable. Using a broader boundary exclusion criterion or performing additional smoothing (using "mesh fairing" methods) further decreases DNE. Increasing the mesh face count also results in increased DNE on tooth surfaces. Different retriangulation protocols yield different DNE values for the same surfaces, and should not be combined in meta-analyses. Increasing face count will capture surface microfeatures, but at the expense of computational speed. More aggressive smoothing is more likely to alter the essential geometry of the surface. A protocol is proposed that limits potential artifacts created during surface production while preserving pertinent features on the occlusal surface. © 2017 Wiley Periodicals, Inc.
Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data
NASA Astrophysics Data System (ADS)
Glüsenkamp, Thorsten
2018-06-01
Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function FD, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average Rn with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
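For the special case of equal Monte Carlo weights, the marginalized counterpart of the Poisson likelihood has a simple closed form, equivalent to a negative binomial. The sketch below covers that equal-weight case only (a flat prior on the true MC mean is assumed; the general weighted case involving the Lauricella function is not attempted here):

```python
import math

def poisson(d, lam):
    """Standard Poisson pmf, the infinite-MC-statistics limit."""
    return math.exp(-lam) * lam**d / math.factorial(d)

def mc_marginal_pmf(d, k_mc, w):
    """Probability of observing d data events given k_mc unweighted MC
    events of weight w each, marginalized over the unknown true MC mean
    (flat prior assumed).  Equivalent to a negative binomial with
    r = k_mc + 1 and success probability 1/(1+w).  Computed in log
    space for numerical stability."""
    log_p = (d * math.log(w)
             + math.lgamma(d + k_mc + 1)
             - math.lgamma(d + 1) - math.lgamma(k_mc + 1)
             - (d + k_mc + 1) * math.log(1.0 + w))
    return math.exp(log_p)
```

As k_mc grows at fixed lambda = w * k_mc, the distribution approaches Poisson(lambda), recovering the infinite-statistics limit mentioned in the abstract; for small k_mc it is visibly wider than the Poisson, which is exactly the extra MC uncertainty.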
NASA Astrophysics Data System (ADS)
Macías-Díaz, J. E.
2018-06-01
In this work, we investigate numerically a model governed by a multidimensional nonlinear wave equation with damping and fractional diffusion. The governing partial differential equation considers the presence of Riesz space-fractional derivatives of orders in (1, 2], and homogeneous Dirichlet boundary data are imposed on a closed and bounded spatial domain. The model under investigation possesses an energy function which is preserved in the undamped regime. In the damped case, we establish the property of energy dissipation of the model using arguments from functional analysis. Motivated by these results, we propose an explicit finite-difference discretization of our fractional model based on the use of fractional centered differences. Associated to our discrete model, we also propose discretizations of the energy quantities. We establish that the discrete energy is conserved in the undamped regime, and that it dissipates in the damped scenario. Among the most important numerical features of our scheme, we show that the method has a consistency of second order, that it is stable and that it has a quadratic order of convergence. Some one- and two-dimensional simulations are shown in this work to illustrate the fact that the technique is capable of preserving the discrete energy in the undamped regime. For the sake of convenience, we provide a Matlab implementation of our method for the one-dimensional scenario.
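The fractional centered differences underlying the scheme can be illustrated through their coefficient recurrence, which is an algebraic rearrangement of the usual ratio-of-Gamma definition; for alpha = 2 the weights collapse to the classical three-point Laplacian stencil.

```python
import math

def fc_coefficients(alpha, m):
    """Fractional centered difference coefficients g_0..g_m for the
    fractional Laplacian of order alpha in (1, 2]:
        (-Delta)^{alpha/2} u(x) ~ h^{-alpha} * sum_{k=-m..m} g_{|k|} u(x - k h).
    The closed form is g_k = (-1)^k Gamma(alpha+1) /
    (Gamma(alpha/2 - k + 1) * Gamma(alpha/2 + k + 1)); the pole-free
    recurrence below, g_{k+1} = g_k (k - alpha/2) / (alpha/2 + k + 1),
    follows from the Gamma functional equation."""
    g = [math.gamma(alpha + 1.0) / math.gamma(alpha / 2.0 + 1.0) ** 2]
    for k in range(m):
        g.append(g[-1] * (k - alpha / 2.0) / (alpha / 2.0 + k + 1.0))
    return g
```

For 1 < alpha < 2 the weights decay like k^(-1-alpha), so the discrete operator is genuinely nonlocal, in contrast to the alpha = 2 case where all weights beyond |k| = 1 vanish.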
Treatment of multiple recessions by means of a collagen matrix: a case series.
Schlee, Markus; Lex, Maria; Rathe, Florian; Kasaj, Adrian; Sader, Robert
2014-01-01
This case series evaluated the use of a collagen matrix with a coronally advanced flap procedure for the treatment of multiple recession defects. Fifteen patients with a total of 80 recession defects were included. Root coverage was 85% ± 13% at 6 months and 81% ± 22% at 12 months. Complete root coverage was achieved in 60% of the sites after 6 months and in 56% after 12 months. The percentage of sites with thick gingival morphotype increased significantly. The results indicated that the collagen matrix may be a useful alternative to the connective tissue graft in the treatment of multiple recession defects.
17 CFR 20.5 - Series S filings.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 17 Commodity and Securities Exchanges 1 2012-04-01 2012-04-01 false Series S filings. 20.5 Section... FOR PHYSICAL COMMODITY SWAPS § 20.5 Series S filings. (a) 102S filing. (1) When a counterparty... 102S filing only once for each counterparty, even if such persons at various times have multiple...
Introduction and application of the multiscale coefficient of variation analysis.
Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh
2017-10-01
Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with corresponding MATLAB implementation and code. Next, we present a simulation study testing the new analysis using time series generated by ARFIMA models that span white noise, short-term and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
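The statistic described above can be sketched with a single window size (the published analysis combines several window sizes and makes further methodological choices; both the fixed size and the non-overlapping windows here are simplifications):

```python
import numpy as np

def mscv(x, window=10):
    """Multiscale coefficient of variation sketch: mean absolute distance
    between windowed CV estimates and the overall CV of the series."""
    x = np.asarray(x, dtype=float)
    cv_global = x.std() / x.mean()
    local = [x[i:i + window].std() / x[i:i + window].mean()
             for i in range(0, len(x) - window + 1, window)]
    return float(np.mean(np.abs(np.array(local) - cv_global)))
```

Extending the sketch toward the published analysis amounts to looping the window size over a set of scales and aggregating the distances.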
On the Stationarity of Multiple Autoregressive Approximants: Theory and Algorithms
1976-08-01
Hannan and Terrell (1972) consider problems of a similar nature. Efficient estimates Â(1), ..., Â(p) of A(1), ..., A(p) ... "Autoregressive model fitting for control," Ann. Inst. Statist. Math., 23, 163-180. Hannan, E. J. (1970), Multiple Time Series, New York: John Wiley. Hannan, E. J. and Terrell, R. D. (1972), "Time series regression with linear constraints," International Economic Review, 13, 189-200. Masani, P.
A PC-based telemetry system for acquiring and reducing data from multiple PCM streams
NASA Astrophysics Data System (ADS)
Simms, D. A.; Butterfield, C. P.
1991-07-01
The Solar Energy Research Institute's (SERI) Wind Research Program is using Pulse Code Modulation (PCM) Telemetry Data-Acquisition Systems to study horizontal-axis wind turbines. Many PCM systems are combined for use in test installations that require accurate measurements from a variety of different locations. SERI has found them ideal for data-acquisition from multiple wind turbines and meteorological towers in wind parks. A major problem has been in providing the capability to quickly combine and examine incoming data from multiple PCM sources in the field. To solve this problem, SERI has developed a low-cost PC-based PCM Telemetry Data-Reduction System (PC-PCM System) to facilitate quick, in-the-field multiple-channel data analysis. The PC-PCM System consists of two basic components. First, PC-compatible hardware boards are used to decode and combine multiple PCM data streams. Up to four hardware boards can be installed in a single PC, which provides the capability to combine data from four PCM streams directly to PC disk or memory. Each stream can have up to 62 data channels. Second, a software package written for use under DOS was developed to simplify data-acquisition control and management. The software, called the Quick-Look Data Management Program, provides a quick, easy-to-use interface between the PC and multiple PCM data streams. The Quick-Look Data Management Program is a comprehensive menu-driven package used to organize, acquire, process, and display information from incoming PCM data streams. The paper describes both hardware and software aspects of the SERI PC-PCM system, concentrating on features that make it useful in an experiment test environment to quickly examine and verify incoming data from multiple PCM streams. Also discussed are problems and techniques associated with PC-based telemetry data-acquisition, processing, and real-time display.
Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro
2017-04-01
We present an efficient implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of the Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two- and three-dimensional electrokinetic flows in simple or complex geometries. The I2SPH's accuracy and convergence are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.
Positivity results for indefinite sublinear elliptic problems via a continuity argument
NASA Astrophysics Data System (ADS)
Kaufmann, U.; Ramos Quoirin, H.; Umezu, K.
2017-10-01
We establish a positivity property for a class of semilinear elliptic problems involving indefinite sublinear nonlinearities. Namely, we show that any nontrivial nonnegative solution is positive for a class of problems to which the strong maximum principle does not apply. Our approach is based on a continuity argument combined with variational techniques, the sub- and supersolutions method and some a priori bounds. Both Dirichlet and Neumann homogeneous boundary conditions are considered. As a byproduct, we deduce some existence and uniqueness results. Finally, as an application, we derive some positivity results for indefinite concave-convex type problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlenko, V N; Potapov, D K
2015-09-30
This paper is concerned with the existence of semiregular solutions to the Dirichlet problem for an equation of elliptic type with a discontinuous nonlinearity, in the case when the differential operator is not assumed to be formally self-adjoint. Theorems on the existence of semiregular (positive and negative) solutions for the problem under consideration are given, and a principle of upper and lower solutions giving the existence of semiregular solutions is established. For positive values of the spectral parameter, elliptic spectral problems with discontinuous nonlinearities are shown to have nontrivial semiregular (positive and negative) solutions. Bibliography: 32 titles.
Regular Inversion of the Divergence Operator with Dirichlet Boundary Conditions on a Polygon,
1987-04-01
Second-Order Two-Sided Estimates in Nonlinear Elliptic Problems
NASA Astrophysics Data System (ADS)
Cianchi, Andrea; Maz'ya, Vladimir G.
2018-05-01
Best possible second-order regularity is established for solutions to p-Laplacian type equations with p ∈ (1, ∞) and a square-integrable right-hand side. Our results provide a nonlinear counterpart of the classical L²-coercivity theory for linear problems, which is missing in the existing literature. Both local and global estimates are obtained. The latter apply to solutions to either Dirichlet or Neumann boundary value problems. Minimal regularity on the boundary of the domain is required, although our conclusions are new even for smooth domains. If the domain is convex, no regularity of its boundary is needed at all.
NASA Astrophysics Data System (ADS)
Palombi, Filippo; Toti, Simona
2015-05-01
Approximate weak solutions of the Fokker-Planck equation represent a useful tool to analyze the equilibrium fluctuations of birth-death systems, as they provide a quantitative knowledge lying in between numerical simulations and exact analytic arguments. In this paper, we adapt the general mathematical formalism known as the Ritz-Galerkin method for partial differential equations to the Fokker-Planck equation with time-independent polynomial drift and diffusion coefficients on the simplex. Then, we show how the method works in two examples, namely the binary and multi-state voter models with zealots.
Flattening maps for the visualization of multibranched vessels.
Zhu, Lei; Haker, Steven; Tannenbaum, Allen
2005-02-01
In this paper, we present two novel algorithms which produce flattened visualizations of branched physiological surfaces, such as vessels. The first approach is a conformal mapping algorithm based on the minimization of two Dirichlet functionals. From a triangulated representation of vessel surfaces, we show how the algorithm can be implemented using a finite element technique. The second method is an algorithm which adjusts the conformal mapping to produce a flattened representation of the original surface while preserving areas. This approach employs the theory of optimal mass transport. Furthermore, a new way of extracting center lines for vessel fly-throughs is provided.
2010-04-27
Dirichlet boundary data D_P̃(x, y) are prescribed on the entire plane P̃. One can then solve the following boundary value problem in the half space below P̃: Δw − s²w ... The resulting wave field was intended to be a plane wave when reaching the bottom side of the prism of Figure 1, where measurements were conducted. A visual inspection of the output experimental data revealed, however, that the initializing wave field was not in fact a plane wave.
Gorlin-Goltz syndrome: A series of three cases.
Patankar, Amod P; Kshirsagar, Rajesh A; Dugal, Arun; Mishra, Akshay; Ram, Hari
2014-01-01
The Gorlin-Goltz syndrome (GGS) is also known as nevoid basal cell carcinoma syndrome. It is characterized by multiple keratocystic odontogenic tumors (KCOTs) in the jaw, multiple basal cell nevi carcinomas and skeletal abnormities. The syndrome may be diagnosed early by a dentist during the routine radiographic exams in the first decade of life, since the KCOTs are usually one of the first manifestations of the syndrome. This article reports the series of 3 cases, emphasizing its clinical and radiographic manifestations of GGS.
Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Hou, Zhangshuan; Meng, Da
2016-07-17
In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches mainly focus on maintaining the inter-dependency between multiple geographically related areas. These approaches are applied to cross-correlated load time series as well as their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.
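The ARIMA stage of the pipeline can be illustrated with a bare-bones least-squares AR(p) fit and recursive multi-step forecast. This is a simplification of the sketch, not the study's model: no differencing or MA terms, and no cross-area correlation structure.

```python
import numpy as np

def fit_ar(y, p=2):
    """Least-squares AR(p) fit: regress y_t on an intercept and its
    p most recent lags."""
    Y = y[p:]
    X = np.column_stack([y[p - j - 1:len(y) - j - 1] for j in range(p)])
    X = np.c_[np.ones(len(Y)), X]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta

def forecast(y, beta, steps=24):
    """Recursive multi-step forecast: feed each prediction back in as
    the newest lag."""
    hist = list(y)
    p = len(beta) - 1
    out = []
    for _ in range(steps):
        x = [1.0] + [hist[-j - 1] for j in range(p)]
        nxt = float(np.dot(beta, x))
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)
```

In the multi-area setting described above, one such model per area would be combined with a cross-area decorrelation step (e.g. PCA) applied to the residuals.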
NASA Technical Reports Server (NTRS)
Lieneweg, Udo (Inventor)
1988-01-01
A system is provided for use with wafers that include multiple integrated circuits that include two conductive layers in contact at multiple interfaces. Contact chains are formed beside the integrated circuits, each contact chain formed of the same two layers as the circuits, in the form of conductive segments alternating between the upper and lower layers and with the ends of the segments connected in series through interfaces. A current source passes a current through the series-connected segments, by way of a pair of current tabs connected to opposite ends of the series of segments. While the current flows, voltage measurements are taken between each of a plurality of pairs of voltage tabs, the two tabs of each pair connected to opposite ends of an interface that lies along the series-connected segments. A plot of interface conductances on a normal probability chart enables prediction of the yield of good integrated circuits from the wafer.
NASA Technical Reports Server (NTRS)
Lieneweg, U. (Inventor)
1986-01-01
A system is provided for use with wafers that include multiple integrated circuits that include two conductive layers in contact at multiple interfaces. Contact chains are formed beside the integrated circuits, each contact chain formed of the same two layers as the circuits, in the form of conductive segments alternating between the upper and lower layers and with the ends of the segments connected in series through interfaces. A current source passes a current through the series-connected segments, by way of a pair of current tabs connected to opposite ends of the series of segments. While the current flows, voltage measurements are taken between each of a plurality of pairs of voltage tabs, the two tabs of each pair connected to opposite ends of an interface that lies along the series-connected segments. A plot of interface conductances on a normal probability chart enables prediction of the yield of good integrated circuits from the wafer.
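The measurement and yield logic described in these two records reduces to a small computation. The threshold-based yield model below is an illustrative stand-in for the normal-probability-chart extrapolation used in the patent; the hard conductance threshold and the independence assumption across interfaces are mine, not the source's.

```python
import numpy as np

def interface_conductances(current, voltages):
    """Four-terminal measurement: one known current through the whole
    chain, a voltage reading across each interface; G_i = I / V_i."""
    return current / np.asarray(voltages, dtype=float)

def predicted_chip_yield(conductances, g_min, interfaces_per_chip):
    """Crude yield model: an interface is 'good' if G >= g_min, and the
    interfaces of one circuit are assumed independent, so chip yield is
    the per-interface good fraction raised to the interface count."""
    p_good = float(np.mean(conductances >= g_min))
    return p_good ** interfaces_per_chip
```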
A systems approach for analysis of high content screening assay data with topic modeling.
Bisgin, Halil; Chen, Minjun; Wang, Yuping; Kelly, Reagan; Fang, Hong; Xu, Xiaowei; Tong, Weida
2013-01-01
High Content Screening (HCS) has become an important tool for toxicity assessment, partly due to its advantage of handling multiple measurements simultaneously. This approach has provided insight and contributed to the understanding of systems biology at the cellular level. To fully realize this potential, the simultaneously measured multiple endpoints from a live cell should be considered in a probabilistic relationship to assess the cell's condition in response to stress from a treatment, which poses a great challenge to extract hidden knowledge and relationships from these measurements. In this work, we applied a text mining method, Latent Dirichlet Allocation (LDA), to analyze cellular endpoints from in vitro HCS assays and related the findings to in vivo histopathological observations. We measured multiple HCS assay endpoints for 122 drugs. Since LDA requires the data to be represented in document-term format, we first converted the continuous values of the measurements to word frequencies that can be processed by the text mining tool. For each of the drugs, we generated a document for each of the 4 time points. Thus, we ended up with 488 documents (drug-hour), each having different values for the 10 endpoints, which are treated as words. We extracted three topics using LDA and examined these to identify diagnostic topics for 45 common drugs also present in in vivo experiments from the Japanese Toxicogenomics Project (TGP), observing their necrosis findings at 6 and 24 hours after treatment. We found that assay endpoints assigned to particular topics were in concordance with the histopathology observed. Drugs showing necrosis at 6 hours were linked to severe damage events such as Steatosis, DNA Fragmentation, Mitochondrial Potential, and Lysosome Mass. DNA Damage and Apoptosis were associated with drugs causing necrosis at 24 hours, suggesting an interplay of the two pathways in these drugs. Drugs with no sign of necrosis were related to the Cell Loss and Nuclear Size assays, which is suggestive of hepatocyte regeneration. The evidence from this study suggests that topic modeling with LDA can enable us to interpret relationships of endpoints of in vitro assays along with an in vivo histological finding, necrosis. Effectiveness of this approach may add substantially to our understanding of systems biology.
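The document-term construction described above (drug-hour documents, binned endpoint values as words) maps directly onto an off-the-shelf LDA implementation. The counts below are synthetic stand-ins for the HCS data, not the study's measurements:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in: 8 "documents" (drug-hour combinations) x 6 "words"
# (assay endpoints), with counts obtained by binning continuous
# endpoint values, mirroring the discretization step described above.
rng = np.random.default_rng(0)
counts = np.vstack([
    rng.multinomial(50, [0.5, 0.3, 0.1, 0.05, 0.03, 0.02], size=4),  # damage-like
    rng.multinomial(50, [0.02, 0.03, 0.05, 0.1, 0.3, 0.5], size=4),  # regeneration-like
])

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)
doc_topics = lda.transform(counts)   # per-document topic mixtures
```

`lda.components_` then gives the per-topic endpoint loadings, the quantity one would inspect for concordance with histopathology.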
Deconstructing Calculation Methods, Part 3: Multiplication
ERIC Educational Resources Information Center
Thompson, Ian
2008-01-01
In this third of a series of four articles, the author deconstructs the primary national strategy's approach to written multiplication. The approach to multiplication, as set out on pages 12 to 15 of the primary national strategy's "Guidance paper" "Calculation" (DfES, 2007), is divided into six stages: (1) mental…
Bifurcation of solutions to Hamiltonian boundary value problems
NASA Astrophysics Data System (ADS)
McLachlan, R. I.; Offen, C.
2018-06-01
A bifurcation is a qualitative change in a family of solutions to an equation produced by varying parameters. In contrast to the local bifurcations of dynamical systems that are often related to a change in the number or stability of equilibria, bifurcations of boundary value problems are global in nature and may not be related to any obvious change in dynamical behaviour. Catastrophe theory is a well-developed framework which studies the bifurcations of critical points of functions. In this paper we study the bifurcations of solutions of boundary-value problems for symplectic maps, using the language of (finite-dimensional) singularity theory. We associate certain such problems with a geometric picture involving the intersection of Lagrangian submanifolds, and hence with the critical points of a suitable generating function. Within this framework, we then study the effect of three special cases: (i) some common boundary conditions, such as Dirichlet boundary conditions for second-order systems, restrict the possible types of bifurcations (for example, in generic planar systems only the A-series beginning with folds and cusps can occur); (ii) integrable systems, such as planar Hamiltonian systems, can exhibit a novel periodic pitchfork bifurcation; and (iii) systems with Hamiltonian symmetries or reversing symmetries can exhibit restricted bifurcations associated with the symmetry. This approach offers an alternative to the analysis of critical points in function spaces, typically used in the study of bifurcation of variational problems, and opens the way to the detection of more exotic bifurcations than the simple folds and cusps that are often found in examples.
Backenroth, Daniel; He, Zihuai; Kiryluk, Krzysztof; Boeva, Valentina; Pethukova, Lynn; Khurana, Ekta; Christiano, Angela; Buxbaum, Joseph D; Ionita-Laza, Iuliana
2018-05-03
We describe a method based on a latent Dirichlet allocation model for predicting functional effects of noncoding genetic variants in a cell-type- and/or tissue-specific way (FUN-LDA). Using this unsupervised approach, we predict tissue-specific functional effects for every position in the human genome in 127 different tissues and cell types. We demonstrate the usefulness of our predictions by using several validation experiments. Using eQTL data from several sources, including the GTEx project, Geuvadis project, and TwinsUK cohort, we show that eQTLs in specific tissues tend to be most enriched among the predicted functional variants in relevant tissues in Roadmap. We further show how these integrated functional scores can be used for (1) deriving the most likely cell or tissue type causally implicated for a complex trait by using summary statistics from genome-wide association studies and (2) estimating a tissue-based correlation matrix of various complex traits. We found large enrichment of heritability in functional components of relevant tissues for various complex traits, and FUN-LDA yielded higher enrichment estimates than existing methods. Finally, using experimentally validated functional variants from the literature and variants possibly implicated in disease by previous studies, we rigorously compare FUN-LDA with state-of-the-art functional annotation methods and show that FUN-LDA has better prediction accuracy and higher resolution than these methods. In particular, our results suggest that tissue- and cell-type-specific functional prediction methods tend to have substantially better prediction accuracy than organism-level prediction methods. Scores for each position in the human genome and for each ENCODE and Roadmap tissue are available online (see Web Resources). Copyright © 2018 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
On the Aharonov-Bohm Operators with Varying Poles: The Boundary Behavior of Eigenvalues
NASA Astrophysics Data System (ADS)
Noris, Benedetta; Nys, Manon; Terracini, Susanna
2015-11-01
We consider a magnetic Schrödinger operator with magnetic field concentrated at one point (the pole) of a domain and half integer circulation, and we focus on the behavior of Dirichlet eigenvalues as functions of the pole. Although the magnetic field vanishes almost everywhere, it is well known that it affects the operator at the spectral level (the Aharonov-Bohm effect, Phys Rev (2) 115:485-491, 1959). Moreover, the numerical computations performed in (Bonnaillie-Noël et al., Anal PDE 7(6):1365-1395, 2014; Noris and Terracini, Indiana Univ Math J 59(4):1361-1403, 2010) show a rather complex behavior of the eigenvalues as the pole varies in a planar domain. In this paper, in continuation of the analysis started in (Bonnaillie-Noël et al., Anal PDE 7(6):1365-1395, 2014; Noris and Terracini, Indiana Univ Math J 59(4):1361-1403, 2010), we analyze the relation between the variation of the eigenvalue and the nodal structure of the associated eigenfunctions. We deal with planar domains with Dirichlet boundary conditions and we focus on the case when the singular pole approaches the boundary of the domain: then, the operator loses its singular character and the k-th magnetic eigenvalue converges to that of the standard Laplacian. We can predict both the rate of convergence and whether the convergence happens from above or from below, in relation with the number of nodal lines of the k-th eigenfunction of the Laplacian. The proof relies on the variational characterization of eigenvalues, together with a detailed asymptotic analysis of the eigenfunctions, based on an Almgren-type frequency formula for magnetic eigenfunctions and on the blow-up technique.
Extant ape dental topography and its implications for reconstructing the emergence of early Homo.
Berthaume, Michael A; Schroer, Kes
2017-11-01
Dental topography reflects diet accurately in several extant and extinct mammalian clades. However, dental topographic dietary reconstructions have high success rates only when closely related taxa are compared. Given the dietary breadth that exists among extant apes and likely existed among fossil hominins, dental topographic values from many species and subspecies of great apes are necessary for making dietary inferences about the hominin fossil record. Here, we present the results of one metric of dental topography, Dirichlet normal energy (DNE), for seven groups of great apes (Pongo pygmaeus pygmaeus, Pan paniscus, Pan troglodytes troglodytes and schweinfurthii, Gorilla gorilla gorilla, Gorilla beringei graueri and beringei). Dirichlet normal energy was inadequate at differentiating folivores from frugivores, but was adequate at predicting which groups had more fibrous diets among sympatric African apes. Character displacement analyses confirmed there is substantial dental topographic and relative molar size (M 1 :M 2 ratio; length, width, and area) divergence in sympatric apes when compared to their allopatric counterparts, but character displacement is only present in relative molar size when DNE is also considered. Presence of character displacement is likely due to indirect competition over similar food resources. Assuming similar ecological conditions in the Plio-Pleistocene, the derived masticatory apparatuses of the robust australopiths and early Homo may be due to indirect competition over dietary resources between the taxa, causing dietary niche partitioning. Our results imply that dental topography cannot be used to predict dietary categories in fossil hominins without consideration of ecological factors, such as dietary and geographic overlap. In addition, our results may open new avenues for understanding the community compositions of early hominins and the formation of specific ecological niches among hominin taxa. Copyright © 2017 Elsevier Ltd. 
All rights reserved.
Resolving an ostensible inconsistency in calculating the evaporation rate of sessile drops.
Chini, S F; Amirfazli, A
2017-05-01
This paper resolves an ostensible inconsistency in the literature in calculating the evaporation rate for sessile drops in a quiescent environment. The earlier models in the literature have shown that adapting the evaporation flux model for a suspended spherical drop to calculate the evaporation rate of a sessile drop needs a correction factor; the correction factor was shown to be a function of the drop contact angle, i.e. f(θ). However, there seemed to be a problem as none of the earlier models explicitly or implicitly mentioned the evaporation flux variations along the surface of a sessile drop. The more recent evaporation models include this variation using an electrostatic analogy, i.e. solving the Laplace equation (steady-state continuity) in a domain with known boundary values, known as the Dirichlet problem for Laplace's equation. The challenge is that the calculated evaporation rates using the earlier models seemed to differ from those of the recent models (note that both types of models were validated in the literature by experiments). We have reinvestigated the recent models and found that the mathematical simplifications in solving the Dirichlet problem in toroidal coordinates have created the inconsistency. We also propose a closed-form approximation for f(θ) which is valid over a wide range of contact angles, i.e. 8°≤θ≤131°. Using the proposed model, it was shown theoretically that the evaporation rate in the CWA (constant wetted area) mode is faster than the evaporation rate in the CCA (constant contact angle) mode for a sessile drop. Copyright © 2016 Elsevier B.V. All rights reserved.
What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.
Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A
The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
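The abstract above gives no pseudocode, but the core MAP-DP idea (greedy maximum a posteriori assignments in a Dirichlet process mixture, with a log α penalty for opening a new cluster, in the spirit of the Chinese restaurant process) can be sketched in a deliberately simplified 1-D form. The fixed variance, the flat data term for a new cluster, and all parameter values below are illustrative assumptions, not the authors' algorithm:

```python
import math

def map_dp_1d(xs, alpha=0.5, sigma=1.0, n_iter=10):
    """Toy MAP-DP-style sweep for 1-D data: each point joins the existing
    cluster maximizing a CRP-weighted Gaussian log-score, or opens a new
    cluster when the log(alpha) penalty wins. (Simplified sketch, not the
    paper's algorithm: fixed sigma, flat new-cluster data term.)"""
    z = [0] * len(xs)                          # cluster label per point
    for _ in range(n_iter):
        for i, x in enumerate(xs):
            z[i] = -1                          # detach point i
            ks = sorted(set(k for k in z if k >= 0))
            best_k, best_lp = None, -math.inf
            for k in ks:
                members = [xs[j] for j in range(len(xs)) if z[j] == k]
                mu = sum(members) / len(members)
                # CRP weight log n_k plus Gaussian log-likelihood at the mean
                lp = math.log(len(members)) - 0.5 * ((x - mu) / sigma) ** 2
                if lp > best_lp:
                    best_k, best_lp = k, lp
            if math.log(alpha) > best_lp:      # open a fresh cluster
                best_k = (max(ks) + 1) if ks else 0
            z[i] = best_k
        remap = {k: r for r, k in enumerate(sorted(set(z)))}
        z = [remap[k] for k in z]              # keep labels compact
    return z

labels = map_dp_1d([0.0, 0.1, 0.2, 5.0, 5.1], alpha=0.5)
```

On well-separated data the sweep stabilizes in a few iterations; the log α penalty plays the role that a fixed K plays in K-means, trading goodness of fit against the number of clusters, which is how the number of clusters comes out of the data rather than being fixed a priori.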
Casimir effect due to a single boundary as a manifestation of the Weyl problem
NASA Astrophysics Data System (ADS)
Kolomeisky, Eugene B.; Straley, Joseph P.; Langsjoen, Luke S.; Zaidi, Hussain
2010-09-01
The Casimir self-energy of a boundary is ultraviolet-divergent. In many cases, the divergences can be eliminated by methods such as zeta-function regularization or through physical arguments (ultraviolet transparency of the boundary would provide a cutoff). Using the example of a massless scalar field theory with a single Dirichlet boundary, we explore the relationship between such approaches, with the goal of better understanding of the origin of the divergences. We are guided by the insight due to Dowker and Kennedy (1978 J. Phys. A: Math. Gen. 11 895) and Deutsch and Candelas (1979 Phys. Rev. D 20 3063) that the divergences represent measurable effects that can be interpreted with the aid of the theory of the asymptotic distribution of eigenvalues of the Laplacian discussed by Weyl. In many cases, the Casimir self-energy is the sum of cutoff-dependent (Weyl) terms having a geometrical origin, and an 'intrinsic' term that is independent of the cutoff. The Weyl terms make a measurable contribution to the physical situation even when regularization methods succeed in isolating the intrinsic part. Regularization methods fail when the Weyl terms and intrinsic parts of the Casimir effect cannot be clearly separated. Specifically, we demonstrate that the Casimir self-energy of a smooth boundary in two dimensions is a sum of two Weyl terms (exhibiting quadratic and logarithmic cutoff dependence), a geometrical term that is independent of cutoff and a non-geometrical intrinsic term. As by-products, we resolve the puzzle of the divergent Casimir force on a ring and correct the sign of the coefficient of linear tension of the Dirichlet line predicted in earlier treatments.
Sing, David C; Metz, Lionel N; Dudli, Stefan
2017-06-01
Retrospective review. To identify the top 100 spine research topics. Recent advances in "machine learning," or computers learning without explicit instructions, have yielded broad technological advances. Topic modeling algorithms can be applied to large volumes of text to discover quantifiable themes and trends. Abstracts were extracted from the National Library of Medicine PubMed database from five prominent peer-reviewed spine journals (European Spine Journal [ESJ], The Spine Journal [SpineJ], Spine, Journal of Spinal Disorders and Techniques [JSDT], Journal of Neurosurgery: Spine [JNS]). Each abstract was entered into a latent Dirichlet allocation model specified to discover 100 topics, resulting in each abstract being assigned a probability of belonging in a topic. Topics were named using the five most frequently appearing terms within that topic. Significance of increasing ("hot") or decreasing ("cold") topic popularity over time was evaluated with simple linear regression. From 1978 to 2015, 25,805 spine-related research articles were extracted and classified into 100 topics. The top two most published topics were "clinical, surgeons, guidelines, information, care" (n = 496 articles) and "pain, back, low, treatment, chronic" (424). The top two hot trends were "disc, cervical, replacement, level, arthroplasty" (+0.05%/yr, P < 0.001) and "minimally, invasive, approach, technique" (+0.05%/yr, P < 0.001). By journal, the most published topics were ESJ-"operative, surgery, postoperative, underwent, preoperative"; SpineJ-"clinical, surgeons, guidelines, information, care"; Spine-"pain, back, low, treatment, chronic"; JNS-"tumor, lesions, rare, present, diagnosis"; JSDT-"cervical, anterior, plate, fusion, ACDF." Topics discovered through latent Dirichlet allocation modeling represent unbiased, meaningful themes relevant to spine care. Topic dynamics can provide historical context and direction for future research for aspiring investigators and trainees interested in spine careers. Please explore https://singdc.shinyapps.io/spinetopics. Level of Evidence: N/A.
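The latent Dirichlet allocation step described above can be illustrated with a minimal collapsed Gibbs sampler. This is a toy sketch of the modeling idea only (the study fit 100 topics over 25,805 abstracts); the tiny corpus, K = 2, and the hyperparameters below are invented for illustration:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, K=2, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Minimal collapsed Gibbs sampler for LDA: resample each token's topic
    with probability proportional to (n_dk + alpha)(n_kw + beta)/(n_k + V*beta)."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    z = [[rng.randrange(K) for _ in d] for d in docs]
    ndk = [[0] * K for _ in docs]                  # doc-topic counts
    nkw = [defaultdict(int) for _ in range(K)]     # topic-word counts
    nk = [0] * K                                   # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta)
                           / (nk[t] + V * beta) for t in range(K)]
                k = rng.choices(range(K), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    # "Name" each topic by its most frequent term, as the study did with five.
    top = [max(nkw[t], key=nkw[t].get) if nk[t] else None for t in range(K)]
    return z, top

# Invented toy corpus with two visible themes (surgical vs. pain terms).
docs = [["spine", "disc", "fusion", "spine"],
        ["spine", "disc", "surgery"],
        ["pain", "chronic", "pain", "back"],
        ["pain", "back", "chronic"]]
assignments, top_words = lda_gibbs(docs, K=2)
```

Real analyses would use an optimized library implementation; the sketch only makes the per-token resampling rule of collapsed Gibbs LDA concrete.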
NASA Astrophysics Data System (ADS)
Kjeldsen, Tinne Hoff; Lützen, Jesper
2015-07-01
In this paper, we discuss the history of the concept of function and emphasize in particular how problems in physics have led to essential changes in its definition and application in mathematical practices. Euler defined a function as an analytic expression, whereas Dirichlet defined it as a variable that depends in an arbitrary manner on another variable. The change was required when mathematicians discovered that analytic expressions were not sufficient to represent physical phenomena such as the vibration of a string (Euler) and heat conduction (Fourier and Dirichlet). The introduction of generalized functions or distributions is shown to stem partly from the development of new theories of physics such as electrical engineering and quantum mechanics that led to the use of improper functions such as the delta function that demanded a proper foundation. We argue that the development of student understanding of mathematics and its nature is enhanced by embedding mathematical concepts and theories, within an explicit-reflective framework, into a rich historical context emphasizing its interaction with other disciplines such as physics. Students recognize and become engaged with meta-discursive rules governing mathematics. Mathematics teachers can thereby teach inquiry in mathematics as it occurs in the sciences, as mathematical practice aimed at obtaining new mathematical knowledge. We illustrate such a historical teaching and learning of mathematics within an explicit and reflective framework by two examples of student-directed, problem-oriented project work following the Roskilde Model, in which the connection to physics is explicit and provides a learning space where the nature of mathematics and mathematical practices are linked to natural science.
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2010-12-01
This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case a q-Gaussian distribution can be theoretically derived as a stationary probability distribution of the multiplicative stochastic differential equation with both mutually independent multiplicative and additive noises. By using the proposed stochastic differential equation a method to evaluate a default probability under a given risk buffer is proposed.
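For intuition, the one-dimensional analogue of the construction described above can be written out (the paper treats the M-dimensional case; the symbols γ, D_M, D_A below are illustrative notation for the relaxation rate and the multiplicative/additive noise strengths, not the paper's):

```latex
% One-dimensional sketch (Ito convention, independent Wiener processes W_1, W_2):
\mathrm{d}x = -\gamma x\,\mathrm{d}t + \sqrt{2D_M}\,x\,\mathrm{d}W_1 + \sqrt{2D_A}\,\mathrm{d}W_2 .
% The stationary Fokker--Planck equation
0 = \frac{\mathrm{d}}{\mathrm{d}x}\!\left[\gamma x\,p(x)
      + \frac{\mathrm{d}}{\mathrm{d}x}\bigl((D_A + D_M x^2)\,p(x)\bigr)\right]
% integrates to the q-Gaussian stationary density
p(x) \propto \bigl(D_A + D_M x^2\bigr)^{-\left(1 + \frac{\gamma}{2D_M}\right)},
\qquad q = \frac{\gamma + 4D_M}{\gamma + 2D_M}.
```

As D_M → 0 this gives q → 1, recovering an ordinary Gaussian, consistent with purely additive noise.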
Testing and analysis of flat and curved panels with multiple cracks
NASA Technical Reports Server (NTRS)
Broek, David; Jeong, David Y.; Thomson, Douglas
1994-01-01
An experimental and analytical investigation of multiple cracking in various types of test specimens is described in this paper. The testing phase is comprised of a flat unstiffened panel series and curved stiffened and unstiffened panel series. The test specimens contained various configurations for initial damage. Static loading was applied to these specimens until ultimate failure, while loads and crack propagation were recorded. This data provides the basis for developing and validating methodologies for predicting linkup of multiple cracks, progression to failure, and overall residual strength. The results from twelve flat coupon and ten full scale curved panel tests are presented. In addition, an engineering analysis procedure was developed to predict multiple crack linkup. Reasonable agreement was found between predictions and actual test results for linkup and residual strength for both flat and curved panels. The results indicate that an engineering analysis approach has the potential to quantitatively assess the effect of multiple cracks on the arrest capability of an aircraft fuselage structure.
Kapteyn series arising in radiation problems
NASA Astrophysics Data System (ADS)
Lerche, I.; Tautz, R. C.
2008-01-01
In discussing radiation from multiple point charges or magnetic dipoles, moving in circles or ellipses, a variety of Kapteyn series of the second kind arises. Some of the series have been known in closed form for a hundred years or more, others appear not to be available to analytic persuasion. This paper shows how 12 such generic series can be developed to produce either closed analytic expressions or integrals that are not analytically tractable. In addition, the method presented here may be of benefit when one has other Kapteyn series of the second kind to consider, thereby providing an additional reason to consider such series anew.
ERIC Educational Resources Information Center
Bolman, Lee G.; Deal, Terrence E.
This book shows how educators can become more versatile managers and more artistic leaders. In part 1, chapter 1 shows why reframing--the use of multiple lenses--is vital to effective leadership and management. It introduces the four basic lenses for organizational analysis--the structural, human resource, political, and symbolic frames--and show…
Multiple Sclerosis, Personal Stories | NIH MedlinePlus the Magazine
Feature: Multiple Sclerosis Personal Stories: Nicole Lemelle, Iris Young, Michael Anthony, John Cantú ... Better," an Internet video series that brings the story of MS to life through the eyes of ...
Cross-Sectional Time Series Designs: A General Transformation Approach.
ERIC Educational Resources Information Center
Velicer, Wayne F.; McDonald, Roderick P.
1991-01-01
The general transformation approach to time series analysis is extended to the analysis of multiple unit data by the development of a patterned transformation matrix. The procedure includes alternatives for special cases and requires only minor revisions in existing computer software. (SLD)
FY13 Annual Report: PHEV Advanced Series Gen-set Development/Demonstration Activity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambon, Paul H.
2013-12-01
The objective of this project is to integrate ORNL advancements in vehicle technologies to properly design, and size a gen-set for various vehicle applications and then simulate multiple advanced series hybrid (HEV/PHEV) vehicles with the genset models.
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
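The model-selection logic described above (fit the simple turbidity-only regression first, and move to a turbidity-streamflow multiple regression only if the extra variable improves the fit) can be sketched with ordinary least squares. The synthetic data and the bare residual-sum comparison below are illustrative assumptions; the USGS procedure involves log-space transforms, bias corrections, and formal MSPE criteria not reproduced here:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(rows, y):
    """Ordinary least squares via the normal equations; returns (coeffs, SSE).
    An intercept column is prepended automatically."""
    X = [[1.0] + list(r) for r in rows]
    k = len(X[0])
    XtX = [[sum(row[a] * row[b] for row in X) for b in range(k)] for a in range(k)]
    Xty = [sum(row[a] * yi for row, yi in zip(X, y)) for a in range(k)]
    beta = solve(XtX, Xty)
    resid = [yi - sum(b * xc for b, xc in zip(beta, row)) for row, yi in zip(X, y)]
    return beta, sum(r * r for r in resid)

# Synthetic calibration data (invented): concentration depends on both
# turbidity and streamflow, so the single-variable model underfits.
turb = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]
flow = [1.0, 1.5, 1.2, 2.0, 2.5, 2.2]
conc = [2.0 * t + 30.0 * q + 5.0 for t, q in zip(turb, flow)]
_, sse_simple = ols([[t] for t in turb], conc)
_, sse_multi = ols([[t, q] for t, q in zip(turb, flow)], conc)
```

Here the two-variable model fits the synthetic data essentially exactly while the turbidity-only model leaves a large residual sum, the situation in which the report's criteria would select the turbidity-streamflow model.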
Estimation from incomplete multinomial data. Ph.D. Thesis - Harvard Univ.
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1978-01-01
The vector of multinomial cell probabilities was estimated from incomplete data, incomplete in that it contains partially classified observations. Each such partially classified observation was observed to fall in one of two or more selected categories but was not classified further into a single category. The data were assumed to be incomplete at random. The estimation criterion was minimization of risk for quadratic loss. The estimators were the classical maximum likelihood estimate, the Bayesian posterior mode, and the posterior mean. An approximation was developed for the posterior mean. The Dirichlet, the conjugate prior for the multinomial distribution, was assumed for the prior distribution.
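For the fully classified part of such data, the Bayesian estimators named above have closed forms under the Dirichlet prior; partially classified observations can be handled with an EM-style allocation step. Both sketches below illustrate the general technique rather than reproduce the thesis's estimators (the cell sets, counts, and prior parameters are invented):

```python
def dirichlet_posterior_mean(counts, alphas):
    """Posterior mean of multinomial cell probabilities under a Dirichlet
    prior: E[p_i | n] = (n_i + alpha_i) / (N + sum(alphas))."""
    total = sum(counts) + sum(alphas)
    return [(n + a) / total for n, a in zip(counts, alphas)]

def em_partial(counts, partial, alphas, iters=50):
    """EM-style handling of partially classified observations: the E-step
    splits each partially classified count across its candidate cells in
    proportion to the current estimate; the M-step takes the Dirichlet
    posterior mode. (Sketch of the idea, not the thesis's estimators;
    with a uniform alpha = 1 prior the mode reduces to the MLE.)"""
    K = len(counts)
    p = [1.0 / K] * K
    for _ in range(iters):
        exp_counts = [float(n) for n in counts]
        for cells, m in partial:       # m observations known only to lie in `cells`
            s = sum(p[c] for c in cells)
            for c in cells:
                exp_counts[c] += m * p[c] / s
        tot = sum(exp_counts) + sum(alphas) - K
        p = [(exp_counts[c] + alphas[c] - 1.0) / tot for c in range(K)]
    return p

p_mean = dirichlet_posterior_mean([3, 5, 2], [1.0, 1.0, 1.0])
# 40 observations known only to fall in cell 0 or cell 1 (invented numbers):
p_em = em_partial([30, 10, 20], [((0, 1), 40)], [1.0, 1.0, 1.0])
```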
Improved definition of crustal magnetic anomalies for MAGSAT data
NASA Technical Reports Server (NTRS)
Brown, R. D.; Frawley, J. F.; Davis, W. M.; Ray, R. D.; Didwall, E.; Regan, R. D. (Principal Investigator)
1982-01-01
The routine correction of MAGSAT vector magnetometer data for external field effects such as the ring current and the daily variation by filtering long wavelength harmonics from the data is described. Separation of fields due to low altitude sources from those caused by high altitude sources is affected by means of dual harmonic expansions in the solution of Dirichlet's problem. This regression/harmonic filter procedure is applied on an orbit by orbit basis, and initial tests on MAGSAT data from orbit 1176 show reduction in external field residuals by 24.33 nT RMS in the horizontal component, and 10.95 nT RMS in the radial component.
Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics
Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; ...
2017-01-03
In this paper, we present a consistent implicit incompressible smoothed particle hydrodynamics (I²SPH) discretization of Navier–Stokes, Poisson–Boltzmann, and advection–diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two- and three-dimensional electrokinetic flows in simple or complex geometries. The accuracy and convergence of the consistent I²SPH are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. Lastly, the new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.
Repulsive Casimir force in Bose–Einstein Condensate
NASA Astrophysics Data System (ADS)
Mehedi Faruk, Mir; Biswas, Shovon
2018-04-01
We study the Casimir effect for a three-dimensional system of ideal free massive Bose gas in a slab geometry with Zaremba and anti-periodic boundary conditions. It is found that for these types of boundary conditions the resulting Casimir force is repulsive in nature, in contrast with the usual periodic, Dirichlet or Neumann boundary conditions, where the Casimir force is attractive (Martin and Zagrebnov 2006 Europhys. Lett. 73 15). The Casimir force under these boundary conditions also follows a power-law decay below the condensation temperature and an exponential decay above it, albeit with a positive sign, identifying the repulsive nature of the force.
A Duality Theory for Non-convex Problems in the Calculus of Variations
NASA Astrophysics Data System (ADS)
Bouchitté, Guy; Fragalà, Ilaria
2018-07-01
We present a new duality theory for non-convex variational problems, under possibly mixed Dirichlet and Neumann boundary conditions. The dual problem reads nicely as a linear programming problem, and our main result states that there is no duality gap. Further, we provide necessary and sufficient optimality conditions, and we show that our duality principle can be reformulated as a min-max result which is quite useful for numerical implementations. As an example, we illustrate the application of our method to a celebrated free boundary problem. The results were announced in Bouchitté and Fragalà (C R Math Acad Sci Paris 353(4):375-379, 2015).
Hilbert complexes of nonlinear elasticity
NASA Astrophysics Data System (ADS)
Angoshtari, Arzhang; Yavari, Arash
2016-12-01
We introduce some Hilbert complexes involving second-order tensors on flat compact manifolds with boundary that describe the kinematics and the kinetics of motion in nonlinear elasticity. We then use the general framework of Hilbert complexes to write Hodge-type and Helmholtz-type orthogonal decompositions for second-order tensors. As some applications of these decompositions in nonlinear elasticity, we study the strain compatibility equations of linear and nonlinear elasticity in the presence of Dirichlet boundary conditions and the existence of stress functions on non-contractible bodies. As an application of these Hilbert complexes in computational mechanics, we briefly discuss the derivation of a new class of mixed finite element methods for nonlinear elasticity.
Interaction of a conductive crack and of an electrode at a piezoelectric bimaterial interface
NASA Astrophysics Data System (ADS)
Onopriienko, Oleg; Loboda, Volodymyr; Sheveleva, Alla; Lapusta, Yuri
2018-06-01
The interaction of a conductive crack and an electrode at a piezoelectric bimaterial interface is studied. The bimaterial is subjected to an in-plane electrical field parallel to the interface and an anti-plane mechanical loading. The problem is formulated and reduced, via the application of sectionally analytic vector functions, to a combined Dirichlet-Riemann boundary value problem. Simple analytical expressions for the stress, the electric field, and their intensity factors, as well as for the crack faces' displacement jump, are derived. Our numerical results illustrate the proposed approach and permit us to draw some conclusions on the crack-electrode interaction.
Acoustic response of a rectangular levitator with orifices
NASA Technical Reports Server (NTRS)
El-Raheb, Michael; Wagner, Paul
1990-01-01
The acoustic response of a rectangular cavity to speaker-generated excitation through waveguides terminating at orifices in the cavity walls is analyzed. To find the effects of orifices, acoustic pressure is expressed by eigenfunctions satisfying Neumann boundary conditions as well as by those satisfying Dirichlet ones. Some of the excess unknowns can be eliminated by point constraints set over the boundary, by appeal to Lagrange undetermined multipliers. The resulting transfer matrix must be further reduced by partial condensation to the order of a matrix describing unmixed boundary conditions. If the cavity is subjected to an axial temperature dependence, the transfer matrix is determined numerically.
Recognition of a person named entity from the text written in a natural language
NASA Astrophysics Data System (ADS)
Dolbin, A. V.; Rozaliev, V. L.; Orlova, Y. A.
2017-01-01
This work is devoted to the semantic analysis of texts written in a natural language. The main goal of the research was to compare latent Dirichlet allocation and latent semantic analysis for identifying elements of the human appearance in the text. The completeness of information retrieval was chosen as the efficiency criterion for comparing the methods. However, choosing only one method proved insufficient for achieving high recognition rates. Thus, additional methods were used for finding references to the personality in the text. All these methods are based on the created information model, which represents a person's appearance.
The Calderón problem with corrupted data
NASA Astrophysics Data System (ADS)
Caro, Pedro; Garcia, Andoni
2017-08-01
We consider the inverse Calderón problem consisting of determining the conductivity inside a medium by electrical measurements on its surface. Ideally, these measurements determine the Dirichlet-to-Neumann map and, therefore, one usually assumes the data to be given by such a map. This situation corresponds to having access to infinite-precision measurements, which is totally unrealistic. In this paper, we study the Calderón problem assuming the data to contain measurement errors and provide formulas to reconstruct the conductivity and its normal derivative on the surface. Additionally, we state the rate of convergence of the method. Our approach is theoretical and has a stochastic flavour.
Effects of degeneracy and response function in a diffusion predator-prey model
NASA Astrophysics Data System (ADS)
Li, Shanbing; Wu, Jianhua; Dong, Yaying
2018-04-01
In this paper, we consider positive solutions of a diffusion predator-prey model with a degeneracy under the Dirichlet boundary conditions. We first obtain sufficient conditions for the existence of positive solutions by the Leray-Schauder degree theory, and then analyze the limiting behavior of positive solutions as the growth rate of the predator goes to infinity and as the conversion rate of the predator goes to zero, respectively. It is shown that these results for the Holling II response function (i.e. m > 0) reveal an interesting contrast with those for the classical Lotka-Volterra predator-prey model (i.e. m = 0).
Smuk, M; Carpenter, J R; Morris, T P
2017-02-06
Within epidemiological and clinical research, missing data are a common issue and often overlooked in publications. When the issue of missing observations is addressed, it is usually assumed that the missing data are 'missing at random' (MAR). This assumption should be checked for plausibility; however, it is untestable, so inferences should be assessed for robustness to departures from missing at random. We highlight the method of pattern mixture sensitivity analysis after multiple imputation, using colorectal cancer data as an example. We focus on the Dukes' stage variable, which has the highest proportion of missing observations. First, we find the probability of being in each Dukes' stage given the MAR imputed dataset. We use these probabilities in a questionnaire to elicit prior beliefs from experts on what they believe the probability would be in the missing data. The questionnaire responses are then used in a Dirichlet draw to create a Bayesian 'missing not at random' (MNAR) prior to impute the missing observations. The model of interest is applied and inferences are compared to those from the MAR imputed data. The inferences were largely insensitive to departures from MAR. Inferences under MNAR suggested a smaller association between Dukes' stage and death, though the association remained positive and with similarly low p values. We conclude by discussing the positives and negatives of our method and highlight the importance of making people aware of the need to test the MAR assumption.
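The Dirichlet-draw step described above can be sketched in a few lines: expert beliefs enter as Dirichlet parameters, one draw from that Dirichlet gives a stage distribution for the missing values, and imputation samples from it. The concentration parameters and stage labels below are hypothetical, not the study's elicited values, and in practice this sits inside a full multiple-imputation pipeline:

```python
import random

def mnar_impute(n_missing, expert_alpha, stages, seed=1):
    """One MNAR draw: sample a category distribution for the missing values
    from a Dirichlet encoding elicited expert beliefs, then impute stages
    by sampling from that draw. (Illustrative sketch only.)"""
    rng = random.Random(seed)
    gams = [rng.gammavariate(a, 1.0) for a in expert_alpha]  # Dirichlet via
    total = sum(gams)                                        # normalized Gammas
    probs = [g / total for g in gams]
    imputed = rng.choices(stages, weights=probs, k=n_missing)
    return probs, imputed

# Hypothetical elicited parameters for Dukes' stages A-D (not the study's values):
probs, imputed = mnar_impute(25, [8.0, 5.0, 4.0, 3.0], ["A", "B", "C", "D"])
```

Repeating the draw across imputed datasets propagates the expert uncertainty about the missing-data mechanism into the final inferences.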
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronfman, B. H.
Time-series analysis provides a useful tool in the evaluation of public policy outputs. It is shown that the general Box and Jenkins method, when extended to allow for multiple interrupts, enables researchers simultaneously to examine changes in drift and level of a series, and to select the best-fit model for the series. As applied to urban renewal allocations, results show significant changes in the level of the series, corresponding to changes in party control of the Executive. No support is given to the "incrementalism" hypotheses as no significant changes in drift are found.
Multifractal analysis of visibility graph-based Ito-related connectivity time series.
Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano
2016-02-01
In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by the Ito equations with multiplicative power-law noise. We show that multifractality of the connectivity time series (i.e., the series of numbers of links outgoing any node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of connectivity degree distribution that can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the procedure of visibility graph that, connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive for detecting wide "depressions" in input time series.
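The connectivity series the study analyzes comes from the natural visibility graph, which is easy to state: samples (a, x[a]) and (b, x[b]) are linked if every intermediate sample lies strictly below the straight line joining them. A minimal sketch follows (the multifractal analysis applied to the resulting degree series is not reproduced here):

```python
def visibility_degrees(x):
    """Connectivity (degree) series of the natural visibility graph:
    indices a < b are linked iff every intermediate sample c satisfies
    x[c] < x[a] + (x[b] - x[a]) * (c - a) / (b - a)."""
    n = len(x)
    deg = [0] * n
    for a in range(n):
        for b in range(a + 1, n):
            if all(x[c] < x[a] + (x[b] - x[a]) * (c - a) / (b - a)
                   for c in range(a + 1, b)):
                deg[a] += 1
                deg[b] += 1
    return deg

deg = visibility_degrees([3.0, 1.0, 2.0, 0.5, 4.0])   # → [3, 2, 4, 2, 3]
```

Note how peaks "see" more of the series than valleys; this is the mechanism by which the graph transfers structure of the values into the degree (connectivity) series.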
Detection of bifurcations in noisy coupled systems from multiple time series
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Mark S., E-mail: m.s.williamson@exeter.ac.uk; Lenton, Timothy M.
We generalize a method of detecting an approaching bifurcation in a time series of a noisy system from the special case of one dynamical variable to multiple dynamical variables. For a system described by a stochastic differential equation consisting of an autonomous deterministic part with one dynamical variable and an additive white noise term, small perturbations away from the system's fixed point will decay slower the closer the system is to a bifurcation. This phenomenon is known as critical slowing down and all such systems exhibit this decay-type behaviour. However, when the deterministic part has multiple coupled dynamical variables, the possible dynamics can be much richer, exhibiting oscillatory and chaotic behaviour. In our generalization to the multi-variable case, we find additional indicators to decay rate, such as frequency of oscillation. In the case of approaching a homoclinic bifurcation, there is no change in decay rate but there is a decrease in frequency of oscillations. The expanded method therefore adds extra tools to help detect and classify approaching bifurcations given multiple time series, where the underlying dynamics are not fully known. Our generalisation also allows bifurcation detection to be applied spatially if one treats each spatial location as a new dynamical variable. One may then determine the unstable spatial mode(s). This is also something that has not been possible with the single variable method. The method is applicable to any set of time series regardless of its origin, but may be particularly useful when anticipating abrupt changes in the multi-dimensional climate system.
PSO-MISMO modeling strategy for multistep-ahead time series prediction.
Bao, Yukun; Xiong, Tao; Hu, Zhongyi
2014-05-01
Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-size prediction horizons as in the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction. The proposed strategy has been validated with simulated and real datasets.
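For context, the baseline "direct" strategy that MISMO-type methods compete with fits one model per horizon. A minimal sketch with one-feature least-squares models (this is not the PSO-MISMO algorithm itself, which additionally lets PSO choose variable-size horizon blocks):

```python
# Sketch of the "direct" multistep strategy: one separate model per
# horizon h, each mapping y_t -> y_{t+h}. Function names are ours.

def fit_direct(y, horizon):
    """Fit one 1-D least-squares model y_{t+h} = a_h * y_t + b_h per h."""
    models = []
    for h in range(1, horizon + 1):
        xs, ys = y[:-h], y[h:]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = sum((x - t) * 0 + (x - mx) * (t - my) for x, t in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        models.append((a, my - a * mx))
    return models

def predict_direct(models, last_value):
    return [a * last_value + b for a, b in models]

y = [2.0 * t for t in range(20)]          # linear trend: y_t = 2t
models = fit_direct(y, horizon=3)
print(predict_direct(models, y[-1]))      # -> [40.0, 42.0, 44.0]
```

The iterated strategy would instead apply the h=1 model repeatedly, compounding its error over the horizon; that trade-off is what motivates MISMO-style hybrids.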
Multiple Questions Require Multiple Designs: An Evaluation of the 1981 Changes to the AFDC Program.
ERIC Educational Resources Information Center
Hedrick, Terry E.; Shipman, Stephanie L.
1988-01-01
Changes made in 1981 to the Aid to Families with Dependent Children (AFDC) program under the Omnibus Budget Reconciliation Act were evaluated. Multiple quasi-experimental designs (interrupted time series, non-equivalent comparison groups, and simple pre-post designs) used to address evaluation questions illustrate the issues faced by evaluators in…
Development of a Prototype System for Accessing Linked NCES Data. Working Paper Series.
ERIC Educational Resources Information Center
Salvucci, Sameena; Wenck, Stephen; Tyson, James
A project has been developed to advance the capabilities of the National Center for Education Statistics (NCES) to support the dissemination of linked data from multiple surveys, multiple components within a survey, and multiple time points. An essential element of this study is the development of a software prototype system to facilitate NCES…
Refined composite multiscale weighted-permutation entropy of financial time series
NASA Astrophysics Data System (ADS)
Zhang, Yongping; Shang, Pengjian
2018-04-01
For quantifying the complexity of nonlinear systems, multiscale weighted-permutation entropy (MWPE) has recently been proposed. MWPE incorporates amplitude information and has been applied to account for the multiple inherent dynamics of time series. However, MWPE may be unreliable: its estimated values fluctuate strongly under slight shifts of the data locations, and they differ significantly only for time series of different lengths. Therefore, we propose the refined composite multiscale weighted-permutation entropy (RCMWPE). Comparing RCMWPE with other methods on both synthetic data and financial time series shows that it not only inherits the advantages of MWPE but is also less sensitive to data locations, more stable, and much less dependent on the length of the time series. Moreover, we present and discuss results of the RCMWPE method on daily price return series from Asian and European stock markets. There are significant differences between Asian and European markets, and the entropy values of the Hang Seng Index (HSI) are close to but higher than those of the European markets. The reliability of the proposed RCMWPE method is supported by simulations on generated and real data. It can be applied in a variety of fields to quantify the complexity of systems over multiple scales more accurately.
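The single-scale building block that MWPE and RCMWPE extend with coarse-graining is weighted-permutation entropy: ordinal patterns of embedded windows, with each occurrence weighted by the window's variance. A sketch under our own naming (pattern length m and the variance weight follow the standard WPE definition; the multiscale and refined-composite layers are not shown):

```python
# Sketch: single-scale weighted-permutation entropy. Each length-m window
# contributes its ordinal pattern, weighted by the window variance, and
# the entropy is taken over the normalized weighted pattern frequencies.
import math

def weighted_permutation_entropy(y, m=3):
    weights = {}
    for i in range(len(y) - m + 1):
        w = y[i:i + m]
        pattern = tuple(sorted(range(m), key=lambda k: w[k]))
        mean = sum(w) / m
        var = sum((v - mean) ** 2 for v in w) / m   # weight of the window
        weights[pattern] = weights.get(pattern, 0.0) + var
    total = sum(weights.values())
    return -sum((v / total) * math.log(v / total) for v in weights.values())

print(weighted_permutation_entropy([1, 2, 3, 4, 5, 6]))  # one pattern: 0
print(weighted_permutation_entropy([1, 3, 2, 5, 4, 6]))  # two patterns: ln 2
```

A monotone series has a single ascending pattern, hence zero entropy; the alternating series splits its weight evenly between two patterns, giving ln 2.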
Writing and applications of fiber Bragg grating arrays
NASA Astrophysics Data System (ADS)
LaRochelle, Sophie; Cortes, Pierre-Yves; Fathallah, H.; Rusch, Leslie A.; Jaafar, H. B.
2000-12-01
Multiple Bragg gratings are written in a single fibre strand with accurate positioning to achieve predetermined time delays between optical channels. Applications of fibre Bragg grating arrays include encoders/decoders with series of identical gratings for optical code-division multiple access.
Comparing multi-module connections in membrane chromatography scale-up.
Yu, Zhou; Karkaria, Tishtar; Espina, Marianela; Hunjun, Manjeet; Surendran, Abera; Luu, Tina; Telychko, Julia; Yang, Yan-Ping
2015-07-20
Membrane chromatography is increasingly used for protein purification in the biopharmaceutical industry. Membrane adsorbers are often pre-assembled by manufacturers as ready-to-use modules. In large-scale protein manufacturing settings, the use of multiple membrane modules for a single batch is often required due to the large quantity of feed material. The question as to how multiple modules can be connected to achieve optimum separation and productivity has been previously approached using model proteins and mass transport theories. In this study, we compare the performance of multiple membrane modules in series and in parallel in the production of a protein antigen. Series connection was shown to provide superior separation compared to parallel connection in the context of competitive adsorption. Copyright © 2015 Elsevier B.V. All rights reserved.
Kernel canonical-correlation Granger causality for multiple time series
NASA Astrophysics Data System (ADS)
Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu
2011-04-01
Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.
NASA Astrophysics Data System (ADS)
Ozawa, Taku; Ueda, Hideki
2011-12-01
InSAR time series analysis is an effective tool for detecting spatially and temporally complicated volcanic deformation. To obtain details of such deformation, we developed an advanced InSAR time series analysis using interferograms of multiple-orbit tracks. Considering only right- (or only left-) looking SAR observations, incidence directions for different orbit tracks are mostly included in a common plane. Therefore, slant-range changes in their interferograms can be expressed by two components in the plane. This approach estimates the time series of their components from interferograms of multiple-orbit tracks by the least squares analysis, and higher accuracy is obtained if many interferograms of different orbit tracks are available. Additionally, this analysis can combine interferograms for different incidence angles. In a case study on Miyake-jima, we obtained a deformation time series corresponding to GPS observations from PALSAR interferograms of six orbit tracks. The obtained accuracy was better than that with the SBAS approach, demonstrating its effectiveness. Furthermore, it is expected that higher accuracy would be obtained if SAR observations were carried out more frequently in all orbit tracks. The deformation obtained in the case study indicates uplift along the west coast and subsidence with contraction around the caldera. The speed of the uplift was almost constant, but the subsidence around the caldera decelerated from 2009. A flat deformation source was estimated near sea level under the caldera, implying that deceleration of subsidence was related to interaction between volcanic thermal activity and the aquifer.
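The core of the multi-track combination is a small least-squares problem: with right-looking-only observations, each track's slant-range change is a projection of a two-component displacement vector lying in a common plane, so two or more tracks determine both components. A sketch with illustrative unit vectors (these are not actual PALSAR geometries):

```python
# Sketch: solve d_i = e_i . (c1, c2) for the two in-plane displacement
# components from line-of-sight (LOS) changes on several orbit tracks,
# via unweighted normal equations. All numbers below are illustrative.

def solve_two_component(los, dirs):
    """Least squares for d_i = e_i . (c1, c2), dirs = LOS unit vectors."""
    a11 = sum(e[0] * e[0] for e in dirs)
    a12 = sum(e[0] * e[1] for e in dirs)
    a22 = sum(e[1] * e[1] for e in dirs)
    b1 = sum(e[0] * d for e, d in zip(dirs, los))
    b2 = sum(e[1] * d for e, d in zip(dirs, los))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

truth = (0.02, -0.05)                               # metres, two components
dirs = [(0.62, 0.78), (-0.59, 0.81), (0.38, 0.92)]  # per-track LOS vectors
los = [e[0] * truth[0] + e[1] * truth[1] for e in dirs]
print(solve_two_component(los, dirs))               # recovers (0.02, -0.05)
```

With more tracks (more rows), the redundancy averages down noise, which matches the abstract's observation that accuracy improves as interferograms from more orbit tracks become available.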
40 CFR 63.10010 - What are my monitoring, installation, operation, and maintenance requirements?
Code of Federal Regulations, 2013 CFR
2013-07-01
... that emissions are controlled with a common control device or series of control devices, are discharged... parallel control devices or multiple series of control devices are discharged to the atmosphere through... quality control activities (including, as applicable, calibration checks and required zero and span...
40 CFR 63.10010 - What are my monitoring, installation, operation, and maintenance requirements?
Code of Federal Regulations, 2014 CFR
2014-07-01
... that emissions are controlled with a common control device or series of control devices, are discharged... parallel control devices or multiple series of control devices are discharged to the atmosphere through... quality control activities (including, as applicable, calibration checks and required zero and span...
Induction heating using induction coils in series-parallel circuits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsen, Marc Rollo; Geren, William Preston; Miller, Robert James
A part is inductively heated by multiple, self-regulating induction coil circuits having susceptors, coupled together in parallel and in series with an AC power supply. Each of the circuits includes a tuning capacitor that tunes the circuit to resonate at the frequency of AC power supply.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-19
... real-time, multiple-strategy approach (i.e., appropriate grout design and installation, installed... is available in ADAMS) is provided the first time that a document is referenced. Revision 2 of... ``Regulatory Guide'' series. This series was developed to describe and make available to the public information...
Continuing Environmental Health Education: A Course for Environmental Health Personnel.
ERIC Educational Resources Information Center
Mill, Raymond A.; Walter, William G.
1979-01-01
This lesson is the third of a series of six lessons on general environmental health. The series of multiple choice tests covers administration, food sanitation, vector control, housing, radiation, accident prevention, water supplies, waste disposal, air pollution, noise pollution, occupational health, recreation facilities, and water pollution.…
Most analyses of daily time series epidemiology data relate mortality or morbidity counts to PM and other air pollutants by means of single-outcome regression models using multiple predictors, without taking into account the complex statistical structure of the predictor variable...
The Nagaoka International Corporation CHEMILES NCL Series system was tested to verify its performance for the reduction of multiple contaminants including: arsenic, ammonia, iron, and manganese. The objectives of this verification, as operated under the conditions at the test si...
31 CFR 352.7 - Issues on exchange.
Code of Federal Regulations, 2010 CFR
2010-07-01
... States Savings Notes (Freedom Shares) at their current redemption values for Series HH bonds. Series E.... The total current redemption value of the eligible securities submitted for exchange in any one transaction was required to be $500 or more. If the current redemption value was an even multiple of $500...
31 CFR 352.7 - Issues on exchange.
Code of Federal Regulations, 2013 CFR
2013-07-01
... States Savings Notes (Freedom Shares) at their current redemption values for Series HH bonds. Series E.... The total current redemption value of the eligible securities submitted for exchange in any one transaction was required to be $500 or more. If the current redemption value was an even multiple of $500...
31 CFR 352.7 - Issues on exchange.
Code of Federal Regulations, 2011 CFR
2011-07-01
... States Savings Notes (Freedom Shares) at their current redemption values for Series HH bonds. Series E.... The total current redemption value of the eligible securities submitted for exchange in any one transaction was required to be $500 or more. If the current redemption value was an even multiple of $500...
31 CFR 352.7 - Issues on exchange.
Code of Federal Regulations, 2012 CFR
2012-07-01
... States Savings Notes (Freedom Shares) at their current redemption values for Series HH bonds. Series E.... The total current redemption value of the eligible securities submitted for exchange in any one transaction was required to be $500 or more. If the current redemption value was an even multiple of $500...
Supplementary and Enrichment Series: Mathematical Systems. Teachers' Commentary. SP-20.
ERIC Educational Resources Information Center
Syer, Henry W., Ed.
This is one in a series of manuals for teachers using SMSG high school supplementary materials. The pamphlet includes commentaries on the sections of the student's booklet, answers to the exercises, and sample test questions. Topics covered include addition, multiplication, operations, closure, identity element, mathematical systems, mathematical…
Multiple Time Series Node Synchronization Utilizing Ambient Reference
2014-12-31
...processing targeted to performance assessment, is the need for fine scale synchronization among communicating nodes and across multiple domains. The severe requirements that Special... The...research community and it is well documented and characterized. The datasets considered from this project (listed below) were used to derive the
ERIC Educational Resources Information Center
Bliss, Stacy L.; Skinner, Christopher H.; McCallum, Elizabeth; Saecker, Lee B.; Rowland-Bryant, Emily; Brown, Katie S.
2010-01-01
An adapted alternating treatments design was used to compare the effectiveness of a taped-problems (TP) intervention with TP and an additional immediate assessment (TP + AIA) on the multiplication fluency of six fifth-grade students. During TP, the students listened to a tape playing a series of multiplication problems and answers three times.…
Is Stacking Intervention Components Cost-Effective? An Analysis of the Incredible Years Program
ERIC Educational Resources Information Center
Foster, E. Michael; Olchowski, Allison E.; Webster-Stratton, Carolyn H.
2007-01-01
The cost-effectiveness of delivering stacked multiple intervention components for children is compared to implementing a single intervention component by analyzing the Incredible Years Series program. The results suggest that multiple intervention components are more cost-effective than single intervention components.
Time Series Model Identification by Estimating Information, Memory, and Quantiles.
1983-07-01
Standards, Sect. D, 68D, 937-951. Parzen, Emanuel (1969) "Multiple time series modeling" Multivariate Analysis - II, edited by P. Krishnaiah, Academic... Krishnaiah, North Holland: Amsterdam, 283-295. Parzen, Emanuel (1979) "Forecasting and Whitening Filter Estimation" TIMS Studies in the Management... principle. Applications of Statistics, P. Krishnaiah, ed. North Holland: Amsterdam, 27-41. Box, G. E. P. and Jenkins, G. M. (1970) Time Series Analysis
ERIC Educational Resources Information Center
I, Ji Yeong; Dougherty, Barbara J.; Berkaliev, Zaur
2015-01-01
Young children spend a much greater amount of time on practicing multiplication facts compared to understanding the concept of multiplication. When students have long-term, foundational concepts rather than a series of fragmented algorithms or facts, they are more likely to understand and generalize the mathematics. Using generalized models that…
This presentation, Linking Regional Aerosol Emission Changes with Multiple Impact Measures through Direct and Cloud-Related Forcing Estimates, was given at the STAR Black Carbon 2016 Webinar Series: Accounting for Impact, Emissions, and Uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurley, R. C.; Vorobiev, O. Y.; Ezzedine, S. M.
Here, we present a numerical method for modeling the mechanical effects of nonlinearly-compliant joints in elasto-plastic media. The method uses a series of strain-rate and stress update algorithms to determine joint closure, slip, and solid stress within computational cells containing multiple "embedded" joints. This work facilitates efficient modeling of nonlinear wave propagation in large spatial domains containing a large number of joints that affect bulk mechanical properties. We implement the method within the massively parallel Lagrangian code GEODYN-L and provide verification and examples. We highlight the ability of our algorithms to capture joint interactions and multiple weakness planes within individual computational cells, as well as its computational efficiency. We also discuss the motivation for developing the proposed technique: to simulate large-scale wave propagation during the Source Physics Experiments (SPE), a series of underground explosions conducted at the Nevada National Security Site (NNSS).
Geometric comparison of popular mixture-model distances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Scott A.
2010-09-01
Statistical Latent Dirichlet Analysis produces mixture model data that are geometrically equivalent to points lying on a regular simplex in moderate to high dimensions. Numerous other statistical models and techniques also produce data in this geometric category, even though the meaning of the axes and coordinate values differs significantly. A distance function is used to further analyze these points, for example to cluster them. Several different distance functions are popular amongst statisticians; which distance function is chosen is usually driven by the historical preference of the application domain, information-theoretic considerations, or by the desirability of the clustering results. Relatively little consideration is usually given to how distance functions geometrically transform data, or to the distances' algebraic properties. Here we take a look at these issues, in the hope of providing complementary insight and inspiring further geometric thought. Several popular distances, χ², Jensen-Shannon divergence, and the square of the Hellinger distance, are shown to be nearly equivalent, in terms of functional forms after transformations, factorizations, and series expansions, and in terms of the shape and proximity of constant-value contours. This is somewhat surprising given that their original functional forms look quite different. Cosine similarity is the square of the Euclidean distance, and a similar geometric relationship is shown with Hellinger and another cosine. We suggest a geodesic variation of Hellinger. The square-root projection that arises in Hellinger distance is briefly compared to standard normalization for Euclidean distance. We include detailed derivations of some ratio and difference bounds for illustrative purposes. We provide some constructions that nearly achieve the worst-case ratios, relevant for contours.
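The near-equivalence claim can be checked numerically: for nearby mixture vectors, χ²/8, Jensen-Shannon divergence (in nats), and the squared Hellinger distance all agree to second order. A small sketch with an illustrative pair of distributions:

```python
# Numerical check: for nearby probability vectors p, q, the quantities
# chi2(p,q)/8, JS(p,q) in nats, and the squared Hellinger distance
# nearly coincide (they share the same second-order expansion).
import math

def chi2(p, q):
    return sum((a - b) ** 2 / b for a, b in zip(p, q))

def js(p, q):
    m = [(a + b) / 2 for a, b in zip(p, q)]
    kl = lambda u, v: sum(a * math.log(a / b) for a, b in zip(u, v))
    return (kl(p, m) + kl(q, m)) / 2

def hellinger_sq(p, q):
    return sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q)) / 2

p = [0.50, 0.30, 0.20]
q = [0.48, 0.31, 0.21]
print(chi2(p, q) / 8, js(p, q), hellinger_sq(p, q))  # all ~2.0e-4
```

The agreement degrades as p and q move apart, which is where the ratio and difference bounds mentioned in the abstract become relevant.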
NASA Astrophysics Data System (ADS)
Holman, Benjamin R.
In recent years, revolutionary "hybrid" or "multi-physics" methods of medical imaging have emerged. By combining two or three different types of waves these methods overcome limitations of classical tomography techniques and deliver otherwise unavailable, potentially life-saving diagnostic information. Thermoacoustic (and photoacoustic) tomography is the most developed multi-physics imaging modality. Thermo- and photo-acoustic tomography require reconstructing initial acoustic pressure in a body from time series of pressure measured on a surface surrounding the body. For the classical case of free space wave propagation, various reconstruction techniques are well known. However, some novel measurement schemes place the object of interest between reflecting walls that form a de facto resonant cavity. In this case, known methods cannot be used. In chapter 2 we present a fast iterative reconstruction algorithm for measurements made at the walls of a rectangular reverberant cavity with a constant speed of sound. We prove the convergence of the iterations under a certain sufficient condition, and demonstrate the effectiveness and efficiency of the algorithm in numerical simulations. In chapter 3 we consider the more general problem of an arbitrarily shaped resonant cavity with a non-constant speed of sound and present the gradual time reversal method for computing solutions to the inverse source problem. It consists in solving back in time on the interval [0, T] the initial/boundary value problem for the wave equation, with the Dirichlet boundary data multiplied by a smooth cutoff function. If T is sufficiently large one obtains a good approximation to the initial pressure; in the limit of large T such an approximation converges (under certain conditions) to the exact solution.
Empirical performance of the multivariate normal universal portfolio
NASA Astrophysics Data System (ADS)
Tan, Choon Peng; Pang, Sook Theng
2013-09-01
Universal portfolios generated by the multivariate normal distribution are studied with emphasis on the case where variables are dependent, namely, the covariance matrix is not diagonal. The moving-order multivariate normal universal portfolio requires very long implementation time and large computer memory. With the objective of reducing both, the finite-order universal portfolio is introduced. Some stock-price data sets are selected from the local stock exchange and the finite-order universal portfolio is run on the data sets, for small finite order. Empirically, it is shown that the portfolio can outperform the moving-order Dirichlet universal portfolio of Cover and Ordentlich [2] for certain parameters in the selected data sets.
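The wealth-weighted averaging that all of these variants build on is Cover's universal portfolio. A sketch for two assets with a uniform prior on a coarse weight grid (deliberately not the multivariate-normal or Dirichlet priors of the abstract; names and parameters are ours):

```python
# Sketch: Cover-style universal portfolio for 2 assets. Each candidate
# constant-rebalanced portfolio (CRP) puts weight b on asset 1; the
# traded portfolio each period is the wealth-weighted average of CRPs.

def universal_portfolio(price_relatives, grid=21):
    """Return final wealth of the universal portfolio (uniform grid prior)."""
    bs = [k / (grid - 1) for k in range(grid)]       # weight on asset 1
    wealth = [1.0] * grid                            # running CRP wealths
    total = 1.0
    for x in price_relatives:
        b_up = sum(b * w for b, w in zip(bs, wealth)) / sum(wealth)
        total *= b_up * x[0] + (1 - b_up) * x[1]
        wealth = [w * (b * x[0] + (1 - b) * x[1]) for b, w in zip(bs, wealth)]
    return total

# Asset 1 alternates doubling/halving, asset 2 is cash: buy-and-hold of
# either asset ends flat, yet rebalancing earns a volatility premium.
xs = [(2.0, 1.0), (0.5, 1.0)] * 10
print(universal_portfolio(xs) > 1.0)   # -> True
```

By a standard telescoping identity, the final wealth equals the prior-weighted average of the CRP wealths, so it lies between the worst and best constant-rebalanced portfolios.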
Mechanisms for the target patterns formation in a stochastic bistable excitable medium
NASA Astrophysics Data System (ADS)
Verisokin, Andrey Yu.; Verveyko, Darya V.; Postnov, Dmitry E.
2018-04-01
We study the features of formation and evolution of the spatiotemporal chaotic regime generated by autonomous pacemakers in excitable deterministic and stochastic bistable active media, using the example of the FitzHugh-Nagumo biological neuron model under discrete medium conditions. The following possible mechanisms for the formation of autonomous pacemakers have been studied: 1) a temporal external force applied to a small region of the medium, 2) the geometry of the solution region (the medium contains regions with Dirichlet or Neumann boundaries). In our work we explore the conditions for the emergence of pacemakers inducing target patterns in a stochastic bistable excitable system and propose an algorithm for their analysis.
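Mechanism 1) can be illustrated on a single FitzHugh-Nagumo unit: a sustained local stimulus turns a quiescent excitable element into a pacemaker firing large excursions. A toy sketch (not the paper's spatial stochastic model; parameters are the classical textbook values, names ours):

```python
# Illustrative sketch: one FitzHugh-Nagumo unit, Euler-stepped. With no
# stimulus (I = 0) the unit rests near v ~ -1.2; a sustained stimulus
# I = 0.5 destabilizes the rest state and produces repetitive spiking,
# the single-unit analogue of a forced pacemaker region.

def fhn_max_v(I, steps=20000, dt=0.01, a=0.7, b=0.8, eps=0.08):
    v, w = -1.1997, -0.6247          # approximate rest state for I = 0
    peak = v
    for _ in range(steps):
        v, w = (v + dt * (v - v ** 3 / 3 - w + I),
                w + dt * eps * (v + a - b * w))
        peak = max(peak, v)
    return peak

print(fhn_max_v(0.0) < 0.0)   # quiescent: stays near rest -> True
print(fhn_max_v(0.5) > 1.0)   # stimulated: large spike excursions -> True
```

In the spatial setting of the abstract, such a locally forced region emits waves into the surrounding medium, producing the target patterns under study.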
Impulsive synchronization of stochastic reaction-diffusion neural networks with mixed time delays.
Sheng, Yin; Zeng, Zhigang
2018-07-01
This paper discusses impulsive synchronization of stochastic reaction-diffusion neural networks with Dirichlet boundary conditions and hybrid time delays. By virtue of inequality techniques, theories of stochastic analysis, linear matrix inequalities, and the contradiction method, sufficient criteria are proposed to ensure exponential synchronization of the addressed stochastic reaction-diffusion neural networks with mixed time delays via a designed impulsive controller. Compared with some recent studies, the neural network models herein are more general, some restrictions are relaxed, and the obtained conditions enhance and generalize some published ones. Finally, two numerical simulations are performed to substantiate the validity and merits of the developed theoretical analysis. Copyright © 2018 Elsevier Ltd. All rights reserved.
Kinetic and dynamic Delaunay tetrahedralizations in three dimensions
NASA Astrophysics Data System (ADS)
Schaller, Gernot; Meyer-Hermann, Michael
2004-09-01
We describe algorithms to implement fully dynamic and kinetic three-dimensional unconstrained Delaunay triangulations, where the time evolution of the triangulation is not only governed by moving vertices but also by a changing number of vertices. We use three-dimensional simplex flip algorithms, a stochastic visibility walk algorithm for point location and in addition, we propose a new simple method of deleting vertices from an existing three-dimensional Delaunay triangulation while maintaining the Delaunay property. As an example, we analyse the performance in various cases of practical relevance. The dual Dirichlet tessellation can be used to solve differential equations on an irregular grid, to define partitions in cell tissue simulations, for collision detection etc.
On degenerate coupled transport processes in porous media with memory phenomena
NASA Astrophysics Data System (ADS)
Beneš, Michal; Pažanin, Igor
2018-06-01
In this paper we prove the existence of weak solutions to degenerate parabolic systems arising from the fully coupled moisture movement, solute transport of dissolved species and heat transfer through porous materials. Physically relevant mixed Dirichlet-Neumann boundary conditions and initial conditions are considered. Existence of a global weak solution of the problem is proved by means of semidiscretization in time, proving necessary uniform estimates and by passing to the limit from discrete approximations. Degeneration occurs in the nonlinear transport coefficients, which are not assumed to be bounded below and above by positive constants. Degeneracies in transport coefficients are overcome by proving suitable a priori $L^{\infty}$-estimates based on the De Giorgi and Moser iteration technique.
Bounded Error Schemes for the Wave Equation on Complex Domains
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Ditkowski, Adi; Yefet, Amir
1998-01-01
This paper considers the application of the method of boundary penalty terms ("SAT") to the numerical solution of the wave equation on complex shapes with Dirichlet boundary conditions. A theory is developed, in a semi-discrete setting, that allows the use of a Cartesian grid on complex geometries, yet maintains the order of accuracy with only a linear temporal error-bound. A numerical example, involving the solution of Maxwell's equations inside a 2-D circular wave-guide, demonstrates the efficacy of this method in comparison to others (e.g., the staggered Yee scheme): we achieve a decrease of two orders of magnitude in the level of the L2-error.
Supervised Semantic Classification for Nuclear Proliferation Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju; Cheriyadat, Anil M; Gleason, Shaun Scott
2010-01-01
Existing feature extraction and classification approaches are not suitable for monitoring proliferation activity using high-resolution multi-temporal remote sensing imagery. In this paper we present a supervised semantic labeling framework based on the Latent Dirichlet Allocation method. This framework is used to analyze over 120 images collected under different spatial and temporal settings over the globe representing three major semantic categories: airports, nuclear, and coal power plants. Initial experimental results show a reasonable discrimination of these three categories even though coal and nuclear images share highly common and overlapping objects. This research also identified several research challenges associated with nuclear proliferation monitoring using high resolution remote sensing images.
Hamiltonian models for the propagation of irrotational surface gravity waves over a variable bottom
NASA Astrophysics Data System (ADS)
Compelli, A.; Ivanov, R.; Todorov, M.
2017-12-01
A single incompressible, inviscid, irrotational fluid medium bounded by a free surface and varying bottom is considered. The Hamiltonian of the system is expressed in terms of the so-called Dirichlet-Neumann operators. The equations for the surface waves are presented in Hamiltonian form. Specific scaling of the variables is selected which leads to approximations of Boussinesq and Korteweg-de Vries (KdV) types, taking into account the effect of the slowly varying bottom. The arising KdV equation with variable coefficients is studied numerically when the initial condition is in the form of the one-soliton solution for the initial depth. This article is part of the theme issue 'Nonlinear water waves'.
Radial rescaling approach for the eigenvalue problem of a particle in an arbitrarily shaped box.
Lijnen, Erwin; Chibotaru, Liviu F; Ceulemans, Arnout
2008-01-01
In the present work we introduce a methodology for solving a quantum billiard with Dirichlet boundary conditions. The procedure starts from the exactly known solutions for the particle in a circular disk, which are subsequently radially rescaled in such a way that they obey the new boundary conditions. In this way one constructs a complete basis set which can be used to obtain the eigenstates and eigenenergies of the corresponding quantum billiard to a high level of precision. Test calculations for several regular polygons show the efficiency of the method which often requires one or two basis functions to describe the lowest eigenstates with high accuracy.
A Case Study on Sepsis Using PubMed and Deep Learning for Ontology Learning.
Arguello Casteleiro, Mercedes; Maseda Fernandez, Diego; Demetriou, George; Read, Warren; Fernandez Prieto, Maria Jesus; Des Diz, Julio; Nenadic, Goran; Keane, John; Stevens, Robert
2017-01-01
We investigate the application of distributional semantics models for facilitating unsupervised extraction of biomedical terms from unannotated corpora. Term extraction is used as the first step of an ontology learning process that aims at the (semi-)automatic annotation of biomedical concepts and relations from more than 300K PubMed titles and abstracts. We experimented with both traditional distributional semantics methods, such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA), and the neural language models CBOW and Skip-gram from Deep Learning. The evaluation conducted concentrates on sepsis, a major life-threatening condition, and shows that the Deep Learning models outperform LSA and LDA with much higher precision.
Convergence of spectra of graph-like thin manifolds
NASA Astrophysics Data System (ADS)
Exner, Pavel; Post, Olaf
2005-05-01
We consider a family of compact manifolds which shrinks with respect to an appropriate parameter to a graph. The main result is that the spectrum of the Laplace-Beltrami operator converges to the spectrum of the (differential) Laplacian on the graph with Kirchhoff boundary conditions at the vertices. On the other hand, if the shrinking at the vertex parts of the manifold is sufficiently slower than that of the edge parts, the limiting spectrum corresponds to decoupled edges with Dirichlet boundary conditions at the endpoints. At the borderline between the two regimes we have a third possibility, where the limiting spectrum can be described by a nontrivial coupling at the vertices.
Creation and perturbation of planar networks of chemical oscillators
Tompkins, Nathan; Cambria, Matthew Carl; Wang, Adam L.; Heymann, Michael; Fraden, Seth
2015-01-01
Methods for creating custom planar networks of diffusively coupled chemical oscillators and perturbing individual oscillators within the network are presented. The oscillators consist of the Belousov-Zhabotinsky (BZ) reaction contained in an emulsion. Networks of drops of the BZ reaction are created with either Dirichlet (constant-concentration) or Neumann (no-flux) boundary conditions in a custom planar configuration using programmable illumination for the perturbations. The differences between the observed network dynamics for each boundary condition are described. Using light, we demonstrate the ability to control the initial conditions of the network and to cause individual oscillators within the network to undergo sustained period elongation or a one-time phase delay. PMID:26117136
A contour for the entanglement entropies in harmonic lattices
NASA Astrophysics Data System (ADS)
Coser, Andrea; De Nobili, Cristiano; Tonni, Erik
2017-08-01
We construct a contour function for the entanglement entropies in generic harmonic lattices. In one spatial dimension, numerical analyses are performed by considering harmonic chains with either periodic or Dirichlet boundary conditions. In the massless regime and for some configurations where the subsystem is a single interval, the numerical results for the contour function are compared to the inverse of the local weight function which multiplies the energy-momentum tensor in the corresponding entanglement Hamiltonian, found through conformal field theory methods, and a good agreement is observed. A numerical analysis of the contour function for the entanglement entropy is also performed in a massless harmonic chain for a subsystem made of two disjoint intervals.
Applying the method of fundamental solutions to harmonic problems with singular boundary conditions
NASA Astrophysics Data System (ADS)
Valtchev, Svilen S.; Alves, Carlos J. S.
2017-07-01
The method of fundamental solutions (MFS) is known to produce highly accurate numerical results for elliptic boundary value problems (BVP) with smooth boundary conditions, posed in analytic domains. However, due to the analyticity of the shape functions in its approximation basis, the MFS is usually disregarded when the boundary functions possess singularities. In this work we present a modification of the classical MFS which can be applied to the numerical solution of the Laplace BVP with Dirichlet boundary conditions exhibiting jump discontinuities. In particular, a set of harmonic functions with discontinuous boundary traces is added to the MFS basis. The accuracy of the proposed method is compared with the results from the classical MFS.
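As a hedged sketch of the classical MFS that the paper modifies (not the authors' discontinuous-basis variant), one can approximate a harmonic function on the unit disk by logarithmic point sources placed outside the domain and collocate the Dirichlet data; the number of points and the source radius below are illustrative choices:

```python
import math

def gauss_solve(A, b):
    """Solve a small dense square system by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def mfs_laplace_disk(g, n=16, source_radius=3.0):
    """Classical MFS for the Laplace Dirichlet problem on the unit disk:
    u(x) ~ sum_j c_j ln|x - s_j| with source points s_j outside the domain."""
    bnd = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n)) for i in range(n)]
    src = [(source_radius * math.cos(2 * math.pi * (i + 0.5) / n),
            source_radius * math.sin(2 * math.pi * (i + 0.5) / n)) for i in range(n)]
    # collocation: match the boundary data g at the boundary points
    A = [[math.log(math.hypot(bx - sx, by - sy)) for (sx, sy) in src] for (bx, by) in bnd]
    c = gauss_solve(A, [g(bx, by) for (bx, by) in bnd])
    return lambda x, y: sum(cj * math.log(math.hypot(x - sx, y - sy))
                            for cj, (sx, sy) in zip(c, src))
```

For smooth boundary data this converges very quickly; the paper's point is precisely that this basis fails near jump discontinuities, which is why discontinuous harmonic functions are added there.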
40 CFR 63.10010 - What are my monitoring, installation, operation, and maintenance requirements?
Code of Federal Regulations, 2012 CFR
2012-07-01
... emissions are controlled with a common control device or series of control devices, are discharged to the... devices or multiple series of control devices are discharged to the atmosphere through more than one stack... control activities (including, as applicable, calibration checks and required zero and span adjustments...
Africans in America: America's Journey through Slavery. Teacher's Guide.
ERIC Educational Resources Information Center
WGBH-TV, Boston, MA.
This printed guide for viewing the Public Television series, "Africans in America," first points out that the series, told from multiple perspectives and informed by "leading-edge" scholars, illuminates U.S. history from 1607 to 1861: how Africans and Europeans together built a new nation even as they struggled over the meaning…
Series Connected Buck-Boost Regulator
NASA Technical Reports Server (NTRS)
Birchenough, Arthur G. (Inventor)
2006-01-01
A Series Connected Buck-Boost Regulator (SCBBR) that switches only a fraction of the input power, resulting in relatively high efficiencies. The SCBBR has multiple operating modes, including buck, boost, and current-limiting modes, so that the output voltage of the SCBBR ranges from below the source voltage to above the source voltage.
Time-Critical Cooperative Path Following of Multiple UAVs over Time-Varying Networks
2011-01-01
Notes in Control and Information Systems Series (K. Y. Pettersen, T. Gravdahl, and H. Nijmeijer, Eds.). Springer-Verlag, 2006. [29] M. Breivik, V... Information Systems Series (K. Y. Pettersen, T. Gravdahl, and H. Nijmeijer, Eds.). Springer-Verlag, 2006. [31] M. Breivik, E. Hovstein, and T. I. Fossen. Ship
FINDING A COMMON DATA REPRESENTATION AND INTERCHANGE APPROACH FOR MULTIMEDIA MODELS
Within many disciplines, multiple approaches are used to represent and access very similar data (e.g., a time series of values), often due to the lack of commonly accepted standards. When projects must use data from multiple disciplines, the problems quickly compound. Often sig...
Relations between elliptic multiple zeta values and a special derivation algebra
NASA Astrophysics Data System (ADS)
Broedel, Johannes; Matthes, Nils; Schlotterer, Oliver
2016-04-01
We investigate relations between elliptic multiple zeta values (eMZVs) and describe a method to derive the number of indecomposable elements of given weight and length. Our method is based on representing eMZVs as iterated integrals over Eisenstein series and exploiting the connection with a special derivation algebra. Its commutator relations give rise to constraints on the iterated integrals over Eisenstein series relevant for eMZVs and thereby allow us to count the indecomposable representatives. Conversely, the above connection suggests apparently new relations in the derivation algebra. At https://tools.aei.mpg.de/emzv we provide relations for eMZVs over a wide range of weights and lengths.
Simulating maar-diatreme volcanic systems in bench-scale experiments
NASA Astrophysics Data System (ADS)
Andrews, R. G.; White, J. D. L.; Dürig, T.; Zimanowski, B.
2015-12-01
Maar-diatreme eruptions are incompletely understood, and explanations for the processes involved in them have been debated for decades. This study extends bench-scale analogue experiments previously conducted on maar-diatreme systems and attempts to scale the results up to both field-scale experimentation and natural volcanic systems in order to produce a reconstructive toolkit for maar volcanoes. Through multiple mechanisms, these experimental runs produced complex deposits that match many features seen in natural maar-diatreme deposits. The runs include deeper single blasts, series of descending discrete blasts, and series of ascending blasts. This study indicates that debris-jet inception and diatreme formation involve multiple types of granular fountains within diatreme deposits produced under varying initial conditions. The individual energies of blasts in multiple-blast series cannot be inferred from the final deposits. The depositional record of blast sequences can be ascertained from the proportion of fallback sedimentation versus maar ejecta rim material, the final crater size and the degree of overturning or slumping of accessory strata. Quantitatively, deeper blasts involve a roughly equal partitioning of energy between crater excavation and mass movement of juvenile material, whereas shallower blasts expend a much greater proportion of energy in crater excavation.
Finch, Paul; Bessonnette, Susan
2014-01-01
This research was conducted to examine changes in self-efficacy (the perception/belief that one can competently cope with a challenging situation) in multiple sclerosis clients following a series of massage therapy treatments. This small practical trial investigated the effects of a pragmatic treatment protocol using a prospective randomized pretest-posttest waitlist control design. Self-efficacy scores were obtained before the first treatment, mid-treatment series, after the last treatment in the series, four weeks after the final treatment and again eight weeks after the final treatment had been received. The intervention involved a series of weekly one-hour therapeutic massage treatments conducted over eight weeks and a subsequent eight-week follow-up period. All treatments were delivered by supervised student therapists in the final term of their two-year massage therapy program. Self-efficacy [SE] was the outcome for the study, measured using the Multiple Sclerosis Self-Efficacy survey [MSSE]. Descriptive statistics for SE scores were assessed and inferential analysis involved the testing of between-group differences at each of the measurement points noted above. Statistically significant improvement in self-efficacy was noted between treatment (n = 8) and control (n = 7) groups at mid-treatment series (t = 2.32; p < 0.02), post-treatment series (t = 1.81; p < 0.05) and at four-week follow-up (t = 2.24; p < 0.02). At the eight-week follow-up, self-efficacy scores had decreased and there was no statistically significant difference between groups (t = 0.87; p < 0.2). Study results support previous findings indicating that massage therapy increases the self-efficacy of clients with multiple sclerosis, potentially resulting in a better overall adjustment to the disease and an improvement in psycho-emotional state. The increase in self-efficacy after 4 weeks of treatment suggests that a positive response occurs more rapidly than was previously demonstrated.
The improvement in self-efficacy endured 4 weeks after the end of the treatment series, which suggests that massage therapy may have longer-term effects on self-efficacy that were not previously noted. Lack of inter-group difference at the eight-week follow-up reinforces the notion that ongoing treatment is required in order to maintain the positive changes observed. Copyright © 2013 Elsevier Ltd. All rights reserved.
Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre
2016-01-01
A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749
Response Strength in Extreme Multiple Schedules
ERIC Educational Resources Information Center
McLean, Anthony P.; Grace, Randolph C.; Nevin, John A.
2012-01-01
Four pigeons were trained in a series of two-component multiple schedules. Reinforcers were scheduled with random-interval schedules. The ratio of arranged reinforcer rates in the two components was varied over 4 log units, a much wider range than previously studied. When performance appeared stable, prefeeding tests were conducted to assess…
Convergence of electromagnetic field components across discontinuous permittivity profiles: comment.
Li, Lifeng
2002-07-01
The inverse rule that is described in a recent paper [J. Opt. Soc. Am. A 17, 491 (2000)] is not a multiplication rule for multiplying two infinite series, because it does not address how the terms of two series being multiplied are combined to form the product series. Furthermore, it is not the one that is being used in numerical practice. Therefore the insight that the paper provides into why the inverse rule yields correct results at the points of complementary discontinuities is questionable.
Documentation of a spreadsheet for time-series analysis and drawdown estimation
Halford, Keith J.
2006-01-01
Drawdowns during aquifer tests can be obscured by barometric pressure changes, earth tides, regional pumping, and recharge events in the water-level record. These stresses can create water-level fluctuations that should be removed from observed water levels prior to estimating drawdowns. Simple models have been developed for estimating unpumped water levels during aquifer tests; these estimates are referred to as synthetic water levels. These models sum multiple time series such as barometric pressure, tidal potential, and background water levels to simulate non-pumping water levels. The amplitude and phase of each time series are adjusted so that synthetic water levels match measured water levels during periods unaffected by an aquifer test. Differences between synthetic and measured water levels are minimized with a sum-of-squares objective function. Root-mean-square errors during fitting and prediction periods were compared multiple times at four geographically diverse sites. Prediction error equaled fitting error when fitting periods were greater than or equal to four times prediction periods. The proposed drawdown estimation approach has been implemented in a spreadsheet application. Measured time series are independent so that collection frequencies can differ and sampling times can be asynchronous. Time series can be viewed selectively and magnified easily. Fitting and prediction periods can be defined graphically or entered directly. Synthetic water levels for each observation well are created with earth tides, measured time series, moving averages of time series, and differences between measured and moving averages of time series. Selected series and fitting parameters for synthetic water levels are stored and drawdowns are estimated for prediction periods. Drawdowns can be viewed independently and adjusted visually if an anomaly skews initial drawdowns away from 0.
The number of observations in a drawdown time series can be reduced by averaging across user-defined periods. Raw or reduced drawdown estimates can be copied from the spreadsheet application or written to tab-delimited ASCII files.
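A minimal sketch of the fitting idea described above, assuming the amplitude adjustment amounts to an ordinary least-squares fit of the component series plus a constant offset (the series below are synthetic toy data, not the report's):

```python
def gauss_solve(A, b):
    """Solve a small dense square system by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_synthetic_level(component_series, observed):
    """Least-squares amplitudes a_j (constant offset returned last) such that
    observed ~ sum_j a_j * component_j + offset, via the normal equations."""
    cols = [list(s) for s in component_series] + [[1.0] * len(observed)]
    m = len(cols)
    G = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(m)] for i in range(m)]
    rhs = [sum(ci * y for ci, y in zip(cols[i], observed)) for i in range(m)]
    return gauss_solve(G, rhs)
```

In the report's workflow the fit would be performed over periods unaffected by pumping, and the fitted synthetic series subtracted from measured levels to expose the drawdown.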
ERIC Educational Resources Information Center
Martin, Jason
2013-01-01
Taylor series convergence is a complicated mathematical structure which incorporates multiple concepts. Therefore, it can be very difficult for students to initially comprehend. How might students make sense of this structure? How might experts make sense of this structure? To answer these questions, an exploratory study was conducted using…
ERIC Educational Resources Information Center
Towgood, Karren J.; Meuwese, Julia D. I.; Gilbert, Sam J.; Turner, Martha S.; Burgess, Paul W.
2009-01-01
In the neuropsychological case series approach, tasks are administered that tap different cognitive domains, and differences within rather than across individuals are the basis for theorising; each individual is effectively their own control. This approach is a mainstay of cognitive neuropsychology, and is particularly suited to the study of…
Application of time series analysis for assessing reservoir trophic status
Paris Honglay Chen; Ka-Chu Leung
2000-01-01
This study develops and applies a practical procedure for the time series analysis of reservoir eutrophication conditions. A multiplicative decomposition method is used to determine the trophic variations, including seasonal, circular, long-term and irregular changes. The results indicate that (1) there is a long high peak for seven months from April to October...
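The abstract names a multiplicative decomposition without implementation details; a standard trend × seasonal sketch (centered moving average for the trend, averaged ratios for the seasonal indices) might look like the following, with invented monthly data:

```python
def centered_ma(x, period=12):
    """Centered moving average (2x12 MA for an even period); None where undefined."""
    half = period // 2
    out = [None] * len(x)
    for i in range(half, len(x) - half):
        window = x[i - half:i + half + 1]
        # weight the two end points by 1/2 so the window spans exactly one period
        s = 0.5 * window[0] + sum(window[1:-1]) + 0.5 * window[-1]
        out[i] = s / period
    return out

def seasonal_indices(x, period=12):
    """Multiplicative seasonal indices: mean of x_t / trend_t per season, rescaled to average 1."""
    trend = centered_ma(x, period)
    ratios = [[] for _ in range(period)]
    for i, (xi, tr) in enumerate(zip(x, trend)):
        if tr is not None:
            ratios[i % period].append(xi / tr)
    idx = [sum(r) / len(r) for r in ratios]
    mean = sum(idx) / period
    return [v / mean for v in idx]
```

Dividing the series by trend and seasonal components leaves the circular and irregular parts, which is the decomposition structure the abstract refers to.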
A multiple-fan active control wind tunnel for outdoor wind speed and direction simulation
NASA Astrophysics Data System (ADS)
Wang, Jia-Ying; Meng, Qing-Hao; Luo, Bing; Zeng, Ming
2018-03-01
This article presents a new type of active controlled multiple-fan wind tunnel. The wind tunnel consists of swivel plates and arrays of direct current fans, and the rotation speed of each fan and the shaft angle of each swivel plate can be controlled independently for simulating different kinds of outdoor wind fields. To measure the similarity between the simulated wind field and the outdoor wind field, wind speed and direction time series of two kinds of wind fields are recorded by nine two-dimensional ultrasonic anemometers, and then statistical properties of the wind signals in different time scales are analyzed based on the empirical mode decomposition. In addition, the complexity of wind speed and direction time series is also investigated using multiscale entropy and multivariate multiscale entropy. Results suggest that the simulated wind field in the multiple-fan wind tunnel has a high degree of similarity with the outdoor wind field.
Semi-autonomous remote sensing time series generation tool
NASA Astrophysics Data System (ADS)
Babu, Dinesh Kumar; Kaufmann, Christof; Schmidt, Marco; Dhams, Thorsten; Conrad, Christopher
2017-10-01
High spatial and temporal resolution data is vital for crop monitoring and phenology change detection. Due to the lack of satellite architecture and frequent cloud cover issues, the availability of daily high-spatial-resolution data is still far from reality. Remote sensing time series generation of high spatial and temporal data by data fusion seems to be a practical alternative. However, it is not an easy process, since it involves multiple steps and also requires multiple tools. In this paper, a framework of a Geo Information System (GIS) based tool is presented for semi-autonomous time series generation. This tool will eliminate the difficulties by automating all the steps and enable users to generate synthetic time series data with ease. Firstly, all the steps required for the time series generation process are identified and grouped into blocks based on their functionalities. Later, two main frameworks are created: one to perform all the pre-processing steps on various satellite data and the other to perform data fusion to generate time series. The two frameworks can be used individually to perform specific tasks, or they can be combined to perform both processes in one go. This tool can handle most of the known geo data formats currently available, which makes it a generic tool for time series generation from various remote sensing satellite data. This tool is developed as a common platform with a good interface which provides many functionalities to enable further development of more remote sensing applications. A detailed description of the capabilities and the advantages of the frameworks is given in this paper.
SIBIS: a Bayesian model for inconsistent protein sequence estimation.
Khenoussi, Walyd; Vanhoutrève, Renaud; Poch, Olivier; Thompson, Julie D
2014-09-01
The prediction of protein coding genes is a major challenge that depends on the quality of genome sequencing, the accuracy of the model used to elucidate the exonic structure of the genes and the complexity of the gene splicing process leading to different protein variants. As a consequence, today's protein databases contain a huge amount of inconsistency, due to both natural variants and sequence prediction errors. We have developed a new method, called SIBIS, to detect such inconsistencies based on the evolutionary information in multiple sequence alignments. A Bayesian framework, combined with Dirichlet mixture models, is used to estimate the probability of observing specific amino acids and to detect inconsistent or erroneous sequence segments. We evaluated the performance of SIBIS on a reference set of protein sequences with experimentally validated errors and showed that the sensitivity is significantly higher than that of previous methods, with only a small loss of specificity. We also assessed a large set of human sequences from the UniProt database and found evidence of inconsistency in 48% of the previously uncharacterized sequences. We conclude that the integration of quality control methods like SIBIS in automatic analysis pipelines will be critical for the robust inference of structural, functional and phylogenetic information from these sequences. Source code, implemented in C on a Linux system, and the datasets of protein sequences are freely available for download at http://www.lbgi.fr/∼julie/SIBIS. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
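SIBIS combines a Bayesian framework with Dirichlet mixture models; as a much-simplified, hedged sketch of the core idea, a single-component symmetric Dirichlet posterior over one alignment column can already flag improbable residues. The pseudocount and threshold below are invented for illustration and are not SIBIS's values:

```python
AMINO = "ACDEFGHIKLMNPQRSTVWY"

def column_posterior(column, alpha=0.5):
    """Posterior mean of residue probabilities in an alignment column
    under a symmetric Dirichlet(alpha) prior over the 20 amino acids."""
    counts = {a: 0 for a in AMINO}
    for r in column:
        counts[r] += 1
    total = len(column) + alpha * len(AMINO)
    return {a: (counts[a] + alpha) / total for a in AMINO}

def inconsistent_positions(sequences, threshold=0.05):
    """(sequence index, position) pairs whose residue is improbable
    given the column's posterior distribution."""
    flags = []
    for pos in range(len(sequences[0])):
        post = column_posterior([s[pos] for s in sequences])
        for i, s in enumerate(sequences):
            if post[s[pos]] < threshold:
                flags.append((i, pos))
    return flags
```

The real method uses mixtures of Dirichlet components tuned to amino-acid contexts rather than a single symmetric prior, and scores segments rather than single columns.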
3D variational brain tumor segmentation using Dirichlet priors on a clustered feature set.
Popuri, Karteek; Cobzas, Dana; Murtha, Albert; Jägersand, Martin
2012-07-01
Brain tumor segmentation is a required step before any radiation treatment or surgery. When performed manually, segmentation is time consuming and prone to human errors. Therefore, there have been significant efforts to automate the process. But, automatic tumor segmentation from MRI data is a particularly challenging task. Tumors have a large diversity in shape and appearance with intensities overlapping the normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. In our work, we propose an automatic brain tumor segmentation method that addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multidimensional feature set. Then, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this work is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned region statistics in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters from the normal brain region to be in the tumor region. This leads to a better disambiguation of the tumor from brain tissue. We evaluated the performance of our automatic segmentation method on 15 real MRI scans of brain tumor patients, with tumors that are inhomogeneous in appearance, small in size and in proximity to the major structures in the brain. Validation with the expert segmentation labels yielded encouraging results: Jaccard (58%), Precision (81%), Recall (67%), Hausdorff distance (24 mm). 
Using priors on the brain/tumor appearance, our proposed automatic 3D variational segmentation method was able to better disambiguate the tumor from the surrounding tissue.
De Filippis, Giovanna; Foglia, Laura; Giudici, Mauro; Mehl, Steffen; Margiotta, Stefano; Negri, Sergio Luigi
2016-12-15
Mediterranean areas are characterized by complex hydrogeological systems, where management of freshwater resources, mostly stored in karstic, coastal aquifers, is necessary and requires the application of numerical tools to detect and prevent deterioration of groundwater, mostly caused by overexploitation. In the Taranto area (southern Italy), the deep, karstic aquifer is the only source of freshwater and satisfies the main human activities. Preserving the quantity and quality of this system through management policies is therefore necessary, and this task can be addressed through modeling tools which take into account human impacts and the effects of climate change. A variable-density flow model was developed with SEAWAT to depict the "current" status of the saltwater intrusion, namely the status simulated over an average hydrogeological year. Considering the goals of this analysis and the scale at which the model was built, the equivalent porous medium approach was adopted to represent the deep aquifer. The effects that different flow boundary conditions along the coast have on the transport model were assessed. Furthermore, salinity stratification occurs within a strip spreading between 4 km and 7 km from the coast in the deep aquifer. The model predicts a similar phenomenon for some submarine freshwater springs and modeling outcomes were positively compared with measurements found in the literature. Two scenarios were simulated to assess the effects of decreased rainfall and increased pumping on saline intrusion. Major differences in the concentration field with respect to the "current" status were found where the hydraulic conductivity of the deep aquifer is higher, and such differences are larger when Dirichlet flow boundary conditions are assigned.
Furthermore, the Dirichlet boundary condition along the coast for transport modeling influences the concentration field in different scenarios at shallow depths; in particular, concentration values simulated under stressed conditions are lower than those simulated under undisturbed conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. 
This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
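Several of the BNP priors listed in this abstract build on the Dirichlet process; as a small generic illustration (not code from the package), truncated stick-breaking draws the weights of DP(alpha, G0). The truncation level and concentration parameter below are arbitrary:

```python
import random

def stick_breaking(alpha, n_atoms, seed=0):
    """Truncated stick-breaking weights of a Dirichlet process DP(alpha, G0).
    Beta(1, alpha) draws via inverse CDF: v = 1 - u**(1/alpha) for uniform u."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = 1.0 - rng.random() ** (1.0 / alpha)  # Beta(1, alpha) sample
        weights.append(remaining * v)            # break off a piece of the stick
        remaining *= 1.0 - v
    return weights
```

Each weight is attached to an atom drawn from the base measure G0, yielding the discrete random mixing distribution that underlies the package's infinite-mixture regression models.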
Synthesis and crystal structure analysis of uranyl triple acetates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klepov, Vladislav V., E-mail: vladislavklepov@gmail.com; Department of Chemistry, Samara National Research University, 443086 Samara; Serezhkina, Larisa B.
2016-12-15
Single crystals of triple acetates NaR[UO{sub 2}(CH{sub 3}COO){sub 3}]{sub 3}·6H{sub 2}O (R=Mg, Co, Ni, Zn), well-known for their use as reagents for sodium determination, were grown from aqueous solutions and their structural and spectroscopic properties were studied. Crystal structures of the mentioned phases are based upon (Na[UO{sub 2}(CH{sub 3}COO){sub 3}]{sub 3}){sup 2–} clusters and [R(H{sub 2}O){sub 6}]{sup 2+} aqua-complexes. The cooling of a single crystal of NaMg[UO{sub 2}(CH{sub 3}COO){sub 3}]{sub 3}·6H{sub 2}O from 300 to 100 K leads to a phase transition from trigonal to monoclinic crystal system. Intermolecular interactions between the structural units and their mutual packing were studied and compared from the point of view of the stereoatomic model of crystal structures based on Voronoi-Dirichlet tessellation. Using this method we compared the crystal structures of the triple acetates with Na[UO{sub 2}(CH{sub 3}COO){sub 3}] and [R(H{sub 2}O){sub 6}][UO{sub 2}(CH{sub 3}COO){sub 3}]{sub 2} and proposed reasons for the stability of the triple acetates. Infrared and Raman spectra were collected and their bands were assigned. - Graphical abstract: Single crystals of uranium-based triple acetates, analytical reagents for sodium determination, were synthesized and structurally, spectroscopically and topologically characterized. The structures were compared with the structures of compounds from the preceding families [M(H{sub 2}O){sub 6}][UO{sub 2}(CH{sub 3}COO){sub 3}]{sub 2} (M = Mg, Co, Ni, Zn) and Na[UO{sub 2}(CH{sub 3}COO){sub 3}]. Analysis was performed with the method of molecular Voronoi-Dirichlet polyhedra to reveal a large contribution of the hydrogen bonds to intermolecular interactions, which can be a reason for the low solubility of the studied complexes.
Sun, Tao; Liu, Hongbo; Yu, Hong; Chen, C L Philip
2016-06-28
The central time series crystallizes the common patterns of the set it represents. In this paper, we propose a global constrained degree-pruning dynamic programming (g(dp)²) approach to obtain the central time series through minimizing the dynamic time warping (DTW) distance between two time series. The DTW matching-path theory with global constraints is proved theoretically for our degree-pruning strategy, which helps to reduce the time complexity and computational cost. Our approach can achieve the optimal solution between two time series. An approximate method for the central time series of multiple time series [called m_g(dp)²] is presented based on DTW barycenter averaging and our g(dp)² approach, using a hierarchical merging strategy. As illustrated by the experimental results, our approaches provide better within-group sum of squares and robustness than other relevant algorithms.
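The g(dp)² algorithm itself is not reproduced in the abstract; as a hedged sketch of the machinery it prunes, here is the plain DTW dynamic program with an optional global (Sakoe-Chiba) constraint:

```python
def dtw_banded(s, t, band=None):
    """DTW distance between sequences s and t with an optional Sakoe-Chiba band
    of half-width `band` (band=None means unconstrained).
    Local cost is the absolute difference."""
    n, m = len(s), len(t)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        lo, hi = (1, m) if band is None else (max(1, i - band), min(m, i + band))
        for j in range(lo, hi + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

The global band is what makes degree-pruning strategies like the paper's pay off: cells outside the band are never visited, cutting the quadratic cost of the full table.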
Using time series structural characteristics to analyze grain prices in food insecure countries
Davenport, Frank; Funk, Chris
2015-01-01
Two components of food security monitoring are accurate forecasts of local grain prices and the ability to identify unusual price behavior. We evaluated a method that can both facilitate forecasts of cross-country grain price data and identify dissimilarities in price behavior across multiple markets. This method, characteristic based clustering (CBC), identifies similarities in multiple time series based on structural characteristics in the data. Here, we conducted a simulation experiment to determine if CBC can be used to improve the accuracy of maize price forecasts. We then compared forecast accuracies among clustered and non-clustered price series over a rolling time horizon. We found that the accuracy of forecasts on clusters of time series was equal to or worse than that of forecasts based on individual time series. However, in the following experiment we found that CBC was still useful for price analysis. We used the clusters to explore the similarity of price behavior among Kenyan maize markets. We found that price behavior in the isolated markets of Mandera and Marsabit has become increasingly dissimilar from markets in other Kenyan cities, and that these dissimilarities could not be explained solely by geographic distance. The structural isolation of Mandera and Marsabit that we find in this paper is supported by field studies on food security and market integration in Kenya. Our results suggest that a market with a unique price series (as measured by structural characteristics that differ from neighboring markets) may lack market integration and food security.
A simple and fast representation space for classifying complex time series
NASA Astrophysics Data System (ADS)
Zunino, Luciano; Olivares, Felipe; Bariviera, Aurelio F.; Rosso, Osvaldo A.
2017-03-01
In the context of time series analysis, considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series generated by stationary and non-stationary processes with long-range dependences. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems has also been proposed. This multiscale analysis is essential to reach a higher discriminative power between physiological time series in health and disease.
Unit: Making Life Easier, Inspection Pack, National Trial Print.
ERIC Educational Resources Information Center
Australian Science Education Project, Toorak, Victoria.
As a part of the unit materials in the series produced by the Australian Science Education Project, this teacher edition is primarily composed of three sections: a core relating to a bicycle, tests, and options. The core is concerned with basic properties of a machine such as force multiplication, speed multiplication, energy dissipation, and…
ERIC Educational Resources Information Center
Ahrens, Steve
Predictor variables that could be used effectively to place entering freshmen mathematics students into courses of instruction in mathematics were investigated at West Virginia University. Multiple discriminant analysis was used with nearly 6,000 student records collected over a three-year period, and a series of predictive equations were…
ERIC Educational Resources Information Center
Kaufman, Dahlia; Codding, Robin S.; Markus, Keith A.; Tryon, Georgiana Shick; Kyse, Eden Nagler
2013-01-01
Verbal and written performance feedback for improving preschool and kindergarten teachers' treatment integrity of behavior plans was compared using a combined multiple-baseline and multiple-treatment design across teacher-student dyads with order counterbalanced as within-series conditions. Supplemental generalized least square regression analyses…
Using a Multiple Intelligences Assessment To Facilitate Teacher Development.
ERIC Educational Resources Information Center
Shearer, C. Branton
In phase 1 of this study, development and validation studies of a new assessment for the multiple intelligences were conducted. The second phase of the study was a pilot implementation project during a single academic year in collaboration with several public school teachers. Phase 1 involved a series of activities including initial instrument…
Multiple quantum coherence spectroscopy.
Mathew, Nathan A; Yurs, Lena A; Block, Stephen B; Pakoulev, Andrei V; Kornau, Kathryn M; Wright, John C
2009-08-20
Multiple quantum coherences provide a powerful approach for studies of complex systems because increasing the number of quantum states in a quantum mechanical superposition state increases the selectivity of a spectroscopic measurement. We show that frequency domain multiple quantum coherence multidimensional spectroscopy can create these superposition states using different frequency excitation pulses. The superposition state is created using two excitation frequencies to excite the symmetric and asymmetric stretch modes in a rhodium dicarbonyl chelate and the dynamic Stark effect to climb the vibrational ladders involving different overtone and combination band states. A monochromator resolves the free induction decay of different coherences comprising the superposition state. The three spectral dimensions provide the selectivity required to observe 19 different spectral features associated with fully coherent nonlinear processes involving up to 11 interactions with the excitation fields. The different features act as spectroscopic probes of the diagonal and off-diagonal parts of the molecular potential energy hypersurface. This approach can be considered as a coherent pump-probe spectroscopy where the pump is a series of excitation pulses that prepares a multiple quantum coherence and the probe is another series of pulses that creates the output coherence.
ERIC Educational Resources Information Center
Monk, John S.; And Others
A multiple-group, single-intervention intensive time-series design was used to examine the achievement of an abstract concept, plate tectonics, of students grouped on the basis of cognitive tendency. Two questions were addressed: (1) How do daily achievement patterns differ between formal and concrete cognitive tendency groups when learning an…
Multimedia Language Learning Courseware: A Design Solution to the Production of a Series of CD-ROMs.
ERIC Educational Resources Information Center
Brett, P. A.; Nash, M.
1999-01-01
Discusses multimedia software and describes the production and the learning rationale of a series of six multimedia CD-ROMs that develop the listening skills of learners of Business English. Describes problems of cost, time, and quality in producing multiple courseware and explains the programming solution which gives control to subject experts.…
A radarsat-2 quad-polarized time series for monitoring crop and soil conditions in Barrax, Spain
USDA-ARS?s Scientific Manuscript database
The European Space Agency (ESA) along with multiple university and agency investigators joined to conduct the AgriSAR Campaign in 2009. The main objective was to analyze a dense time series of RADARSAT-2 quad-pol data to define and quantify the performance of Sentinel-1 and other future ESA C-Band ...
Steed, Chad A.; Halsey, William; Dehoff, Ryan; ...
2017-02-16
Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.
Kaliff, Malin; Sorbe, Bengt; Mordhorst, Louise Bohr; Helenius, Gisela; Karlsson, Mats G; Lillsunde-Larsson, Gabriella
2018-04-10
Cervical cancer (CC) is one of the most common cancers in women, and virtually all cases of CC result from a persistent infection with human papillomavirus (HPV). Disease detected at an early stage is curable, but for disease diagnosed late, with recurrence and metastasis, treatment options are limited. Here we evaluate the impact of HPV on treatment resistance and metastatic disease progression. The prevalence and distribution of HPV genotypes and HPV16 variants in a Swedish CC patient cohort (n=209) were evaluated, as well as HPV influence on patient prognosis. Tumor samples suitable for analysis (n=204) were genotyped using two different real-time PCR methods. HPV16 variant analysis was performed by pyrosequencing. Results showed that HPV prevalence in the total series was 93%. Of the HPV-positive samples, 13% contained multiple infections, typically with two high-risk HPV types together. The primary cure rate for the complete series was 95%. The recurrence rate of the complete series was 28%, and distant recurrences were most frequent (20%). Patients with tumors containing multiple HPV strains, and particularly HPV genotypes belonging to the alpha 7 and 9 species together, had a significantly higher rate of distant tumor recurrences and a worse cancer-specific survival rate.
NASA Astrophysics Data System (ADS)
Bertini, Lorenzo; Gabrielli, Davide; Landim, Claudio
2009-07-01
We consider the weakly asymmetric exclusion process on a bounded interval with particle reservoirs at the endpoints. The hydrodynamic limit for the empirical density, obtained in the diffusive scaling, is given by the viscous Burgers equation with Dirichlet boundary conditions. In the case in which the bulk asymmetry is in the same direction as the drift due to the boundary reservoirs, we prove that the quasi-potential can be expressed in terms of the solution to a one-dimensional boundary value problem which was introduced by Enaud and Derrida [16]. We consider the strong asymmetric limit of the quasi-potential and recover the functional derived by Derrida, Lebowitz, and Speer [15] for the asymmetric exclusion process.
Stochastic species abundance models involving special copulas
NASA Astrophysics Data System (ADS)
Huillet, Thierry E.
2018-01-01
Copulas offer a very general tool to describe the dependence structure of random variables supported by the hypercube. Inspired by problems of species abundances in Biology, we study three distinct toy models where copulas play a key role. In the first, a Marshall-Olkin copula arises in a species extinction model with catastrophe. In the second, a quasi-copula problem arises in a flagged species abundance model. In the third, we study completely random species abundance models in the hypercube: those, not of product type, with uniform margins and singular. These can be understood from a singular copula supported by an inflated simplex. An exchangeable singular Dirichlet copula is also introduced, together with its induced completely random species abundance vector.
Recurrence relations for orthogonal polynomials for PDEs in polar and cylindrical geometries.
Richardson, Megan; Lambers, James V
2016-01-01
This paper introduces two families of orthogonal polynomials on the interval (-1,1), with weight function [Formula: see text]. The first family satisfies the boundary condition [Formula: see text], and the second one satisfies the boundary conditions [Formula: see text]. These boundary conditions arise naturally from PDEs defined on a disk with Dirichlet boundary conditions and the requirement of regularity in Cartesian coordinates. The families of orthogonal polynomials are obtained by orthogonalizing short linear combinations of Legendre polynomials that satisfy the same boundary conditions. Then, the three-term recurrence relations are derived. Finally, it is shown that from these recurrence relations, one can efficiently compute the corresponding recurrences for generalized Jacobi polynomials that satisfy the same boundary conditions.
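The three-term recurrence mechanism underlying such families can be illustrated with the standard Legendre recurrence (n+1)P_{n+1}(x) = (2n+1)x P_n(x) − n P_{n−1}(x); this sketch uses the classical polynomials, not the paper's boundary-adapted ones:

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) via the three-term recurrence
    (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}. Illustrates the recurrence
    mechanism only; the paper's families satisfy different recurrences."""
    p_prev, p = 1.0, x  # P_0(x) = 1, P_1(x) = x
    if n == 0:
        return p_prev
    for k in range(1, n):
        # Shift the window: compute P_{k+1} from P_k and P_{k-1}.
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

print(legendre(2, 0.5))  # P_2(0.5) = (3*0.25 - 1)/2 = -0.125
```

Evaluating by recurrence is numerically stable and costs O(n), which is why three-term recurrences are the workhorse for orthogonal-polynomial computations.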
High-Reproducibility and High-Accuracy Method for Automated Topic Classification
NASA Astrophysics Data System (ADS)
Lancichinetti, Andrea; Sirer, M. Irmak; Wang, Jane X.; Acuna, Daniel; Körding, Konrad; Amaral, Luís A. Nunes
2015-01-01
Much of human knowledge sits in large databases of unstructured text. Leveraging this knowledge requires algorithms that extract and record metadata on unstructured text documents. Assigning topics to documents will enable intelligent searching, statistical characterization, and meaningful classification. Latent Dirichlet allocation (LDA) is the state of the art in topic modeling. Here, we perform a systematic theoretical and numerical analysis that demonstrates that current optimization techniques for LDA often yield results that are not accurate in inferring the most suitable model parameters. Adapting approaches from community detection in networks, we propose a new algorithm that displays high reproducibility and high accuracy and also has high computational efficiency. We apply it to a large set of documents in the English Wikipedia and reveal its hierarchical structure.
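For readers unfamiliar with LDA inference, a minimal collapsed Gibbs sampler conveys the idea on a toy corpus (an illustrative sketch; the reproducibility-oriented algorithm proposed in the paper is different):

```python
import random
from collections import defaultdict

def lda_gibbs(docs, K, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Minimal collapsed Gibbs sampler for LDA (illustrative sketch).
    docs: list of token lists. Returns per-document topic counts."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    z = [[rng.randrange(K) for _ in d] for d in docs]   # topic assignments
    ndk = [[0] * K for _ in docs]                       # doc-topic counts
    nkw = [defaultdict(int) for _ in range(K)]          # topic-word counts
    nk = [0] * K                                        # topic totals
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            k = z[di][wi]; ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                k = z[di][wi]
                ndk[di][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # Resample the topic from its collapsed conditional.
                weights = [(ndk[di][t] + alpha) * (nkw[t][w] + beta)
                           / (nk[t] + V * beta) for t in range(K)]
                k = rng.choices(range(K), weights=weights)[0]
                z[di][wi] = k
                ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk

docs = [["cat", "cat", "dog"], ["dog", "pet", "cat"],
        ["stock", "bond", "stock"], ["stock", "market", "bond"]]
print(lda_gibbs(docs, K=2))
```

On a corpus this small the sampler can land in either labeling of the two topics, which is precisely the kind of multimodality that makes LDA optimization hard to reproduce.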
NASA Astrophysics Data System (ADS)
Zhu, Qiao-Zhen; Fan, En-Gui; Xu, Jian
2017-10-01
The Fokas unified method is used to analyze the initial-boundary value problem of the two-component Gerdjikov-Ivanov equation on the half-line. It is shown that the solution of the initial-boundary problem can be expressed in terms of the solution of a 3 × 3 Riemann-Hilbert problem. The Dirichlet to Neumann map is obtained through the global relation. Supported by grants from the National Science Foundation of China under Grant No. 11671095, the National Science Foundation of China under Grant No. 11501365, the Shanghai Sailing Program supported by the Science and Technology Commission of Shanghai Municipality under Grant No. 15YF1408100, and the Hujiang Foundation of China (B14005)
Regularity gradient estimates for weak solutions of singular quasi-linear parabolic equations
NASA Astrophysics Data System (ADS)
Phan, Tuoc
2017-12-01
This paper studies the Sobolev regularity of weak solutions of a class of singular quasi-linear parabolic problems of the form u_t − div[A(x,t,u,∇u)] = div[F] with homogeneous Dirichlet boundary conditions over bounded spatial domains. Our main focus is on the case in which the vector coefficients A are discontinuous and singular in the (x,t)-variables, and dependent on the solution u. Global and interior weighted W^{1,p}(Ω_T, ω)-regularity estimates are established for weak solutions of these equations, where ω is a weight function in some Muckenhoupt class of weights. The results obtained are new even for linear equations, and for ω = 1, because of the singularity of the coefficients in the (x,t)-variables.
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1987-01-01
The approximation of optimal discrete-time linear quadratic Gaussian (LQG) compensators for distributed parameter control systems with boundary input and unbounded measurement is considered. The approach applies to a wide range of problems that can be formulated in a state space on which both the discrete-time input and output operators are continuous. Approximating compensators are obtained via application of the LQG theory and associated approximation results for infinite dimensional discrete-time control systems with bounded input and output. Numerical results for spline and modal based approximation schemes used to compute optimal compensators for a one dimensional heat equation with either Neumann or Dirichlet boundary control and pointwise measurement of temperature are presented and discussed.
Hamiltonian models for the propagation of irrotational surface gravity waves over a variable bottom.
Compelli, A; Ivanov, R; Todorov, M
2018-01-28
A single incompressible, inviscid, irrotational fluid medium bounded by a free surface and varying bottom is considered. The Hamiltonian of the system is expressed in terms of the so-called Dirichlet-Neumann operators. The equations for the surface waves are presented in Hamiltonian form. Specific scaling of the variables is selected which leads to approximations of Boussinesq and Korteweg-de Vries (KdV) types, taking into account the effect of the slowly varying bottom. The arising KdV equation with variable coefficients is studied numerically when the initial condition is in the form of the one-soliton solution for the initial depth. This article is part of the theme issue 'Nonlinear water waves'. © 2017 The Author(s).
Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images
Zhou, Mingyuan; Chen, Haojun; Paisley, John; Ren, Lu; Li, Lingbo; Xing, Zhengming; Dunson, David; Sapiro, Guillermo; Carin, Lawrence
2013-01-01
Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete, and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, uniformly selected at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature. PMID:21693421
Stochastic Model for Phonemes Uncovers an Author-Dependency of Their Usage.
Deng, Weibing; Allahverdyan, Armen E
2016-01-01
We study rank-frequency relations for phonemes, the minimal units that still relate to linguistic meaning. We show that these relations can be described by the Dirichlet distribution, a direct analogue of the ideal-gas model in statistical mechanics. This description allows us to demonstrate that the rank-frequency relations for the phonemes of a text do depend on its author. The author-dependency effect is not caused by the author's vocabulary (common words used in different texts), and is confirmed by several alternative means. This suggests that it can be directly related to phonemes. These features contrast with rank-frequency relations for words, which are both author- and text-independent and are governed by Zipf's law.
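A Dirichlet-distributed frequency vector, of the kind used here to model phoneme rank-frequency relations, can be generated by the standard Gamma-normalization construction (the concentration parameters below are illustrative, not fitted to phoneme data):

```python
import random

def dirichlet_sample(alphas, rng=random):
    """Draw one sample from a Dirichlet distribution by normalizing
    independent Gamma(alpha_i, 1) draws -- the standard construction."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

random.seed(1)
# Sorting a draw in decreasing order gives a synthetic rank-frequency curve.
freqs = sorted(dirichlet_sample([0.5] * 10), reverse=True)
print(round(sum(freqs), 6))  # 1.0: a Dirichlet draw is a probability vector
```

Small concentration parameters (alpha < 1) produce the steep, few-symbols-dominate profiles typical of empirical rank-frequency curves.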
NASA Technical Reports Server (NTRS)
Johnson, F. T.
1980-01-01
A method for solving the linear integral equations of incompressible potential flow in three dimensions is presented. Both analysis (Neumann) and design (Dirichlet) boundary conditions are treated in a unified approach to the general flow problem. The method is an influence coefficient scheme which employs source and doublet panels as boundary surfaces. Curved panels possessing singularity strengths, which vary as polynomials are used, and all influence coefficients are derived in closed form. These and other features combine to produce an efficient scheme which is not only versatile but eminently suited to the practical realities of a user-oriented environment. A wide variety of numerical results demonstrating the method is presented.
Preconditioned conjugate residual methods for the solution of spectral equations
NASA Technical Reports Server (NTRS)
Wong, Y. S.; Zang, T. A.; Hussaini, M. Y.
1986-01-01
Conjugate residual methods for the solution of spectral equations are described. An inexact finite-difference operator is introduced as a preconditioner in the iterative procedures. Application of these techniques is limited to problems for which the symmetric part of the coefficient matrix is positive definite. Although the spectral equation is a very ill-conditioned and full matrix problem, the computational effort of the present iterative methods for solving such a system is comparable to that for the sparse matrix equations obtained from the application of either finite-difference or finite-element methods to the same problems. Numerical experiments are shown for a self-adjoint elliptic partial differential equation with Dirichlet boundary conditions, and comparison with other solution procedures for spectral equations is presented.
Detection of dominant flow and abnormal events in surveillance video
NASA Astrophysics Data System (ADS)
Kwak, Sooyeong; Byun, Hyeran
2011-02-01
We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method, a kind of feature-based approach so that it does not detect the moving object individually. The proposed algorithm identifies dominant flow without individual object tracking using a latent Dirichlet allocation model in crowded environments. It can also automatically detect and localize an abnormally moving object in real-life video. The performance tests are taken with several real-life databases, and their results show that the proposed algorithm can efficiently detect abnormally moving objects in real time. The proposed algorithm can be applied to any situation in which abnormal directions or abnormal speeds are detected regardless of direction.
Operational Overview for UAS Integration in the NAS Project Flight Test Series 3
NASA Technical Reports Server (NTRS)
Valkov, Steffi B.; Sternberg, Daniel; Marston, Michael
2018-01-01
The National Aeronautics and Space Administration Unmanned Aircraft Systems Integration in the National Airspace System Project has conducted a series of flight tests intended to support the reduction of barriers that prevent unmanned aircraft from flying without the required waivers from the Federal Aviation Administration. The 2015 Flight Test Series 3 supported two separate test configurations. The first configuration investigated the timing of Detect and Avoid alerting thresholds using a radar-equipped unmanned vehicle and multiple live intruders flown at varying encounter geometries.
Some contributions of the Department of Veterans Affairs to the epidemiology of multiple sclerosis.
Kurtzke, J F
2008-09-01
The first class 1 treatment trial ever conducted in multiple sclerosis (MS) was a Veterans Administration Cooperative Study. This led us to explore MS in the military-veteran populations of the United States in three main series: Army men hospitalized with final diagnoses of MS in World War II, all veterans of World War II and the Korean Conflict, and veterans of later service up to 1994. In each series, all cases had been matched with pre-illness military peers. These series provide major information on its clinical features, course and prognosis, including survival, by sex and race (white men and women; black men), as well as risk factors for occurrence, course, and survival. They comprise the only available nationwide morbidity distributions of MS in the United States. Veterans who are service-connected for MS by the Department of Veterans Affairs and matched with their military peers remain a unique and currently available resource for further clinical and epidemiological study of this disease.
Seasonality of childhood infectious diseases in Niono, Mali.
Findley, S E; Medina, D C; Sogoba, N; Guindo, B; Doumbia, S
2010-01-01
Common childhood diseases vary seasonally in Mali, much of the Sahel, and other parts of the world, yet patterns for multiple diseases have rarely been simultaneously described for extended periods at single locations. In this retrospective longitudinal (1996-2004) investigation, we studied the seasonality of malaria, acute respiratory infection and diarrhoea time-series in the district of Niono, Sahelian Mali. We extracted and analysed seasonal patterns from each time-series with the Multiplicative Holt-Winters and Wavelet Transform methods. Subsequently, we considered hypothetical scenarios where successful prevention and intervention measures reduced disease seasonality by 25 or 50% to assess the impact of health programmes on annual childhood morbidity. The results showed that all three disease time-series displayed remarkable seasonal stability. Malaria, acute respiratory infection and diarrhoea peaked in December, March (and September) and August, respectively. Finally, the annual childhood morbidity stemming from each disease diminished 7-26% in the considered hypothetical scenarios. We concluded that seasonality may assist with guiding the development of integrated seasonal disease calendars for programmatic child health promotion activities.
Wang, Zhuo; Jin, Shuilin; Liu, Guiyou; Zhang, Xiurui; Wang, Nan; Wu, Deliang; Hu, Yang; Zhang, Chiping; Jiang, Qinghua; Xu, Li; Wang, Yadong
2017-05-23
The development of single-cell RNA sequencing has enabled profound discoveries in biology, ranging from the dissection of the composition of complex tissues to the identification of novel cell types and dynamics in some specialized cellular environments. However, the large-scale generation of single-cell RNA-seq (scRNA-seq) data collected at multiple time points remains a challenge for effectively measuring gene expression patterns in transcriptome analysis. We present an algorithm based on the Dynamic Time Warping score (DTWscore) combined with time-series data that enables the detection of gene expression changes across scRNA-seq samples and the recovery of potential cell types from complex mixtures of multiple cell types. The DTWscore successfully classifies cells of different types using the most highly variable genes from time-series scRNA-seq data. The study was confined to methods that are implemented and available within the R framework. Sample datasets and R packages are available at https://github.com/xiaoxiaoxier/DTWscore .
NASA Astrophysics Data System (ADS)
Oriani, Fabio
2017-04-01
The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model. Softw., vol. 86, pp. 264-276, http://dx.doi.org/10.1016/j.envsoft.2016.10.002
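The sampling loop described above — scan the training series for a sufficiently similar recent pattern, then copy the value that follows it — can be sketched in a few lines (a toy illustration of the idea; the function name, parameters, and acceptance rule are hypothetical, not the published implementation):

```python
import random

def direct_sampling(train, length, n_neighbors=3, threshold=0.5, seed=0):
    """Toy Direct Sampling simulation of a time series: each new value is
    copied from a position in the training series whose preceding pattern
    is close to the last simulated values (illustrative sketch only)."""
    rng = random.Random(seed)
    sim = list(train[:n_neighbors])          # seed with the start of the training data
    while len(sim) < length:
        pattern = sim[-n_neighbors:]
        best_pos, best_dist = None, float("inf")
        for _ in range(200):                 # random scan of the training set
            i = rng.randrange(n_neighbors, len(train))
            cand = train[i - n_neighbors:i]
            dist = sum(abs(a - b) for a, b in zip(pattern, cand)) / n_neighbors
            if dist < best_dist:
                best_pos, best_dist = i, dist
            if dist <= threshold:            # accept the first close-enough match
                best_pos = i
                break
        sim.append(train[best_pos])          # copy the value following the match
    return sim

train = [0, 1, 2, 3, 2, 1] * 10              # periodic "rainfall" surrogate
out = direct_sampling(train, length=20)
print(len(out))  # 20
```

Because values are copied verbatim from the training set, the simulation reproduces local patterns without any explicit probability model, which is the core appeal of the approach.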
ERIC Educational Resources Information Center
Sebok, Stefanie S.; Luu, King; Klinger, Don A.
2014-01-01
The multiple mini-interview (MMI) has become an increasingly popular admissions method for selecting prospective students into professional programs (e.g., medical school). The MMI uses a series of short, labour intensive simulation stations and scenario interviews to more effectively assess applicants' non-cognitive qualities such as…
Links between Bloom's Taxonomy and Gardener's Multiple Intelligences: The Issue of Textbook Analysis
ERIC Educational Resources Information Center
Tabari, Mahmoud Abdi; Tabari, Iman Abdi
2015-01-01
The major thrust of this research was to investigate the cognitive aspect of the high school textbooks and interchange series, due to their extensive use, through content analysis based on Bloom's taxonomy and Gardner's Multiple Intelligences (MI). This study embraced two perspectives in a grid in order to broaden and deepen the analysis by…
ERIC Educational Resources Information Center
Kaput, James J.
The Educational Technology Center has attempted to develop a series of computer based learning environments to support the learning and application of multiplicative reasoning. The work and software described in this paper, including the teaching experiment that generated the error phenomena examined, is part of a larger ongoing research project.…
MIMIC Methods for Assessing Differential Item Functioning in Polytomous Items
ERIC Educational Resources Information Center
Wang, Wen-Chung; Shih, Ching-Lin
2010-01-01
Three multiple indicators-multiple causes (MIMIC) methods, namely, the standard MIMIC method (M-ST), the MIMIC method with scale purification (M-SP), and the MIMIC method with a pure anchor (M-PA), were developed to assess differential item functioning (DIF) in polytomous items. In a series of simulations, it appeared that all three methods…
ERIC Educational Resources Information Center
Choi, Kilchan
2011-01-01
This report explores a new latent variable regression 4-level hierarchical model for monitoring school performance over time using multisite multiple-cohorts longitudinal data. This kind of data set has a 4-level hierarchical structure: time-series observations nested within students, who are nested within different cohorts of students. These…
Quasi-elastic nuclear scattering at high energies
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Townsend, Lawrence W.; Wilson, John W.
1992-01-01
The quasi-elastic scattering of two nuclei is considered in the high-energy optical model. Energy loss and momentum transfer spectra for projectile ions are evaluated in terms of an inelastic multiple-scattering series corresponding to multiple knockout of target nucleons. The leading-order correction to the coherent projectile approximation is evaluated. Calculations are compared with experiments.
NASA Astrophysics Data System (ADS)
Dimova, Dilyana; Bajorath, Jürgen
2017-07-01
Computational scaffold hopping aims to identify core structure replacements in active compounds. To evaluate scaffold hopping potential from a principal point of view, regardless of the computational methods that are applied, a global analysis of conventional scaffolds in analog series from compound activity classes was carried out. The majority of analog series were found to contain multiple scaffolds, thus enabling the detection of intra-series scaffold hops among closely related compounds. More than 1000 activity classes were found to contain increasing proportions of multi-scaffold analog series. Thus, using such activity classes for scaffold hopping analysis is likely to overestimate the scaffold hopping (core structure replacement) potential of computational methods, due to an abundance of artificial scaffold hops that are possible within analog series.
Parametric, nonparametric and parametric modelling of a chaotic circuit time series
NASA Astrophysics Data System (ADS)
Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.
2000-09-01
The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and model output are caused by an inappropriate model or by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and, if they are rejected, to suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit where we obtain an extremely accurate reconstruction of the observed attractor.
The relative effects on math performance of single- versus multiple-ratio schedules: a case study
Lovitt, Tom C.; Esveldt, Karen A.
1970-01-01
This series of four experiments sought to assess the comparative effects of multiple- versus single-ratio schedules on a pupil's responding to mathematics materials. Experiment I, which alternated between single- and multiple-ratio contingencies, revealed that during the latter phase the subject responded at a higher rate. Similar findings were revealed by Exp. II. The third experiment, which manipulated frequency of reinforcement rather than multiple ratios, revealed that the alteration had a minimal effect on the subject's response rate. A final experiment, conducted to assess further the effects of multiple ratios, provided data similar to those of Exp. I and II. PMID:16795267
Chondrodysplasia with multiple dislocations: comprehensive study of a series of 30 cases.
Ranza, E; Huber, C; Levin, N; Baujat, G; Bole-Feysot, C; Nitschke, P; Masson, C; Alanay, Y; Al-Gazali, L; Bitoun, P; Boute, O; Campeau, P; Coubes, C; McEntagart, M; Elcioglu, N; Faivre, L; Gezdirici, A; Johnson, D; Mihci, E; Nur, B G; Perrin, L; Quelin, C; Terhal, P; Tuysuz, B; Cormier-Daire, V
2017-06-01
The group of chondrodysplasia with multiple dislocations includes several entities, characterized by short stature, dislocation of large joints, hand and/or vertebral anomalies. Other features, such as epiphyseal or metaphyseal changes, cleft palate, intellectual disability are also often part of the phenotype. In addition, several conditions with overlapping features are related to this group and broaden the spectrum. The majority of these disorders have been linked to pathogenic variants in genes encoding proteins implicated in the synthesis or sulfation of proteoglycans (PG). In a series of 30 patients with multiple dislocations, we have performed exome sequencing and subsequent targeted analysis of 15 genes, implicated in chondrodysplasia with multiple dislocations, and related conditions. We have identified causative pathogenic variants in 60% of patients (18/30); when a clinical diagnosis was suspected, this was molecularly confirmed in 53% of cases. Forty percent of patients remain without molecular etiology. Pathogenic variants in genes implicated in PG synthesis are of major importance in chondrodysplasia with multiple dislocations and related conditions. The combination of hand features, growth failure severity, radiological aspects of long bones and of vertebrae allowed discrimination among the different conditions. We propose key diagnostic clues to the clinician. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph
2018-07-01
To simulate the impacts of within-storm rainfall variabilities on fast hydrological processes, long precipitation time series with high temporal resolution are required. Due to the limited availability of observed data, such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve rainfall event statistics which are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation event durations and mean intensities as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular pulse model by constraining the first and last interval of a precipitation event to contain precipitation and (b) continuous sigmoid functions of the multiplicative weights to consider the scale-dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated in its ability to disaggregate observed precipitation events in comparison to existing MRC models. For that, we used a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed markedly better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuation within events). 
The constrained cascade model also slightly outperformed the other MRC models with respect to the intensity-frequency relationship. To assess the performance of the coupled Poisson rectangular pulse and constrained cascade model, precipitation events were stochastically generated by the Poisson rectangular pulse model and then disaggregated by the constrained cascade model. We found that the coupled model performs satisfactorily in terms of the temporal pattern of the precipitation time series, event characteristics and the intensity-frequency relationship.
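The mass-conserving splitting that defines a microcanonical cascade can be illustrated with a toy disaggregator. The one-sided-split probability and the weight distribution below are placeholders; the constrained cascade described above additionally pins the first and last interval of each event and uses scale-dependent sigmoid weights.

```python
import random

def mrc_disaggregate(amount, levels, p_one_sided=0.3, seed=0):
    """Toy microcanonical multiplicative random cascade: each interval's
    rainfall is split into two sub-intervals with weights (w, 1 - w), so the
    total is conserved exactly at every cascade level. With probability
    p_one_sided all rain falls into one half (intermittency)."""
    rng = random.Random(seed)
    series = [amount]
    for _ in range(levels):
        finer = []
        for x in series:
            if x == 0.0:
                finer += [0.0, 0.0]            # dry stays dry
            elif rng.random() < p_one_sided:   # all rain to one side
                finer += [x, 0.0] if rng.random() < 0.5 else [0.0, x]
            else:                              # continuous, mass-conserving split
                w = rng.uniform(0.1, 0.9)
                finer += [w * x, (1.0 - w) * x]
        series = finer
    return series
```

Disaggregating one coarse amount over 5 levels yields 2^5 = 32 fine intervals whose sum equals the input exactly, which is the microcanonical property.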
Short Term Rain Prediction For Sustainability of Tanks in the Tropic Influenced by Shadow Rains
NASA Astrophysics Data System (ADS)
Suresh, S.
2007-07-01
Rainfall and flow prediction, adapting the Venkataraman single time series approach and the Wiener multiple time series approach, was conducted for the Aralikottai tank system and the Kothamangalam tank system, Tamilnadu, India. The results indicated that the raw prediction of daily values is closer to actual values than trend-identified predictions. The sister seasonal time series were more amenable to prediction than the whole parent time series. The Venkataraman single time series approach was more suited to rainfall prediction. The Wiener approach proved better for daily prediction of flow based on rainfall. The major conclusion is that the sister seasonal time series of rain and flow have their own identities even though they form part of the whole parent time series. Further studies with other tropical small watersheds are necessary to establish this unique characteristic of independent but not exclusive behavior of seasonal stationary stochastic processes as compared to parent non-stationary stochastic processes.
Method of multi-channel data readout and acquisition
Degtiarenko, Pavel V.; Popov, Vladimir E.
2010-06-15
A method for dealing with the problem of simultaneous continuous readout of a large number of data channels from a set of multiple sensors in instances where the use of multiple amplitude-to-digital converters is not practical or causes undesirable extra noise and distortion in the data. The new method uses sensor front-ends and subsequent electronics to transform the analog input signals and encode them into a series of short pulses that can be transmitted over a long distance via a high-frequency transmission line without information loss. Upon arrival at a destination data decoder and analyzer device, the series of short pulses can be decoded and transformed back to obtain, store, and utilize the sensor information with the required accuracy.
The spectrum of psychosis in multiple sclerosis: a clinical case series
Gilberthorpe, Thomas G; O’Connell, Kara E; Carolan, Alison; Silber, Eli; Brex, Peter A; Sibtain, Naomi A; David, Anthony S
2017-01-01
Psychosis in the context of multiple sclerosis (MS) has previously been reported as a rare occurrence. However, recent epidemiological studies have found prevalence rates of psychosis in MS that are two to three times higher than those in the general population. Untreated psychosis in patients with MS can adversely impact on adherence to MS medication, levels of disability, and quality of life. This retrospective case series describes the spectrum of psychotic disorders occurring in association with MS using demographic, clinical, and neuroimaging data. In the discussion, we highlight the particular diagnostic and treatment challenges that such disorders can pose for clinicians and through our case vignettes provide examples of potential interventions for this complex patient population. PMID:28203081
Brown Baer, Pamela R.; Wenke, Joseph C.; Thomas, Steven J.; Hale, Colonel Robert G.
2012-01-01
This case series describes craniomaxillofacial battle injuries, currently available surgical techniques, and the compromised outcomes of four service members who sustained severe craniomaxillofacial battle injuries in Iraq or Afghanistan. Demographic information, diagnostic evaluation, surgical procedures, and outcomes were collected and detailed with a follow-up of over 2 years. Reconstructive efforts with advanced, multidisciplinary, and multiple revision procedures were indicated; the full scope of conventional surgical options and resources were utilized. Patients experienced surgical complications, including postoperative wound dehiscence, infection, flap failure, inadequate mandibular healing, and failure of fixation. These complications required multiple revisions and salvage interventions. In addition, facial burns complicated reconstructive efforts by delaying treatment, decreasing surgical options, and increasing procedural numbers. All patients, despite multiple surgeries, continue to have functional and aesthetic deficits as a result of their injuries. Currently, no conventional treatments are available to satisfactorily reconstruct the face severely ravaged by explosive devices to an acceptable level, much less to natural form and function. PMID:24294409
Hopke, P K; Liu, C; Rubin, D B
2001-03-01
Many chemical and environmental data sets are complicated by the existence of fully missing values or censored values known to lie below detection thresholds. For example, week-long samples of airborne particulate matter were obtained at Alert, NWT, Canada, between 1980 and 1991, where some of the concentrations of 24 particulate constituents were coarsened in the sense of being either fully missing or below detection limits. To facilitate scientific analysis, it is appealing to create complete data by filling in missing values so that standard complete-data methods can be applied. We briefly review commonly used strategies for handling missing values and focus on the multiple-imputation approach, which generally leads to valid inferences when faced with missing data. Three statistical models are developed for multiply imputing the missing values of airborne particulate matter. We expect that these models are useful for creating multiple imputations in a variety of incomplete multivariate time series data sets.
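The fill-in mechanics described above can be sketched with a deliberately crude imputation model (uniform draws below the detection limit for censored values, resampling of observed values for fully missing ones). The paper's three statistical models are far richer; what the sketch shows is only the shared machinery of creating m completed data sets and pooling them, with the spread across imputations reflecting imputation uncertainty (the between-imputation component of Rubin's rules).

```python
import random
import statistics

def multiply_impute(obs, n_censored, n_missing, dl, m=5, seed=0):
    """Toy multiple imputation for a batch of concentrations: censored values
    are drawn uniformly below the detection limit dl, fully missing values
    are resampled from the observed data; each of the m completed data sets
    yields one estimate of the mean, which are then pooled."""
    rng = random.Random(seed)
    means = []
    for _ in range(m):
        completed = list(obs)
        completed += [rng.uniform(0.0, dl) for _ in range(n_censored)]   # below DL
        completed += [rng.choice(obs) for _ in range(n_missing)]         # fully missing
        means.append(statistics.fmean(completed))
    pooled = statistics.fmean(means)
    between_var = statistics.variance(means) if m > 1 else 0.0
    return pooled, between_var
```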
Analysis of the thermal balance characteristics for multiple-connected piezoelectric transformers.
Park, Joung-Hu; Cho, Bo-Hyung; Choi, Sung-Jin; Lee, Sang-Min
2009-08-01
Because the amount of power that a piezoelectric transformer (PT) can handle is limited, multiple connections of PTs are necessary to improve the power capacity of PT applications. In such connections, thermal imbalance between the PTs should be prevented to avoid thermal runaway of each PT. The thermal balance of the multiple-connected PTs is dominantly affected by the electrothermal characteristics of the individual PTs. In this paper, the thermal balance of both parallel-parallel and parallel-series connections is analyzed by electrical model parameters. For quantitative analysis, the thermal-balance effects are estimated by simulation of the mechanical loss ratio between the PTs. The analysis results show that, with PTs of similar characteristics, the parallel-series connection has better thermal balance characteristics due to the reduced mechanical loss of the higher-temperature PT. For experimental verification of the analysis, a hardware-prototype test of a Cs-Lp type 40 W adapter system with radial-vibration mode PTs has been performed.
Stochastic nature of Landsat MSS data
NASA Technical Reports Server (NTRS)
Labovitz, M. L.; Masuoka, E. J.
1987-01-01
A multiple series generalization of the ARIMA models is used to model Landsat MSS scan lines as sequences of vectors, each vector having four elements (bands). The purpose of this work is to investigate whether Landsat scan lines can be described by a general multiple series linear stochastic model and whether the coefficients of such a model vary as a function of satellite system and target attributes. To accomplish this objective, an exploratory experimental design was set up incorporating six factors, four representing target attributes - location, cloud cover, row (within location), and column (within location) - and two representing system attributes - satellite number and detector bank. Each factor was included in the design at two levels and, with two replicates per treatment, 128 scan lines were analyzed. The results of the analysis suggest that a multiple AR(4) model is an adequate representation across all scan lines. Furthermore, the coefficients of the AR(4) model vary with location, particularly changes in physiography (slope regimes), and with percent cloud cover, but are insensitive to changes in system attributes.
CCD Measurements of Double and Multiple Stars at NAO Rozhen and ASV in 2015
NASA Astrophysics Data System (ADS)
Cvetković, Z.; Pavlović, R.; Boeva, S.
2017-04-01
Results of CCD observations of 154 double or multiple stars, made with the 2 m telescope of the Bulgarian National Astronomical Observatory at Rozhen over five nights in 2015, are presented. This is the ninth series of measurements of CCD frames obtained at Rozhen. We also present results of CCD observations of 323 double or multiple stars made with the 0.6 m telescope of the Serbian Astronomical Station on the mountain of Vidojevica over 23 nights in 2015. This is the fourth series of measurements of CCD frames obtained at this station. This paper contains the results for the position angle and angular separation for 801 pairs and residuals for 127 pairs with published orbital elements or linear solutions. The angular separations are in the range from 1.″52 to 201.″56, with a median angular separation of 8.″26. We also present eight pairs that are measured for the first time and linear elements for five pairs.
Yudoyono, Farid; Sidabutar, Roland; Arifin, Muhammad Zafrullah; Faried, Ahmad
2015-01-01
Multiple histopathology of meningioma is a condition in which the patient has more than one histopathological type of meningioma in different intracranial locations, with or without signs of neurofibromatosis. Meningiomas are the most common non-glial primary intracranial tumors; their prevalence among operated tumors is around 13–19%. They may occur at any age, but have a peak incidence around 45 years of age. The incidence of multiple intracranial meningiomas varies from 1% to 10% in different series, and the frequency of multiple meningiomas without neurofibromatosis was reported to be <3%. PMID:26425174
Medina, Daniel C.; Findley, Sally E.; Guindo, Boubacar; Doumbia, Seydou
2007-01-01
Background Much of the developing world, particularly sub-Saharan Africa, exhibits high levels of morbidity and mortality associated with diarrhea, acute respiratory infection, and malaria. With the increasing awareness that the aforementioned infectious diseases impose an enormous burden on developing countries, public health programs therein could benefit from parsimonious general-purpose forecasting methods to enhance infectious disease intervention. Unfortunately, these disease time-series often i) suffer from non-stationarity; ii) exhibit large inter-annual plus seasonal fluctuations; and, iii) require disease-specific tailoring of forecasting methods. Methodology/Principal Findings In this longitudinal retrospective (01/1996–06/2004) investigation, diarrhea, acute respiratory infection of the lower tract, and malaria consultation time-series are fitted with a general-purpose econometric method, namely the multiplicative Holt-Winters, to produce contemporaneous on-line forecasts for the district of Niono, Mali. This method accommodates seasonal, as well as inter-annual, fluctuations and produces reasonably accurate median 2- and 3-month horizon forecasts for these non-stationary time-series, i.e., 92% of the 24 time-series forecasts generated (2 forecast horizons, 3 diseases, and 4 age categories = 24 time-series forecasts) have mean absolute percentage errors circa 25%. 
Conclusions/Significance The multiplicative Holt-Winters forecasting method: i) performs well across diseases with dramatically distinct transmission modes and hence it is a strong general-purpose forecasting method candidate for non-stationary epidemiological time-series; ii) obliquely captures prior non-linear interactions between climate and the aforementioned disease dynamics thus, obviating the need for more complex disease-specific climate-based parametric forecasting methods in the district of Niono; furthermore, iii) readily decomposes time-series into seasonal components thereby potentially assisting with programming of public health interventions, as well as monitoring of disease dynamics modification. Therefore, these forecasts could improve infectious diseases management in the district of Niono, Mali, and elsewhere in the Sahel. PMID:18030322
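The multiplicative Holt-Winters recursions used above can be written compactly: a smoothed level and trend, plus a seasonal factor that multiplies (rather than adds to) the de-trended level. The smoothing constants and the simple first-cycle initialization below are illustrative defaults, not values fitted to the Niono series.

```python
def holt_winters_multiplicative(y, period, alpha=0.3, beta=0.1, gamma=0.2, horizon=3):
    """Multiplicative Holt-Winters forecasting for a positive seasonal series y.
    Initialization uses first- and second-cycle averages for level and trend."""
    level = sum(y[:period]) / period
    trend = (sum(y[period:2 * period]) - sum(y[:period])) / period ** 2
    season = [y[i] / level for i in range(period)]
    for t in range(period, len(y)):
        prev_level = level
        level = alpha * y[t] / season[t % period] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % period] = gamma * y[t] / level + (1 - gamma) * season[t % period]
    # h-step-ahead forecasts: extrapolated trend times the matching seasonal factor
    return [(level + h * trend) * season[(len(y) + h - 1) % period]
            for h in range(1, horizon + 1)]
```

The seasonal index of each forecast wraps around the period, which is what lets the method decompose a series into trend and seasonal components while accommodating inter-annual growth.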
ERIC Educational Resources Information Center
Rattanavich, Saowalak
2013-01-01
This study is aimed at comparing the effects of teaching English to Thai undergraduate teacher-students through cross-curricular thematic instruction program based on multiple intelligence theory and through conventional instruction. Two experimental groups, which utilized Randomized True Control Group-Pretest-posttest Time Series Design and…
Using Advice from Multiple Sources to Revise and Improve Judgments
ERIC Educational Resources Information Center
Yaniv, Ilan; Milyavsky, Maxim
2007-01-01
How might people revise their opinions on the basis of multiple pieces of advice? What sort of gains could be obtained from rules for using advice? In the present studies judges first provided their initial estimates for a series of questions; next they were presented with several (2, 4, or 8) opinions from an ecological pool of advisory estimates…
ERIC Educational Resources Information Center
Dissemination and Assessment Center for Bilingual Education, Austin, TX.
This is one of a series of student booklets designed for use in a bilingual mathematics program in grades 6-8. The general format is to present each page in both Spanish and English. The mathematical topics in this booklet include multiplication and division. (MK)
Multiple-scattering model for inclusive proton production in heavy ion collisions
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.
1994-01-01
A formalism is developed for evaluating the momentum distribution for proton production in nuclear abrasion during heavy ion collisions using the Glauber multiple-scattering series. Several models for the one-body density matrix of nuclei are considered for performing numerical calculations. Calculations for the momentum distribution of protons in abrasion are compared with experimental data for inclusive proton production.
NASA Technical Reports Server (NTRS)
Giesy, D. P.
1978-01-01
A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage to both limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
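The strategy of tracing Pareto optima through a series of single-objective solves with acceptability thresholds can be sketched for a finite candidate set (the objectives and grid below are invented for illustration; the paper treats general constrained problems):

```python
def pareto_by_thresholds(f1, f2, candidates, thresholds):
    """For each acceptability threshold on f2, minimize f1 over the candidates
    satisfying f2(x) <= threshold; each single-objective solve contributes one
    (weakly) Pareto-optimal point to the front."""
    front = set()
    for eps in thresholds:
        feasible = [x for x in candidates if f2(x) <= eps]
        if feasible:                      # the threshold limits the area of search
            best = min(feasible, key=f1)
            front.add((f1(best), f2(best)))
    return sorted(front)
```

Tightening the threshold trades a worse f1 value for a better f2 value, sweeping out the trade-off curve one single-objective problem at a time.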
A Quantitative and Combinatorial Approach to Non-Linear Meanings of Multiplication
ERIC Educational Resources Information Center
Tillema, Erik; Gatza, Andrew
2016-01-01
We provide a conceptual analysis of how combinatorics problems have the potential to support students to establish non-linear meanings of multiplication (NLMM). The problems we analyze we have used in a series of studies with 6th, 8th, and 10th grade students. We situate the analysis in prior work on students' quantitative and multiplicative…
ERIC Educational Resources Information Center
Morris, Maureen Batza
1995-01-01
The tree drawings of 80 subjects, who were diagnosed with either multiple personality disorder, schizophrenia, or major depression, and a control group, were rated. Patterns were examined and graphs were used to depict results. Certain features were found to distinguish each category. The descriptive statistical findings were both consistent and…
ERIC Educational Resources Information Center
Knowling, Wynn; And Others
This group of papers was presented as part of a symposium entitled "Classroom Observation of Students and Teachers (COST): A Multiple Payoff Approach to Inservice Training." The first paper, "Films for Inservice Teacher Training: A Miniworkshop," outlines the rationale and development of the film series of which the film, Consequences of Behavior,…
A harmonic linear dynamical system for prominent ECG feature extraction.
Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc
2014-01-01
Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted from preprocessing analysis of multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The comprehensible and interpretable features discovered by the proposed feature extraction methodology underpin the accuracy and reliability of the clustering results. In particular, empirical evaluation demonstrates the improved clustering performance of the proposed method compared to previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.
Progressive multiple sclerosis: prospects for disease therapy, repair, and restoration of function.
Ontaneda, Daniel; Thompson, Alan J; Fox, Robert J; Cohen, Jeffrey A
2017-04-01
Multiple sclerosis is a major cause of neurological disability, which accrues predominantly during progressive forms of the disease. Although development of multifocal inflammatory lesions is the underlying pathological process in relapsing-remitting multiple sclerosis, the gradual accumulation of disability that characterises progressive multiple sclerosis seems to result more from diffuse immune mechanisms and neurodegeneration. As a result, the 14 anti-inflammatory drugs that have regulatory approval for treatment of relapsing-remitting multiple sclerosis have little or no efficacy in progressive multiple sclerosis without inflammatory lesion activity. Effective therapies for progressive multiple sclerosis that prevent worsening, reverse damage, and restore function are a major unmet need. In this Series paper we summarise the current status of therapy for progressive multiple sclerosis and outline prospects for the future. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Boshuizen, Henny P. A.; Bongaerts, Maureen Machiels; van de Wiel, Margaretha W. J.; Schmidt, Henk G.
The effects of experience with a series of similar cases on knowledge restructuring and learning from text were studied in a longitudinal design. Two groups of fourth-year medical students were confronted with a series of cases, some of them having the same underlying disease. The cases were interspersed with fillers, and each set of cases had…
A method for analyzing temporal patterns of variability of a time series from Poincare plots.
Fishman, Mikkel; Jacono, Frank J; Park, Soojin; Jamasebi, Reza; Thungtong, Anurak; Loparo, Kenneth A; Dick, Thomas E
2012-07-01
The Poincaré plot is a popular two-dimensional time-series analysis tool because of its intuitive display of dynamic system behavior. Poincaré plots have been used to visualize heart rate and respiratory pattern variabilities. However, conventional quantitative analysis relies primarily on statistical measurements of the cumulative distribution of points, making it difficult to interpret irregular or complex plots. Moreover, the plots are constructed to reflect highly correlated regions of the time series, reducing the amount of nonlinear information that is presented and thereby hiding potentially relevant features. We propose temporal Poincaré variability (TPV), a novel analysis methodology that uses standard techniques to quantify the temporal distribution of points and to detect nonlinear sources responsible for physiological variability. In addition, the analysis is applied across multiple time delays, yielding richer insight into system dynamics than the traditional circle return plot. The method is applied to data sets of R-R intervals and to synthetic point process data extracted from the Lorenz time series. The results demonstrate that TPV complements the traditional analysis, can be applied more generally (including to Poincaré plots with multiple clusters) and more consistently than the conventional measures, and can address questions regarding potential structure underlying the variability of a data set.
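The conventional descriptors that TPV extends can be computed directly. This sketch returns the standard SD1/SD2 ellipse widths of a lag-m Poincaré plot; evaluating it across several lags mimics the multi-delay analysis, though TPV itself quantifies the temporal, not just cumulative, distribution of points.

```python
import math
import statistics

def poincare_sd(x, lag=1):
    """SD1/SD2 descriptors of the lag-m Poincare plot of series x: SD1 is the
    spread perpendicular to the line of identity (short-term variability),
    SD2 the spread along it (long-term variability)."""
    pairs = list(zip(x[:-lag], x[lag:]))
    sd1 = statistics.pstdev((b - a) / math.sqrt(2) for a, b in pairs)
    sd2 = statistics.pstdev((b + a) / math.sqrt(2) for a, b in pairs)
    return sd1, sd2
```

For a smooth series, SD1 grows with the lag while SD2 stays roughly constant, which is one reason a multi-delay view is more informative than the lag-1 circle return plot alone.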
Logé, David; De Coster, Olivier; Washburn, Stephanie
2012-07-01
The use of multiple cylindrical leads and multicolumn and single column paddle leads in spinal cord stimulation offers many advantages over the use of a single cylindrical lead. Despite these advantages, placement of multiple cylindrical leads or a paddle lead requires a more invasive surgical procedure. Thus, the ideal situation for lead delivery would be percutaneous insertion of a paddle lead or multiple cylindrical leads. This study evaluated the feasibility and safety of percutaneous delivery of S-Series paddle leads using a new delivery device called the Epiducer lead delivery system (all St. Jude Medical Neuromodulation Division, Plano, TX, USA). This uncontrolled, open-label, prospective, two-center study approved by the AZ St. Lucas (Ghent) Ethics Committee evaluated procedural aspects of implantation of an S-Series paddle lead using the Epiducer lead delivery system and any adverse events relating to the device. Efficacy data during the patient's 30-day trial were also collected. Data from 34 patients were collected from two investigational sites. There were no adverse events related to the Epiducer lead delivery system. The device was inserted at an angle of either 20°-30° or 30°-40° and was entered into the epidural space at T12/L1 in most patients. The S-Series paddle lead was advanced four vertebral segments in more than 50% of patients. The average (±standard deviation [SD]) time it took to place the Epiducer lead delivery system was 8.7 (±5.0) min. The average (±SD) patient-reported pain relief was 78.8% (±24.1%). This study suggests the safe use of the Epiducer lead delivery system for percutaneous implantation and advancement of the S-Series paddle lead in 34 patients. © 2012 International Neuromodulation Society.
Inflatable artificial sphincter - series (image)
... sphincter dysfunction related to spinal cord injury or multiple sclerosis. Most experts advise their patients to try medication and bladder retraining therapy first before resorting to this treatment. Alternatives to ...
Vogtmann, Emily; Hua, Xing; Zhou, Liang; Wan, Yunhu; Suman, Shalabh; Zhu, Bin; Dagnall, Casey L; Hutchinson, Amy; Jones, Kristine; Hicks, Belynda D; Sinha, Rashmi; Shi, Jianxin; Abnet, Christian C
2018-05-01
Background: Few studies have prospectively evaluated the association between oral microbiota and health outcomes. Precise estimates of the intrasubject microbial metric stability will allow better study planning. Therefore, we conducted a study to evaluate the temporal variability of oral microbiota. Methods: Forty individuals provided six oral samples using the OMNIgene ORAL kit and Scope mouthwash oral rinses approximately every two months over 10 months. DNA was extracted using the QIAsymphony and the V4 region of the 16S rRNA gene was amplified and sequenced using the MiSeq. To estimate temporal variation, we calculated intraclass correlation coefficients (ICCs) for a variety of metrics and examined stability after clustering samples into distinct community types using Dirichlet multinomial models (DMMs). Results: The ICCs for the alpha diversity measures were high, including for number of observed bacterial species [0.74; 95% confidence interval (CI): 0.65-0.82 and 0.79; 95% CI: 0.75-0.94] from OMNIgene ORAL and Scope mouthwash, respectively. The ICCs for the relative abundance of the top four phyla and beta diversity matrices were lower. Three clusters provided the best model fit for the DMM from the OMNIgene ORAL samples, and the probability of remaining in a specific cluster was high (59.5%-80.7%). Conclusions: The oral microbiota appears to be stable over time for multiple metrics, but some measures, particularly relative abundance, were less stable. Impact: We used this information to calculate stability-adjusted power calculations that will inform future field study protocols and experimental analytic designs. Cancer Epidemiol Biomarkers Prev; 27(5); 594-600. ©2018 American Association for Cancer Research (AACR).
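The intraclass correlation at the heart of the stability analysis can be sketched as follows. This is the textbook one-way random-effects ICC(1,1), which may differ in detail from the estimator the study actually used; the toy data are illustrative.

```python
# Sketch: one-way random-effects intraclass correlation coefficient ICC(1,1),
# the kind of repeat-sample stability metric the study reports.
def icc_oneway(subjects):
    """subjects: list of per-subject lists, each with k repeated measurements."""
    k = len(subjects[0])
    n = len(subjects)
    grand = sum(sum(s) for s in subjects) / (n * k)
    means = [sum(s) / k for s in subjects]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)   # between-subject
    msw = sum((x - m) ** 2
              for s, m in zip(subjects, means) for x in s) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfectly reproducible repeats give ICC = 1; pure noise gives ICC near 0.
stable = [[10, 10], [20, 20], [30, 30]]
print(icc_oneway(stable))  # -> 1.0
```

High ICCs like those reported (0.74-0.79) mean most variance is between subjects rather than between a subject's repeated samples, which is what justifies single-sample field protocols.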
Clusternomics: Integrative context-dependent clustering for heterogeneous datasets.
Gabasova, Evelina; Reid, John; Wernisch, Lorenz
2017-10-01
Integrative clustering is used to identify groups of samples by jointly analysing multiple datasets describing the same set of biological samples, such as gene expression, copy number, methylation etc. Most existing algorithms for integrative clustering assume that there is a shared consistent set of clusters across all datasets, and most of the data samples follow this structure. However in practice, the structure across heterogeneous datasets can be more varied, with clusters being joined in some datasets and separated in others. In this paper, we present a probabilistic clustering method to identify groups across datasets that do not share the same cluster structure. The proposed algorithm, Clusternomics, identifies groups of samples that share their global behaviour across heterogeneous datasets. The algorithm models clusters on the level of individual datasets, while also extracting global structure that arises from the local cluster assignments. Clusters on both the local and the global level are modelled using a hierarchical Dirichlet mixture model to identify structure on both levels. We evaluated the model both on simulated and on real-world datasets. The simulated data exemplifies datasets with varying degrees of common structure. In such a setting Clusternomics outperforms existing algorithms for integrative and consensus clustering. In a real-world application, we used the algorithm for cancer subtyping, identifying subtypes of cancer from heterogeneous datasets. We applied the algorithm to TCGA breast cancer dataset, integrating gene expression, miRNA expression, DNA methylation and proteomics. The algorithm extracted clinically meaningful clusters with significantly different survival probabilities. We also evaluated the algorithm on lung and kidney cancer TCGA datasets with high dimensionality, again showing clinically significant results and scalability of the algorithm.
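The building block of such Dirichlet mixture modelling can be sketched with the Dirichlet-multinomial log-likelihood of one cluster component; a sample is assigned to whichever cluster's parameters best explain it. The parameter values below are illustrative, not Clusternomics' actual estimates, and the constant multinomial coefficient is omitted since it cancels when comparing clusters for the same counts.

```python
# Sketch: log-likelihood of a count vector under one Dirichlet-multinomial
# cluster (the per-dataset component of a hierarchical Dirichlet mixture).
from math import lgamma

def dirichlet_multinomial_loglik(counts, alpha):
    """log P(counts | alpha), up to the constant multinomial coefficient."""
    n = sum(counts)
    a0 = sum(alpha)
    ll = lgamma(a0) - lgamma(n + a0)
    for c, a in zip(counts, alpha):
        ll += lgamma(c + a) - lgamma(a)
    return ll

# Skewed counts are better explained by a similarly skewed cluster.
skewed = dirichlet_multinomial_loglik([9, 1], alpha=[9.0, 1.0])
uniform = dirichlet_multinomial_loglik([9, 1], alpha=[5.0, 5.0])
```

Comparing such component likelihoods across clusters, locally per dataset and globally across datasets, is what lets the hierarchical model represent clusters that merge in one dataset and split in another.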
Hierarchical brain mapping via a generalized Dirichlet solution for mapping brain manifolds
NASA Astrophysics Data System (ADS)
Joshi, Sarang C.; Miller, Michael I.; Christensen, Gary E.; Banerjee, Ayan; Coogan, Tom; Grenander, Ulf
1995-08-01
In this paper we present a coarse-to-fine approach for the transformation of digital anatomical textbooks from the ideal to the individual that unifies the work on landmark deformations and volume-based transformation. The hierarchical approach is linked to the biological problem itself, arising from the various kinds of information provided by anatomists. This information takes the form of points, lines, surfaces, and sub-volumes, corresponding to 0-, 1-, 2-, and 3-dimensional sub-manifolds, respectively. The algorithm is driven by these sub-manifolds. We follow the approach that the highest-dimensional transformation results from the solution of a sequence of lower-dimensional problems driven by successive refinements or partitions of the images into various biologically meaningful sub-structures.
Identifying synonymy between relational phrases using word embeddings.
Nguyen, Nhung T H; Miwa, Makoto; Tsuruoka, Yoshimasa; Tojo, Satoshi
2015-08-01
Many text mining applications in the biomedical domain benefit from automatic clustering of relational phrases into synonymous groups, since it alleviates the problem of spurious mismatches caused by the diversity of natural language expressions. Most of the previous work that has addressed this task of synonymy resolution uses similarity metrics between relational phrases based on textual strings or dependency paths, which, for the most part, ignore the context around the relations. To overcome this shortcoming, we employ a word embedding technique to encode relational phrases. We then apply the k-means algorithm on top of the distributional representations to cluster the phrases. Our experimental results show that this approach outperforms state-of-the-art statistical models including latent Dirichlet allocation and Markov logic networks. Copyright © 2015 Elsevier Inc. All rights reserved.
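The clustering step can be illustrated with a minimal k-means over phrase vectors. In the paper the vectors are word-embedding representations of relational phrases; the toy 2-D vectors and the deterministic initialization here are illustrative simplifications (k-means++ initialization would be used in practice).

```python
# Sketch: k-means over vector representations of relational phrases.
def kmeans(vectors, k, iters=20):
    centers = [list(v) for v in vectors[:k]]        # deterministic init
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:                            # assign to nearest center
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(v, centers[i])))
            groups[j].append(v)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]    # recompute centroids
    return centers, groups

# Two toy "synonym" groups of phrase vectors
phrases = [(0.1, 0.0), (0.2, 0.1), (0.0, 0.1),
           (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, groups = kmeans(phrases, k=2)
```

Because the embeddings encode surrounding context, phrases with different surface strings but similar usage end up near each other and are grouped together, which is the effect the evaluation measures.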
The Effect of Multigrid Parameters in a 3D Heat Diffusion Equation
NASA Astrophysics Data System (ADS)
Oliveira, F. De; Franco, S. R.; Pinto, M. A. Villela
2018-02-01
The aim of this paper is to reduce the CPU time necessary to solve the three-dimensional heat diffusion equation with Dirichlet boundary conditions. The finite difference method (FDM) is used to discretize the differential equations with a second-order accurate central difference scheme (CDS). The systems of algebraic equations are solved using the lexicographic and red-black Gauss-Seidel methods, combined with the geometric multigrid method with a correction scheme (CS) and V-cycle. Comparisons are made between two types of restriction: injection and full weighting. The prolongation process used is trilinear interpolation. This work studies the influence of the number of smoothing steps (v), the number of mesh levels (L), and the number of unknowns (N) on the CPU time, together with an analysis of algorithm complexity.
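The ingredients the paper compares can be sketched in their one-dimensional analogues: lexicographic Gauss-Seidel smoothing, injection versus full-weighting restriction, linear-interpolation prolongation (the 1-D counterpart of trilinear), and one correction-scheme two-grid cycle for -u'' = f with homogeneous Dirichlet boundaries. This is an illustrative reduction of the 3-D setup, not the authors' code.

```python
# Sketch: one two-grid correction-scheme (CS) cycle for 1-D Poisson.
def gauss_seidel(u, f, h, sweeps):
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):               # lexicographic sweep
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def restrict(r, full_weighting=True):
    n = (len(r) - 1) // 2
    if full_weighting:
        return [0.0] + [0.25 * r[2*i - 1] + 0.5 * r[2*i] + 0.25 * r[2*i + 1]
                        for i in range(1, n)] + [0.0]
    return [r[2 * i] for i in range(n + 1)]          # injection

def prolong(e):
    fine = [0.0] * (2 * (len(e) - 1) + 1)
    for i, v in enumerate(e):
        fine[2 * i] = v
    for i in range(1, len(fine) - 1, 2):             # linear interpolation
        fine[i] = 0.5 * (fine[i - 1] + fine[i + 1])
    return fine

def twogrid_cycle(u, f, h, nu=2):
    u = gauss_seidel(u, f, h, nu)                    # pre-smoothing
    r = [0.0] + [f[i] + (u[i - 1] - 2 * u[i] + u[i + 1]) / (h * h)
                 for i in range(1, len(u) - 1)] + [0.0]
    ec = gauss_seidel([0.0] * ((len(u) - 1) // 2 + 1), restrict(r), 2 * h, 50)
    u = [a + b for a, b in zip(u, prolong(ec))]      # coarse-grid correction
    return gauss_seidel(u, f, h, nu)                 # post-smoothing

n, h = 16, 1.0 / 16
f = [1.0] * (n + 1)
u = twogrid_cycle([0.0] * (n + 1), f, h)
res = max(abs(f[i] + (u[i - 1] - 2 * u[i] + u[i + 1]) / (h * h))
          for i in range(1, n))
# one cycle drives the max-norm residual well below its initial value of 1
```

Recursing on the coarse solve instead of smoothing it directly gives the V-cycle, and the cost trade-offs among v, L, and N are exactly what the paper measures.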
An Eigenvalue Analysis of finite-difference approximations for hyperbolic IBVPs
NASA Technical Reports Server (NTRS)
Warming, Robert F.; Beam, Richard M.
1989-01-01
The eigenvalue spectrum associated with a linear finite-difference approximation plays a crucial role in the stability analysis and in the actual computational performance of the discrete approximation. The eigenvalue spectrum associated with the Lax-Wendroff scheme applied to a model hyperbolic equation was investigated. For an initial-boundary-value problem (IBVP) on a finite domain, the eigenvalue or normal mode analysis is analytically intractable. A study of auxiliary problems (Dirichlet and quarter-plane) leads to asymptotic estimates of the eigenvalue spectrum and to an identification of individual modes as either benign or unstable. The asymptotic analysis establishes an intuitive as well as quantitative connection between the algebraic tests in the theory of Gustafsson, Kreiss, and Sundstrom and Lax-Richtmyer L(sub 2) stability on a finite domain.
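For the periodic counterpart of the model problem, the eigenvalues of the Lax-Wendroff update operator are given exactly by its Fourier symbol, which makes the CFL stability threshold easy to check numerically; the finite-domain Dirichlet and quarter-plane spectra discussed above have no such closed form, which is why the asymptotic analysis is needed. A sketch, with nu denoting the CFL number a·dt/dx:

```python
# Sketch: Fourier (von Neumann) symbol of Lax-Wendroff for u_t + a u_x = 0.
# On a periodic domain these symbol values are the update operator's
# eigenvalues; |g| <= 1 for all modes iff |nu| <= 1.
import math

def lw_symbol(theta, nu):
    """Amplification factor at Fourier angle theta."""
    return 1 - 1j * nu * math.sin(theta) - nu ** 2 * (1 - math.cos(theta))

def max_amplification(nu, m=400):
    return max(abs(lw_symbol(2 * math.pi * k / m, nu)) for k in range(m))

stable = max_amplification(0.8)    # CFL satisfied: no mode grows
unstable = max_amplification(1.2)  # CFL violated: high-frequency modes grow
```

The benign-versus-unstable mode classification in the abstract refines this picture for the finite domain, where boundary conditions perturb the periodic spectrum.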
Event-triggered synchronization for reaction-diffusion complex networks via random sampling
NASA Astrophysics Data System (ADS)
Dong, Tao; Wang, Aijuan; Zhu, Huiyun; Liao, Xiaofeng
2018-04-01
In this paper, the synchronization problem of reaction-diffusion complex networks (RDCNs) with Dirichlet boundary conditions is considered, where the data are sampled randomly. An event-triggered controller based on the sampled data is proposed, which can reduce the number of controller updates and the communication load. Under this strategy, the synchronization problem of the diffusion complex network is equivalently converted to the stability problem of a reaction-diffusion complex dynamical system with time delay. By using the matrix inequality technique and the Lyapunov method, synchronization conditions for the RDCNs are derived, which are dependent on the diffusion term. Moreover, it is found that the proposed control strategy naturally rules out Zeno behavior. Finally, a numerical example is given to verify the obtained results.
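The core mechanism, random sampling plus an event-trigger that decides whether the sampled state is worth transmitting to the controller, can be sketched on a scalar toy system. This is not the paper's RDCN model; the dynamics, gain, and threshold below are illustrative assumptions.

```python
# Sketch: event-triggered control with random sampling instants on a scalar
# unstable plant x' = a*x + u. The controller input is updated only when the
# sampling error exceeds a relative threshold (the "event").
import random

def simulate(steps=2000, dt=0.01, a=0.5, kgain=2.0, sigma=0.05, seed=1):
    rng = random.Random(seed)
    x, events = 1.0, 0
    x_held = x                                  # last value sent to controller
    for _ in range(steps):
        if rng.random() < 0.3:                  # random sampling instant
            if abs(x - x_held) > sigma * abs(x):    # event-trigger condition
                x_held = x                      # transmit / update controller
                events += 1
        u = -kgain * x_held                     # control based on held sample
        x += dt * (a * x + u)                   # Euler step of the plant
    return x, events

x_final, events = simulate()
# the state is regulated toward 0 using far fewer updates than time steps
```

Only a fraction of the randomly sampled states trigger a transmission, which is the communication saving the paper quantifies for the network setting.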
Hypergeometric Forms for Ising-Class Integrals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.; Borwein, David; Borwein, Jonathan M.
2006-07-01
We apply experimental-mathematical principles to analyze certain integrals relevant to the Ising theory of solid-state physics. We find representations of these integrals in terms of Meijer G-functions and nested Barnes integrals. Our investigations began by computing 500-digit numerical values of Cn,k, namely a 2-D array of Ising integrals for all integers n, k where n is in [2,12] and k is in [0,25]. We found that some Cn,k enjoy exact evaluations involving Dirichlet L-functions or the Riemann zeta function. In the process of analyzing hypergeometric representations, we found, experimentally and strikingly, that the Cn,k almost certainly satisfy certain inter-indicial relations, including discrete k-recursions. Using generating functions, differential theory, complex analysis, and Wilf-Zeilberger algorithms we are able to prove some central cases of these relations.
Simple diffusion can support the pitchfork, the flip bifurcations, and the chaos
NASA Astrophysics Data System (ADS)
Meng, Lili; Li, Xinfu; Zhang, Guang
2017-12-01
In this paper, a discrete rational fraction population model with Dirichlet boundary conditions is considered. Using the discrete maximum principle and the sub- and super-solution method, necessary and sufficient conditions for the existence and uniqueness of positive steady-state solutions are obtained. In addition, the dynamical behavior of a special two-patch metapopulation model is investigated by using the bifurcation method, center manifold theory, bifurcation diagrams, and the largest Lyapunov exponent. The results show that the pitchfork bifurcation, the flip bifurcation, and chaos all occur. Clearly, these phenomena are caused by the simple diffusion. A theoretical analysis of the chaos is very important; unfortunately, no such results are available yet, so some open problems are given.
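A two-patch coupled map with Dirichlet (zero) boundary patches, together with a largest-Lyapunov-exponent estimate from nearby-trajectory divergence, can be sketched as follows. The paper's rational-fraction growth map is not reproduced here; the logistic map is used as an illustrative stand-in nonlinearity, and the coupling weights are chosen to keep the state bounded.

```python
# Sketch: two patches with diffusive coupling applied after the growth map,
# Dirichlet zero "patches" outside the domain, and a largest Lyapunov
# exponent estimated by renormalized two-trajectory divergence.
import math

def step(state, r=4.0, d=0.05):
    f = lambda x: r * x * (1.0 - x)
    x1, x2 = state
    # convex-combination coupling keeps the state inside [0, 1]
    return ((1 - d) * f(x1) + (d / 2) * (f(0.0) + f(x2)),
            (1 - d) * f(x2) + (d / 2) * (f(x1) + f(0.0)))

def largest_lyapunov(x0, n=2000, eps=1e-9):
    a = x0
    b = (x0[0] + eps, x0[1])                 # nearby trajectory
    acc = 0.0
    for _ in range(n):
        a, b = step(a), step(b)
        dist = math.hypot(b[0] - a[0], b[1] - a[1])
        acc += math.log(dist / eps)
        # renormalize the separation back to eps along the current direction
        b = (a[0] + eps * (b[0] - a[0]) / dist,
             a[1] + eps * (b[1] - a[1]) / dist)
    return acc / n

lam = largest_lyapunov((0.3, 0.6))
# lam > 0 signals sensitive dependence (chaos) for this parameter choice
```

Sweeping r or d in such a simulation is how the flip and pitchfork bifurcations and the chaotic regime show up in bifurcation diagrams and in the sign of the exponent.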
Investigation of Turing pattern occurrences in the Schnakenberg and Gierer-Meinhardt equations
NASA Astrophysics Data System (ADS)
Nurahmi, Annisa Fitri; Putra, Prama Setia; Nuraini, Nuning
2018-03-01
Several types of animals display unusual, varied patterns on their skin, influenced by the animal's skin pigmentation system. On the other hand, in 1952 Alan Turing formulated the mathematical theory of morphogenesis, in which a reaction-diffusion model can give rise to spatial patterns, the so-called Turing patterns. This research discusses the identification of Turing models that can produce animal skin patterns. Investigations were conducted on two types of equations: Schnakenberg (1979) and Gierer-Meinhardt (1972). In this research, parameters were explored to produce Turing patterns in both equations. The numerical simulations were done using homogeneous Neumann and homogeneous Dirichlet boundary conditions. The investigation of the Schnakenberg equation yielded poison dart frog (Andinobates dorisswansonae) and ladybird (Coccinella septempunctata) patterns, while a fish skin pattern was produced by the Gierer-Meinhardt equation.
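A Turing pattern in the Schnakenberg model can be reproduced with a short one-dimensional explicit finite-difference simulation. The parameter set below (a=0.1, b=0.9, Du=1, Dv=20) is a classic Turing-unstable choice from the standard literature, not necessarily the one used in this paper, and the boundary treatment is a first-order homogeneous Neumann mirror.

```python
# Sketch: 1-D Schnakenberg system u_t = Du*u_xx + a - u + u^2 v,
# v_t = Dv*v_xx + b - u^2 v, perturbed around its homogeneous steady state.
import random

def schnakenberg(n=100, dx=1.0, dt=0.01, steps=5000,
                 a=0.1, b=0.9, Du=1.0, Dv=20.0):
    rng = random.Random(0)
    ust, vst = a + b, b / (a + b) ** 2        # homogeneous steady state
    u = [ust + 0.01 * (rng.random() - 0.5) for _ in range(n)]
    v = [vst + 0.01 * (rng.random() - 0.5) for _ in range(n)]

    def lap(w, i):                             # Neumann mirror at the ends
        return (w[max(i - 1, 0)] - 2 * w[i] + w[min(i + 1, n - 1)]) / dx ** 2

    for _ in range(steps):
        u, v = (
            [u[i] + dt * (Du * lap(u, i) + a - u[i] + u[i] ** 2 * v[i])
             for i in range(n)],
            [v[i] + dt * (Dv * lap(v, i) + b - u[i] ** 2 * v[i])
             for i in range(n)],
        )
    return u, v

u, v = schnakenberg()
spread = max(u) - min(u)   # ~0.01 initially; the Turing instability grows it
```

With a diffusion ratio above the critical value, the tiny random perturbation grows into a stationary spotted/striped profile, which in two dimensions gives the animal-skin patterns discussed above.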
Discontinuous Galerkin Methods for Turbulence Simulation
NASA Technical Reports Server (NTRS)
Collis, S. Scott
2002-01-01
A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.
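Weak enforcement of a Dirichlet inflow condition through the numerical flux can be shown in miniature with a piecewise-constant (p=0) discretization of u_t + u_x = 0, the lowest-order DG/finite-volume case. This is an illustrative analogue of the mechanism, not the paper's compressible solver; the boundary value g enters only via the upwind flux and is never imposed on the solution directly.

```python
# Sketch: first-order upwind update where the Dirichlet inflow value g
# appears only inside the numerical flux (weak enforcement).
def advect(u, g, dt_dx, steps):
    for _ in range(steps):
        left = [g] + u[:-1]                   # upwind neighbor; g at inflow
        u = [u[i] - dt_dx * (u[i] - left[i]) for i in range(len(u))]
    return u

# The inflow value g = 1 propagates into the domain; the boundary cell
# relaxes toward g without ever being overwritten.
u = advect([0.0] * 50, g=1.0, dt_dx=0.5, steps=200)
```

Because the boundary acts through a flux integral rather than a pointwise constraint, the scheme can tolerate under-resolved near-wall solutions, which is the behavior the abstract reports for weakly enforced wall conditions.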