Sample records for imprecise Dirichlet model

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthias C. M. Troffaes; Gero Walter; Dana Kelly

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model.
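    As a quick numerical illustration (not taken from the paper), Walley's imprecise Dirichlet model with learning parameter s bounds the posterior expectation of each alpha-factor between n_k/(N + s) and (n_k + s)/(N + s), where n_k is the observed count for category k and N the total count. A minimal Python sketch with made-up counts:

```python
import numpy as np

def idm_bounds(counts, s):
    """Lower/upper posterior expectations for category probabilities
    under Walley's imprecise Dirichlet model with learning parameter s."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    lower = counts / (n + s)          # all prior mass assigned to the other categories
    upper = (counts + s) / (n + s)    # all prior mass assigned to this category
    return lower, upper

# Illustrative (hypothetical) common-cause failure counts for a 3-train system
counts = [40, 6, 2]
for s in (1.0, 10.0):                 # the paper finds values of s between 1 and 10 reasonable
    lo, up = idm_bounds(counts, s)
    print(f"s={s}: lower={np.round(lo, 3)}, upper={np.round(up, 3)}")
```

    Larger values of s widen the gap between lower and upper expectations, i.e., the model is more cautious and learns more slowly from the data.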

  2. Spiritual and ceremonial plants in North America: an assessment of Moerman's ethnobotanical database comparing Residual, Binomial, Bayesian and Imprecise Dirichlet Model (IDM) analysis.

    PubMed

    Turi, Christina E; Murch, Susan J

    2013-07-09

    Ethnobotanical research and the study of plants used for rituals, ceremonies and to connect with the spirit world have led to the discovery of many novel psychoactive compounds such as nicotine, caffeine, and cocaine. In North America, spiritual and ceremonial uses of plants are well documented and can be accessed online via the University of Michigan's Native American Ethnobotany Database. The objective of the study was to compare Residual, Bayesian, Binomial and Imprecise Dirichlet Model (IDM) analyses of ritual, ceremonial and spiritual plants in Moerman's ethnobotanical database and to identify genera that may be good candidates for the discovery of novel psychoactive compounds. The database was queried with the following format "Family Name AND Ceremonial OR Spiritual" for 263 North American botanical families. Spiritual and ceremonial flora consisted of 86 families with 517 species belonging to 292 genera. Spiritual taxa were then grouped further into ceremonial medicines and items categories. Residual, Bayesian, Binomial and IDM analyses were performed to identify over- and under-utilized families. The 4 statistical approaches were in good agreement when identifying under-utilized families, but large families (>393 species) were underemphasized by the Binomial, Bayesian and IDM approaches for over-utilization. Residual, Binomial, and IDM analysis identified similar families as over-utilized in the medium (92-392 species) and small (<92 species) classes. The families Apiaceae, Asteraceae, Ericaceae, Pinaceae and Salicaceae were identified as significantly over-utilized as ceremonial medicines in medium and large sized families. Analysis of genera within the Apiaceae and Asteraceae suggests that the genera Ligusticum and Artemisia are good candidates for facilitating the discovery of novel psychoactive compounds. The 4 statistical approaches were not consistent in the selection of over-utilized flora. Residual analysis revealed overall trends that were supported by Binomial analysis when separated into small, medium and large families. The Bayesian, Binomial and IDM approaches identified different genera as potentially important. Species belonging to the genera Artemisia and Ligusticum were most consistently identified and may be valuable in future ethnopharmacological studies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Meta-analysis using Dirichlet process.

    PubMed

    Muthukumarana, Saman; Tiwari, Ram C

    2016-02-01

    This article develops a Bayesian approach for meta-analysis using the Dirichlet process. The key aspect of the Dirichlet process in meta-analysis is the ability to assess evidence of statistical heterogeneity or variation in the underlying effects across studies while relaxing the distributional assumptions. We assume that the study effects are generated from a Dirichlet process. Under a Dirichlet process model, the study-effect parameters have support on a discrete space and enable borrowing of information across studies while facilitating clustering among studies. We illustrate the proposed method by applying it to a dataset on the Program for International Student Assessment on 30 countries. Results from the data analysis, simulation studies, and the log pseudo-marginal likelihood model selection procedure indicate that the Dirichlet process model performs better than conventional alternative methods. © The Author(s) 2012.
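    As a rough illustration of the modeling assumption above (not the authors' code), study effects drawn from a Dirichlet process are discrete with probability one, which is what induces clustering across studies. A minimal stick-breaking sketch with an assumed concentration parameter and a standard normal base measure:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_stick_breaking(alpha, base_sampler, size, tol=1e-8):
    """Draw `size` values from a (truncated) realization G ~ DP(alpha, G0)
    built with the stick-breaking construction."""
    atoms, weights = [], []
    remaining = 1.0
    while remaining > tol:
        b = rng.beta(1.0, alpha)
        weights.append(remaining * b)
        atoms.append(base_sampler())
        remaining *= 1.0 - b
    weights = np.array(weights) / np.sum(weights)
    idx = rng.choice(len(atoms), size=size, p=weights)
    return np.array(atoms)[idx]

# Effects for 30 hypothetical studies: ties below reflect the clustering the abstract describes
effects = dp_stick_breaking(alpha=2.0, base_sampler=lambda: rng.normal(0.0, 1.0), size=30)
print(np.unique(np.round(effects, 3)))
```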

  4. A stochastic diffusion process for Lochner's generalized Dirichlet distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-10-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner’s generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations, equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.

  5. Application of the perfectly matched layer in 3-D marine controlled-source electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Li, Gang; Li, Yuguo; Han, Bo; Liu, Zhan

    2018-01-01

    In this study, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 3-D frequency-domain marine controlled-source electromagnetic (CSEM) field modelling. The Dirichlet boundary, which is usually used within the traditional framework of EM modelling algorithms, assumes that the electric or magnetic field values are zero at the boundaries. This requires the boundaries to be sufficiently far away from the area of interest. To mitigate the boundary artefacts, a large modelling area may be necessary even though cell sizes are allowed to grow toward the boundaries due to the diffusive nature of electromagnetic wave propagation. Compared with the conventional Dirichlet boundary, the PML boundary is preferred because the modelling area of interest can be restricted to the target region, and only a few surrounding absorbing layers can effectively suppress the artificial boundary effect without losing numerical accuracy. Furthermore, for joint inversion of seismic and marine CSEM data, if we use the PML for CSEM field simulation instead of the conventional Dirichlet boundary, the modelling area for these two different geophysical data sets collected from the same survey area can be the same, which is convenient for joint inversion grid matching. We apply the CFS-PML boundary to 3-D marine CSEM modelling by using the staggered finite-difference discretization. Numerical tests indicate that the modelling algorithm using the CFS-PML also shows good accuracy compared to the Dirichlet case. Furthermore, the modelling algorithm using the CFS-PML shows advantages in computational time and memory saving over that using the Dirichlet boundary. For the 3-D example in this study, the memory saving using the PML is nearly 42 per cent and the time saving is around 48 per cent compared to using the Dirichlet boundary.
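    The paper gives no code; as a hedged sketch, a CFS-PML typically enters a frequency-domain scheme through a complex coordinate-stretching factor of the Roden-Gedney form s = κ + σ/(α + iωε0), graded across the absorbing layer. The grading exponent and the maximum κ, σ and α values below are placeholders, not values from the paper:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def cfs_pml_stretch(depth, thickness, omega,
                    kappa_max=5.0, sigma_max=1.0, alpha_max=0.05, m=3):
    """CFS-PML stretching factor s = kappa + sigma / (alpha + 1j*omega*EPS0),
    evaluated at a given depth into the layer (0 = interface, thickness = outer edge)."""
    x = np.clip(depth / thickness, 0.0, 1.0)
    kappa = 1.0 + (kappa_max - 1.0) * x**m
    sigma = sigma_max * x**m
    alpha = alpha_max * (1.0 - x)      # frequency-shift term graded toward the interface
    return kappa + sigma / (alpha + 1j * omega * EPS0)

# Stretch factor sampled across a PML at 1 Hz, a typical marine CSEM frequency
omega = 2 * np.pi * 1.0
print(cfs_pml_stretch(np.linspace(0.0, 1.0, 5), 1.0, omega))
```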

  6. Polynomial decay rate of a thermoelastic Mindlin-Timoshenko plate model with Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Grobbelaar-Van Dalsen, Marié

    2015-02-01

    In this article, we are concerned with the polynomial stabilization of a two-dimensional thermoelastic Mindlin-Timoshenko plate model with no mechanical damping. The model is subject to Dirichlet boundary conditions on the elastic as well as the thermal variables. The work complements our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 64:1305-1325, 2013) on the polynomial stabilization of a Mindlin-Timoshenko model in a radially symmetric domain under Dirichlet boundary conditions on the displacement and thermal variables and free boundary conditions on the shear angle variables. In particular, our aim is to investigate the effect of the Dirichlet boundary conditions on all the variables on the polynomial decay rate of the model. By once more applying a frequency domain method in which we make critical use of an inequality for the trace of Sobolev functions on the boundary of a bounded, open connected set we show that the decay is slower than in the model considered in the cited work. A comparison of our result with our polynomial decay result for a magnetoelastic Mindlin-Timoshenko model subject to Dirichlet boundary conditions on the elastic variables in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) also indicates a correlation between the robustness of the coupling between parabolic and hyperbolic dynamics and the polynomial decay rate in the two models.

  7. Decomposing biodiversity data using the Latent Dirichlet Allocation model, a probabilistic multivariate statistical method

    Treesearch

    Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave

    2014-01-01

    We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...

  8. Prior Design for Dependent Dirichlet Processes: An Application to Marathon Modeling

    PubMed Central

    Pradier, Melanie F.; Ruiz, Francisco J. R.; Perez-Cruz, Fernando

    2016-01-01

    This paper presents a novel application of Bayesian nonparametrics (BNP) for marathon data modeling. We make use of two well-known BNP priors, the single-p dependent Dirichlet process and the hierarchical Dirichlet process, in order to address two different problems. First, we study the impact of age, gender and environment on the runners’ performance. We derive a fair grading method that allows direct comparison of runners regardless of their age and gender. Unlike current grading systems, our approach is based not only on top world records, but on the performances of all runners. The presented methodology for comparison of densities can be adopted in many other applications straightforwardly, providing an interesting perspective to build dependent Dirichlet processes. Second, we analyze the running patterns of the marathoners in time, obtaining information that can be valuable for training purposes. We also show that these running patterns can be used to predict finishing time given intermediate interval measurements. We apply our models to New York City, Boston and London marathons. PMID:26821155

  9. A Dirichlet-Multinomial Bayes Classifier for Disease Diagnosis with Microbial Compositions.

    PubMed

    Gao, Xiang; Lin, Huaiying; Dong, Qunfeng

    2017-01-01

    Dysbiosis of microbial communities is associated with various human diseases, raising the possibility of using microbial compositions as biomarkers for disease diagnosis. We have developed a Bayes classifier by modeling microbial compositions with Dirichlet-multinomial distributions, which are widely used to model multicategorical count data with extra variation. The parameters of the Dirichlet-multinomial distributions are estimated from training microbiome data sets based on maximum likelihood. The posterior probability of a microbiome sample belonging to a disease or healthy category is calculated based on Bayes' theorem, using the likelihood values computed from the estimated Dirichlet-multinomial distribution, as well as a prior probability estimated from the training microbiome data set or previously published information on disease prevalence. When tested on real-world microbiome data sets, our method, called DMBC (for Dirichlet-multinomial Bayes classifier), shows better classification accuracy than the only existing Bayesian microbiome classifier based on a Dirichlet-multinomial mixture model and the popular random forest method. The advantage of DMBC is its built-in automatic feature selection, capable of identifying a subset of microbial taxa with the best classification accuracy between different classes of samples based on cross-validation. This unique ability enables DMBC to maintain and even improve its accuracy at modeling species-level taxa. The R package for DMBC is freely available at https://github.com/qunfengdong/DMBC. IMPORTANCE By incorporating prior information on disease prevalence, Bayes classifiers have the potential to estimate disease probability better than other common machine-learning methods. Thus, it is important to develop Bayes classifiers specifically tailored for microbiome data. Our method shows higher classification accuracy than the only existing Bayesian classifier and the popular random forest method, and thus provides an alternative option for using microbial compositions for disease diagnosis.
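    DMBC itself is distributed as an R package; the Python sketch below only mimics the core computation the abstract describes: a Dirichlet-multinomial log-likelihood per class combined with a disease-prevalence prior through Bayes' theorem. The taxa counts, Dirichlet parameters and prior are hypothetical (in the paper the parameters are estimated from training data by maximum likelihood):

```python
import numpy as np
from scipy.special import gammaln

def dm_loglik(x, alpha):
    """Log-likelihood of count vector x under a Dirichlet-multinomial with parameter alpha."""
    x, alpha = np.asarray(x, float), np.asarray(alpha, float)
    n, a0 = x.sum(), alpha.sum()
    return (gammaln(n + 1) - gammaln(x + 1).sum()
            + gammaln(a0) - gammaln(n + a0)
            + gammaln(x + alpha).sum() - gammaln(alpha).sum())

def posterior_disease_probability(x, alpha_disease, alpha_healthy, prior_disease=0.3):
    """Bayes' theorem with Dirichlet-multinomial likelihoods for one microbiome sample."""
    log_post = np.array([
        dm_loglik(x, alpha_disease) + np.log(prior_disease),
        dm_loglik(x, alpha_healthy) + np.log(1.0 - prior_disease),
    ])
    log_post -= log_post.max()            # stabilize before exponentiating
    post = np.exp(log_post)
    return post[0] / post.sum()

# Hypothetical counts for three taxa and per-class Dirichlet parameters
print(posterior_disease_probability([120, 30, 5],
                                    alpha_disease=[8.0, 3.0, 1.0],
                                    alpha_healthy=[4.0, 6.0, 2.0]))
```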

  10. Performance analysis of OOK-based FSO systems in Gamma-Gamma turbulence with imprecise channel models

    NASA Astrophysics Data System (ADS)

    Feng, Jianfeng; Zhao, Xiaohui

    2017-11-01

    For an FSO communication system with an imprecise channel model, we investigate its system performance in terms of outage probability, average BEP and ergodic capacity. The exact FSO links are modeled as a Gamma-Gamma fading channel in consideration of both atmospheric turbulence and pointing errors, and the imprecise channel model is treated as the superposition of the exact channel gain and a Gaussian random variable. We derive the PDF, CDF and nth moment of the imprecise channel gain, and based on these statistics we obtain expressions for the outage probability, the average BEP and the ergodic capacity in terms of Meijer's G-functions. Both numerical and analytical results are presented. The simulation results show that the communication performance deteriorates in the imprecise channel model, and approaches the exact performance curves as the channel model becomes accurate.
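    A hedged Monte Carlo sketch of the channel description above: the Gamma-Gamma gain is simulated as the product of two unit-mean Gamma variates, the imprecise gain adds a Gaussian perturbation, and outage is estimated by counting sub-threshold SNR samples. All parameter values are illustrative, and the paper's closed-form Meijer G-function results are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_gamma_gain(a, b, size):
    """Gamma-Gamma turbulence gain: product of two independent unit-mean Gamma variates."""
    return rng.gamma(a, 1.0 / a, size) * rng.gamma(b, 1.0 / b, size)

def outage_probability(a, b, sigma_err, snr0_db, snr_th_db, n=200_000):
    """Outage probability for the exact and the imprecise (noisy-gain) channel models."""
    h = gamma_gamma_gain(a, b, n)
    h_imprecise = np.clip(h + sigma_err * rng.normal(size=n), 1e-12, None)
    snr_exact = 10 ** (snr0_db / 10) * h ** 2        # IM/DD: electrical SNR ~ gain squared
    snr_imprecise = 10 ** (snr0_db / 10) * h_imprecise ** 2
    threshold = 10 ** (snr_th_db / 10)
    return (snr_exact < threshold).mean(), (snr_imprecise < threshold).mean()

print(outage_probability(a=4.0, b=1.9, sigma_err=0.1, snr0_db=20, snr_th_db=5))
```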

  11. Peak fitting and integration uncertainties for the Aerodyne Aerosol Mass Spectrometer

    NASA Astrophysics Data System (ADS)

    Corbin, J. C.; Othman, A.; Haskins, J. D.; Allan, J. D.; Sierau, B.; Worsnop, D. R.; Lohmann, U.; Mensah, A. A.

    2015-04-01

    The errors inherent in the fitting and integration of the pseudo-Gaussian ion peaks in Aerodyne High-Resolution Aerosol Mass Spectrometers (HR-AMS's) have not been previously addressed as a source of imprecision for these instruments. This manuscript evaluates the significance of these uncertainties and proposes a method for their estimation in routine data analysis. Peak-fitting uncertainties, the most complex source of integration uncertainties, are found to be dominated by errors in m/z calibration. These calibration errors comprise significant amounts of both imprecision and bias, and vary in magnitude from ion to ion. The magnitude of these m/z calibration errors is estimated for an exemplary data set, and used to construct a Monte Carlo model which reproduced well the observed trends in fits to the real data. The empirically-constrained model is used to show that the imprecision in the fitted height of isolated peaks scales linearly with the peak height (i.e., as n^1), thus contributing a constant-relative-imprecision term to the overall uncertainty. This constant relative imprecision term dominates the Poisson counting imprecision term (which scales as n^0.5) at high signals. The previous HR-AMS uncertainty model therefore underestimates the overall fitting imprecision. The constant relative imprecision in fitted peak height for isolated peaks in the exemplary data set was estimated as ~4% and the overall peak-integration imprecision was approximately 5%. We illustrate the importance of this constant relative imprecision term by performing Positive Matrix Factorization (PMF) on a synthetic HR-AMS data set with and without its inclusion. Finally, the ability of an empirically-constrained Monte Carlo approach to estimate the fitting imprecision for an arbitrary number of known overlapping peaks is demonstrated. Software is available upon request to estimate these error terms in new data sets.
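    A small numerical illustration of the uncertainty model argued for above: a Poisson counting term scaling as n^0.5 combined in quadrature with a constant-relative fitting term scaling as n^1, so the relative term dominates at high signal. The ~4% relative imprecision is the value quoted in the abstract; the signal levels are arbitrary:

```python
import numpy as np

def peak_height_uncertainty(n, rel_fit_error=0.04):
    """1-sigma uncertainty of a fitted peak of n ions: Poisson counting term (~n^0.5)
    combined in quadrature with a constant-relative fitting term (~n^1)."""
    return np.sqrt(n + (rel_fit_error * n) ** 2)

for n in (1e2, 1e4, 1e6):
    total = peak_height_uncertainty(n)
    print(f"n={n:.0e}: total={total:.1f}, Poisson-only={np.sqrt(n):.1f}, "
          f"relative={total / n:.3%}")
```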

  12. Semiparametric Bayesian classification with longitudinal markers

    PubMed Central

    De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter

    2013-01-01

    Summary We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871

  13. On selecting a prior for the precision parameter of Dirichlet process mixture models

    USGS Publications Warehouse

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
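    One way to see why the prior on the precision parameter matters (an illustration, not the paper's procedure): under a Dirichlet process, the prior expected number of clusters among n observations is the sum over i = 1..n of α/(α + i - 1), so plausible values of α imply very different levels of clustering. A quick check with an assumed sample size:

```python
import numpy as np

def expected_clusters(alpha, n):
    """Prior expectation of the number of distinct clusters among n draws
    from a Dirichlet process with precision parameter alpha."""
    i = np.arange(1, n + 1)
    return np.sum(alpha / (alpha + i - 1))

n = 50  # e.g. counts observed at 50 sites (illustrative)
for alpha in (0.1, 1.0, 5.0, 20.0):
    print(f"alpha={alpha:>5}: E[number of clusters] = {expected_clusters(alpha, n):.2f}")
```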

  14. Imprecise results: Utilizing partial computations in real-time systems

    NASA Technical Reports Server (NTRS)

    Lin, Kwei-Jay; Natarajan, Swaminathan; Liu, Jane W.-S.

    1987-01-01

    In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result except the approximate results produced by the computation up to that point will be available. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically, and if a deadline is reached, returns the last recorded result. The sieve approach demarcates sections of code which can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project, which supports imprecise computations using these techniques, is described. Also presented is a general model of imprecise computations using these techniques, as well as one which takes into account the influence of the environment, showing where the latter approach fits into this model.
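    A toy sketch of the milestone idea, not the Concord implementation: an iterative computation records its current best result after every step, and whatever was recorded last is returned as the imprecise result when the deadline arrives. The deadline budget and the example computation are made up:

```python
import time

def iterative_sqrt(x, deadline, milestones):
    """Newton iteration for sqrt(x) that records a milestone after every step."""
    guess = x
    while time.monotonic() < deadline:
        guess = 0.5 * (guess + x / guess)
        milestones.append(guess)        # milestone: the last recorded value is always usable
    return guess

milestones = []
deadline = time.monotonic() + 0.001     # 1 ms time budget
iterative_sqrt(2.0, deadline, milestones)
print("imprecise result at deadline:",
      milestones[-1] if milestones else "no milestone recorded yet")
```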

  15. Dirichlet Process Gaussian-mixture model: An application to localizing coalescing binary neutron stars with gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.

    2018-06-01

    We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet Process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ~10^4-10^5 Mpc^3, corresponding to ~10^2-10^3 potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ϱ_net^-6. Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet Process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
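    The authors use their own DPGMM implementation; as a rough stand-in, a truncated Dirichlet process Gaussian mixture such as scikit-learn's BayesianGaussianMixture can be fitted to posterior position samples to obtain a smooth localization density. The (RA, Dec, distance) samples below are synthetic placeholders:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(42)
# Synthetic stand-in for posterior samples (RA [rad], Dec [rad], distance [Mpc]) of one event
samples = np.vstack([
    rng.normal([1.2, -0.3, 120.0], [0.05, 0.04, 15.0], size=(3000, 3)),
    rng.normal([1.5, -0.1, 160.0], [0.08, 0.05, 20.0], size=(1000, 3)),
])

dpgmm = BayesianGaussianMixture(
    n_components=20,                                    # truncation level
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
).fit(samples)

# Log-density of the fitted mixture at a sky/distance point of interest
print(dpgmm.score_samples(np.array([[1.2, -0.3, 120.0]])))
```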

  16. Generalized species sampling priors with latent Beta reinforcements

    PubMed Central

    Airoldi, Edoardo M.; Costa, Thiago; Bassetti, Federico; Leisen, Fabrizio; Guindani, Michele

    2014-01-01

    Many popular Bayesian nonparametric priors can be characterized in terms of exchangeable species sampling sequences. However, in some applications, exchangeability may not be appropriate. We introduce a novel and probabilistically coherent family of non-exchangeable species sampling sequences characterized by a tractable predictive probability function with weights driven by a sequence of independent Beta random variables. We compare their theoretical clustering properties with those of the Dirichlet Process and the two-parameter Poisson-Dirichlet process. The proposed construction provides a complete characterization of the joint process, unlike existing work. We then propose the use of such a process as prior distribution in a hierarchical Bayes modeling framework, and we describe a Markov Chain Monte Carlo sampler for posterior inference. We evaluate the performance of the prior and the robustness of the resulting inference in a simulation study, providing a comparison with popular Dirichlet Process mixtures and Hidden Markov Models. Finally, we develop an application to the detection of chromosomal aberrations in breast cancer by leveraging array CGH data. PMID:25870462

  17. An imprecise probability approach for squeal instability analysis based on evidence theory

    NASA Astrophysics Data System (ADS)

    Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie

    2017-01-01

    An imprecise probability approach based on evidence theory is proposed for squeal instability analysis of uncertain disc brakes in this paper. First, the squeal instability of the finite element (FE) model of a disc brake is investigated and its dominant unstable eigenvalue is detected by running two typical numerical simulations, i.e., complex eigenvalue analysis (CEA) and transient dynamical analysis. Next, the uncertainty mainly caused by contact and friction is taken into account and some key parameters of the brake are described as uncertain parameters. All these uncertain parameters are usually involved with imprecise data such as incomplete information and conflicting information. Finally, a squeal instability analysis model considering imprecise uncertainty is established by integrating evidence theory, Taylor expansion, subinterval analysis and surrogate model. In the proposed analysis model, the uncertain parameters with imprecise data are treated as evidence variables, and the belief measure and plausibility measure are employed to evaluate system squeal instability. The effectiveness of the proposed approach is demonstrated by numerical examples, and some interesting observations and conclusions are summarized from the analyses and discussions. The proposed approach is generally limited to squeal problems that do not involve too many parameters. It can be considered as a potential method for squeal instability analysis, which will act as the first step to reduce squeal noise of uncertain brakes with imprecise information.

  18. Imprecision and Uncertainty in the UFO Database Model.

    ERIC Educational Resources Information Center

    Van Gyseghem, Nancy; De Caluwe, Rita

    1998-01-01

    Discusses how imprecision and uncertainty are dealt with in the UFO (Uncertainty and Fuzziness in an Object-oriented) database model. Such information is expressed by means of possibility distributions, and modeled by means of the proposed concept of "role objects." The role objects model uncertain, tentative information about objects,…

  19. Quantum "violation" of Dirichlet boundary condition

    NASA Astrophysics Data System (ADS)

    Park, I. Y.

    2017-02-01

    Dirichlet boundary conditions have been widely used in general relativity. They seem at odds with the holographic property of gravity simply because a boundary configuration can be varying and dynamic instead of dying out as required by the conditions. In this work we report what should be a tension between the Dirichlet boundary conditions and quantum gravitational effects, and show that a quantum-corrected black hole solution of the 1PI action no longer obeys, in the naive manner one may expect, the Dirichlet boundary conditions imposed at the classical level. We attribute the 'violation' of the Dirichlet boundary conditions to a certain mechanism of the information storage on the boundary.

  20. USING DIRICHLET TESSELLATION TO HELP ESTIMATE MICROBIAL BIOMASS CONCENTRATIONS

    EPA Science Inventory

    Dirichlet tessellation was applied to estimate microbial concentrations from microscope well slides. The use of microscopy/Dirichlet tessellation to quantify biomass was illustrated with two species of morphologically distinct cyanobacteria, and validated empirically by compariso...
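    The record above is truncated, but the underlying computation, a Dirichlet (Voronoi) tessellation of cell positions on a well-slide image with tile areas used to apportion biomass, can be sketched with SciPy. The centroid coordinates are made up and unbounded edge tiles are simply skipped:

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(3)
points = rng.uniform(0.0, 1.0, size=(25, 2))   # hypothetical cell centroids on a unit image

vor = Voronoi(points)
areas = []
for region_idx in vor.point_region:
    region = vor.regions[region_idx]
    if len(region) == 0 or -1 in region:
        areas.append(np.nan)                   # unbounded tile at the edge of the slide
    else:
        areas.append(ConvexHull(vor.vertices[region]).volume)  # 2-D "volume" is the area

areas = np.array(areas)
print("mean bounded tile area:", np.nanmean(areas))
```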

  1. A classical Perron method for existence of smooth solutions to boundary value and obstacle problems for degenerate-elliptic operators via holomorphic maps

    NASA Astrophysics Data System (ADS)

    Feehan, Paul M. N.

    2017-09-01

    We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with "mixed" boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the "degenerate" and "non-degenerate" boundaries touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite "slab". The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16]. Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].

  2. Study on monostable and bistable reaction-diffusion equations by iteration of travelling wave maps

    NASA Astrophysics Data System (ADS)

    Yi, Taishan; Chen, Yuming

    2017-12-01

    In this paper, based on the iterative properties of travelling wave maps, we develop a new method to obtain spreading speeds and asymptotic propagation for monostable and bistable reaction-diffusion equations. Precisely, for Dirichlet problems of monostable reaction-diffusion equations on the half line, by making links between travelling wave maps and integral operators associated with the Dirichlet diffusion kernel (the latter is NOT invariant under translation), we obtain some iteration properties of the Dirichlet diffusion and some a priori estimates on nontrivial solutions of Dirichlet problems under travelling wave transformation. We then provide the asymptotic behavior of nontrivial solutions in the space-time region for Dirichlet problems. These enable us to develop a unified method to obtain results on heterogeneous steady states, travelling waves, spreading speeds, and asymptotic spreading behavior for Dirichlet problem of monostable reaction-diffusion equations on R+ as well as of monostable/bistable reaction-diffusion equations on R.

  3. Modeling unobserved sources of heterogeneity in animal abundance using a Dirichlet process prior

    USGS Publications Warehouse

    Dorazio, R.M.; Mukherjee, B.; Zhang, L.; Ghosh, M.; Jelks, H.L.; Jordan, F.

    2008-01-01

    In surveys of natural populations of animals, a sampling protocol is often spatially replicated to collect a representative sample of the population. In these surveys, differences in abundance of animals among sample locations may induce spatial heterogeneity in the counts associated with a particular sampling protocol. For some species, the sources of heterogeneity in abundance may be unknown or unmeasurable, leading one to specify the variation in abundance among sample locations stochastically. However, choosing a parametric model for the distribution of unmeasured heterogeneity is potentially subject to error and can have profound effects on predictions of abundance at unsampled locations. In this article, we develop an alternative approach wherein a Dirichlet process prior is assumed for the distribution of latent abundances. This approach allows for uncertainty in model specification and for natural clustering in the distribution of abundances in a data-adaptive way. We apply this approach in an analysis of counts based on removal samples of an endangered fish species, the Okaloosa darter. Results of our data analysis and simulation studies suggest that our implementation of the Dirichlet process prior has several attractive features not shared by conventional, fully parametric alternatives. © 2008, The International Biometric Society.

  4. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    PubMed

    Yu, Kezi; Quirk, J Gerald; Djurić, Petar M

    2017-01-01

    In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one that represents healthy and another, non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.

  5. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models

    PubMed Central

    Yu, Kezi; Quirk, J. Gerald

    2017-01-01

    In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one that represents healthy and another, non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting. PMID:28953927

  6. Feature extraction for document text using Latent Dirichlet Allocation

    NASA Astrophysics Data System (ADS)

    Prihatini, P. M.; Suryawan, I. K.; Mandia, IN

    2018-01-01

    Feature extraction is one of the stages in an information retrieval system, used to extract the unique feature values of a text document. The process of feature extraction can be done by several methods, one of which is Latent Dirichlet Allocation. However, research on text feature extraction using the Latent Dirichlet Allocation method is rarely found for Indonesian text. Therefore, in this research, text feature extraction is implemented for Indonesian text. The research method consists of data acquisition, text pre-processing, initialization, topic sampling and evaluation. The evaluation is done by comparing the Precision, Recall and F-Measure values of Latent Dirichlet Allocation and of Term Frequency-Inverse Document Frequency with K-Means, which is commonly used for feature extraction. The evaluation results show that the Precision, Recall and F-Measure values of the Latent Dirichlet Allocation method are higher than those of the Term Frequency-Inverse Document Frequency K-Means method. This shows that the Latent Dirichlet Allocation method is able to extract features and cluster Indonesian text better than the Term Frequency-Inverse Document Frequency K-Means method.
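    A compact sketch of the comparison described above, written in Python with toy English documents rather than the paper's Indonesian corpus: LDA topic proportions serve as document features on one side, and TF-IDF with K-Means on the other; the resulting labels could then be scored against ground-truth classes:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

docs = [
    "rice harvest season in the village",
    "farmers plant rice and corn fields",
    "stock market prices fall sharply",
    "investors watch the stock exchange",
]

# LDA features: per-document topic proportions
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
lda_features = lda.transform(counts)
lda_labels = lda_features.argmax(axis=1)

# Baseline: TF-IDF vectors clustered with K-Means
tfidf = TfidfVectorizer().fit_transform(docs)
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)

print("LDA topic proportions:\n", np.round(lda_features, 2))
print("LDA labels:", lda_labels, " TF-IDF/K-Means labels:", km_labels)
```

    Given labelled documents, sklearn.metrics.precision_recall_fscore_support would then yield the Precision, Recall and F-Measure values being compared.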

  7. Explicit treatment for Dirichlet, Neumann and Cauchy boundary conditions in POD-based reduction of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2018-05-01

    In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields sufficiently accurately for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D-groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
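    For orientation, the standard POD step the paper extends can be sketched in a few lines: stack head-field snapshots as columns, take an SVD, and keep the leading left singular vectors as the reduced basis. The boundary-condition splitting proposed in the paper is not reproduced here, and the snapshot matrix is random placeholder data:

```python
import numpy as np

rng = np.random.default_rng(7)
snapshots = rng.normal(size=(500, 40))      # 500 grid nodes x 40 head-field snapshots

# Proper orthogonal decomposition: leading left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1  # smallest basis capturing 99% of snapshot energy
basis = U[:, :r]

# The reduced state lives in R^r; full heads are reconstructed as basis @ coefficients
coeffs = basis.T @ snapshots[:, 0]
print("modes kept:", r,
      " reconstruction error:", np.linalg.norm(snapshots[:, 0] - basis @ coeffs))
```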

  8. Evaluation and selection of sustainable suppliers in supply chain using new GP-DEA model with imprecise data

    NASA Astrophysics Data System (ADS)

    Jafarzadeh Ghoushchi, Saeid; Dodkanloi Milan, Mehran; Jahangoshai Rezaee, Mustafa

    2017-11-01

    Nowadays, with growing awareness of enterprise sustainability, sustainable supplier selection is considered a vital factor in sustainable supply chain management. On the other hand, the data in real problems are usually imprecise. One method that is helpful for evaluating and selecting sustainable suppliers and that can handle a variety of data types is data envelopment analysis (DEA). In the present article, first, supplier efficiency is measured with respect to all economic, social and environmental dimensions using DEA with imprecise data. Then, to obtain an overall evaluation of the suppliers, the DEA model with imprecise data is developed further on the basis of goal programming (GP). Integrating the set of criteria turns the new model into a coherent framework for sustainable supplier selection. Moreover, employing this model in a multilateral sustainable supplier selection can be an incentive for suppliers to move towards environmental, social and economic activities. Improving environmental, economic and social performance will mean improving supply chain performance. Finally, the application of the proposed approach is presented with a real dataset.

  9. Spectral multigrid methods for elliptic equations 2

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Wong, Y. S.; Hussaini, M. Y.

    1983-01-01

    A detailed description of spectral multigrid methods is provided. This includes the interpolation and coarse-grid operators for both periodic and Dirichlet problems. The spectral methods for periodic problems use Fourier series and those for Dirichlet problems are based upon Chebyshev polynomials. An improved preconditioning for Dirichlet problems is given. Numerical examples and practical advice are included.

  10. Quantum Gravitational Effects on the Boundary

    NASA Astrophysics Data System (ADS)

    James, F.; Park, I. Y.

    2018-04-01

    Quantum gravitational effects might hold the key to some of the outstanding problems in theoretical physics. We analyze the perturbative quantum effects on the boundary of a gravitational system and the Dirichlet boundary condition imposed at the classical level. Our analysis reveals that for a black hole solution there is a contradiction between the quantum effects and the Dirichlet boundary condition: the black hole solution of the one-particle-irreducible action no longer satisfies the Dirichlet boundary condition, as one might have expected without going into details. The analysis also suggests that the tension between the Dirichlet boundary condition and loop effects is connected with a certain mechanism of information storage on the boundary.

  11. Linguistic Extensions of Topic Models

    ERIC Educational Resources Information Center

    Boyd-Graber, Jordan

    2010-01-01

    Topic models like latent Dirichlet allocation (LDA) provide a framework for analyzing large datasets where observations are collected into groups. Although topic modeling has been fruitfully applied to problems in social science, biology, and computer vision, it has been most widely used to model datasets where documents are modeled as exchangeable…

  12. Regularization of moving boundaries in a laplacian field by a mixed Dirichlet-Neumann boundary condition: exact results.

    PubMed

    Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar

    2005-11-04

    The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.

  13. Study of a mixed dispersal population dynamics model

    DOE PAGES

    Chugunova, Marina; Jadamba, Baasansuren; Kao, Chiu -Yen; ...

    2016-08-27

    In this study, we consider a mixed dispersal model with periodic and Dirichlet boundary conditions and its corresponding linear eigenvalue problem. This model describes the time evolution of a population which disperses both locally and non-locally. We investigate how long time dynamics depend on the parameter values. Furthermore, we study the minimization of the principal eigenvalue under the constraints that the resource function is bounded from above and below, and with a fixed total integral. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for the species to die out more slowly or survive more easily. Our numerical simulations indicate that the optimal favorable region tends to be a simply-connected domain. Numerous results are shown to demonstrate various scenarios of optimal favorable regions for periodic and Dirichlet boundary conditions.

  14. Dirichlet Component Regression and its Applications to Psychiatric Data.

    PubMed

    Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel

    2008-08-15

    We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook's distance, and a local jackknife influence metric.

  15. Stability estimate for the aligned magnetic field in a periodic quantum waveguide from Dirichlet-to-Neumann map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mejri, Youssef, E-mail: josef-bizert@hotmail.fr; Dép. des Mathématiques, Faculté des Sciences de Bizerte, 7021 Jarzouna; Laboratoire de Modélisation Mathématique et Numérique dans les Sciences de l’Ingénieur, ENIT BP 37, Le Belvedere, 1002 Tunis

    In this article, we study the boundary inverse problem of determining the aligned magnetic field appearing in the magnetic Schrödinger equation in a periodic quantum cylindrical waveguide, by knowledge of the Dirichlet-to-Neumann map. We prove a Hölder stability estimate with respect to the Dirichlet-to-Neumann map, by means of the geometrical optics solutions of the magnetic Schrödinger equation.

  16. Constructing Weyl group multiple Dirichlet series

    NASA Astrophysics Data System (ADS)

    Chinta, Gautam; Gunnells, Paul E.

    2010-01-01

    Let Φ be a reduced root system of rank r. A Weyl group multiple Dirichlet series for Φ is a Dirichlet series in r complex variables s_1, …, s_r, initially converging for Re(s_i) sufficiently large, that has meromorphic continuation to C^r and satisfies functional equations under the transformations of C^r corresponding to the Weyl group of Φ. A heuristic definition of such a series was given by Brubaker, Bump, Chinta, Friedberg, and Hoffstein, and such series have been investigated in certain special cases by others. In this paper we generalize results by Chinta and Gunnells to construct Weyl group multiple Dirichlet series by a uniform method and show in all cases that they have the expected properties.

  17. Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation.

    PubMed

    Chen, Chao; Zare, Alina; Trinh, Huy N; Omotara, Gbenga O; Cobb, James Tory; Lagaunne, Timotius A

    2017-12-01

    Topic models [e.g., probabilistic latent semantic analysis, latent Dirichlet allocation (LDA), and supervised LDA] have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership LDA (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery, where a visual word may be a mixture of multiple topics. Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations; a capability previous topic modeling methods do not have.

  18. Modeling Information Content Via Dirichlet-Multinomial Regression Analysis.

    PubMed

    Ferrari, Alberto

    2017-01-01

    Shannon entropy is being increasingly used in biomedical research as an index of complexity and information content in sequences of symbols, e.g. languages, amino acid sequences, DNA methylation patterns and animal vocalizations. Yet, distributional properties of information entropy as a random variable have seldom been the object of study, leading researchers mainly to use linear models or simulation-based analytical approaches to assess differences in information content, when entropy is measured repeatedly in different experimental conditions. Here a method to perform inference on entropy in such conditions is proposed. Building on results coming from studies in the field of Bayesian entropy estimation, a symmetric Dirichlet-multinomial regression model, able to deal efficiently with the issue of mean entropy estimation, is formulated. Through a simulation study the model is shown to outperform linear modeling in a vast range of scenarios and to have promising statistical properties. As a practical example, the method is applied to a data set coming from a real experiment on animal communication.

  19. A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION

    EPA Science Inventory

    We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...

  20. Inducing Fuzzy Models for Student Classification

    ERIC Educational Resources Information Center

    Nykanen, Ossi

    2006-01-01

    We report an approach for implementing predictive fuzzy systems that manage to capture both the imprecision of the empirically induced classifications and the imprecision of the intuitive linguistic expressions via the extensive use of fuzzy sets. From end-users' point of view, the approach enables encapsulating the technical details of the…

  1. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    PubMed

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.

  2. Numerical Study of Periodic Traveling Wave Solutions for the Predator-Prey Model with Landscape Features

    NASA Astrophysics Data System (ADS)

    Yun, Ana; Shin, Jaemin; Li, Yibao; Lee, Seunggyu; Kim, Junseok

    We numerically investigate periodic traveling wave solutions for a diffusive predator-prey system with landscape features. The landscape features are modeled through the homogeneous Dirichlet boundary condition which is imposed at the edge of the obstacle domain. To effectively treat the Dirichlet boundary condition, we employ a robust and accurate numerical technique by using a boundary control function. We also propose a robust algorithm for calculating the numerical periodicity of the traveling wave solution. In numerical experiments, we show that periodic traveling waves which move out and away from the obstacle are effectively generated. We explain the formation of the traveling waves by comparing the wavelengths. The spatial asynchrony has been shown in quantitative detail for various obstacles. Furthermore, we apply our numerical technique to the complicated real landscape features.

  3. Hierarchical Dirichlet process model for gene expression clustering

    PubMed Central

    2013-01-01

    Clustering is an important data processing tool for interpreting microarray data and genomic network inference. In this article, we propose a clustering algorithm based on the hierarchical Dirichlet processes (HDP). The HDP clustering introduces a hierarchical structure in the statistical model which captures the hierarchical features prevalent in biological data such as gene expression data. We develop a Gibbs sampling algorithm based on the Chinese restaurant metaphor for the HDP clustering. We apply the proposed HDP algorithm to both regulatory network segmentation and gene expression clustering. The HDP algorithm is shown to outperform several popular clustering algorithms by revealing the underlying hierarchical structure of the data. For the yeast cell cycle data, we compare the HDP result to the standard result and show that the HDP algorithm provides more information and reduces the unnecessary clustering fragments. PMID:23587447
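    The abstract refers to a Gibbs sampler built on the Chinese restaurant metaphor; the snippet below only simulates the prior seating process itself (join an existing table with probability proportional to its occupancy, open a new table with probability proportional to the concentration parameter), the ingredient an HDP sampler reuses at both levels of the hierarchy. The customer count and concentration are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)

def chinese_restaurant_process(n_customers, alpha):
    """Seat customers sequentially: an existing table is chosen with probability
    proportional to its size, a new table with probability proportional to alpha."""
    tables = []            # tables[k] = number of customers seated at table k
    assignments = []
    for _ in range(n_customers):
        probs = np.array(tables + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables

assignments, tables = chinese_restaurant_process(n_customers=100, alpha=1.5)
print("number of clusters:", len(tables), " cluster sizes:", tables)
```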

  4. Memoized Online Variational Inference for Dirichlet Process Mixture Models

    DTIC Science & Technology

    2014-06-27

    …breaking process [7], which places artificially large mass on the final component. It is more efficient and broadly applicable than an alternative truncation…

  5. A Bayesian Semiparametric Item Response Model with Dirichlet Process Priors

    ERIC Educational Resources Information Center

    Miyazaki, Kei; Hoshino, Takahiro

    2009-01-01

    In Item Response Theory (IRT), item characteristic curves (ICCs) are illustrated through logistic models or normal ogive models, and the probability that examinees give the correct answer is usually a monotonically increasing function of their ability parameters. However, since only limited patterns of shapes can be obtained from logistic models…

  6. Using Dirichlet Processes for Modeling Heterogeneous Treatment Effects across Sites

    ERIC Educational Resources Information Center

    Miratrix, Luke; Feller, Avi; Pillai, Natesh; Pati, Debdeep

    2016-01-01

    Modeling the distribution of site level effects is an important problem, but it is also an incredibly difficult one. Current methods rely on distributional assumptions in multilevel models for estimation. There it is hoped that the partial pooling of site level estimates with overall estimates, designed to take into account individual variation as…

  7. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models

    PubMed Central

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphics processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate-dimensional settings. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348

  8. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models.

    PubMed

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated on various simulated examples of moderate dimension in which we compare our stochastic search with a Markov chain Monte Carlo algorithm. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful.

  9. Using Dirichlet Priors to Improve Model Parameter Plausibility

    ERIC Educational Resources Information Center

    Rai, Dovan; Gong, Yue; Beck, Joseph E.

    2009-01-01

    Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…

  10. Bounded solutions in a T-shaped waveguide and the spectral properties of the Dirichlet ladder

    NASA Astrophysics Data System (ADS)

    Nazarov, S. A.

    2014-08-01

    The Dirichlet problem is considered on the junction of thin quantum waveguides (of thickness h ≪ 1) in the shape of an infinite two-dimensional ladder. Passage to the limit as h → +0 is discussed. It is shown that the asymptotically correct transmission conditions at nodes of the corresponding one-dimensional quantum graph are Dirichlet conditions rather than the conventional Kirchhoff transmission conditions. The result is obtained by analyzing bounded solutions of a problem in the T-shaped waveguide that describes the boundary layer phenomenon.

  11. General stability of memory-type thermoelastic Timoshenko beam acting on shear force

    NASA Astrophysics Data System (ADS)

    Apalara, Tijani A.

    2018-03-01

    In this paper, we consider a linear thermoelastic Timoshenko system with memory effects where the thermoelastic coupling is acting on shear force under Neumann-Dirichlet-Dirichlet boundary conditions. The same system with fully Dirichlet boundary conditions was considered by Messaoudi and Fareh (Nonlinear Anal TMA 74(18):6895-6906, 2011, Acta Math Sci 33(1):23-40, 2013), but they obtained a general stability result which depends on the speeds of wave propagation. In our case, we obtained a general stability result irrespective of the wave speeds of the system.

  12. Latent Dirichlet Allocation (LDA) Model and kNN Algorithm to Classify Research Project Selection

    NASA Astrophysics Data System (ADS)

    Safi’ie, M. A.; Utami, E.; Fatta, H. A.

    2018-03-01

    Universitas Sebelas Maret has a teaching staff of more than 1500 people, one of whose tasks is to carry out research. On the other hand, funding support for research and community service is limited, so research and community service (P2M) proposals need to be evaluated to determine which submissions receive support. At the selection stage, research proposal documents are collected as unstructured data, and the volume of stored data is very large. Extracting the information contained in these documents requires text mining technology, which gains knowledge from the documents by automating information extraction. In this article we use Latent Dirichlet Allocation (LDA) on the documents as a model in the feature extraction process, to obtain terms that represent each document. We then use the k-Nearest Neighbour (kNN) algorithm to classify the documents based on those terms.
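
    A hedged sketch of the same pipeline idea using scikit-learn: LDA reduces each document to topic proportions, which then feed a kNN classifier. The toy documents, labels, and parameter values below are invented placeholders, not data or settings from the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Toy stand-ins for research proposal documents and their funding categories
documents = [
    "topic modeling of research proposals",
    "classification of community service programs",
    "text mining for proposal selection",
    "machine learning for document classification",
]
labels = [0, 1, 0, 1]

# LDA reduces each document to a topic-proportion vector,
# which then serves as the feature vector for kNN classification.
pipeline = make_pipeline(
    CountVectorizer(),
    LatentDirichletAllocation(n_components=2, random_state=0),
    KNeighborsClassifier(n_neighbors=1),
)
pipeline.fit(documents, labels)
print(pipeline.predict(["text mining of proposals"]))
```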

  13. First-passage dynamics of linear stochastic interface models: weak-noise theory and influence of boundary conditions

    NASA Astrophysics Data System (ADS)

    Gross, Markus

    2018-03-01

    We consider a one-dimensional fluctuating interfacial profile governed by the Edwards–Wilkinson or the stochastic Mullins-Herring equation for periodic, standard Dirichlet and Dirichlet no-flux boundary conditions. The minimum action path of an interfacial fluctuation conditioned to reach a given maximum height M at a finite (first-passage) time T is calculated within the weak-noise approximation. Dynamic and static scaling functions for the profile shape are obtained in the transient and the equilibrium regime, i.e. for first-passage times T smaller or larger than the characteristic relaxation time, respectively. In both regimes, the profile approaches the maximum height M with a universal algebraic time dependence characterized solely by the dynamic exponent of the model. It is shown that, in the equilibrium regime, the spatial shape of the profile depends sensitively on boundary conditions and conservation laws, but it is essentially independent of them in the transient regime.

  14. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

    PubMed Central

    Chen, Yun; Yang, Hui

    2016-01-01

    In the era of big data, there is increasing interest in clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
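
    As a simplified stand-in for this approach, the sketch below estimates pairwise mutual information by histogram binning and feeds an MI-derived distance into ordinary hierarchical clustering; the Dirichlet process clustering and group elastic-net stages of the paper are not reproduced, and all variables and parameters are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information between two variables."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])).sum())

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)
x2 = np.sin(x1) + 0.1 * rng.normal(size=n)   # nonlinearly related to x1
x3 = rng.normal(size=n)                      # independent variable
data = np.column_stack([x1, x2, x3])

# Build a mutual-information similarity matrix and convert it to distances.
p = data.shape[1]
mi = np.zeros((p, p))
for i in range(p):
    for j in range(p):
        mi[i, j] = mutual_information(data[:, i], data[:, j])
dist = np.max(mi) - mi          # crude monotone transform to a dissimilarity
np.fill_diagonal(dist, 0.0)

# Hierarchical clustering on the condensed distance matrix.
condensed = dist[np.triu_indices(p, k=1)]
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
print(labels)   # x1 and x2 should share a cluster; x3 stands apart
```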

  15. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures.

    PubMed

    Chen, Yun; Yang, Hui

    2016-12-14

    In the era of big data, there is increasing interest in clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.

  16. A generalized Poisson solver for first-principles device simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch; Brück, Sascha

    2016-01-28

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.

  17. Dirichlet Component Regression and its Applications to Psychiatric Data

    PubMed Central

    Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel

    2011-01-01

    Summary We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook’s distance, and a local jackknife influence metric. PMID:22058582

  18. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-03-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
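
    The univariate case mentioned in the abstract can be illustrated with a Wright-Fisher-type (Jacobi) diffusion whose invariant is the beta distribution. The Euler-Maruyama sketch below, with arbitrary parameters and a crude clipping treatment of the [0, 1] boundary, is only an analogue of the paper's coupled N-variable process, not its construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Jacobi (Wright-Fisher-type) diffusion:
#   dX = -kappa * (X - mu) dt + sigma * sqrt(X * (1 - X)) dW
# Its stationary density is Beta(alpha, beta) with
#   alpha = 2 * kappa * mu / sigma**2,  beta = 2 * kappa * (1 - mu) / sigma**2.
kappa, mu, sigma = 1.0, 0.3, 0.5
alpha = 2 * kappa * mu / sigma**2
beta = 2 * kappa * (1 - mu) / sigma**2

n_samples, n_steps, dt = 20000, 4000, 0.01
x = np.full(n_samples, 0.5)
for _ in range(n_steps):
    drift = -kappa * (x - mu)
    diffusion = sigma * np.sqrt(np.clip(x * (1 - x), 0.0, None))
    x += drift * dt + diffusion * np.sqrt(dt) * rng.normal(size=n_samples)
    x = np.clip(x, 1e-9, 1 - 1e-9)   # crude treatment of the [0, 1] boundary

# Compare ensemble moments with the Beta(alpha, beta) invariant.
print("simulated mean %.3f, beta mean %.3f" % (x.mean(), alpha / (alpha + beta)))
print("simulated var  %.4f, beta var  %.4f"
      % (x.var(), alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))))
```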

  19. A Meinardus Theorem with Multiple Singularities

    NASA Astrophysics Data System (ADS)

    Granovsky, Boris L.; Stark, Dudley

    2012-09-01

    Meinardus proved a general theorem about the asymptotics of the number of weighted partitions, when the Dirichlet generating function for weights has a single pole on the positive real axis. Continuing (Granovsky et al., Adv. Appl. Math. 41:307-328, 2008), we derive asymptotics for the numbers of three basic types of decomposable combinatorial structures (or, equivalently, ideal gas models in statistical mechanics) of size n, when their Dirichlet generating functions have multiple simple poles on the positive real axis. Examples to which our theorem applies include ones related to vector partitions and quantum field theory. Our asymptotic formula for the number of weighted partitions disproves the belief accepted in the physics literature that the main term in the asymptotics is determined by the rightmost pole.

  20. A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times

    PubMed Central

    Heath, Tracy A.

    2012-01-01

    In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343

  1. Formal analysis of imprecise system requirements with Event-B.

    PubMed

    Le, Hong Anh; Nakajima, Shin; Truong, Ninh Thuan

    2016-01-01

    Formal analysis of functional properties of system requirements needs precise descriptions. However, stakeholders sometimes describe the system with ambiguous, vague or fuzzy terms, hence formal frameworks for modeling and verifying such requirements are desirable. Fuzzy If-Then rules have been used to represent imprecise requirements, but verifying their functional properties still needs new methods. In this paper, we propose a refinement-based modeling approach for the specification and verification of such requirements. First, we introduce a representation of imprecise requirements in set theory. Then we make use of Event-B refinement, providing a set of translation rules from Fuzzy If-Then rules to Event-B notations. After that, we show how to verify both safety and eventuality properties with RODIN/Event-B. Finally, we illustrate the proposed method on a crane controller example.

  2. Fusion of Imperfect Information in the Unified Framework of Random Sets Theory: Application to Target Identification

    DTIC Science & Technology

    2007-11-01

    Florea, Anne-Laure Jousselme, Éloi Bossé; DRDC Valcartier TR 2003-319; Defence R&D Canada – Valcartier; November 2007. Context: ... 3.3.2 Imprecise information ... 3.3.3 Uncertain and imprecise information ... information proposed by Philippe Smets ... Figure 5: The process of information modelling

  3. Dual Sticky Hierarchical Dirichlet Process Hidden Markov Model and Its Application to Natural Language Description of Motions.

    PubMed

    Hu, Weiming; Tian, Guodong; Kang, Yongxin; Yuan, Chunfeng; Maybank, Stephen

    2017-09-25

    In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics in an analogy with topic models for document processing), but have unique transition distributions. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. On combining the learnt sources and sinks, semantic motion regions, and the learnt sequence of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene.

  4. Application of the perfectly matched layer in 2.5D marine controlled-source electromagnetic modeling

    NASA Astrophysics Data System (ADS)

    Li, Gang; Han, Bo

    2017-09-01

    For the traditional framework of EM modeling algorithms, the Dirichlet boundary is usually used, which assumes the field values are zero at the boundaries. This crude condition requires that the boundaries be sufficiently far away from the area of interest. Although cell sizes can become larger toward the boundaries as the electromagnetic field propagates diffusively, a large modeling area may still be necessary to mitigate boundary artifacts. In this paper, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 2.5D frequency-domain marine controlled-source electromagnetic (CSEM) field modeling. By using this PML boundary, one can restrict the modeling area of interest to the target region. Only a few absorbing layers surrounding the computational area can effectively suppress the artificial boundary effect without losing numerical accuracy. A 2.5D marine CSEM modeling scheme with the CFS-PML is developed by using the staggered finite-difference discretization. This modeling algorithm using the CFS-PML is of high accuracy, and shows advantages in computational time and memory saving compared with the Dirichlet boundary. For 3D problems, the savings in computation time and memory should be even more significant.

  5. A simple way to unify multicriteria decision analysis (MCDA) and stochastic multicriteria acceptability analysis (SMAA) using a Dirichlet distribution in benefit-risk assessment.

    PubMed

    Saint-Hilary, Gaelle; Cadour, Stephanie; Robert, Veronique; Gasparini, Mauro

    2017-05-01

    Quantitative methodologies have been proposed to support decision making in drug development and monitoring. In particular, multicriteria decision analysis (MCDA) and stochastic multicriteria acceptability analysis (SMAA) are useful tools to assess the benefit-risk ratio of medicines according to the performances of the treatments on several criteria, accounting for the preferences of the decision makers regarding the relative importance of these criteria. However, even in its probabilistic form, MCDA requires exact elicitation of the criteria weights by the decision makers, which may be difficult to achieve in practice. SMAA allows for more flexibility and can be used with unknown or partially known preferences, but it is less popular due to its increased complexity and the high degree of uncertainty in its results. In this paper, we propose a simple model as a generalization of MCDA and SMAA, by applying a Dirichlet distribution to the weights of the criteria and varying its parameters. This single model encompasses both MCDA and SMAA, and allows for a more extended exploration of the benefit-risk assessment of treatments. The precision of its results depends on the precision parameter of the Dirichlet distribution, which can be naturally interpreted as the strength of confidence of the decision makers in their elicitation of preferences. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
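
    A minimal sketch of the central idea: criterion weights are drawn from a Dirichlet distribution whose mean encodes the elicited preferences and whose precision parameter controls how tightly the draws concentrate around them; the acceptability of each treatment is the fraction of draws in which it attains the best weighted score. The treatments, criterion scores, and parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical benefit-risk table: rows = treatments, columns = criteria
# (e.g. efficacy, tolerability, serious adverse events), scaled so that
# higher is better on every criterion.
scores = np.array([
    [0.80, 0.60, 0.70],   # treatment A
    [0.70, 0.75, 0.65],   # treatment B
    [0.60, 0.90, 0.55],   # treatment C
])

# Mean preference weights elicited from decision makers, and a precision
# parameter controlling confidence in that elicitation: a large precision
# approaches precise MCDA weights, a small precision approaches SMAA-style
# weight uncertainty.
mean_weights = np.array([0.5, 0.3, 0.2])
precision = 20.0

draws = rng.dirichlet(precision * mean_weights, size=10000)   # (10000, 3)
utilities = draws @ scores.T                                  # (10000, 3)
best = utilities.argmax(axis=1)
acceptability = np.bincount(best, minlength=scores.shape[0]) / len(best)
print(dict(zip("ABC", acceptability.round(3))))
```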

  6. Scheduling periodic jobs that allow imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1990-01-01

    The problem of scheduling periodic jobs in hard real-time systems that support imprecise computations is discussed. Two workload models of imprecise computations are presented. These models differ from traditional models in that a task may be terminated any time after it has produced an acceptable result. Each task is logically decomposed into a mandatory part followed by an optional part. In a feasible schedule, the mandatory part of every task is completed before the deadline of the task. The optional part refines the result produced by the mandatory part to reduce the error in the result. Applications are classified as type N and type C, according to the undesirable effects of errors; the two workload models characterize these two types of applications. The optional parts of the tasks in a type-N job need not ever be completed. The resulting quality of each type-N job is measured in terms of the average error in the results over several consecutive periods. A class of preemptive, priority-driven algorithms that leads to feasible schedules with small average error is described and evaluated.
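
    A toy sketch of the mandatory/optional decomposition within a single scheduling frame, assuming invented task parameters: mandatory parts are executed first, optional parts consume any remaining capacity in priority order, and the error of a task is the unexecuted fraction of its optional part. This is only an illustration of the workload model, not the paper's scheduling algorithms.

```python
# Minimal sketch of imprecise-computation scheduling within one frame:
# mandatory parts must complete; optional parts consume leftover capacity
# in priority order, and any unexecuted optional fraction counts as error.
tasks = [
    # (name, mandatory_time, optional_time), listed in priority order
    ("sensor_fusion", 2.0, 3.0),
    ("trajectory",    1.5, 2.0),
    ("logging",       0.5, 4.0),
]
frame_length = 8.0

mandatory_total = sum(m for _, m, _ in tasks)
assert mandatory_total <= frame_length, "infeasible: mandatory work exceeds capacity"

slack = frame_length - mandatory_total
errors = {}
for name, _, optional in tasks:
    executed = min(optional, slack)
    slack -= executed
    errors[name] = (optional - executed) / optional   # fraction of refinement skipped

print(errors)   # lower-priority tasks absorb the error when slack runs out
```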

  7. Stable, high-order computation of impedance-impedance operators for three-dimensional layered medium simulations.

    PubMed

    Nicholls, David P

    2018-04-01

    The faithful modelling of the propagation of linear waves in a layered, periodic structure is of paramount importance in many branches of the applied sciences. In this paper, we present a novel numerical algorithm for the simulation of such problems which is free of the artificial singularities present in related approaches. We advocate for a surface integral formulation which is phrased in terms of impedance-impedance operators that are immune to the Dirichlet eigenvalues which plague the Dirichlet-Neumann operators that appear in classical formulations. We demonstrate a high-order spectral algorithm to simulate these latter operators based upon a high-order perturbation of surfaces methodology which is rapid, robust and highly accurate. We demonstrate the validity and utility of our approach with a sequence of numerical simulations.

  8. A three-dimensional Dirichlet-to-Neumann operator for water waves over topography

    NASA Astrophysics Data System (ADS)

    Andrade, D.; Nachbin, A.

    2018-06-01

    Surface water waves are considered propagating over highly variable non-smooth topographies. For this three dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed reducing the numerical modeling and evolution to the two dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care and a Galerkin method is provided accordingly. One dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large amplitude, rapidly varying topography. An alternative, conformal mapping based, method is used for benchmarking. A two dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three dimensional DtN operator.

  9. Stable, high-order computation of impedance-impedance operators for three-dimensional layered medium simulations

    NASA Astrophysics Data System (ADS)

    Nicholls, David P.

    2018-04-01

    The faithful modelling of the propagation of linear waves in a layered, periodic structure is of paramount importance in many branches of the applied sciences. In this paper, we present a novel numerical algorithm for the simulation of such problems which is free of the artificial singularities present in related approaches. We advocate for a surface integral formulation which is phrased in terms of impedance-impedance operators that are immune to the Dirichlet eigenvalues which plague the Dirichlet-Neumann operators that appear in classical formulations. We demonstrate a high-order spectral algorithm to simulate these latter operators based upon a high-order perturbation of surfaces methodology which is rapid, robust and highly accurate. We demonstrate the validity and utility of our approach with a sequence of numerical simulations.

  10. Null boundary controllability of a one-dimensional heat equation with an internal point mass and variable coefficients

    NASA Astrophysics Data System (ADS)

    Ben Amara, Jamel; Bouzidi, Hedi

    2018-01-01

    In this paper, we consider a linear hybrid system which is composed by two non-homogeneous rods connected by a point mass with Dirichlet boundary conditions on the left end and a boundary control acts on the right end. We prove that this system is null controllable with Dirichlet or Neumann boundary controls. Our approach is mainly based on a detailed spectral analysis together with the moment method. In particular, we show that the associated spectral gap in both cases (Dirichlet or Neumann boundary controls) is positive without further conditions on the coefficients other than the regularities.

  11. The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples

    ERIC Educational Resources Information Center

    Avetisyan, Marianna; Fox, Jean-Paul

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…

  12. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
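
    The gamma-Poisson construction at the heart of this abstract can be checked numerically: a Poisson count whose rate is gamma distributed is marginally negative binomial. The sketch below, with arbitrary shape and scale parameters, compares empirical moments with the corresponding NB mean and variance; it illustrates the mixture identity only, not the authors' NB process models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gamma-Poisson mixture: draw a rate from a gamma distribution, then a
# Poisson count given that rate.  Marginally the count is negative binomial
# with r = shape and p = scale / (1 + scale).
shape, scale = 2.0, 3.0
rates = rng.gamma(shape, scale, size=200000)
counts = rng.poisson(rates)

r, p = shape, scale / (1.0 + scale)
nb_mean = r * p / (1 - p)
nb_var = r * p / (1 - p) ** 2
print("empirical mean %.3f vs NB mean %.3f" % (counts.mean(), nb_mean))
print("empirical var  %.3f vs NB var  %.3f" % (counts.var(), nb_var))
```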

  13. DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.

    PubMed

    Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei

    2018-01-01

    Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifier (UMI). Despite the technology advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. Particularly, model-based approaches for clustering large-scale single cell transcriptomic data are still under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available on www.pitt.edu/∼wec47/singlecell.html. wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
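
    A toy generative simulation of the model class described here (not the DIMM-SC inference code): each cell's gene proportions are drawn from a cluster-specific Dirichlet prior and its UMI counts from a multinomial, using invented gene counts and concentration parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

n_genes, n_cells, umis_per_cell = 50, 300, 2000

# Two clusters, each with its own Dirichlet prior over gene proportions.
# The first cluster concentrates expression on the first genes, the second
# on the last genes (invented parameters for illustration only).
alpha = np.ones((2, n_genes))
alpha[0, : n_genes // 2] = 5.0
alpha[1, n_genes // 2 :] = 5.0

cluster = rng.integers(0, 2, size=n_cells)
counts = np.empty((n_cells, n_genes), dtype=int)
for i in range(n_cells):
    proportions = rng.dirichlet(alpha[cluster[i]])
    counts[i] = rng.multinomial(umis_per_cell, proportions)

# A Dirichlet mixture model would now be fit to `counts` to recover `cluster`
# and to quantify each cell's clustering uncertainty.
print(counts.shape, np.bincount(cluster))
```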

  14. New solutions to the constant-head test performed at a partially penetrating well

    NASA Astrophysics Data System (ADS)

    Chang, Y. C.; Yeh, H. D.

    2009-05-01

    Summary: The mathematical model describing the aquifer response to a constant-head test performed at a fully penetrating well can be easily solved by the conventional integral transform technique. In addition, the Dirichlet-type condition should be chosen as the boundary condition along the rim of wellbore for such a test well. However, the boundary condition for a test well with partial penetration must be considered as a mixed-type condition. Generally, the Dirichlet condition is prescribed along the well screen and the Neumann type no-flow condition is specified over the unscreened part of the test well. The model for such a mixed boundary problem in a confined aquifer system of infinite radial extent and finite vertical extent is solved by the dual series equations and perturbation method. This approach provides analytical results for the drawdown in the partially penetrating well and the well discharge along the screen. The semi-analytical solutions are particularly useful for the practical applications from the computational point of view.

  15. Probabilistic sensitivity analysis for decision trees with multiple branches: use of the Dirichlet distribution in a Bayesian framework.

    PubMed

    Briggs, Andrew H; Ades, A E; Price, Martin J

    2003-01-01

    In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes using probabilistic sensitivity analysis, and a method is required to provide probabilistic probabilities over multiple branches that appropriately represents uncertainty while satisfying the requirement that mutually exclusive event probabilities should sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
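
    A minimal sketch of the recommended construction: each row of a Markov transition matrix is sampled from a Dirichlet distribution parameterized by observed transition counts plus a uniform prior, so the mutually exclusive transition probabilities in a row always sum to 1 and zero counts still receive nonzero probability. The counts below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed transition counts between three health states (invented data).
counts = np.array([
    [120,  30,   5],   # transitions observed from state 0
    [ 10,  80,  25],   # transitions observed from state 1
    [  2,   8, 140],   # transitions observed from state 2
])

prior = 1.0   # uniform Dirichlet prior avoids zero probabilities for zero counts

def sample_transition_matrix(counts, prior, rng):
    """Draw one transition matrix; each row is a Dirichlet sample, so the
    mutually exclusive transition probabilities in a row always sum to 1."""
    return np.vstack([rng.dirichlet(row + prior) for row in counts])

samples = np.array([sample_transition_matrix(counts, prior, rng) for _ in range(5000)])
print("posterior mean:\n", samples.mean(axis=0).round(3))
print("95% interval for P(state 0 -> state 1):",
      np.percentile(samples[:, 0, 1], [2.5, 97.5]).round(3))
```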

  16. An improved approximate-Bayesian model-choice method for estimating shared evolutionary history

    PubMed Central

    2014-01-01

    Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergences times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937

  17. The Smoothed Dirichlet Distribution: Understanding Cross-Entropy Ranking in Information Retrieval

    DTIC Science & Technology

    2006-07-01

    Unigram language modeling is a successful probabilistic framework for Information Retrieval (IR) that uses...the Relevance model (RM), a state-of-the-art model for IR in the language modeling framework that uses the same cross-entropy as its ranking function...In addition, the SD-based classifier provides more flexibility than RM in modeling documents owing to a consistent generative framework.

  18. Scalable Topic Modeling: Online Learning, Diagnostics, and Recommendation

    DTIC Science & Technology

    2017-03-01

    Chinese restaurant processes. Journal of Machine Learning Research, 12:2461–2488, 2011. 15. L. Hannah, D. Blei and W. Powell. Dirichlet process mixtures of...34. S. Ghosh, A. Ungureanu, E. Sudderth, and D. Blei. A Spatial distance dependent Chinese restaurant process for image segmentation. In Neural

  19. On the Dirichlet's Box Principle

    ERIC Educational Resources Information Center

    Poon, Kin-Keung; Shiu, Wai-Chee

    2008-01-01

    In this note, we focus on several applications of Dirichlet's box principle in Discrete Mathematics lessons and number theory lessons. In addition, the main result is an innovative game on a triangular board developed by the authors. The game has been used in teaching and learning mathematics in Discrete Mathematics and some high schools in…

  20. Characterization and Modeling of Thoraco-Abdominal Response to Blast Waves. Volume 4. Biomechanical Model of Thorax Response to Blast Loading

    DTIC Science & Technology

    1985-05-01

    non-zero Dirichlet boundary conditions and/or general mixed type boundary conditions. Note that Neumann type boundary condition enters the problem by...Background ... 1.3 General Description ... 2. ANATOMICAL...human and various loading conditions for the definition of a generalized safety guideline of blast exposure. To model the response of a sheep torso

  1. Modelling uncertainty with generalized credal sets: application to conjunction and decision

    NASA Astrophysics Data System (ADS)

    Bronevich, Andrey G.; Rozenberg, Igor N.

    2018-01-01

    To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at the empty set. Based on generalized credal sets, we extend the conjunctive rule to contradictory sources of information, introduce constructions like the natural extension in the theory of imprecise probabilities, and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. We describe ways in which the introduced model can be applied to decision problems.

  2. Test Design Project: Studies in Test Adequacy. Annual Report.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…

  3. The Role of Type and Source of Uncertainty on the Processing of Climate Models Projections.

    PubMed

    Benjamin, Daniel M; Budescu, David V

    2018-01-01

    Scientists agree that the climate is changing due to human activities, but there is less agreement about the specific consequences and their timeline. Disagreement among climate projections is attributable to the complexity of climate models that differ in their structure, parameters, initial conditions, etc. We examine how different sources of uncertainty affect people's interpretation of, and reaction to, information about climate change by presenting participants forecasts from multiple experts. Participants viewed three types of sets of sea-level rise projections: (1) precise but conflicting; (2) imprecise but agreeing; and (3) hybrid sets that were both conflicting and imprecise. They estimated the most likely sea-level rise, provided a range of possible values and rated the sets on several features - ambiguity, credibility, completeness, etc. In Study 1, everyone saw the same hybrid set. We found that participants were sensitive to uncertainty between sources, but not to uncertainty about which model was used. The impacts of conflict and imprecision were combined for estimation tasks and compromised for feature ratings. Estimates were closer to the experts' original projections, and sets were rated more favorably under imprecision. Estimates were least consistent with (narrower than) the experts in the hybrid condition, but participants rated the conflicting set least favorably. In Study 2, we investigated the hybrid case in more detail by creating several distinct interval sets that combine conflict and imprecision. Two factors drive perceptual differences: overlap - the structure of the forecast set (whether intersecting, nested, tangent, or disjoint) - and asymmetry - the balance of the set. Estimates were primarily driven by asymmetry, and preferences were primarily driven by overlap. Asymmetric sets were least consistent with the experts: estimated ranges were narrower, and estimates of the most likely value were shifted further below the set mean. Intersecting and nested sets were rated similarly to imprecision, and ratings of disjoint and tangent sets were rated like conflict. Our goal was to determine which underlying factors of information sets drive perceptions of uncertainty in consistent, predictable ways. The two studies lead us to conclude that perceptions of agreement require intersection and balance, and overly precise forecasts lead to greater perceptions of disagreement and a greater likelihood of the public discrediting and misinterpreting information.

  4. The Role of Type and Source of Uncertainty on the Processing of Climate Models Projections

    PubMed Central

    Benjamin, Daniel M.; Budescu, David V.

    2018-01-01

    Scientists agree that the climate is changing due to human activities, but there is less agreement about the specific consequences and their timeline. Disagreement among climate projections is attributable to the complexity of climate models that differ in their structure, parameters, initial conditions, etc. We examine how different sources of uncertainty affect people’s interpretation of, and reaction to, information about climate change by presenting participants forecasts from multiple experts. Participants viewed three types of sets of sea-level rise projections: (1) precise, but conflicting; (2) imprecise, but agreeing, and (3) hybrid that were both conflicting and imprecise. They estimated the most likely sea-level rise, provided a range of possible values and rated the sets on several features – ambiguity, credibility, completeness, etc. In Study 1, everyone saw the same hybrid set. We found that participants were sensitive to uncertainty between sources, but not to uncertainty about which model was used. The impacts of conflict and imprecision were combined for estimation tasks and compromised for feature ratings. Estimates were closer to the experts’ original projections, and sets were rated more favorably under imprecision. Estimates were least consistent with (narrower than) the experts in the hybrid condition, but participants rated the conflicting set least favorably. In Study 2, we investigated the hybrid case in more detail by creating several distinct interval sets that combine conflict and imprecision. Two factors drive perceptual differences: overlap – the structure of the forecast set (whether intersecting, nested, tangent, or disjoint) – and asymmetry – the balance of the set. Estimates were primarily driven by asymmetry, and preferences were primarily driven by overlap. Asymmetric sets were least consistent with the experts: estimated ranges were narrower, and estimates of the most likely value were shifted further below the set mean. Intersecting and nested sets were rated similarly to imprecision, and ratings of disjoint and tangent sets were rated like conflict. Our goal was to determine which underlying factors of information sets drive perceptions of uncertainty in consistent, predictable ways. The two studies lead us to conclude that perceptions of agreement require intersection and balance, and overly precise forecasts lead to greater perceptions of disagreement and a greater likelihood of the public discrediting and misinterpreting information. PMID:29636717

  5. Fuzzy adaptive iterative learning coordination control of second-order multi-agent systems with imprecise communication topology structure

    NASA Astrophysics Data System (ADS)

    Chen, Jiaxi; Li, Junmin

    2018-02-01

    In this paper, we investigate the perfect consensus problem for second-order linearly parameterised multi-agent systems (MAS) with imprecise communication topology structure. Takagi-Sugeno (T-S) fuzzy models are presented to describe the imprecise communication topology structure of leader-following MAS, and a distributed adaptive iterative learning control protocol is proposed with the dynamic of leader unknown to any of the agent. The proposed protocol guarantees that the follower agents can track the leader perfectly on [0,T] for the consensus problem. Under alignment condition, a sufficient condition of the consensus for closed-loop MAS is given based on Lyapunov stability theory. Finally, a numerical example and a multiple pendulum system are given to illustrate the effectiveness of the proposed algorithm.

  6. The Casimir effect for parallel plates revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawakami, N. A.; Nemes, M. C.; Wreszinski, Walter F.

    2007-10-15

    The Casimir effect for a massless scalar field with Dirichlet and periodic boundary conditions (bc's) on infinite parallel plates is revisited in the local quantum field theory (lqft) framework introduced by Kay [Phys. Rev. D 20, 3052 (1979)]. The model displays a number of more realistic features than the ones he treated. In addition to local observables, such as the energy density, we propose to consider intensive variables, such as the energy per unit area ε, as fundamental observables. Adopting this view, lqft rejects Dirichlet (the same result may be proved for Neumann or mixed) bc, and accepts periodic bc: in the former case ε diverges, in the latter it is finite, as is shown by an expression for the local energy density obtained from lqft through the use of the Poisson summation formula. Another way to see this uses methods from the Euler summation formula: in the proof of regularization independence of the energy per unit area, a regularization-dependent surface term arises upon use of Dirichlet bc, but not periodic bc. For the conformally invariant scalar quantum field, this surface term is absent due to the condition of zero trace of the energy momentum tensor, as remarked by De Witt [Phys. Rep. 19, 295 (1975)]. The latter property does not hold in the application to the dark energy problem in cosmology, in which we argue that periodic bc might play a distinguished role.

  7. Uniform gradient estimates on manifolds with a boundary and applications

    NASA Astrophysics Data System (ADS)

    Cheng, Li-Juan; Thalmaier, Anton; Thompson, James

    2018-04-01

    We revisit the problem of obtaining uniform gradient estimates for Dirichlet and Neumann heat semigroups on Riemannian manifolds with boundary. As applications, we obtain isoperimetric inequalities, using Ledoux's argument, and uniform quantitative gradient estimates, firstly for C^2_b functions with boundary conditions and then for the unit spectral projection operators of Dirichlet and Neumann Laplacians.

  8. High skill in low-frequency climate response through fluctuation dissipation theorems despite structural instability.

    PubMed

    Majda, Andrew J; Abramov, Rafail; Gershgorin, Boris

    2010-01-12

    Climate change science focuses on predicting the coarse-grained, planetary-scale, longtime changes in the climate system due to either changes in external forcing or internal variability, such as the impact of increased carbon dioxide. The predictions of climate change science are carried out through comprehensive, computational atmospheric, and oceanic simulation models, which necessarily parameterize physical features such as clouds, sea ice cover, etc. Recently, it has been suggested that there is irreducible imprecision in such climate models that manifests itself as structural instability in climate statistics and which can significantly hamper the skill of computer models for climate change. A systematic approach to deal with this irreducible imprecision is advocated through algorithms based on the Fluctuation Dissipation Theorem (FDT). There are important practical and computational advantages for climate change science when a skillful FDT algorithm is established. The FDT response operator can be utilized directly for multiple climate change scenarios, multiple changes in forcing, and other parameters, such as damping and inverse modelling directly without the need of running the complex climate model in each individual case. The high skill of FDT in predicting climate change, despite structural instability, is developed in an unambiguous fashion using mathematical theory as guidelines in three different test models: a generic class of analytical models mimicking the dynamical core of the computer climate models, reduced stochastic models for low-frequency variability, and models with a significant new type of irreducible imprecision involving many fast, unstable modes.
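
    The quasi-Gaussian form of FDT can be illustrated on a one-dimensional Ornstein-Uhlenbeck toy model (not one of the authors' test models): the mean response operator to a constant forcing equals the time integral of the normalized autocorrelation, estimated here from unperturbed data alone and checked against the exact value 1/a. All parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unperturbed Ornstein-Uhlenbeck model:  dx = -a*x dt + s dW.
# Its mean response to a small constant forcing f is f/a at steady state;
# the quasi-Gaussian FDT recovers the operator 1/a as the time integral of
# the normalized autocorrelation, computed from unperturbed data alone.
a, s, dt, n = 0.5, 1.0, 0.01, 500_000
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(scale=np.sqrt(dt), size=n)
for i in range(1, n):
    x[i] = x[i - 1] - a * x[i - 1] * dt + s * noise[i]

# Autocorrelation C(tau)/C(0) estimated from the series, then integrated
# with the trapezoid rule out to roughly 20 time units.
stride, max_lag = 50, int(20.0 / dt)
x0 = x - x.mean()
var = x0.var()
acf = np.array([np.mean(x0[: n - k] * x0[k:]) / var for k in range(0, max_lag, stride)])
spacing = stride * dt
fdt_response = spacing * (acf.sum() - 0.5 * (acf[0] + acf[-1]))

print("FDT estimate of the response operator: %.2f (exact 1/a = %.2f)"
      % (fdt_response, 1.0 / a))
```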

  9. Using phrases and document metadata to improve topic modeling of clinical reports.

    PubMed

    Speier, William; Ong, Michael K; Arnold, Corey W

    2016-06-01

    Probabilistic topic models provide an unsupervised method for analyzing unstructured text and have the potential to be integrated into clinical automatic summarization systems. Clinical documents are accompanied by metadata in a patient's medical history and frequently contain multiword concepts that can be valuable for accurately interpreting the included text. While existing methods have attempted to address these problems individually, we present a unified model for free-text clinical documents that integrates contextual patient- and document-level data and discovers multiword concepts. In the proposed model, phrases are represented by chained n-grams and a Dirichlet hyper-parameter is weighted by both document-level and patient-level context. This method and three other Latent Dirichlet allocation models were fit to a large collection of clinical reports. Examples of resulting topics demonstrate the results of the new model, and the quality of the representations is evaluated using empirical log likelihood. The proposed model was able to create informative prior probabilities based on patient and document information, and captured phrases that represented various clinical concepts. The representation using the proposed model had a significantly higher empirical log likelihood than the compared methods. Integrating document metadata and capturing phrases in clinical text greatly improves the topic representation of clinical documents. The resulting clinically informative topics may effectively serve as the basis for an automatic summarization system for clinical reports. Copyright © 2016 Elsevier Inc. All rights reserved.
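
    A rough, hedged approximation of the phrase idea using scikit-learn: letting bigrams enter the vocabulary gives multi-word concepts their own LDA features. The proposed model's chained n-grams and patient/document-level weighting of the Dirichlet hyper-parameter are not reproduced, and the example reports below are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy clinical-style snippets (invented); ngram_range=(1, 2) lets multi-word
# concepts such as "chest pain" or "pleural effusion" enter the vocabulary
# as single features, a rough stand-in for chained n-gram phrases.
reports = [
    "patient reports chest pain and shortness of breath",
    "chest pain resolved after treatment no pleural effusion",
    "small pleural effusion noted on chest radiograph",
    "no acute findings on chest radiograph",
]

vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english")
counts = vectorizer.fit_transform(reports)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[::-1][:5]
    print("topic %d:" % k, ", ".join(terms[i] for i in top))
```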

  10. Augmenting Latent Dirichlet Allocation and Rank Threshold Detection with Ontologies

    DTIC Science & Technology

    2010-03-01

    Probabilistic Latent Semantic Indexing (PLSI) is an automated indexing information retrieval model [20]. It is based on a statistical latent class model which is...uses a statistical foundation that is more accurate in finding hidden semantic relationships [20]. The model uses factor analysis of count data, number...principle of statistical inference which asserts that all of the information in a sample is contained in the likelihood function [20]. The statistical

  11. Dirichlet to Neumann operator for Abelian Yang-Mills gauge fields

    NASA Astrophysics Data System (ADS)

    Díaz-Marín, Homero G.

    We consider the Dirichlet to Neumann operator for Abelian Yang-Mills boundary conditions. The aim is constructing a complex structure for the symplectic space of boundary conditions of Euler-Lagrange solutions modulo gauge for space-time manifolds with smooth boundary. Thus we prepare a suitable scenario for geometric quantization within the reduced symplectic space of boundary conditions of Abelian gauge fields.

  12. A characteristic based volume penalization method for general evolution problems applied to compressible viscous flows

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Kasimov, Nurlybek; Vasilyev, Oleg V.

    2014-04-01

    In order to introduce solid obstacles into flows, several different methods are used, including volume penalization methods which prescribe appropriate boundary conditions by applying local forcing to the constitutive equations. One well known method is Brinkman penalization, which models solid obstacles as porous media. While it has been adapted for compressible, incompressible, viscous and inviscid flows, it is limited in the types of boundary conditions that it imposes, as are most volume penalization methods. Typically, approaches are limited to Dirichlet boundary conditions. In this paper, Brinkman penalization is extended for generalized Neumann and Robin boundary conditions by introducing hyperbolic penalization terms with characteristics pointing inward on solid obstacles. This Characteristic-Based Volume Penalization (CBVP) method is a comprehensive approach to conditions on immersed boundaries, providing for homogeneous and inhomogeneous Dirichlet, Neumann, and Robin boundary conditions on hyperbolic and parabolic equations. This CBVP method can be used to impose boundary conditions for both integrated and non-integrated variables in a systematic manner that parallels the prescription of exact boundary conditions. Furthermore, the method does not depend upon a physical model, as with porous media approach for Brinkman penalization, and is therefore flexible for various physical regimes and general evolutionary equations. Here, the method is applied to scalar diffusion and to direct numerical simulation of compressible, viscous flows. With the Navier-Stokes equations, both homogeneous and inhomogeneous Neumann boundary conditions are demonstrated through external flow around an adiabatic and heated cylinder. Theoretical and numerical examination shows that the error from penalized Neumann and Robin boundary conditions can be rigorously controlled through an a priori penalization parameter η. The error on a transient boundary is found to converge as O(η), which is more favorable than the error convergence of the already established Dirichlet boundary condition.
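
    The classical Brinkman (Dirichlet) penalization that this method extends can be sketched on a 1D heat equation: a forcing term -(mask/eta)(u - u_wall) inside the obstacle drives the solution toward the wall value, with error controlled by the penalization parameter eta. The CBVP treatment of Neumann and Robin conditions is not reproduced here, and all numbers are illustrative.

```python
import numpy as np

# 1D heat equation u_t = nu * u_xx on [0, 1], with a solid obstacle occupying
# x in [0.4, 0.6].  Classical Brinkman penalization adds the forcing term
#   -(mask / eta) * (u - u_wall)
# inside the obstacle, which drives u toward the Dirichlet value u_wall there;
# the penalization error shrinks as eta -> 0.
nx, nu, eta, u_wall = 201, 0.01, 1e-4, 0.0
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.2 * min(dx**2 / nu, eta)      # explicit step restricted by both terms

mask = ((x >= 0.4) & (x <= 0.6)).astype(float)   # 1 inside the obstacle
u = np.sin(np.pi * x)                            # initial condition

for _ in range(int(0.05 / dt)):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u += dt * (nu * lap - mask / eta * (u - u_wall))
    u[0] = u[-1] = 0.0                           # domain boundaries

print("max |u - u_wall| inside obstacle: %.2e" % np.abs(u[mask == 1] - u_wall).max())
```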

  13. Neighbor-Dependent Ramachandran Probability Distributions of Amino Acids Developed from a Hierarchical Dirichlet Process Model

    PubMed Central

    Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.

    2010-01-01

    Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867

  14. Extending information retrieval methods to personalized genomic-based studies of disease.

    PubMed

    Ye, Shuyun; Dawson, John A; Kendziorski, Christina

    2014-01-01

    Genomic-based studies of disease now involve diverse types of data collected on large groups of patients. A major challenge facing statistical scientists is how best to combine the data, extract important features, and comprehensively characterize the ways in which they affect an individual's disease course and likelihood of response to treatment. We have developed a survival-supervised latent Dirichlet allocation (survLDA) modeling framework to address these challenges. Latent Dirichlet allocation (LDA) models have proven extremely effective at identifying themes common across large collections of text, but applications to genomics have been limited. Our framework extends LDA to the genome by considering each patient as a "document" with "text" detailing his/her clinical events and genomic state. We then further extend the framework to allow for supervision by a time-to-event response. The model enables the efficient identification of collections of clinical and genomic features that co-occur within patient subgroups, and then characterizes each patient by those features. An application of survLDA to The Cancer Genome Atlas ovarian project identifies informative patient subgroups showing differential response to treatment, and validation in an independent cohort demonstrates the potential for patient-specific inference.
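
    survLDA itself is not available in standard libraries, but its unsupervised ingredient can be sketched with scikit-learn's LDA implementation, treating each patient as a "document" of clinical/genomic event tokens; the corpus, token names, and topic count below are made up for illustration.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Each "document" is a patient; "words" are hypothetical clinical/genomic event tokens.
    patients = [
        "TP53_mut platinum_resistant high_grade relapse",
        "TP53_mut high_grade platinum_resistant",
        "BRCA1_mut platinum_sensitive complete_response",
        "BRCA1_mut platinum_sensitive long_survival complete_response",
    ]
    counts = CountVectorizer(token_pattern=r"\S+").fit_transform(patients)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
    print(lda.transform(counts))   # per-patient topic mixtures (rows sum to 1)
    ```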

  15. Effect of chemical kinetics uncertainties on calculated constituents in a tropospheric photochemical model

    NASA Technical Reports Server (NTRS)

    Thompson, Anne M.; Stewart, Richard W.

    1991-01-01

    Random photochemical reaction rates are employed in a 1D photochemical model to examine uncertainties in tropospheric concentrations and thereby determine critical kinetic processes and significant correlations. Monte Carlo computations are used to simulate different chemical environments and their related imprecisions. The most critical processes are the primary photodissociation of O3 (which initiates ozone destruction) and NO2 (which initiates ozone formation), and the OH/methane reaction is significant. Several correlations and anticorrelations between species are discussed, and the ozone/transient OH correlation is examined in detail. One important result of the modeling is that estimates of global OH are generally about 25 percent uncertain, limiting the precision of photochemical models. Techniques for reducing the imprecision are discussed which emphasize the use of species and radical species measurements.
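
    The Monte Carlo idea can be sketched generically (this is not the authors' 1D photochemical model): sample rate constants from lognormal uncertainty factors and propagate them through a derived quantity, then read off the resulting spread. The nominal values and uncertainty factors below are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Hypothetical nominal rate constants and lognormal uncertainty factors f
    # (a factor f means roughly 68% of samples fall within nominal/f .. nominal*f).
    nominal = {"k1": 2.0e-11, "k2": 8.0e-12, "k3": 1.5e-13}
    factor  = {"k1": 1.2,     "k2": 1.3,     "k3": 1.15}

    samples = {
        name: nominal[name] * np.exp(rng.normal(0.0, np.log(factor[name]), n))
        for name in nominal
    }

    # Toy derived quantity standing in for a modeled concentration.
    y = samples["k1"] * samples["k2"] / samples["k3"]
    print("relative 1-sigma spread of y: %.1f%%" % (100 * y.std() / y.mean()))
    ```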

  16. Modeling virtual organizations with Latent Dirichlet Allocation: a case for natural language processing.

    PubMed

    Gross, Alexander; Murthy, Dhiraj

    2014-10-01

    This paper explores a variety of methods for applying the Latent Dirichlet Allocation (LDA) automated topic modeling algorithm to the modeling of the structure and behavior of virtual organizations found within modern social media and social networking environments. As the field of Big Data reveals, an increase in the scale of social data available presents new challenges which are not tackled by merely scaling up hardware and software. Rather, they necessitate new methods and, indeed, new areas of expertise. Natural language processing provides one such method. This paper applies LDA to the study of scientific virtual organizations whose members employ social technologies. Because of the vast data footprint in these virtual platforms, we found that natural language processing was needed to 'unlock' and render visible latent, previously unseen conversational connections across large textual corpora (spanning profiles, discussion threads, forums, and other social media incarnations). We introduce variants of LDA and ultimately argue that natural language processing is a critical interdisciplinary methodology for making better sense of social 'Big Data'; using LDA, we successfully modeled nested discussion topics from forums and blog posts. Importantly, we found that LDA can move us beyond the state-of-the-art in conventional Social Network Analysis techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Grey fuzzy optimization model for water quality management of a river system

    NASA Astrophysics Data System (ADS)

    Karmakar, Subhankar; Mujumdar, P. P.

    2006-07-01

    A grey fuzzy optimization model is developed for water quality management of a river system to address the uncertainty involved in fixing the membership functions for different goals of the Pollution Control Agency (PCA) and dischargers. The present model, the Grey Fuzzy Waste Load Allocation Model (GFWLAM), has the capability to incorporate the conflicting goals of the PCA and dischargers in a deterministic framework. The imprecision associated with specifying the water quality criteria and fractional removal levels is modeled in a fuzzy mathematical framework. To address the imprecision in fixing the lower and upper bounds of membership functions, the membership functions themselves are treated as fuzzy in the model and the membership parameters are expressed as interval grey numbers, a closed and bounded interval with known lower and upper bounds but unknown distribution information. The model provides flexibility for the PCA and dischargers to specify their aspirations independently, as the membership parameters for different membership functions, specified for different imprecise goals, are interval grey numbers in place of a deterministic real number. In the final solution, optimal fractional removal levels of the pollutants are obtained in the form of interval grey numbers. This enhances the flexibility and applicability in decision-making, as the decision-maker gets a range of optimal solutions for fixing the final decision scheme considering the technical and economic feasibility of the pollutant treatment levels. Application of the GFWLAM is illustrated with a case study of the Tunga-Bhadra river system in India.

  18. Effect of defuzzification method of fuzzy modeling

    NASA Astrophysics Data System (ADS)

    Lapohos, Tibor; Buchal, Ralph O.

    1994-10-01

    Imprecision can arise in fuzzy relational modeling as a result of fuzzification, inference and defuzzification. These three sources of imprecision are difficult to separate. We have determined through numerical studies that an important source of imprecision is the defuzzification stage. This imprecision adversely affects the quality of the model output. The most widely used defuzzification algorithm is known by the name of `center of area' (COA) or `center of gravity' (COG). In this paper, we show that this algorithm not only maps the near-limit values of the variables improperly but also introduces errors for middle-domain values of the same variables. Furthermore, the behavior of this algorithm is a function of the shape of the reference sets. We compare the COA method to the weighted average of cluster centers (WACC) procedure, in which the transformation is carried out based on the values of the cluster centers belonging to each of the reference membership functions instead of using the functions themselves. We show that this procedure is more effective and computationally much faster than the COA. The method is tested for a family of reference sets satisfying certain constraints, that is, for any support value the sum of reference membership function values equals one and the peak values of the two marginal membership functions project to the boundaries of the universe of discourse. For all the member sets of this family of reference sets the defuzzification errors do not grow as the linguistic variables tend to their extreme values. In addition, the more reference sets that are defined for a certain linguistic variable, the smaller the average defuzzification error becomes. In the case of triangle-shaped reference sets there is no defuzzification error at all. Finally, an alternative solution is provided that improves the performance of the COA method.
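
    A small numerical comparison of the two defuzzification schemes discussed above, center of area versus a weighted average of cluster (peak) centers, can be sketched as follows; the triangular reference sets and rule activations are illustrative assumptions, not the paper's test cases.

    ```python
    import numpy as np

    x = np.linspace(0.0, 1.0, 1001)

    def tri(x, a, b, c):
        """Triangular membership function with peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

    # Three reference sets whose membership values sum to one everywhere on [0, 1].
    centers = [0.0, 0.5, 1.0]
    sets = [tri(x, -0.5, 0.0, 0.5), tri(x, 0.0, 0.5, 1.0), tri(x, 0.5, 1.0, 1.5)]

    firing = np.array([0.7, 0.3, 0.0])       # example rule activations

    # Center of area (COA): clip each set at its firing strength, aggregate by max, take the centroid.
    aggregate = np.max([np.minimum(mu, w) for mu, w in zip(sets, firing)], axis=0)
    coa = (aggregate * x).sum() / aggregate.sum()

    # Weighted average of cluster centers (WACC): use only the peak locations.
    wacc = np.dot(firing, centers) / firing.sum()

    print(f"COA = {coa:.3f}   WACC = {wacc:.3f}")
    ```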

  19. Existence and uniqueness of steady state solutions of a nonlocal diffusive logistic equation

    NASA Astrophysics Data System (ADS)

    Sun, Linan; Shi, Junping; Wang, Yuwen

    2013-08-01

    In this paper, we consider a dynamical model of population biology which is of the classical Fisher type, but the competition interaction between individuals is nonlocal. The existence, uniqueness, and stability of the steady state solution of the nonlocal problem on a bounded interval with homogeneous Dirichlet boundary conditions are studied.

  20. Entity Relation Detection with Factorial Hidden Markov Models and Maximum Entropy Discriminant Latent Dirichlet Allocations

    ERIC Educational Resources Information Center

    Li, Dingcheng

    2011-01-01

    Coreference resolution (CR) and entity relation detection (ERD) aim at finding predefined relations between pairs of entities in text. CR focuses on resolving identity relations while ERD focuses on detecting non-identity relations. Both CR and ERD are important as they can potentially improve other natural language processing (NLP) related tasks…

  1. Generalized Riemann hypothesis and stochastic time series

    NASA Astrophysics Data System (ADS)

    Mussardo, Giuseppe; LeClair, André

    2018-06-01

    Using the Dirichlet theorem on the equidistribution of residue classes modulo q and the Lemke Oliver–Soundararajan conjecture on the distribution of pairs of residues on consecutive primes, we show that the domain of convergence of the infinite product of Dirichlet L-functions of non-principal characters can be extended from Re(s) > 1 down to Re(s) > 1/2, without encountering any zeros before reaching the critical line Re(s) = 1/2. The possibility of doing so can be traced back to a universal diffusive random walk behavior of a series C_N over the primes which underlies the convergence of the infinite product of the Dirichlet functions. The series C_N presents several aspects in common with stochastic time series and its control requires addressing a problem similar to the single Brownian trajectory problem in statistical mechanics. In the case of the Dirichlet functions of non-principal characters, we show that this problem can be solved in terms of a self-averaging procedure based on an ensemble of block variables computed on extended intervals of primes. Those intervals, called inertial intervals, ensure the ergodicity and stationarity of the time series underlying the quantity C_N. The infinity of primes also ensures the absence of rare events which would have been responsible for a different scaling behavior than the universal law of the random walks.

  2. Does Introducing Imprecision around Probabilities for Benefit and Harm Influence the Way People Value Treatments?

    PubMed

    Bansback, Nick; Harrison, Mark; Marra, Carlo

    2016-05-01

    Imprecision in estimates of benefits and harms around treatment choices is rarely described to patients. Variation in sampling error between treatment alternatives (e.g., treatments have similar average risks, but one treatment has a larger confidence interval) can result in patients failing to choose the option that is best for them. The aim of this study is to use a discrete choice experiment to describe how 2 methods for conveying imprecision in risk influence people's treatment decisions. We randomized a representative sample of the Canadian general population to 1 of 3 surveys that sought choices between hypothetical treatments for rheumatoid arthritis based on different levels of 7 attributes: route and frequency of administration, chance of benefit, serious and minor side effects and life expectancy, and imprecision in benefit and side-effect estimates. The surveys differed in the way imprecision was described: 1) no imprecision, 2) quantitative description based on a range with a visual graphic, and 3) qualitative description simply describing the confidence in the evidence. The analyzed data were from 2663 respondents. Results suggested that more people understood imprecision when it was described qualitatively (88%) versus quantitatively (68%). Respondents who appeared to understand imprecision descriptions placed high value on increased precision regarding the actual benefits and harms of treatment, equivalent to the value placed on the information about the probability of serious side effects. Both qualitative and quantitative methods led to small but significant increases in decision uncertainty for choosing any treatment. Limitations included some issues in defining understanding of imprecision and the use of an internet survey of panel members. These findings provide insight into how conveying imprecision information influences patient treatment choices. © The Author(s) 2015.

  3. Coupled oscillators in identification of nonlinear damping of a real parametric pendulum

    NASA Astrophysics Data System (ADS)

    Olejnik, Paweł; Awrejcewicz, Jan

    2018-01-01

    A damped parametric pendulum with friction is identified twice by means of its precise and imprecise mathematical model. A laboratory test stand designed for experimental investigations of nonlinear effects determined by a viscous resistance and the stick-slip phenomenon serves as the model mechanical system. An influence of accurateness of mathematical modeling on the time variability of the nonlinear damping coefficient of the oscillator is proved. A free decay response of a precisely and imprecisely modeled physical pendulum is dependent on two different time-varying coefficients of damping. The coefficients of the analyzed parametric oscillator are identified with the use of a new semi-empirical method based on a coupled oscillators approach, utilizing the fractional order derivative of the discrete measurement series treated as an input to the numerical model. Results of application of the proposed method of identification of the nonlinear coefficients of the damped parametric oscillator have been illustrated and extensively discussed.

  4. Stochastic search, optimization and regression with energy applications

    NASA Astrophysics Data System (ADS)

    Hannah, Lauren A.

    Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage. The one stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values---which depend on the selected portfolio---to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet Process mixtures of Generalized Linear Models (DPGLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response. We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel based weights and more generally that nonparametric estimation methods provide good solutions to otherwise intractable problems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Dr. Li; Cui, Xiaohui; Cemerlic, Alma

    Ad hoc networks are very helpful in situations where no fixed network infrastructure is available, such as natural disasters and military conflicts. In such a network, all wireless nodes are equal peers simultaneously serving as both senders and routers for other nodes. Therefore, how to route packets through reliable paths becomes a fundamental problem when the behavior of certain nodes deviates from wireless ad hoc routing protocols. We proposed a novel Dirichlet reputation model based on Bayesian inference theory which evaluates the reliability of each node in terms of packet delivery. Our system offers a way to predict and select a reliable path through a combination of first-hand observations and second-hand reputation reports. We also proposed a moving-window mechanism which helps adjust the responsiveness of our system to changes in node behavior. We integrated the Dirichlet reputation model into the routing protocol of wireless ad hoc networks. Our extensive simulations indicate that the proposed reputation system can improve the goodput of the network and reduce the negative impact caused by misbehaving nodes.
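
    A minimal sketch of the core idea, not the authors' protocol integration: a Beta/Dirichlet-style reputation update over a sliding window of packet-forwarding observations, where the window bounds how long past behavior influences the estimate. The window length, prior pseudo-count, and observation stream are assumptions.

    ```python
    from collections import deque

    class DirichletReputation:
        """Beta/Dirichlet-style reputation over a sliding window of forward/drop observations."""
        def __init__(self, window=50, prior=1.0):
            self.window = deque(maxlen=window)   # moving window limits the memory of old behavior
            self.prior = prior                   # symmetric prior pseudo-count

        def observe(self, forwarded: bool):
            self.window.append(1 if forwarded else 0)

        def reliability(self) -> float:
            good = sum(self.window)
            bad = len(self.window) - good
            # Posterior expectation of the forwarding probability.
            return (good + self.prior) / (good + bad + 2 * self.prior)

    rep = DirichletReputation(window=20)
    for outcome in [True] * 15 + [False] * 10:   # node starts cooperating, then misbehaves
        rep.observe(outcome)
    print("estimated delivery reliability:", round(rep.reliability(), 3))
    ```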

  6. The spectra of rectangular lattices of quantum waveguides

    NASA Astrophysics Data System (ADS)

    Nazarov, S. A.

    2017-02-01

    We obtain asymptotic formulae for the spectral segments of a thin (h ≪ 1) rectangular lattice of quantum waveguides which is described by a Dirichlet problem for the Laplacian. We establish that the structure of the spectrum of the lattice is incorrectly described by the commonly accepted quantum graph model with the traditional Kirchhoff conditions at the vertices. It turns out that the lengths of the spectral segments are infinitesimals of order O(e^{-δ/h}), δ > 0, and O(h) as h → +0, and gaps of width O(h^{-2}) and O(1) arise between them in the low-frequency and middle-frequency spectral ranges respectively. The first spectral segment is generated by the (unique) eigenvalue in the discrete spectrum of an infinite cross-shaped waveguide Θ. The absence of bounded solutions of the problem in Θ at the threshold frequency means that the correct model of the lattice is a graph with Dirichlet conditions at the vertices which splits into two infinite subsets of identical edge-intervals. By using perturbations of finitely many joints, we construct any given number of discrete spectrum points of the lattice below the essential spectrum as well as inside the gaps.

  7. Diffusion Processes Satisfying a Conservation Law Constraint

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2014-03-04

    We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.

  8. Diffusion Processes Satisfying a Conservation Law Constraint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakosi, J.; Ristorcelli, J. R.

    We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.

  9. Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.

    PubMed

    Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo

    Unsupervised object discovery and localization aims to discover dominant object classes and localize all object instances from a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge for the given image collection is exploited to facilitate object discovery. On the other hand, the topic models used in those methods suffer from the topic coherence issue: some inferred topics do not have clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in terms of the so-called must-links is exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined so that one must-link only constrains one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes, thus the must-links in our approach are semantic-specific, which allows more efficient exploitation of discriminative prior knowledge from Web images. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization. In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.

  10. Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.

    PubMed

    Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry

    2016-09-01

    Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and it is compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway and it is shown that the multi-compartment model fits the experimental data better. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software. Contact: konstantinos.koutroumpas@ecp.fr. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
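
    The authors distribute their own Python scripts; as an unofficial sketch of the Dirichlet process mixture ingredient alone, scikit-learn's truncated DP Gaussian mixture can be fit to a synthetic "particle" population (the data and truncation level below are assumptions):

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic 2-D "parameter particles" drawn from two modes (stand-in for an ABC-SMC population).
    particles = np.vstack([
        rng.normal([0.2, 1.0], 0.05, size=(150, 2)),
        rng.normal([0.8, 0.3], 0.05, size=(150, 2)),
    ])

    dpm = BayesianGaussianMixture(
        n_components=10,                                  # truncation level; surplus components get ~0 weight
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    ).fit(particles)

    print("effective components:", np.sum(dpm.weights_ > 0.01))
    print("weights:", np.round(dpm.weights_, 3))
    ```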

  11. A fuzzy-theory-based behavioral model for studying pedestrian evacuation from a single-exit room

    NASA Astrophysics Data System (ADS)

    Fu, Libi; Song, Weiguo; Lo, Siuming

    2016-08-01

    Many mass events in recent years have highlighted the importance of research on pedestrian evacuation dynamics. A number of models have been developed to analyze crowd behavior under evacuation situations. However, few focus on pedestrians' decision-making with respect to uncertainty, vagueness and imprecision. In this paper, a discrete evacuation model defined on the cellular space is proposed according to the fuzzy theory which is able to describe imprecise and subjective information. Pedestrians' percept information and various characteristics are regarded as fuzzy input. Then fuzzy inference systems with rule bases, which resemble human reasoning, are established to obtain fuzzy output that decides pedestrians' movement direction. This model is tested in two scenarios, namely in a single-exit room with and without obstacles. Simulation results reproduce some classic dynamics phenomena discovered in real building evacuation situations, and are consistent with those in other models and experiments. It is hoped that this study will enrich movement rules and approaches in traditional cellular automaton models for evacuation dynamics.

  12. Uniqueness for the electrostatic inverse boundary value problem with piecewise constant anisotropic conductivities

    NASA Astrophysics Data System (ADS)

    Alessandrini, Giovanni; de Hoop, Maarten V.; Gaburro, Romina

    2017-12-01

    We discuss the inverse problem of determining the, possibly anisotropic, conductivity of a body Ω ⊂ ℝ^n when the so-called Neumann-to-Dirichlet map is locally given on a non-empty curved portion Σ of the boundary ∂Ω. We prove that anisotropic conductivities that are a priori known to be piecewise constant matrices on a given partition of Ω with curved interfaces can be uniquely determined in the interior from the knowledge of the local Neumann-to-Dirichlet map.

  13. Quasi-measures on the group G^m, Dirichlet sets, and uniqueness problems for multiple Walsh series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plotnikov, Mikhail G

    2011-02-11

    Multiple Walsh series (S) on the group G^m are studied. It is proved that every at most countable set is a uniqueness set for series (S) under convergence over cubes. The recovery problem is solved for the coefficients of series (S) that converge outside countable sets or outside sets of Dirichlet type. A number of analogues of the de la Vallée Poussin theorem are established for series (S). Bibliography: 28 titles.

  14. Effect of background dielectric on TE-polarized photonic bandgap of metallodielectric photonic crystals using Dirichlet-to-Neumann map method.

    PubMed

    Sedghi, Aliasghar; Rezaei, Behrooz

    2016-11-20

    Using the Dirichlet-to-Neumann map method, we have calculated the photonic band structure of two-dimensional metallodielectric photonic crystals having the square and triangular lattices of circular metal rods in a dielectric background. We have selected the transverse electric mode of electromagnetic waves, and the resulting band structures showed the existence of photonic bandgap in these structures. We theoretically study the effect of background dielectric on the photonic bandgap.

  15. Manifold Matching: Joint Optimization of Fidelity and Commensurability

    DTIC Science & Technology

    2011-11-12

    identified separately in p◦m, will be geometrically incommensurate (see Figure 7). Thus the null distribution of the test statistic will be inflated...into the objective function obviates the geometric incommensurability phenomenon. Thus we can establish that, for a range of Dirichlet product model...from the geometric incommensurability phenomenon. Then q p implies that cca suffers from the spurious correlation phenomenon with high probability

  16. Clinical phenotype-based gene prioritization: an initial study using semantic similarity and the human phenotype ontology.

    PubMed

    Masino, Aaron J; Dechene, Elizabeth T; Dulik, Matthew C; Wilkens, Alisha; Spinner, Nancy B; Krantz, Ian D; Pennington, Jeffrey W; Robinson, Peter N; White, Peter S

    2014-07-21

    Exome sequencing is a promising method for diagnosing patients with a complex phenotype. However, variant interpretation relative to patient phenotype can be challenging in some scenarios, particularly clinical assessment of rare complex phenotypes. Each patient's sequence reveals many possibly damaging variants that must be individually assessed to establish clear association with patient phenotype. To assist interpretation, we implemented an algorithm that ranks a given set of genes relative to patient phenotype. The algorithm orders genes by the semantic similarity computed between phenotypic descriptors associated with each gene and those describing the patient. Phenotypic descriptor terms are taken from the Human Phenotype Ontology (HPO) and semantic similarity is derived from each term's information content. Model validation was performed via simulation and with clinical data. We simulated 33 Mendelian diseases with 100 patients per disease. We modeled clinical conditions by adding noise and imprecision, i.e. phenotypic terms unrelated to the disease and terms less specific than the actual disease terms. We ranked the causative gene against all 2488 HPO annotated genes. The median causative gene rank was 1 for the optimal and noise cases, 12 for the imprecision case, and 60 for the imprecision with noise case. Additionally, we examined a clinical cohort of subjects with hearing impairment. The disease gene median rank was 22. However, when also considering the patient's exome data and filtering non-exomic and common variants, the median rank improved to 3. Semantic similarity can rank a causative gene highly within a gene list relative to patient phenotype characteristics, provided that imprecision is mitigated. The clinical case results suggest that phenotype rank combined with variant analysis provides significant improvement over the individual approaches. We expect that this combined prioritization approach may increase accuracy and decrease effort for clinical genetic diagnosis.
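
    A toy sketch of the scoring idea (not the authors' HPO pipeline): information-content-based, Resnik-style similarity between a patient's terms and a gene's annotated terms, aggregated by best-match averaging. The mini-ontology, term names, and frequencies below are entirely hypothetical.

    ```python
    import math

    # Hypothetical mini-ontology: term -> set of ancestors (including itself).
    ancestors = {
        "HP:abnormal_hearing": {"HP:abnormal_hearing", "HP:abnormality_of_ear", "HP:root"},
        "HP:sensorineural":    {"HP:sensorineural", "HP:abnormal_hearing", "HP:abnormality_of_ear", "HP:root"},
        "HP:seizure":          {"HP:seizure", "HP:abnormality_of_cns", "HP:root"},
    }
    # Hypothetical annotation frequencies used to derive information content IC = -log p.
    freq = {"HP:root": 1.0, "HP:abnormality_of_ear": 0.1, "HP:abnormal_hearing": 0.05,
            "HP:sensorineural": 0.02, "HP:abnormality_of_cns": 0.2, "HP:seizure": 0.1}
    ic = {t: -math.log(p) for t, p in freq.items()}

    def resnik(t1, t2):
        common = ancestors[t1] & ancestors[t2]
        return max(ic[a] for a in common)            # IC of the most informative common ancestor

    def gene_score(patient_terms, gene_terms):
        # Best-match average: for each patient term, take its best match among the gene's terms.
        return sum(max(resnik(p, g) for g in gene_terms) for p in patient_terms) / len(patient_terms)

    patient = ["HP:sensorineural", "HP:seizure"]           # observed phenotype, including an unrelated "noise" term
    print(gene_score(patient, ["HP:abnormal_hearing"]))    # candidate hearing-loss gene scores higher
    print(gene_score(patient, ["HP:seizure"]))             # candidate epilepsy gene
    ```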

  17. Automatically Assessing Graph-Based Diagrams

    ERIC Educational Resources Information Center

    Thomas, Pete; Smith, Neil; Waugh, Kevin

    2008-01-01

    To date there has been very little work on the machine understanding of imprecise diagrams, such as diagrams drawn by students in response to assessment questions. Imprecise diagrams exhibit faults such as missing, extraneous and incorrectly formed elements. The semantics of imprecise diagrams are difficult to determine. While there have been…

  18. Predator cognition permits imperfect coral snake mimicry.

    PubMed

    Kikuchi, David W; Pfennig, David W

    2010-12-01

    Batesian mimicry is often imprecise. An underexplored explanation for imperfect mimicry is that predators might not be able to use all dimensions of prey phenotype to distinguish mimics from models and thus permit imperfect mimicry to persist. We conducted a field experiment to test whether or not predators can distinguish deadly coral snakes (Micrurus fulvius) from nonvenomous scarlet kingsnakes (Lampropeltis elapsoides). Although the two species closely resemble one another, the order of colored rings that encircle their bodies differs. Despite this imprecise mimicry, we found that L. elapsoides that match coral snakes in other respects are not under selection to match the ring order of their model. We suggest that L. elapsoides have evolved only those signals necessary to deceive predators. Generally, imperfect mimicry might suffice if it exploits limitations in predator cognitive abilities.

  19. A Scheduling Algorithm for Replicated Real-Time Tasks

    NASA Technical Reports Server (NTRS)

    Yu, Albert C.; Lin, Kwei-Jay

    1991-01-01

    We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerance requirements. Our approach incorporates both the redundancy and masking technique and the imprecise computation model. Since the tasks in hard real-time systems have stringent timing constraints, the redundancy and masking technique is more appropriate than rollback techniques, which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task with the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.

  20. Decomposition of Sources of Errors in Seasonal Streamflow Forecasts in a Rainfall-Runoff Dominated Basin

    NASA Astrophysics Data System (ADS)

    Sinha, T.; Arumugam, S.

    2012-12-01

    Seasonal streamflow forecasts contingent on climate forecasts can be effectively utilized in updating water management plans and optimizing generation of hydroelectric power. Streamflow in rainfall-runoff dominated basins depends critically on forecasted precipitation, in contrast to snow-dominated basins, where initial hydrological conditions (IHCs) are more important. Since precipitation forecasts from Atmosphere-Ocean General Circulation Models are available at coarse scale (~2.8° by 2.8°), spatial and temporal downscaling of such forecasts is required to implement land surface models, which typically run on finer spatial and temporal scales. Consequently, multiple sources of error are introduced at various stages in predicting seasonal streamflow. Therefore, in this study, we address the following science questions: 1) How do we attribute the errors in monthly streamflow forecasts to various sources - (i) model errors, (ii) spatio-temporal downscaling, (iii) imprecise initial conditions, (iv) no forecasts, and (v) imprecise forecasts? and 2) How do monthly streamflow forecast errors propagate with different lead times over various seasons? In this study, the Variable Infiltration Capacity (VIC) model is calibrated over the Apalachicola River at Chattahoochee, FL in the southeastern US and implemented with observed 1/8° daily forcings to estimate reference streamflow during 1981 to 2010. The VIC model is then forced with different schemes under updated IHCs prior to the forecasting period to estimate relative mean square errors due to: a) temporal disaggregation, b) spatial downscaling, c) Reverse Ensemble Streamflow Prediction (imprecise IHCs), d) ESP (no forecasts), and e) ECHAM4.5 precipitation forecasts. Finally, error propagation under the different schemes is analyzed for different lead times over different seasons.

  1. The Reliability Estimation for the Open Function of Cabin Door Affected by the Imprecise Judgment Corresponding to Distribution Hypothesis

    NASA Astrophysics Data System (ADS)

    Yu, Z. P.; Yue, Z. F.; Liu, W.

    2018-05-01

    With the development of artificial intelligence, more and more reliability experts have noticed the role of subjective information in the reliability design of complex systems. Therefore, based on a certain amount of experimental data and expert judgments, we divide reliability estimation under a distribution hypothesis into a cognition process and a reliability calculation. To illustrate this modification, we take information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and complete the reliability estimation for the open function of a cabin door affected by imprecise judgment corresponding to the distribution hypothesis.

  2. Application of fuzzy set and Dempster-Shafer theory to organic geochemistry interpretation

    NASA Technical Reports Server (NTRS)

    Kim, C. S.; Isaksen, G. H.

    1993-01-01

    An application of fuzzy sets and Dempster-Shafer theory (DST) in modeling the interpretational process of organic geochemistry data for predicting the maturity levels of oil and source rock samples is presented. This was accomplished by (1) representing linguistic imprecision and imprecision associated with experience by fuzzy set theory, (2) capturing the probabilistic nature of imperfect evidence by DST, and (3) combining multiple pieces of evidence by utilizing John Yen's generalized Dempster-Shafer theory (GDST), which allows DST to deal with fuzzy information. The current prototype provides collective beliefs on the predicted levels of maturity by combining multiple pieces of evidence through GDST's rule of combination.
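
    For reference, Dempster's rule of combination, the crisp core that GDST generalizes, can be sketched for two mass functions over a small frame of maturity levels; the focal elements and masses below are illustrative assumptions, and the fuzzy/GDST extension is not reproduced here.

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two basic probability assignments whose focal elements are frozensets."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                      # mass assigned to the empty set
        return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

    # Frame of discernment: immature (I), mature (M), overmature (O); evidence from two indicators.
    m_biomarker = {frozenset("M"): 0.6, frozenset("MO"): 0.3, frozenset("IMO"): 0.1}
    m_pyrolysis = {frozenset("M"): 0.5, frozenset("IM"): 0.4, frozenset("IMO"): 0.1}
    combined, conflict = dempster_combine(m_biomarker, m_pyrolysis)
    for focal, mass in combined.items():
        print(sorted(focal), round(mass, 3))
    print("conflict mass:", round(conflict, 3))
    ```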

  3. Parameter Estimation for the Dirichlet-Multinomial Distribution Using Supplementary Beta-Binomial Data.

    DTIC Science & Technology

    1987-07-01

    multinomial distribution as a magazine exposure model. J. of Marketing Research, 21, 100-106. Lehmann, E.L. (1983). Theory of Point Estimation. John Wiley and... Marketing Research, 21, 89-99.

  4. Multimodal brain-tumor segmentation based on Dirichlet process mixture model with anisotropic diffusion and Markov random field prior.

    PubMed

    Lu, Yisu; Jiang, Jun; Yang, Wei; Feng, Qianjin; Chen, Wufan

    2014-01-01

    Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without the initialization of the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smooth constraint is proposed in this study. Besides the segmentation of single-modal brain-tumor images, we developed the algorithm to segment multimodal brain-tumor images using multimodal magnetic resonance (MR) features, obtaining the active tumor and edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance, with great potential for practical real-time clinical use.

  5. Multimodal Brain-Tumor Segmentation Based on Dirichlet Process Mixture Model with Anisotropic Diffusion and Markov Random Field Prior

    PubMed Central

    Lu, Yisu; Jiang, Jun; Chen, Wufan

    2014-01-01

    Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without the initialization of the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smooth constraint is proposed in this study. Besides the segmentation of single-modal brain-tumor images, we developed the algorithm to segment multimodal brain-tumor images using multimodal magnetic resonance (MR) features, obtaining the active tumor and edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance, with great potential for practical real-time clinical use. PMID:25254064

  6. Resolving an ostensible inconsistency in calculating the evaporation rate of sessile drops.

    PubMed

    Chini, S F; Amirfazli, A

    2017-05-01

    This paper resolves an ostensible inconsistency in the literature in calculating the evaporation rate for sessile drops in a quiescent environment. The earlier models in the literature have shown that adapting the evaporation flux model for a suspended spherical drop to calculate the evaporation rate of a sessile drop needs a correction factor; the correction factor was shown to be a function of the drop contact angle, i.e. f(θ). However, there seemed to be a problem as none of the earlier models explicitly or implicitly mentioned the evaporation flux variations along the surface of a sessile drop. The more recent evaporation models include this variation using an electrostatic analogy, i.e. the Laplace equation (steady-state continuity) in a domain with a known boundary condition value, or known as the Dirichlet problem for Laplace's equation. The challenge is that the calculated evaporation rates using the earlier models seemed to differ from that of the recent models (note both types of models were validated in the literature by experiments). We have reinvestigated the recent models and found that the mathematical simplifications in solving the Dirichlet problem in toroidal coordinates have created the inconsistency. We also proposed a closed form approximation for f(θ) which is valid in a wide range, i.e. 8°≤θ≤131°. Using the proposed model in this study, theoretically, it was shown that the evaporation rate in the CWA (constant wetted area) mode is faster than the evaporation rate in the CCA (constant contact angle) mode for a sessile drop. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Boundary conditions and formation of pure spin currents in magnetic field

    NASA Astrophysics Data System (ADS)

    Eliashvili, Merab; Tsitsishvili, George

    2017-09-01

    The Schrödinger equation for an electron confined to a two-dimensional strip is considered in the presence of a homogeneous orthogonal magnetic field. Since the system has edges, the eigenvalue problem is supplemented by boundary conditions (BC) aimed at preventing the leakage of matter across the edges. In the case of spinless electrons the Dirichlet and Neumann BC are considered. The Dirichlet BC result in the existence of charge-carrying edge states. For the Neumann BC each separate edge comprises two counterflow sub-currents which precisely cancel each other out provided the system is populated by electrons up to a certain Fermi level. Cancellation of the electric current is a good starting point for developing spin effects. In this scope we reconsider the problem for a spinning electron with Rashba coupling. The Neumann BC are replaced by Robin BC. Again, the two counterflow electric sub-currents cancel each other out for a separate edge, while the spin current survives, thus modeling what is known as a pure spin current: spin flow without charge flow.

  8. Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation.

    PubMed

    Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu

    2017-07-01

    In the field of pathology, whole slide image (WSI) has become the major carrier of visual and diagnostic information. Content-based image retrieval among WSIs can aid the diagnosis of an unknown pathological image by finding its similar regions in WSIs with diagnostic information. However, the huge size and complex content of WSI pose several challenges for retrieval. In this paper, we propose an unsupervised, accurate, and fast retrieval method for a breast histopathological image. Specifically, the method presents a local statistical feature of nuclei for morphology and distribution of nuclei, and employs the Gabor feature to describe the texture information. The latent Dirichlet allocation model is utilized for high-level semantic mining. Locality-sensitive hashing is used to speed up the search. Experiments on a WSI database with more than 8000 images from 15 types of breast histopathology demonstrate that our method achieves about 0.9 retrieval precision as well as promising efficiency. Based on the proposed framework, we are developing a search engine for an online digital slide browsing and retrieval platform, which can be applied in computer-aided diagnosis, pathology education, and WSI archiving and management.

  9. Modeling error in experimental assays using the bootstrap principle: Understanding discrepancies between assays using different dispensing technologies

    PubMed Central

    Hanson, Sonya M.; Ekins, Sean; Chodera, John D.

    2015-01-01

    All experimental assay data contains error, but the magnitude, type, and primary origin of this error is often not obvious. Here, we describe a simple set of assay modeling techniques based on the bootstrap principle that allow sources of error and bias to be simulated and propagated into assay results. We demonstrate how deceptively simple operations—such as the creation of a dilution series with a robotic liquid handler—can significantly amplify imprecision and even contribute substantially to bias. To illustrate these techniques, we review an example of how the choice of dispensing technology can impact assay measurements, and show how large contributions to discrepancies between assays can be easily understood and potentially corrected for. These simple modeling techniques—illustrated with an accompanying IPython notebook—can allow modelers to understand the expected error and bias in experimental datasets, and even help experimentalists design assays to more effectively reach accuracy and imprecision goals. PMID:26678597
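
    In the spirit of the paper's bootstrap-style assay modeling (the accompanying IPython notebook is the authoritative source), a compact sketch of how imprecision and bias compound through a serial dilution might look as follows; the coefficient of variation and bias values are assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_dilution(n_replicates=10_000, n_steps=8, cv=0.05, bias=0.98):
        """Simulate a 2-fold serial dilution where every transfer has random imprecision and a fixed bias."""
        conc = np.ones(n_replicates)              # starting concentration (arbitrary units)
        for _ in range(n_steps):
            transfer = 0.5 * bias * (1.0 + cv * rng.standard_normal(n_replicates))
            conc = conc * transfer                # each step compounds imprecision and bias
        return conc

    final = simulate_dilution()
    nominal = 0.5 ** 8
    print("nominal final concentration:", nominal)
    print("simulated mean:", final.mean(), " CV: %.1f%%" % (100 * final.std() / final.mean()))
    ```

    With these assumed values, the simulated mean drifts below the nominal concentration (bias) while the replicate-to-replicate spread grows with every transfer (amplified imprecision), which is the qualitative behavior the abstract describes.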

  10. Topics in inference and decision-making with partial knowledge

    NASA Technical Reports Server (NTRS)

    Safavian, S. Rasoul; Landgrebe, David

    1990-01-01

    Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, either one or both of the above components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed. There are at least two ways to cope with this lack of precise knowledge: robust methods, and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then how imprecision in the above components carries over to the posterior probabilities is examined. Finally, the problem of decision making with imprecise posterior probabilities and the consequences of such actions are addressed. Application areas where the above problems may occur are in statistical pattern recognition problems, for example, the problem of classification of high-dimensional multispectral remote sensing image data.

  11. A probabilistic NF2 relational algebra for integrated information retrieval and database systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuhr, N.; Roelleke, T.

    The integration of information retrieval (IR) and database systems requires a data model which allows for modelling documents as entities, representing uncertainty and vagueness and performing uncertain inference. For this purpose, we present a probabilistic data model based on relations in non-first-normal-form (NF2). Here, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Thus, the set of weighted index terms of a document are represented as a probabilistic subrelation. In a similar way, imprecise attribute values are modelled as a set-valued attribute. We redefine the relational operators for this type of relations such that the result of each operator is again a probabilistic NF2 relation, where the weight of a tuple gives the probability that this tuple belongs to the result. By ordering the tuples according to decreasing probabilities, the model yields a ranking of answers like in most IR models. This effect also can be used for typical database queries involving imprecise attribute values as well as for combinations of database and IR queries.

  12. A New Family of Solvable Pearson-Dirichlet Random Walks

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2011-07-01

    An n-step Pearson-Gamma random walk in ℝ^d starts at the origin and consists of n independent steps with gamma-distributed lengths and uniform orientations. The gamma distribution of each step length has a shape parameter q > 0. Constrained random walks of n steps in ℝ^d are obtained from the latter walks by imposing that the sum of the step lengths is equal to a fixed value. Simple closed-form expressions were obtained in particular for the distribution of the endpoint of such constrained walks for any d ≥ d₀ and any n ≥ 2 when q is either q = d/2 - 1 (d₀ = 3) or q = d - 1 (d₀ = 2) (Le Caër in J. Stat. Phys. 140:728-751, 2010). When the total walk length is chosen, without loss of generality, to be equal to 1, then the constrained step lengths have a Dirichlet distribution whose parameters are all equal to q and the associated walk is thus named a Pearson-Dirichlet random walk. The density of the endpoint position of an n-step planar walk of this type (n ≥ 2), with q = d = 2, was shown recently to be a weighted mixture of 1 + floor(n/2) endpoint densities of planar Pearson-Dirichlet walks with q = 1 (Beghin and Orsingher in Stochastics 82:201-229, 2010). The previous result is generalized to any walk space dimension and any number of steps n ≥ 2 when the parameter of the Pearson-Dirichlet random walk is q = d > 1. We rely on the connection between an unconstrained random walk and a constrained one, which both have the same n and the same q = d, to obtain a closed-form expression of the endpoint density. The latter is a weighted mixture of 1 + floor(n/2) densities with simple forms, equivalently expressed as a product of a power and a Gauss hypergeometric function. The weights are products of factors which depend both on d and n, and Bessel numbers independent of d.
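
    A brief Monte Carlo sketch (not a derivation of the closed-form density): draw n-step Pearson-Dirichlet walks by combining Dirichlet(q, ..., q) step lengths with uniformly distributed directions and record the endpoint distance. The choice q = d = 3 and the sample size are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pearson_dirichlet_endpoints(n_steps=3, d=3, q=3.0, n_walks=100_000):
        """Endpoint distances of n-step walks with Dirichlet(q,...,q) step lengths and uniform directions."""
        lengths = rng.dirichlet([q] * n_steps, size=n_walks)             # step lengths summing to 1
        directions = rng.standard_normal((n_walks, n_steps, d))
        directions /= np.linalg.norm(directions, axis=2, keepdims=True)  # uniform on the unit sphere
        endpoints = np.einsum("ws,wsd->wd", lengths, directions)
        return np.linalg.norm(endpoints, axis=1)

    r = pearson_dirichlet_endpoints()
    print("mean endpoint distance:", r.mean().round(4), " P(r < 0.5):", (r < 0.5).mean().round(4))
    ```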

  13. Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro

    2017-04-01

    We present an efficient implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two and three dimensional electrokinetic flows in simple or complex geometries. The I2SPH's accuracy and convergence are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.

  14. Stability and Hopf Bifurcation in a Reaction-Diffusion Model with Chemotaxis and Nonlocal Delay Effect

    NASA Astrophysics Data System (ADS)

    Li, Dong; Guo, Shangjiang

    Chemotaxis is an observed phenomenon in which a biological individual moves preferentially toward a relatively high concentration, which is contrary to the process of natural diffusion. In this paper, we study a reaction-diffusion model with chemotaxis and nonlocal delay effect under Dirichlet boundary condition by using Lyapunov-Schmidt reduction and the implicit function theorem. The existence, multiplicity, stability and Hopf bifurcation of spatially nonhomogeneous steady state solutions are investigated. Moreover, our results are illustrated by an application to the model with a logistic source, homogeneous kernel and one-dimensional spatial domain.

  15. Faà di Bruno's formula and the distributions of random partitions in population genetics and physics.

    PubMed

    Hoppe, Fred M

    2008-06-01

    We show that the formula of Faà di Bruno for the derivative of a composite function gives, in special cases, the sampling distributions in population genetics that are due to Ewens and to Pitman. The composite function is the same in each case. Other sampling distributions also arise in this way, such as those arising from Dirichlet, multivariate hypergeometric, and multinomial models, special cases of which correspond to Bose-Einstein, Fermi-Dirac, and Maxwell-Boltzmann distributions in physics. Connections are made to compound sampling models.
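
    For reference (not part of the original record), one standard statement of Faà di Bruno's formula expresses the n-th derivative of a composite function via partial Bell polynomials; the notation here is a common textbook form, not necessarily that of the paper:

        \frac{d^n}{dx^n} f\bigl(g(x)\bigr) \;=\; \sum_{k=1}^{n} f^{(k)}\bigl(g(x)\bigr)\, B_{n,k}\bigl(g'(x), g''(x), \ldots, g^{(n-k+1)}(x)\bigr)

    The partial Bell polynomials B_{n,k} enumerate partitions of an n-element set into k blocks, which is the combinatorial structure behind the random-partition distributions discussed above.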

  16. A fuzzy mathematical model of West Java population with logistic growth model

    NASA Astrophysics Data System (ADS)

    Nurkholipah, N. S.; Amarti, Z.; Anggriani, N.; Supriatna, A. K.

    2018-03-01

    In this paper we develop a mathematical model of population growth in the West Java Province of Indonesia. The model takes the form of a logistic differential equation. We parameterize the model using several triples of data and choose the best triple, namely the one with the smallest Mean Absolute Percentage Error (MAPE). The resulting model predicts the historical data with high accuracy and is also able to predict future population numbers. Predicting the future population is among the important factors to consider when preparing good management policies for the population. Several experiments are done to examine the effect of impreciseness in the data. This is done by supplying a fuzzy initial value to the crisp model, assuming that the model propagates the fuzziness of the independent variable to the dependent variable. We assume here a triangular fuzzy number representing the impreciseness in the data. We found that the fuzziness may disappear in the long term. Other scenarios are also investigated, such as the effect of fuzzy parameters with a crisp initial value of the population. The solution of the model is obtained numerically using the fourth-order Runge-Kutta scheme.
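
    The numerical scheme mentioned at the end of the abstract can be sketched as follows (illustrative only; the growth rate r, carrying capacity K and the triangular fuzzy initial value below are hypothetical, not taken from the paper). Propagating the three defining points of the triangular fuzzy initial value through the crisp model mimics the fuzziness propagation described above.

        import numpy as np

        def rk4_logistic(P0, r, K, t_end, dt):
            """Integrate dP/dt = r*P*(1 - P/K) with the classical fourth-order Runge-Kutta scheme."""
            f = lambda P: r * P * (1.0 - P / K)
            P, t = P0, 0.0
            while t < t_end:
                k1 = f(P)
                k2 = f(P + 0.5 * dt * k1)
                k3 = f(P + 0.5 * dt * k2)
                k4 = f(P + dt * k3)
                P += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
                t += dt
            return P

        # Hypothetical triangular fuzzy initial value (low, modal, high), in millions.
        for P0 in (42.0, 43.0, 44.0):
            print(P0, "->", round(rk4_logistic(P0, r=0.02, K=60.0, t_end=50, dt=0.1), 3))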

  17. Low frequency acoustic and electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Maccamy, R. C.

    1986-01-01

    This paper deals with two classes of problems arising from acoustic and electromagnetic scattering in the low-frequency situation. The first class of problems is solving the Helmholtz equation with Dirichlet boundary conditions on an arbitrary two-dimensional body, while the second is an interior-exterior interface problem with the Helmholtz equation in the exterior. Low-frequency analysis shows that there are two intermediate problems which solve the above problems to an accuracy of O(k^2 log k), where k is the frequency. These solutions differ greatly from the zero-frequency approximations. For the Dirichlet problem, numerical examples are shown to verify the theoretical estimates.

  18. The first eigenvalue of the p-Laplacian on quantum graphs

    NASA Astrophysics Data System (ADS)

    Del Pezzo, Leandro M.; Rossi, Julio D.

    2016-12-01

    We study the first eigenvalue of the p-Laplacian (with 1 < p < ∞) on quantum graphs.

  19. Detecting Anisotropic Inclusions Through EIT

    NASA Astrophysics Data System (ADS)

    Cristina, Jan; Päivärinta, Lassi

    2017-12-01

    We study the evolution equation ∂_t u = -Λ_t u, where Λ_t is the Dirichlet-Neumann operator of a decreasing family of Riemannian manifolds with boundary Σ_t. We derive a lower bound for the solution of such an equation, and apply it to a quantitative density estimate for the restriction of harmonic functions on M = Σ_0 to the boundaries ∂Σ_t. Consequently we are able to derive a lower bound for the difference of the Dirichlet-Neumann maps in terms of the difference between a background metric g and an inclusion metric g + χ_Σ(h - g) on a manifold M.

  20. Evaluation of the Technicon Axon analyser.

    PubMed

    Martínez, C; Márquez, M; Cortés, M; Mercé, J; Rodriguez, J; González, F

    1990-01-01

    An evaluation of the Technicon Axon analyser was carried out following the guidelines of the 'Sociedad Española de Química Clínica' and the European Committee for Clinical Laboratory Standards. A photometric study revealed acceptable results at both 340 nm and 404 nm. Inaccuracy and imprecision were lower at 404 nm than at 340 nm, although poor dispersion was found at both wavelengths, even at low absorbances. Drift was negligible, the imprecision of the sample pipette delivery system was greater for small sample volumes, the reagent pipette delivery system imprecision was acceptable, and the sample diluting system study showed good precision and accuracy. Twelve analytes were studied for evaluation of the analyser under routine working conditions. Satisfactory results were obtained for within-run imprecision, while coefficients of variation for between-run imprecision were much greater than expected. Neither specimen-related nor specimen-independent contamination was found in the carry-over study. For all analytes assayed, when comparing patient sample results with those obtained on a Hitachi 737 analyser, acceptable relative inaccuracy was observed.

  1. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
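
    As a small aside (not part of the package documentation), the Dirichlet process prior named above is often simulated via its stick-breaking construction; the following Python sketch, with an arbitrary concentration parameter and truncation level, illustrates the idea behind such BNP mixture weights.

        import numpy as np

        def stick_breaking_weights(alpha, n_atoms, rng):
            """Truncated stick-breaking construction of Dirichlet-process mixture weights."""
            betas = rng.beta(1.0, alpha, size=n_atoms)
            remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
            return betas * remaining

        rng = np.random.default_rng(1)
        w = stick_breaking_weights(alpha=2.0, n_atoms=20, rng=rng)
        print(w.round(3), "sum =", w.sum().round(3))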

  2. A Multi Criteria Group Decision-Making Model for Teacher Evaluation in Higher Education Based on Cloud Model and Decision Tree

    ERIC Educational Resources Information Center

    Chang, Ting-Cheng; Wang, Hui

    2016-01-01

    This paper proposes a cloud multi-criteria group decision-making model for teacher evaluation in higher education, a task which involves subjectivity, imprecision and fuzziness. First, the appropriate evaluation index is selected depending on the evaluation objectives, indicating a clear structural relationship between the evaluation index and…

  3. Extraction of decision rules via imprecise probabilities

    NASA Astrophysics Data System (ADS)

    Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.

    2017-05-01

    Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a mathematical parametric model, and the other based on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.
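
    For readers unfamiliar with credal split criteria, the sketch below illustrates one common ingredient: the maximum entropy over the credal set induced by the imprecise Dirichlet model for the class counts at a node, approximated here by greedily spreading the extra mass s over the least-frequent classes. This is only a generic illustration; the exact parametric and non-parametric criteria used in the paper may differ.

        import numpy as np

        def max_entropy_idm(counts, s=1.0, iters=1000):
            """Approximate maximum entropy over the IDM credal set for class counts:
            distribute the extra mass s so the resulting distribution is as uniform as possible."""
            counts = np.asarray(counts, dtype=float)
            extra = np.zeros_like(counts)
            step = s / iters
            for _ in range(iters):
                j = np.argmin(counts + extra)   # give the next slice of mass to the currently smallest class
                extra[j] += step
            p = (counts + extra) / (counts.sum() + s)
            return -(p * np.log2(p)).sum()

        print("upper entropy:", round(max_entropy_idm([30, 10, 2], s=1.0), 4))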

  4. Multi-objective and Perishable Fuzzy Inventory Models Having Weibull Life-time With Time Dependent Demand, Demand Dependent Production and Time Varying Holding Cost: A Possibility/Necessity Approach

    NASA Astrophysics Data System (ADS)

    Pathak, Savita; Mondal, Seema Sarkar

    2010-10-01

    A multi-objective inventory model of deteriorating items has been developed with a Weibull rate of decay, time-dependent demand, demand-dependent production and time-varying holding cost, allowing shortages in fuzzy environments for non-integrated and integrated businesses. Here the objective is to maximize the profit from different deteriorating items under a space constraint. The impreciseness of inventory parameters and goals for the non-integrated business has been expressed by linear membership functions. The compromised solutions are obtained by different fuzzy optimization methods. To incorporate the relative importance of the objectives, different crisp/fuzzy cardinal weights have been assigned. The models are illustrated with numerical examples, and the results of the models with crisp and fuzzy weights are compared. The result for the model assuming an integrated business is obtained using the Generalized Reduced Gradient (GRG) method. The fuzzy integrated model with imprecise inventory cost is formulated to optimize the possibility/necessity measure of the fuzzy goal of the objective function, using the credibility measure of a fuzzy event by taking the fuzzy expectation. The results of the crisp/fuzzy integrated model are illustrated with numerical examples and the results are compared.

  5. Numerical reconstruction of unknown Robin inclusions inside a heat conductor by a non-iterative method

    NASA Astrophysics Data System (ADS)

    Nakamura, Gen; Wang, Haibing

    2017-05-01

    Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what the input for the linear sampling method is. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. By using a finite sequence of transient inputs over a time interval, we propose a new sampling method over the time interval using a single measurement, which is most likely to be practical.

  6. A Matlab-based finite-difference solver for the Poisson problem with mixed Dirichlet-Neumann boundary conditions

    NASA Astrophysics Data System (ADS)

    Reimer, Ashton S.; Cheviakov, Alexei F.

    2013-03-01

    A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of the large systems of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
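
    For orientation (this is not the catalogued Matlab code), a bare-bones finite-difference Poisson solve of the same flavour, restricted to a rectangle with pure Dirichlet data, might look as follows in Python; the right-hand side and boundary functions are placeholders.

        import numpy as np
        from scipy.sparse import lil_matrix, csr_matrix
        from scipy.sparse.linalg import spsolve

        def poisson_dirichlet_rectangle(nx, ny, f, g, lx=1.0, ly=1.0):
            """Second-order finite differences for -Laplace(u) = f on a rectangle, u = g on the boundary."""
            hx, hy = lx / (nx - 1), ly / (ny - 1)
            x, y = np.linspace(0, lx, nx), np.linspace(0, ly, ny)
            N = nx * ny
            A, b = lil_matrix((N, N)), np.zeros(N)
            idx = lambda i, j: i + j * nx
            for j in range(ny):
                for i in range(nx):
                    k = idx(i, j)
                    if i in (0, nx - 1) or j in (0, ny - 1):
                        A[k, k] = 1.0               # boundary node: enforce u = g directly
                        b[k] = g(x[i], y[j])
                    else:
                        A[k, k] = 2.0 / hx**2 + 2.0 / hy**2
                        A[k, idx(i - 1, j)] = -1.0 / hx**2
                        A[k, idx(i + 1, j)] = -1.0 / hx**2
                        A[k, idx(i, j - 1)] = -1.0 / hy**2
                        A[k, idx(i, j + 1)] = -1.0 / hy**2
                        b[k] = f(x[i], y[j])
            return spsolve(csr_matrix(A), b).reshape(ny, nx)

        u = poisson_dirichlet_rectangle(41, 41, f=lambda x, y: 1.0, g=lambda x, y: 0.0)
        print("max u:", u.max())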

  7. Repulsive Casimir effect from extra dimensions and Robin boundary conditions: From branes to pistons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elizalde, E.; Odintsov, S. D.; Institucio Catalana de Recerca i Estudis Avanccats

    2009-03-15

    We evaluate the Casimir energy and force for a massive scalar field with general curvature coupling parameter, subject to Robin boundary conditions on two codimension-one parallel plates, located on a (D+1)-dimensional background spacetime with an arbitrary internal space. The most general case of different Robin coefficients on the two separate plates is considered. Independently of the geometry of the internal space, the Casimir forces are seen to be attractive for the special cases of Dirichlet or Neumann boundary conditions on both plates and repulsive for Dirichlet boundary conditions on one plate and Neumann boundary conditions on the other. For Robin boundary conditions, the Casimir forces can be either attractive or repulsive, depending on the Robin coefficients and the separation between the plates, which is actually remarkable and useful. Indeed, we demonstrate the existence of an equilibrium point for the interplate distance, which is stabilized due to the Casimir force, and show that stability is enhanced by the presence of the extra dimensions. Applications of these properties in braneworld models are discussed. Finally, the corresponding results are generalized to the geometry of a piston of arbitrary cross section.

  8. Latent Dirichlet Allocation (LDA) for Sentiment Analysis Toward Tourism Review in Indonesia

    NASA Astrophysics Data System (ADS)

    Putri, IR; Kusumaningrum, R.

    2017-01-01

    The tourism industry is a foreign exchange sector with considerable development potential in Indonesia. Compared to other Southeast Asian countries such as Malaysia, with 18 million tourists, and Singapore, with 20 million tourists, Indonesia, the largest country in Southeast Asia, has failed to attract higher tourist numbers than its regional peers. Indonesia only managed to attract 8.8 million foreign tourists in 2013, and the number of foreign tourists is likely to decrease each year. Apart from infrastructure problems, marketing and management also form obstacles to tourism growth. Stakeholders should carry out evaluation and self-analysis to respond to this problem and capture opportunities related to tourism satisfaction in tourist reviews. Currently, one technology addressing this problem relies only on subjective statistical data collected by random user voting or grading, so the results are not very accountable. Thus, we propose sentiment analysis with a probabilistic topic model, using the Latent Dirichlet Allocation (LDA) method, to read the general tendency of tourist reviews into topics that can be classified as positive or negative sentiment.
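
    A minimal sketch of the LDA step (using scikit-learn rather than whatever toolkit the authors used, and with an invented four-review toy corpus) could look like the following; the per-review topic proportions would then be mapped to positive or negative sentiment labels.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        # Hypothetical mini-corpus of tourist reviews; the study uses real Indonesian review data.
        reviews = [
            "beautiful beach friendly staff great food",
            "dirty room poor service long queue",
            "amazing temple tour helpful guide",
            "overpriced tickets crowded and noisy",
        ]
        counts = CountVectorizer().fit_transform(reviews)
        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
        print(lda.transform(counts))  # per-review topic proportions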

  9. A new analytical solution solved by triple series equations method for constant-head tests in confined aquifers

    NASA Astrophysics Data System (ADS)

    Chang, Ya-Chi; Yeh, Hund-Der

    2010-06-01

    The constant-head pumping tests are usually employed to determine the aquifer parameters and they can be performed in fully or partially penetrating wells. Generally, the Dirichlet condition is prescribed along the well screen and the Neumann type no-flow condition is specified over the unscreened part of the test well. The mathematical model describing the aquifer response to a constant-head test performed in a fully penetrating well can be easily solved by the conventional integral transform technique under the uniform Dirichlet-type condition along the rim of wellbore. However, the boundary condition for a test well with partial penetration should be considered as a mixed-type condition. This mixed boundary value problem in a confined aquifer system of infinite radial extent and finite vertical extent is solved by the Laplace and finite Fourier transforms in conjunction with the triple series equations method. This approach provides analytical results for the drawdown in a partially penetrating well for arbitrary location of the well screen in a finite thickness aquifer. The semi-analytical solutions are particularly useful for the practical applications from the computational point of view.

  10. Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics

    DOE PAGES

    Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; ...

    2017-01-03

    In this paper, we present a consistent implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of Navier–Stokes, Poisson–Boltzmann, and advection–diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two- and three-dimensional electrokinetic flows in simple or complex geometries. The accuracy and convergence of the consistent I2SPH are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. Lastly, the new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.

  11. Effects of degeneracy and response function in a diffusion predator-prey model

    NASA Astrophysics Data System (ADS)

    Li, Shanbing; Wu, Jianhua; Dong, Yaying

    2018-04-01

    In this paper, we consider positive solutions of a diffusion predator-prey model with a degeneracy under Dirichlet boundary conditions. We first obtain sufficient conditions for the existence of positive solutions by the Leray-Schauder degree theory, and then analyze the limiting behavior of positive solutions as the growth rate of the predator goes to infinity and as the conversion rate of the predator goes to zero, respectively. It is shown that these results for the Holling II response function (i.e. m > 0) reveal an interesting contrast with those for the classical Lotka-Volterra predator-prey model (i.e. m = 0).

  12. Imprecise Probability Methods for Weapons UQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picard, Richard Roy; Vander Wiel, Scott Alan

    Building on recent work in uncertainty quantification, we examine the use of imprecise probability methods to better characterize expert knowledge and to improve on misleading aspects of Bayesian analysis with informative prior distributions. Quantitative approaches to incorporate uncertainties in weapons certification are subject to rigorous external peer review, and in this regard, certain imprecise probability methods are well established in the literature and attractive. These methods are illustrated using experimental data from LANL detonator impact testing.

  13. A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.

    2014-01-01

    We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of either for different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.
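
    As a rough stand-in for the algorithm described above (a finite-difference rather than finite-element discretization, with arbitrary mesh sizes and domain dimensions), the following Python sketch computes Dirichlet Laplacian eigenvalues on a rectangle and compares a cumulative count against the leading Weyl term.

        import numpy as np

        def dirichlet_fd_eigenvalues(nx, ny, lx=1.0, ly=2.0):
            """Eigenvalues of the 5-point finite-difference Dirichlet Laplacian on a rectangle.
            The stencil is separable, so 2D eigenvalues are sums of 1D eigenvalues."""
            hx, hy = lx / (nx + 1), ly / (ny + 1)
            ex = 4.0 / hx**2 * np.sin(np.arange(1, nx + 1) * np.pi / (2 * (nx + 1)))**2
            ey = 4.0 / hy**2 * np.sin(np.arange(1, ny + 1) * np.pi / (2 * (ny + 1)))**2
            return np.sort((ex[:, None] + ey[None, :]).ravel())

        eigs = dirichlet_fd_eigenvalues(200, 200)
        lam = eigs[499]                              # 500th eigenvalue
        weyl = lam * (1.0 * 2.0) / (4 * np.pi)       # leading Weyl term: N(lam) ~ Area*lam/(4*pi)
        print("counted:", 500, " Weyl estimate:", round(weyl, 1))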

  14. On the exterior Dirichlet problem for Hessian quotient equations

    NASA Astrophysics Data System (ADS)

    Li, Dongsheng; Li, Zhisu

    2018-06-01

    In this paper, we establish the existence and uniqueness theorem for solutions of the exterior Dirichlet problem for Hessian quotient equations with prescribed asymptotic behavior at infinity. This extends the previous related results on the Monge-Ampère equations and on the Hessian equations, and rearranges them in a systematic way. Based on Perron's method, the main ingredient of this paper is to construct appropriate subsolutions of the Hessian quotient equation, which is realized by introducing some new quantities about the elementary symmetric polynomials and using them to analyze the corresponding ordinary differential equation related to the generalized radially symmetric subsolutions of the original equation.

  15. A three dimensional Dirichlet-to-Neumann map for surface waves over topography

    NASA Astrophysics Data System (ADS)

    Nachbin, Andre; Andrade, David

    2016-11-01

    We consider three dimensional surface water waves in the potential theory regime. The bottom topography can have a quite general profile. In the case of linear waves the Dirichlet-to-Neumann operator is formulated in a matrix decomposition form. Computational simulations illustrate the performance of the method. Two dimensional periodic bottom variations are considered in both the Bragg resonance regime as well as the rapidly varying (homogenized) regime. In the three-dimensional case we use the Luneburg lens-shaped submerged mound, which promotes the focusing of the underlying rays. FAPERJ Cientistas do Nosso Estado Grant 102917/2011 and ANP/PRH-32.

  16. Two-point correlation function for Dirichlet L-functions

    NASA Astrophysics Data System (ADS)

    Bogomolny, E.; Keating, J. P.

    2013-03-01

    The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy-Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question.

  17. Synchronization of Reaction-Diffusion Neural Networks With Dirichlet Boundary Conditions and Infinite Delays.

    PubMed

    Sheng, Yin; Zhang, Hao; Zeng, Zhigang

    2017-10-01

    This paper is concerned with synchronization for a class of reaction-diffusion neural networks with Dirichlet boundary conditions and infinite discrete time-varying delays. By utilizing theories of partial differential equations, Green's formula, inequality techniques, and the concept of comparison, algebraic criteria are presented to guarantee master-slave synchronization of the underlying reaction-diffusion neural networks via a designed controller. Additionally, sufficient conditions on exponential synchronization of reaction-diffusion neural networks with finite time-varying delays are established. The proposed criteria herein enhance and generalize some published ones. Three numerical examples are presented to substantiate the validity and merits of the obtained theoretical results.

  18. Joint analysis of epistemic and aleatory uncertainty in stability analysis for geo-hazard assessments

    NASA Astrophysics Data System (ADS)

    Rohmer, Jeremy; Verdel, Thierry

    2017-04-01

    Uncertainty analysis is an unavoidable task in the stability analysis of any geotechnical system. Such analysis usually relies on the safety factor SF (if SF is below some specified threshold, failure is possible). The objective of the stability analysis is then to estimate the failure probability P that SF falls below the specified threshold. When dealing with uncertainties, two facets should be considered, as outlined by several authors in the domain of geotechnics, namely "aleatoric uncertainty" (also named "randomness" or "intrinsic variability") and "epistemic uncertainty" (i.e. when facing "vague, incomplete or imprecise information" such as limited databases and observations or "imperfect" modelling). The benefits of separating both facets of uncertainty can be seen from a risk management perspective because: - Aleatoric uncertainty, being a property of the system under study, cannot be reduced. However, practical actions can be taken to circumvent the potentially dangerous effects of such variability; - Epistemic uncertainty, being due to the incomplete/imprecise nature of available information, can be reduced by e.g., increasing the number of tests (lab or in situ surveys), improving the measurement methods or evaluating calculation procedures with model tests, and confronting more information sources (expert opinions, data from literature, etc.). Uncertainty treatment in stability analysis is usually restricted to the probabilistic framework for representing both facets of uncertainty. Yet, in the domain of geo-hazard assessments (like landslides, mine pillar collapse, rockfalls, etc.), the validity of this approach can be debatable. In the present communication, we propose to review the major criticisms available in the literature against the systematic use of probability in situations of a high degree of uncertainty. On this basis, the feasibility of using a more flexible uncertainty representation tool, namely possibility distributions (e.g., Baudrit et al., 2007), is then investigated for geo-hazard assessments. A graphical tool is then developed to explore: 1. the contribution of both types of uncertainty, aleatoric and epistemic; 2. the regions of the imprecise or random parameters which contribute the most to the imprecision on the failure probability P. The method is applied to two case studies (a mine pillar and a steep slope stability analysis, Rohmer and Verdel, 2014) to investigate the necessity for extra data acquisition on parameters whose imprecision can hardly be modelled by probabilities due to the scarcity of the available information (respectively the extraction ratio and the cliff geometry). References: Baudrit, C., Couso, I., & Dubois, D. (2007). Joint propagation of probability and possibility in risk analysis: Towards a formal framework. International Journal of Approximate Reasoning, 45(1), 82-105. Rohmer, J., & Verdel, T. (2014). Joint exploration of regional importance of possibilistic and probabilistic uncertainty in stability analysis. Computers and Geotechnics, 61, 308-315.
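
    The joint propagation idea can be sketched numerically as follows (hypothetical distributions and interval bounds, not the case-study values): the aleatory variable is sampled by Monte Carlo, while the epistemic parameter is kept as an interval (the 0-cut of a possibility distribution), yielding lower and upper bounds on the failure probability.

        import numpy as np

        rng = np.random.default_rng(0)

        # Aleatory variable: a resistance-like quantity with assumed lognormal variability (hypothetical values).
        R = rng.lognormal(mean=np.log(1.5), sigma=0.2, size=100000)

        # Epistemic parameter: a load factor known only as an interval.
        S_lo, S_hi = 1.0, 1.3

        # Interval-valued failure probability P(SF = R/S < 1):
        # the favourable load gives the lower bound, the unfavourable load the upper bound.
        p_lower = np.mean(R / S_lo < 1.0)
        p_upper = np.mean(R / S_hi < 1.0)
        print(f"failure probability in [{p_lower:.4f}, {p_upper:.4f}]")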

  19. Theory of multicolor lattice gas - A cellular automaton Poisson solver

    NASA Technical Reports Server (NTRS)

    Chen, H.; Matthaeus, W. H.; Klein, L. W.

    1990-01-01

    In the present class of cellular automaton models, involving a quiescent hydrodynamic lattice gas with multiple-valued passive labels termed 'colors', the lattice collisions change individual particle colors while preserving net color. The rigorous proofs of the multicolor lattice gases' essential features are rendered more tractable by an equivalent subparticle representation in which the color is represented by underlying two-state 'spins'. Schemes for the introduction of Dirichlet and Neumann boundary conditions are described, and two illustrative numerical test cases are used to verify the theory. The lattice gas model is equivalent to a Poisson equation solver.

  20. Stochastic species abundance models involving special copulas

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry E.

    2018-01-01

    Copulas offer a very general tool to describe the dependence structure of random variables supported by the hypercube. Inspired by problems of species abundances in Biology, we study three distinct toy models where copulas play a key role. In a first one, a Marshall-Olkin copula arises in a species extinction model with catastrophe. In a second one, a quasi-copula problem arises in a flagged species abundance model. In a third model, we study completely random species abundance models in the hypercube as those, not of product type, with uniform margins and singular. These can be understood from a singular copula supported by an inflated simplex. An exchangeable singular Dirichlet copula is also introduced, together with its induced completely random species abundance vector.

  1. Dirichlet boundary conditions for arbitrary-shaped boundaries in stellarator-like magnetic fields for the Flux-Coordinate Independent method

    NASA Astrophysics Data System (ADS)

    Hill, Peter; Shanahan, Brendan; Dudson, Ben

    2017-04-01

    We present a technique for handling Dirichlet boundary conditions with the Flux Coordinate Independent (FCI) parallel derivative operator with arbitrary-shaped material geometry in general 3D magnetic fields. The FCI method constructs a finite difference scheme for ∇∥ by following field lines between poloidal planes and interpolating within planes. Doing so removes the need for field-aligned coordinate systems that suffer from singularities in the metric tensor at null points in the magnetic field (or equivalently, when q → ∞). One cost of this method is that as the field lines are not on the mesh, they may leave the domain at any point between neighbouring planes, complicating the application of boundary conditions. The Leg Value Fill (LVF) boundary condition scheme presented here involves an extrapolation/interpolation of the boundary value onto the field line end point. The usual finite difference scheme can then be used unmodified. We implement the LVF scheme in BOUT++ and use the Method of Manufactured Solutions to verify the implementation in a rectangular domain, and show that it does not modify the error scaling of the finite difference scheme. The use of LVF for arbitrary wall geometry is outlined. We also demonstrate the feasibility of using the FCI approach in non-axisymmetric configurations for a simple diffusion model in a "straight stellarator" magnetic field. A Gaussian blob diffuses along the field lines, tracing out flux surfaces. Dirichlet boundary conditions impose a last closed flux surface (LCFS) that confines the density. Including a poloidal limiter moves the LCFS to a smaller radius. The expected scaling of the numerical perpendicular diffusion, which is a consequence of the FCI method, in stellarator-like geometry is recovered. A novel technique for increasing the parallel resolution during post-processing, in order to reduce artefacts in visualisations, is described.

  2. The auto-tuned land data assimilation system (ATLAS)

    USDA-ARS?s Scientific Manuscript database

    Land data assimilation systems are tasked with merging remotely-sensed soil moisture retrievals with information derived from a soil water balance model driven (principally) by observed rainfall. The performance of such systems is frequently degraded by the imprecise specification of parameters ...

  3. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals.

    PubMed

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György

    2018-01-01

    Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
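
    One common way to express the underlying computation (not necessarily the paper's exact NORMINV formulation) is the fraction of results falling outside the original central-95% reference limits for a given normalized bias and imprecision, as in the following Python sketch; the bias/imprecision pairs are arbitrary.

        from scipy.stats import norm

        def fraction_outside(bias, imprecision, z=1.96):
            """Fraction of results outside the original central-95% reference limits when the
            measured value equals the true value plus bias plus analytical error with SD `imprecision`
            (both expressed in units of the reference-interval SD)."""
            s = (1.0 + imprecision**2) ** 0.5
            return norm.cdf((-z - bias) / s) + norm.sf((z - bias) / s)

        for b, i in [(0.0, 0.0), (0.25, 0.5), (0.5, 0.25)]:
            print(f"bias={b}, imprecision={i}: {100 * fraction_outside(b, i):.2f}% outside")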

  4. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

    Physically-based modeling is a widespread tool in the understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time-step necessary for non-linear models while retaining the speed of the reduced model. This makes POD-DEIM applicable for groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage for variable Dirichlet boundaries compared to the original POD method. We have developed another extension for POD that aims to address these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using POD and DEIM. Advances in Water Resources, 97:130-143.
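
    The basic POD step described above (snapshot SVD followed by Galerkin projection) can be sketched in a few lines of Python; the snapshot matrix and full-order system below are random placeholders, not a groundwater model.

        import numpy as np

        def pod_basis(snapshots, energy=0.999):
            """Proper orthogonal decomposition of a snapshot matrix (n_dof x n_snapshots):
            keep the leading left singular vectors capturing the requested energy fraction."""
            U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
            cum = np.cumsum(s**2) / np.sum(s**2)
            r = int(np.searchsorted(cum, energy)) + 1
            return U[:, :r]

        rng = np.random.default_rng(0)
        snapshots = rng.standard_normal((500, 30))   # placeholder snapshots of a state field
        Phi = pod_basis(snapshots)
        A = np.eye(500); b = rng.standard_normal(500)   # placeholder full-order system A h = b
        A_r, b_r = Phi.T @ A @ Phi, Phi.T @ b           # reduced system of size r
        h_approx = Phi @ np.linalg.solve(A_r, b_r)      # lift the reduced solution back to full space
        print("reduced dimension:", Phi.shape[1])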

  5. Asymptotic stability of a nonlinear Korteweg-de Vries equation with critical lengths

    NASA Astrophysics Data System (ADS)

    Chu, Jixun; Coron, Jean-Michel; Shang, Peipei

    2015-10-01

    We study an initial-boundary-value problem for a nonlinear Korteweg-de Vries equation posed on the finite interval (0, 2kπ), where k is a positive integer. The whole system has a Dirichlet boundary condition at the left end-point, and both Dirichlet and Neumann homogeneous boundary conditions at the right end-point. It is known that the origin is not asymptotically stable for the linearized system around the origin. We prove that the origin is (locally) asymptotically stable for the nonlinear system if the integer k is such that the kernel of the linear Korteweg-de Vries stationary equation is of dimension 1. This is, for example, the case if k = 1.

  6. The lawful imprecision of human surface tilt estimation in natural scenes

    PubMed Central

    2018-01-01

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. PMID:29384477

  7. The lawful imprecision of human surface tilt estimation in natural scenes.

    PubMed

    Kim, Seha; Burge, Johannes

    2018-01-31

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. © 2018, Kim et al.

  8. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.

  9. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft

    NASA Technical Reports Server (NTRS)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated and statistical procedures are utilized and/or developed to control them wherever possible.

  10. A Case Study on Sepsis Using PubMed and Deep Learning for Ontology Learning.

    PubMed

    Arguello Casteleiro, Mercedes; Maseda Fernandez, Diego; Demetriou, George; Read, Warren; Fernandez Prieto, Maria Jesus; Des Diz, Julio; Nenadic, Goran; Keane, John; Stevens, Robert

    2017-01-01

    We investigate the application of distributional semantics models for facilitating unsupervised extraction of biomedical terms from unannotated corpora. Term extraction is used as the first step of an ontology learning process that aims at (semi-)automatic annotation of biomedical concepts and relations from more than 300K PubMed titles and abstracts. We experimented with both traditional distributional semantics methods, such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA), and the neural language models CBOW and Skip-gram from Deep Learning. The evaluation conducted concentrates on sepsis, a major life-threatening condition, and shows that Deep Learning models outperform LSA and LDA with much higher precision.

  11. An efficient algorithm for accurate computation of the Dirichlet-multinomial log-likelihood function.

    PubMed

    Yu, Peng; Shaw, Chad A

    2014-06-01

    The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics including applications to metagenomics data, transcriptomics and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of ψ = 0. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood to solve the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method improves both the accuracy of log-likelihood evaluation and the runtime by several orders of magnitude, especially in high-count data situations that are common in deep sequencing data. Using real metagenomic data, our method achieves manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
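
    For context, the standard closed form of the DMN log-likelihood evaluated with log-gamma functions is shown below (a generic Python sketch; the paper's contribution is a faster and more stable evaluation near ψ = 0, which this naive version does not provide).

        import numpy as np
        from scipy.special import gammaln

        def dmn_loglik(x, alpha):
            """Dirichlet-multinomial log-likelihood of one count vector x given concentrations alpha."""
            x, alpha = np.asarray(x, float), np.asarray(alpha, float)
            n, a0 = x.sum(), alpha.sum()
            return (gammaln(n + 1) - gammaln(x + 1).sum()        # multinomial coefficient
                    + gammaln(a0) - gammaln(n + a0)
                    + (gammaln(x + alpha) - gammaln(alpha)).sum())

        print(dmn_loglik([3, 0, 7], [0.5, 0.5, 0.5]))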

  12. Seawater intrusion in karstic, coastal aquifers: Current challenges and future scenarios in the Taranto area (southern Italy).

    PubMed

    De Filippis, Giovanna; Foglia, Laura; Giudici, Mauro; Mehl, Steffen; Margiotta, Stefano; Negri, Sergio Luigi

    2016-12-15

    Mediterranean areas are characterized by complex hydrogeological systems, where management of freshwater resources, mostly stored in karstic, coastal aquifers, is necessary and requires the application of numerical tools to detect and prevent deterioration of groundwater, mostly caused by overexploitation. In the Taranto area (southern Italy), the deep, karstic aquifer is the only source of freshwater and satisfies the main human activities. Preserving quantity and quality of this system through management policies is so necessary and such task can be addressed through modeling tools which take into account human impacts and the effects of climate changes. A variable-density flow model was developed with SEAWAT to depict the "current" status of the saltwater intrusion, namely the status simulated over an average hydrogeological year. Considering the goals of this analysis and the scale at which the model was built, the equivalent porous medium approach was adopted to represent the deep aquifer. The effects that different flow boundary conditions along the coast have on the transport model were assessed. Furthermore, salinity stratification occurs within a strip spreading between 4km and 7km from the coast in the deep aquifer. The model predicts a similar phenomenon for some submarine freshwater springs and modeling outcomes were positively compared with measurements found in the literature. Two scenarios were simulated to assess the effects of decreased rainfall and increased pumping on saline intrusion. Major differences in the concentration field with respect to the "current" status were found where the hydraulic conductivity of the deep aquifer is higher and such differences are higher when Dirichlet flow boundary conditions are assigned. Furthermore, the Dirichlet boundary condition along the coast for transport modeling influences the concentration field in different scenarios at shallow depths; as such, concentration values simulated under stressed conditions are lower than those simulated under undisturbed conditions. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Scalar Casimir densities and forces for parallel plates in cosmic string spacetime

    NASA Astrophysics Data System (ADS)

    Bezerra de Mello, E. R.; Saharian, A. A.; Abajyan, S. V.

    2018-04-01

    We analyze the Green function, the Casimir densities and forces associated with a massive scalar quantum field confined between two parallel plates in a higher dimensional cosmic string spacetime. The plates are placed orthogonal to the string, and the field obeys the Robin boundary conditions on them. The boundary-induced contributions are explicitly extracted in the vacuum expectation values (VEVs) of the field squared and of the energy-momentum tensor for both the single plate and two plates geometries. The VEV of the energy-momentum tensor, in addition to the diagonal components, contains an off-diagonal component corresponding to the shear stress. The latter vanishes on the plates in the special cases of Dirichlet and Neumann boundary conditions. For points outside the string core the topological contributions in the VEVs are finite on the plates. Near the string the VEVs are dominated by the boundary-free part, whereas at large distances the boundary-induced contributions dominate. Due to the nonzero off-diagonal component of the vacuum energy-momentum tensor, in addition to the normal component, the Casimir forces have a nonzero component parallel to the boundary (shear force). Unlike the problem on the Minkowski bulk, the normal forces acting on the separate plates, in general, do not coincide if the corresponding Robin coefficients are different. Another difference is that in the presence of the cosmic string the Casimir forces for Dirichlet and Neumann boundary conditions differ. For the Dirichlet boundary condition the normal Casimir force does not depend on the curvature coupling parameter. This is not the case for other boundary conditions. A new qualitative feature induced by the cosmic string is the appearance of the shear stress acting on the plates. The corresponding force is directed along the radial coordinate and vanishes for Dirichlet and Neumann boundary conditions. Depending on the parameters of the problem, the radial component of the shear force can be either positive or negative.

  14. Mappings of Least Dirichlet Energy and their Hopf Differentials

    NASA Astrophysics Data System (ADS)

    Iwaniec, Tadeusz; Onninen, Jani

    2013-08-01

    The paper is concerned with mappings h: X → Y (onto) between planar domains having least Dirichlet energy. The existence and uniqueness (up to a conformal change of variables in X) of the energy-minimal mappings is established within the class H̄_2(X, Y) of strong limits of homeomorphisms in the Sobolev space W^{1,2}(X, Y), a result of considerable interest in the mathematical models of nonlinear elasticity. The inner variation of the independent variable in X leads to the Hopf differential h_z \overline{h_{\bar z}} dz ⊗ dz and its trajectories. For a pair of doubly connected domains, in which X has finite conformal modulus, we establish the following principle: a mapping h ∈ H̄_2(X, Y) is energy-minimal if and only if its Hopf differential is analytic in X and real along ∂X. In general, the energy-minimal mappings may not be injective, in which case one observes the occurrence of slits in X (cognate with cracks). Slits are triggered by points of concavity of Y. They originate from ∂X and advance along vertical trajectories of the Hopf differential into X, where they eventually terminate, so no crosscuts are created.

  15. Evaluation of the Hitachi 717 analyser.

    PubMed

    Biosca, C; Antoja, F; Sierra, C; Douezi, H; Macià, M; Alsina, M J; Galimany, R

    1989-01-01

    The selective multitest Boehringer Mannheim Hitachi 717 analyser was evaluated according to the guidelines of the Comisión de Instrumentación de la Sociedad Española de Química Clínica and the European Committee for Clinical Laboratory Standards. The evaluation was performed in two steps: examination of the analytical units and evaluation in routine operation. The evaluation of the analytical units included a photometric study: the inaccuracy is acceptable for 340 and 405 nm; the imprecision ranges from 0.12 to 0.95% at 340 nm and from 0.30 to 0.73% at 405 nm; the linearity shows some dispersion at low absorbance for NADH at 340 nm; the drift is negligible; the imprecision of the pipette delivery system increases when the sample pipette operates with 3 μl; the reagent pipette imprecision is acceptable; and the temperature control system is good. Under routine working conditions, seven determinations were studied: glucose, creatinine, iron, total protein, AST, ALP and calcium. The within-run imprecision (CV) ranged from 0.6% for total protein and AST to 6.9% for iron. The between-run imprecision ranged from 2.4% for glucose to 9.7% for iron. Some contamination was found in the carry-over study. The relative inaccuracy is good for all the constituents assayed.

  16. Analytical solutions for coupling fractional partial differential equations with Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Ding, Xiao-Li; Nieto, Juan J.

    2017-11-01

    In this paper, we consider the analytical solutions of coupling fractional partial differential equations (FPDEs) with Dirichlet boundary conditions on a finite domain. Firstly, the method of successive approximations is used to obtain the analytical solutions of coupling multi-term time fractional ordinary differential equations. Then, the technique of spectral representation of the fractional Laplacian operator is used to convert the coupling FPDEs to the coupling multi-term time fractional ordinary differential equations. By applying the obtained analytical solutions to the resulting multi-term time fractional ordinary differential equations, the desired analytical solutions of the coupling FPDEs are given. Our results are applied to derive the analytical solutions of some special cases to demonstrate their applicability.

  17. An integrated supply chain model for new products with imprecise production and supply under scenario dependent fuzzy random demand

    NASA Astrophysics Data System (ADS)

    Nagar, Lokesh; Dutta, Pankaj; Jain, Karuna

    2014-05-01

    In the present-day business scenario, instant changes in market demand, different sources of materials and manufacturing technologies force many companies to change their supply chain planning in order to tackle real-world uncertainty. The purpose of this paper is to develop a multi-objective two-stage stochastic programming supply chain model that incorporates imprecise production rate and supplier capacity under scenario-dependent fuzzy random demand associated with new product supply chains. The objectives are to maximise the supply chain profit, achieve a desired service level and minimise financial risk. The proposed model allows simultaneous determination of optimum supply chain design, procurement and production quantities across the different plants, and trade-offs between inventory and transportation modes for both inbound and outbound logistics. Analogous to chance constraints, we have used the possibility measure to quantify the demand uncertainties and the model is solved using a fuzzy linear programming approach. An illustration is presented to demonstrate the effectiveness of the proposed model. Sensitivity analysis is performed for maximisation of the supply chain profit with respect to different confidence levels of service, risk and possibility measure. It is found that when the service level and risk are considered as robustness measures, the variability in profit is reduced.

  18. Circumventing Imprecise Geometric Information and Development of a Unified Modeling Technique for Various Flow Regimes in Capillary Tubes

    NASA Astrophysics Data System (ADS)

    Abbasi, Bahman

    2012-11-01

    Owing to their manufacturability and reliability, capillary tubes are the most common expansion devices in household refrigerators. Investigating flow properties in capillary tubes is therefore of considerable interest to that industry. The models used to predict pressure drop in two-phase internal flows invariably rely upon highly precise geometric information. The manner in which capillary tubes are manufactured makes them highly susceptible to geometric imprecision, which renders geometry-based models unreliable to the point of obsoleteness. Aware of the issue, manufacturers categorize capillary tubes based on the nitrogen flow rate through them. This categorization method presents an opportunity to substitute geometric details with nitrogen flow data as the basis for customized models. The simulation tools developed by implementation of this technique have the singular advantage of being applicable across flow regimes. Thus the error-prone process of identifying compatible correlations is eliminated. Equally importantly, compressibility and choking effects can be incorporated in the same model. The outcome is a standalone correlation that provides accurate predictions, regardless of any particular fluid or flow regime. Thereby, exploratory investigations for capillary tube design and optimization are greatly simplified. Bahman Abbasi, Ph.D., is Lead Advanced Systems Engineer at General Electric Appliances in Louisville, KY. He conducts research projects across disciplines in the household refrigeration industry.

  19. Formal modeling of a system of chemical reactions under uncertainty.

    PubMed

    Ghosh, Krishnendu; Schlipf, John

    2014-10-01

    We describe a novel formalism for representing a system of chemical reactions with imprecise reaction rates and chemical concentrations, and describe a model reduction method, pruning, based on the chemical properties. We present two algorithms, midpoint approximation and interval approximation, for the construction of efficient model abstractions with uncertainty in the data. We evaluate computational feasibility by posing queries in computation tree logic (CTL) on a prototype of the extracellular-signal-regulated kinase (ERK) pathway.

  20. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.

    PubMed

    Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A

    The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
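
    The abstract does not reproduce the MAP-DP update equations, but the flavour of hard-assignment Dirichlet-process-style clustering, where the number of clusters grows with the data instead of being fixed, can be conveyed with a much simpler DP-means-style sketch. This is an illustration only, not the authors' MAP-DP algorithm; it assumes spherical clusters and a fixed new-cluster penalty lam (a squared distance):

        import numpy as np

        def dp_means(X, lam, n_iter=20):
            """Hard-assignment clustering: a point whose squared distance to every
            existing centre exceeds lam opens a new cluster (DP-means style)."""
            centres = [X[0].copy()]
            z = np.zeros(len(X), dtype=int)
            for _ in range(n_iter):
                for i, x in enumerate(X):
                    d2 = np.array([np.sum((x - c) ** 2) for c in centres])
                    if d2.min() > lam:
                        centres.append(x.copy())          # open a new cluster
                        z[i] = len(centres) - 1
                    else:
                        z[i] = int(d2.argmin())
                # Recompute centres, dropping any cluster that lost all its points.
                keep = [k for k in range(len(centres)) if np.any(z == k)]
                z = np.array([keep.index(k) for k in z])
                centres = [X[z == j].mean(axis=0) for j in range(len(keep))]
            return np.array(centres), z

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0.0, 3.0, 6.0)])
        centres, labels = dp_means(X, lam=2.0)
        print(len(centres), "clusters found")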

  1. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm

    PubMed Central

    Baig, Fahd; Little, Max A.

    2016-01-01

    The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism. PMID:27669525

  2. Machine Learning-Based Classification of 38 Years of Spine-Related Literature Into 100 Research Topics.

    PubMed

    Sing, David C; Metz, Lionel N; Dudli, Stefan

    2017-06-01

    Retrospective review. To identify the top 100 spine research topics. Recent advances in "machine learning," or computers learning without explicit instructions, have yielded broad technological advances. Topic modeling algorithms can be applied to large volumes of text to discover quantifiable themes and trends. Abstracts were extracted from the National Library of Medicine PubMed database from five prominent peer-reviewed spine journals (European Spine Journal [ESJ], The Spine Journal [SpineJ], Spine, Journal of Spinal Disorders and Techniques [JSDT], Journal of Neurosurgery: Spine [JNS]). Each abstract was entered into a latent Dirichlet allocation model specified to discover 100 topics, resulting in each abstract being assigned a probability of belonging in a topic. Topics were named using the five most frequently appearing terms within that topic. Significance of increasing ("hot") or decreasing ("cold") topic popularity over time was evaluated with simple linear regression. From 1978 to 2015, 25,805 spine-related research articles were extracted and classified into 100 topics. Top two most published topics included "clinical, surgeons, guidelines, information, care" (n = 496 articles) and "pain, back, low, treatment, chronic" (424). Top two hot trends included "disc, cervical, replacement, level, arthroplasty" (+0.05%/yr, P < 0.001) and "minimally, invasive, approach, technique" (+0.05%/yr, P < 0.001). By journal, the most published topics were ESJ-"operative, surgery, postoperative, underwent, preoperative"; SpineJ-"clinical, surgeons, guidelines, information, care"; Spine-"pain, back, low, treatment, chronic"; JNS-"tumor, lesions, rare, present, diagnosis"; JSDT-"cervical, anterior, plate, fusion, ACDF." Topics discovered through latent Dirichlet allocation modeling represent unbiased meaningful themes relevant to spine care. Topic dynamics can provide historical context and direction for future research for aspiring investigators and trainees interested in spine careers. Please explore https://singdc.shinyapps.io/spinetopics.
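
    As a rough illustration of this kind of pipeline (not the authors' exact model, which fitted 100 topics to 25,805 abstracts), a latent Dirichlet allocation model can be fitted to a bag-of-words representation of abstracts with scikit-learn, assuming that library is available:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        abstracts = [
            "pain back low treatment chronic outcomes",
            "cervical anterior plate fusion ACDF outcomes",
            "tumor lesions rare diagnosis spinal cord",
            # ... in practice, thousands of abstracts
        ]

        vec = CountVectorizer(stop_words="english")
        X = vec.fit_transform(abstracts)

        lda = LatentDirichletAllocation(n_components=3, random_state=0)  # 100 in the study
        doc_topics = lda.fit_transform(X)            # per-document topic probabilities

        terms = vec.get_feature_names_out()
        for k, comp in enumerate(lda.components_):
            top = [terms[i] for i in comp.argsort()[-5:][::-1]]
            print(f"topic {k}: " + ", ".join(top))   # name each topic by its top-5 terms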

  3. Nanomechanical motion measured with an imprecision below that at the standard quantum limit.

    PubMed

    Teufel, J D; Donner, T; Castellanos-Beltran, M A; Harlow, J W; Lehnert, K W

    2009-12-01

    Nanomechanical oscillators are at the heart of ultrasensitive detectors of force, mass and motion. As these detectors progress to even better sensitivity, they will encounter measurement limits imposed by the laws of quantum mechanics. If the imprecision of a measurement of the displacement of an oscillator is pushed below a scale set by the standard quantum limit, the measurement must perturb the motion of the oscillator by an amount larger than that scale. Here we show a displacement measurement with an imprecision below the standard quantum limit scale. We achieve this imprecision by measuring the motion of a nanomechanical oscillator with a nearly shot-noise limited microwave interferometer. As the interferometer is naturally operated at cryogenic temperatures, the thermal motion of the oscillator is minimized, yielding an excellent force detector with a sensitivity of 0.51 aN Hz(-1/2). This measurement is a critical step towards observing quantum behaviour in a mechanical object.

  4. Validation workflow for a clinical Bayesian network model in multidisciplinary decision making in head and neck oncology treatment.

    PubMed

    Cypko, Mario A; Stoehr, Matthaeus; Kozniewski, Marcin; Druzdzel, Marek J; Dietz, Andreas; Berliner, Leonard; Lemke, Heinz U

    2017-11-01

    Oncological treatment is becoming increasingly complex, and therefore, decision making in multidisciplinary teams is becoming the key activity in the clinical pathways. The increased complexity is related to the number and variability of possible treatment decisions that may be relevant to a patient. In this paper, we describe validation of a multidisciplinary cancer treatment decision model in the clinical domain of head and neck oncology. Probabilistic graphical models and corresponding inference algorithms, in the form of Bayesian networks, can support complex decision-making processes by providing mathematically reproducible and transparent advice. The quality of BN-based advice depends on the quality of the model. Therefore, it is vital to validate the model before it is applied in practice. For an example BN subnetwork of laryngeal cancer with 303 variables, we evaluated 66 patient records. To validate the model on this dataset, a validation workflow was applied in combination with quantitative and qualitative analyses. In the subsequent analyses, we observed four sources of imprecise predictions: incorrect data, incomplete patient data, outvoting relevant observations, and incorrect model. Finally, the four problems were solved by modifying the data and the model. The presented validation effort is related to the model complexity. For simpler models, the validation workflow is the same, although it may require fewer validation methods. The validation success is related to the model's well-founded knowledge base. The remaining laryngeal cancer model may disclose additional sources of imprecise predictions.

  5. Queries over Unstructured Data: Probabilistic Methods to the Rescue

    NASA Astrophysics Data System (ADS)

    Sarawagi, Sunita

    Unstructured data like emails, addresses, invoices, call transcripts, reviews, and press releases are now an integral part of any large enterprise. A challenge of modern business intelligence applications is analyzing and querying data seamlessly across structured and unstructured sources. This requires the development of automated techniques for extracting structured records from text sources and resolving entity mentions in data from various sources. The success of any automated method for extraction and integration depends on how effectively it unifies diverse clues in the unstructured source and in existing structured databases. We argue that statistical learning techniques like Conditional Random Fields (CRFs) provide an accurate, elegant and principled framework for tackling these tasks. Given the inherent noise in real-world sources, it is important to capture the uncertainty of the above operations via imprecise data models. CRFs provide a sound probability distribution over extractions but are not easy to represent and query in a relational framework. We present methods of approximating this distribution to query-friendly row and column uncertainty models. Finally, we present models for representing the uncertainty of de-duplication and algorithms for various Top-K count queries on imprecise duplicates.

  6. Improving the estimation of flavonoid intake for study of health outcomes

    USDA-ARS?s Scientific Manuscript database

    Imprecision in estimating intakes of non-nutrient bioactive compounds such as flavonoids is a challenge in epidemiologic studies of health outcomes. The sources of this imprecision, using flavonoids as an example, include the variability of bioactive compounds in foods due to differences in growing ...

  7. Scheduling periodic jobs using imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1987-01-01

    One approach to avoid timing faults in hard, real-time systems is to make available intermediate, imprecise results produced by real-time processes. When a result of the desired quality cannot be produced in time, an imprecise result of acceptable quality produced before the deadline can be used. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. Since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result, the amount of processor time assigned to any task in a valid schedule can be less than the amount of time required to complete the task. A meaningful formulation of the scheduling problem must take into account the overall quality of the results. Depending on the different types of undesirable effects caused by errors, jobs are classified as type N or type C. For type N jobs, the effects of errors in results produced in different periods are not cumulative. A reasonable performance measure is the average error over all jobs. Three heuristic algorithms that lead to feasible schedules with small average errors are described. For type C jobs, the undesirable effects of errors produced in different periods are cumulative. Schedulability criteria of type C jobs are discussed.
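
    The paper's three heuristics are not given in the abstract, but the notion of average error for type N jobs can be made concrete: each task has a mandatory part that must complete and an optional part that may be truncated, and a task's error is the fraction of its optional part left undone. The toy allocation rule below is purely illustrative and is not one of the algorithms described in the paper:

        # Each task: (mandatory_time, optional_time). One period, one processor.
        tasks = [(2.0, 3.0), (1.0, 4.0), (3.0, 2.0)]
        period = 10.0                                  # time available in the period

        mandatory = sum(m for m, _ in tasks)
        slack = period - mandatory                     # time left for optional parts
        assert slack >= 0, "mandatory parts alone miss the deadline"

        # Toy rule: spread the slack over optional parts proportionally to their length.
        total_opt = sum(o for _, o in tasks)
        errors = []
        for m, o in tasks:
            granted = min(o, slack * o / total_opt)    # optional time actually executed
            errors.append((o - granted) / o)           # normalized error of a type N task

        print("per-task errors:", [round(e, 3) for e in errors])
        print("average error:  ", round(sum(errors) / len(errors), 3))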

  8. Potential usefulness of a topic model-based categorization of lung cancers as quantitative CT biomarkers for predicting the recurrence risk after curative resection

    NASA Astrophysics Data System (ADS)

    Kawata, Y.; Niki, N.; Ohmatsu, H.; Satake, M.; Kusumoto, M.; Tsuchida, T.; Aokage, K.; Eguchi, K.; Kaneko, M.; Moriyama, N.

    2014-03-01

    In this work, we investigate the potential usefulness of a topic model-based categorization of lung cancers as quantitative CT biomarkers for predicting the recurrence risk after curative resection. The elucidation of the subcategorization of a pulmonary nodule type in CT images is an important preliminary step towards developing nodule management strategies that are specific to each patient. We categorize lung cancers by analyzing volumetric distributions of CT values within lung cancers via a topic model such as latent Dirichlet allocation. Through applying our scheme to 3D CT images of non-small-cell lung cancer (maximum lesion size of 3 cm), we demonstrate the potential usefulness of the topic model-based categorization of lung cancers as quantitative CT biomarkers.

  9. Development and Utilization of Regional Oceanic Modeling System (ROMS). Delicacy, Imprecision, and Uncertainty of Oceanic Simulations: An Investigation with the Regional Oceanic Modeling System (ROMS). Mixing in the Ocean Surface Layer Using the Regional Oceanic Modeling System (ROMS).

    DTIC Science & Technology

    2011-09-30

    community use for ROMS is biogeochemistry: chemical cycles, water quality, blooms, micro-nutrients, larval dispersal, biome transitions, and coupling to... J.C. McWilliams, X. Capet, and J. Kurian, 2010: Heat balance and eddies in the Peru-Chile Current System. Climate Dynamics, 37, in press. doi:10.1007

  10. Modeling Choice Under Uncertainty in Military Systems Analysis

    DTIC Science & Technology

    1991-11-01

    operators rather than fuzzy operators. This is suggested for further research. Contents excerpt: 4.1 Imprecisely Specified Multiple Attribute Utility Theory; 4.2 Fuzzy Decision Analysis; 4.3 Analytic Hierarchical Process (AHP), in which objectives, functions and...; 4.4 Subjective Transfer Function Approach.

  11. Development and Utilization of Regional Oceanic Modeling System (ROMS) & Delicacy, Imprecision, and Uncertainty of Oceanic Simulations: An Investigation with ROMS

    DTIC Science & Technology

    2010-09-30

    and Ecosystems: An important community use for ROMS is biogeochemistry: chemical cycles, water quality, blooms, micro-nutrients, larval dispersal... Chile current system. J. Climate, submitted. Colas, F., X. Capet, and J. McWilliams, 2010b: Mesoscale eddy buoyancy flux and eddy-induced

  12. Delicacy, Imprecision, and Uncertainty of Oceanic Simulations: An Investigation with the Regional Oceanic Modeling System (ROMS)

    DTIC Science & Technology

    2013-09-30

    Geochemistry and Ecosystems: An important community use for ROMS is biogeochemistry: chemical cycles, water quality, blooms, micro-nutrients, larval... Sci., submitted. Colas, F., J.C. McWilliams, X. Capet, and J. Kurian, 2012: Heat balance and eddies in the Peru-Chile Current System. Climate

  13. A cloud model based multi-attribute decision making approach for selection and evaluation of groundwater management schemes

    NASA Astrophysics Data System (ADS)

    Lu, Hongwei; Ren, Lixia; Chen, Yizhong; Tian, Peipei; Liu, Jia

    2017-12-01

    Because several kinds of uncertainty (i.e., fuzziness, stochasticity and imprecision) exist simultaneously in the groundwater remediation process, the accuracy of ranking results obtained by traditional methods has been limited. This paper proposes a cloud model based multi-attribute decision making framework (CM-MADM) with Monte Carlo simulation for the selection of contaminated-groundwater remediation strategies. The cloud model is used to handle imprecise numerical quantities, which can describe the fuzziness and stochasticity of the information fully and precisely. In the proposed approach, the contaminated concentrations are aggregated via the backward cloud generator and the weights of attributes are calculated by employing the weight cloud module. A case study on the remedial alternative selection for a contaminated site suffering from a 1,1,1-trichloroethylene leakage problem in Shanghai, China is conducted to illustrate the efficiency and applicability of the developed approach. In total, an attribute system consisting of ten attributes, including daily total pumping rate, total cost and cloud model based health risk, was used to evaluate each alternative under uncertainty with the developed method. Results indicated that A14 was the most preferred alternative for the 5-year remediation period, A5 for the 10-year, A4 for the 15-year and A6 for the 20-year.

  14. Assessing the likely value of gravity and drawdown measurements to constrain estimates of hydraulic conductivity and specific yield during unconfined aquifer testing

    USGS Publications Warehouse

    Blainey, Joan B.; Ferré, Ty P.A.; Cordova, Jeffrey T.

    2007-01-01

    Pumping of an unconfined aquifer can cause local desaturation detectable with high‐resolution gravimetry. A previous study showed that signal‐to‐noise ratios could be predicted for gravity measurements based on a hydrologic model. We show that although changes should be detectable with gravimeters, estimations of hydraulic conductivity and specific yield based on gravity data alone are likely to be unacceptably inaccurate and imprecise. In contrast, a transect of low‐quality drawdown data alone resulted in accurate estimates of hydraulic conductivity and inaccurate and imprecise estimates of specific yield. Combined use of drawdown and gravity data, or use of high‐quality drawdown data alone, resulted in unbiased and precise estimates of both parameters. This study is an example of the value of a staged assessment regarding the likely significance of a new measurement method or monitoring scenario before collecting field data.

  15. The Need for Precision

    ERIC Educational Resources Information Center

    Weinberg, David R.

    2012-01-01

    People have become accustomed to the imprecision of language, though imprecise language has a subtle way of misguiding thoughts and actions. In this article, the author argues that the term "teacher" in reference to the Montessori practitioner is a distortion of everything Maria Montessori tried to undo about traditional education. In dealing with…

  16. LIMITATIONS ON THE USES OF MULTIMEDIA EXPOSURE MEASUREMENTS FOR MULTIPATHWAY EXPOSURE ASSESSMENT - PART II: EFFECTS OF MISSING DATA AND IMPRECISION

    EPA Science Inventory

    Multimedia data from two probability-based exposure studies were investigated in terms of how missing data and measurement-error imprecision affected estimation of population parameters and associations. Missing data resulted mainly from individuals' refusing to participate in c...

  17. Attending to Precision with Secret Messages

    ERIC Educational Resources Information Center

    Starling, Courtney; Whitacre, Ian

    2016-01-01

    Mathematics is a language that is characterized by words and symbols that have precise definitions. Many opportunities exist for miscommunication in mathematics if the words and symbols are interpreted incorrectly or used in imprecise ways. In fact, it is found that imprecision is a common source of mathematical disagreements and misunderstandings…

  18. Modification of Classical SPM for Slightly Rough Surface Scattering with Low Grazing Angle Incidence

    NASA Astrophysics Data System (ADS)

    Guo, Li-Xin; Wei, Guo-Hui; Kim, Cheyoung; Wu, Zhen-Sen

    2005-11-01

    Based on the impedance/admittance rough boundaries, the reflection coefficients and the scattering cross section with low grazing angle incidence are obtained for both VV and HH polarizations. The error of the classical perturbation method at grazing angle is overcome for the vertical polarization at a rough Neumann boundary of infinite extent. The derivation of the formulae and the numerical results show that the backscattering cross section depends on the grazing angle to the fourth power for both Neumann and Dirichlet boundary conditions with low grazing angle incidence. Our results reduce to those of the classical small perturbation method when the Neumann and Dirichlet boundary conditions are neglected. The project was supported by the National Natural Science Foundation of China under Grant No. 60101001 and by the National Defense Foundation of China.

  19. A Dirichlet process model for classifying and forecasting epidemic curves.

    PubMed

    Nsoesie, Elaine O; Leman, Scotland C; Marathe, Madhav V

    2014-01-09

    A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997-2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods' performance was comparable. Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial.

  20. What are the bias, imprecision, and limits of agreement for finding the flexion-extension plane of the knee with five tibial reference lines?

    PubMed

    Brar, Abheetinder S; Howell, Stephen M; Hull, Maury L

    2016-06-01

    Internal-external (I-E) malrotation of the tibial component is associated with poor function after total knee arthroplasty (TKA). Kinematically aligned (KA) TKA uses a functionally defined flexion-extension (F-E) tibial reference line, which is parallel to the F-E plane of the extended knee, to set I-E rotation of the tibial component. Sixty-two three-dimensional bone models of normal knees were analyzed. We computed the bias (mean), imprecision (±standard deviation), and limits of agreement (mean ± 2 standard deviations) of the angle between five anatomically defined tibial reference lines used in mechanically aligned (MA) TKA and the F-E tibial reference line (+external). The following are the bias, imprecision, and limits of agreement of the angle between the F-E tibial reference line and 1) the tibial reference lines connecting the medial border (-2°±6°, -14° to 10°), medial 1/3 (6°±6°, -6° to 18°), and the most anterior point of the tibial tubercle (9°±4°, -1° to 17°) with the center of the posterior cruciate ligament, and 2) the tibial reference lines perpendicular to the posterior condylar axis of the tibia (-3°±4°, -11° to 5°), and a line connecting the centers of the tibial condyles (1°±4°, -7° to 9°). Based on these in vitro findings, it might be prudent to reconsider setting the I-E rotation of the tibial component to tibial reference lines that have bias, imprecision, and limits of agreement that fall outside the -7° to 10° range associated with high function after KA TKA. Copyright © 2016 Elsevier B.V. All rights reserved.
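
    The three statistics used throughout the abstract reduce to simple formulas: bias is the mean of the signed angle differences, imprecision is their standard deviation, and the limits of agreement are the mean plus or minus two standard deviations. A small Python sketch with made-up angles (degrees), not the study's measurements:

        import numpy as np

        # Hypothetical signed angles between an anatomic reference line and the
        # functional flexion-extension reference line, one value per knee.
        angles = np.array([-2.0, 4.0, -7.0, 1.0, 6.0, -3.0, 0.0, 5.0, -1.0, 2.0])

        bias = angles.mean()                 # systematic offset
        imprecision = angles.std(ddof=1)     # +/- 1 SD
        loa = (bias - 2 * imprecision, bias + 2 * imprecision)

        print(f"bias = {bias:.1f} deg, imprecision = +/-{imprecision:.1f} deg")
        print(f"limits of agreement: {loa[0]:.1f} to {loa[1]:.1f} deg")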

  1. Can fuzzy logic bring complex problems into focus? Modeling imprecise factors in environmental policy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKone, Thomas E.; Deshpande, Ashok W.

    2004-06-14

    In modeling complex environmental problems, we often fail to make precise statements about inputs and outcome. In this case the fuzzy logic method native to the human mind provides a useful way to get at these problems. Fuzzy logic represents a significant change in both the approach to and outcome of environmental evaluations. Risk assessment is currently based on the implicit premise that probability theory provides the necessary and sufficient tools for dealing with uncertainty and variability. The key advantage of fuzzy methods is the way they reflect the human mind in its remarkable ability to store and process information which is consistently imprecise, uncertain, and resistant to classification. Our case study illustrates the ability of fuzzy logic to integrate statistical measurements with imprecise health goals. But we submit that fuzzy logic and probability theory are complementary and not competitive. In the world of soft computing, fuzzy logic has been widely used and has often been the "smart" behind smart machines. But it will require more effort and case studies to establish its niche in risk assessment or other types of impact assessment. Although we often hear complaints about "bright lines," could we adapt to a system that relaxes these lines to fuzzy gradations? Would decision makers and the public accept expressions of water or air quality goals in linguistic terms with computed degrees of certainty? Resistance is likely. In many regions, such as the US and European Union, it is likely that both decision makers and members of the public are more comfortable with our current system in which government agencies avoid confronting uncertainties by setting guidelines that are crisp and often fail to communicate uncertainty. But some day perhaps a more comprehensive approach that includes exposure surveys, toxicological data, epidemiological studies coupled with fuzzy modeling will go a long way in resolving some of the conflict, divisiveness, and controversy in the current regulatory paradigm.
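
    A tiny example of the "relaxed bright line" idea: instead of a crisp pass/fail limit, a quality goal can be expressed as a fuzzy membership that degrades gradually between two concentrations. The thresholds below are invented for illustration:

        def compliance(conc, low=5.0, high=10.0):
            """Degree of compliance with a goal: 1 below `low` (ug/L),
            0 above `high`, and a linear ramp in between (a fuzzy membership)."""
            if conc <= low:
                return 1.0
            if conc >= high:
                return 0.0
            return (high - conc) / (high - low)

        for c in (3.0, 6.5, 9.0, 12.0):
            print(f"{c:5.1f} ug/L -> compliance degree {compliance(c):.2f}")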

  2. A new approach for the solution of fuzzy games

    NASA Astrophysics Data System (ADS)

    Krishnaveni, G.; Ganesan, K.

    2018-04-01

    In this paper, a new approach is proposed to solve games with imprecise entries in their payoff matrices. All these imprecise entries are assumed to be trapezoidal fuzzy numbers. The proposed approach also provides a fuzzy optimal solution of the fuzzy-valued game without converting it to a classical game. A numerical example is provided.

  3. Scheduling real-time, periodic jobs using imprecise results

    NASA Technical Reports Server (NTRS)

    Liu, Jane W. S.; Lin, Kwei-Jay; Natarajan, Swaminathan

    1987-01-01

    A process is called a monotone process if the accuracy of its intermediate results is non-decreasing as more time is spent to obtain the result. The result produced by a monotone process upon its normal termination is the desired result; the error in this result is zero. External events such as timeouts or crashes may cause the process to terminate prematurely. If the intermediate result produced by the process upon its premature termination is saved and made available, the application may still find the result usable and, hence, acceptable; such a result is said to be an imprecise one. The error in an imprecise result is nonzero. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. This problem differs from the traditional scheduling problems since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result. Consequently, the amounts of processor time assigned to tasks in a valid schedule can be less than the amounts of time required to complete the tasks. A meaningful formulation of this problem taking into account the quality of the overall result is discussed. Three algorithms for scheduling jobs for which the effects of errors in results produced in different periods are not cumulative are described, and their relative merits are evaluated.

  4. Use of dirichlet distributions and orthogonal projection techniques for the fluctuation analysis of steady-state multivariate birth-death systems

    NASA Astrophysics Data System (ADS)

    Palombi, Filippo; Toti, Simona

    2015-05-01

    Approximate weak solutions of the Fokker-Planck equation represent a useful tool to analyze the equilibrium fluctuations of birth-death systems, as they provide quantitative knowledge lying between numerical simulations and exact analytic arguments. In this paper, we adapt the general mathematical formalism known as the Ritz-Galerkin method for partial differential equations to the Fokker-Planck equation with time-independent polynomial drift and diffusion coefficients on the simplex. Then, we show how the method works in two examples, namely the binary and multi-state voter models with zealots.

  5. Sound-turbulence interaction in transonic boundary layers

    NASA Astrophysics Data System (ADS)

    Lelostec, Ludovic; Scalo, Carlo; Lele, Sanjiva

    2014-11-01

    Acoustic wave scattering in a transonic boundary layer is investigated through a novel approach. Instead of simulating directly the interaction of an incoming oblique acoustic wave with a turbulent boundary layer, suitable Dirichlet conditions are imposed at the wall to reproduce only the reflected wave resulting from the interaction of the incident wave with the boundary layer. The method is first validated using the laminar boundary layer profiles in a parallel flow approximation. For this scattering problem an exact inviscid solution can be found in the frequency domain which requires numerical solution of an ODE. The Dirichlet conditions are imposed in a high-fidelity unstructured compressible flow solver for Large Eddy Simulation (LES), CharLESx. The acoustic field of the reflected wave is then solved and the interaction between the boundary layer and sound scattering can be studied.

  6. Step scaling and the Yang-Mills gradient flow

    NASA Astrophysics Data System (ADS)

    Lüscher, Martin

    2014-06-01

    The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0 , T] and all fields satisfy Dirichlet boundary conditions at time 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.

  7. Heat kernel for the elliptic system of linear elasticity with boundary conditions

    NASA Astrophysics Data System (ADS)

    Taylor, Justin; Kim, Seick; Brown, Russell

    2014-10-01

    We consider the elliptic system of linear elasticity with bounded measurable coefficients in a domain where the second Korn inequality holds. We construct heat kernel of the system subject to Dirichlet, Neumann, or mixed boundary condition under the assumption that weak solutions of the elliptic system are Hölder continuous in the interior. Moreover, we show that if weak solutions of the mixed problem are Hölder continuous up to the boundary, then the corresponding heat kernel has a Gaussian bound. In particular, if the domain is a two dimensional Lipschitz domain satisfying a corkscrew or non-tangential accessibility condition on the set where we specify Dirichlet boundary condition, then we show that the heat kernel has a Gaussian bound. As an application, we construct Green's function for elliptic mixed problem in such a domain.

  8. Chemistry in Titan

    NASA Astrophysics Data System (ADS)

    Plessis, S.; Carrasco, N.; Pernot, P.

    2009-04-01

    Modelling the chemical composition of Titan's ionosphere is a very challenging issue. Recent works either perform inversion of Cassini INMS mass spectra (neutral [1] or ion [2]) or design coupled ion-neutral chemistry models [3]. Coupling ionic and neutral chemistry has been reported to be an essential feature of accurate modelling [3]. Electron Dissociative Recombination (EDR), where free electrons recombine with positive ions to produce neutral species, is a key component of ion-neutral coupling. There is a major difficulty in EDR modelling: for heavy ions, the distribution of neutral products is incompletely characterized by experiments. For instance, for some hydrocarbon ions only the carbon repartition is measured, leaving the hydrogen repartition, and thus the exact identity of the neutral species, unknown [4]. This precludes reliable deterministic modelling of this process and of ion-neutral coupling. We propose a novel stochastic description of the EDR chemical reactions which enables efficient representation and simulation of the partial experimental knowledge. The description of product distributions in multi-pathway reactions is based on branching ratios, which should sum to unity. The keystone of our approach is the design of a probability density function accounting for all available information and physical constraints. This is done by Dirichlet modelling, which enables one to sample random variables whose sum is constant [5]. The specifics of EDR partial uncertainty call for a hierarchical Dirichlet representation, which generalizes our previous work [5]. We present results on the importance of ion-neutral coupling based on our stochastic model. Scheme (1) illustrates the situation for C4H9+ + e-: the carbon repartition is measured but the hydrogen repartition is unknown, e.g. C4H9+ + e- → C4 products such as C4H2 + 3H2 + H or C4H2 + 7H; → C3 + C products such as C3H8 + CH or C3H3 + CH2 + 2H2; → C2 + C2 products such as C2H6 + C2H2 + H or 2C2H2 + 2H2 + H. References: [1] J. Cui, R.V. Yelle, V. Vuitton, J.H. Waite Jr., W.T. Kasprzak, D.A. Gell, H.B. Niemann, I.C.F. Müller-Wodarg, N. Borggren, G.G. Fletcher, E.L. Patrick, E. Raaen, and B.A. Magee. Analysis of Titan's neutral upper atmosphere from Cassini ion neutral mass spectrometer measurements. Icarus, in press, 2008. [2] V. Vuitton, R. V. Yelle, and M.J. McEwan. Ion chemistry and N-containing molecules in Titan's upper atmosphere. Icarus, 191:722-742, 2007. [3] V. De La Haye, J.H. Waite Jr., T.E. Cravens, I.P. Robertson, and S. Lebonnois. Coupled ion and neutral rotating model of Titan's upper atmosphere. Icarus, 197(1):110-136, 2008. [4] J. B. A. Mitchell, C. Rebrion-Rowe, J. L. Le Garrec, G. Angelova, H. Bluhme, K. Seiersen, and L. H. Andersen. Branching ratios for the dissociative recombination of hydrocarbon ions. I: The cases of C4H9+ and C4H5+. International Journal of Mass Spectrometry, 227(2):273-279, June 2003. [5] N. Carrasco and P. Pernot. Modeling of branching ratio uncertainty in chemical networks by Dirichlet distributions. Journal of Physical Chemistry A, 11(18):3507-3512, 2007.
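
    The hierarchical Dirichlet idea can be illustrated directly with numpy.random.dirichlet: carbon-repartition branching ratios are sampled around nominal measured values, and within each carbon channel the unknown hydrogen repartition is drawn from a flat Dirichlet. The nominal values, concentration parameter and channel counts below are placeholders, not the data of refs. [4]-[5]:

        import numpy as np

        rng = np.random.default_rng(1)

        # Measured carbon repartition for C4H9+ + e- (placeholder values) and a
        # concentration kappa expressing how precisely it is known.
        c_means = np.array([0.2, 0.5, 0.3])        # channels: C4, C3 + C, C2 + C2
        kappa = 50.0
        n_h_channels = [2, 2, 2]                   # unknown H repartition: 2 sub-channels each

        for _ in range(3):                         # a few Monte Carlo draws
            c_ratios = rng.dirichlet(kappa * c_means)                     # carbon level (informed)
            h_ratios = [rng.dirichlet(np.ones(k)) for k in n_h_channels]  # hydrogen level (flat)
            full = np.concatenate([c * h for c, h in zip(c_ratios, h_ratios)])
            print(np.round(full, 3), "sum =", round(full.sum(), 3))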

  9. Evaluation of Bole Straightness in Cottonwood Using Visual Scores

    Treesearch

    D.T. Cooper; R.B. Ferguson

    1981-01-01

    Selection for straightness in natural stands of cottonwood can be effective in improving straightness of open-pollinated progeny. Straightness appears to be highly heritable, but it is subject to imprecise evaluation. This can be largely overcome by repeated application of an imprecise scoring system using a minimum of two views per tree separated by 90 degrees.

  10. Imprecise Frequency Descriptors and the Miscomprehension of Prescription Drug Advertising: Public Policy and Regulatory Implications.

    ERIC Educational Resources Information Center

    Davis, Joel J.

    1999-01-01

    Explores the communicative effectiveness of imprecise frequency descriptors within the context of consumer prescription drug advertising. Conducts two separate studies using a total sample of 147 adults. Finds that consumers are unable to accurately estimate the relative likelihood of side effect occurrence when a list of side effects are preceded…

  11. How Often is "Often"? The Use of Imprecise Terms in Exam Items.

    ERIC Educational Resources Information Center

    Case, Susan M.

    This study was designed to gather data on the meaning of imprecise terms from items written by physicians for their students and by test committees for national licensure and certification examinations. A total of 32 members of test committees who write examination items for various medical specialty examinations participated in the study. Each…

  12. Virtual tape measure for the operating microscope: system specifications and performance evaluation.

    PubMed

    Kim, M Y; Drake, J M; Milgram, P

    2000-01-01

    The Virtual Tape Measure for the Operating Microscope (VTMOM) was created to assist surgeons in making accurate 3D measurements of anatomical structures seen in the surgical field under the operating microscope. The VTMOM employs augmented reality techniques by combining stereoscopic video images with stereoscopic computer graphics, and functions by relying on an operator's ability to align a 3D graphic pointer, which serves as the end-point of the virtual tape measure, with designated locations on the anatomical structure being measured. The VTMOM was evaluated for its baseline and application performances as well as its application efficacy. Baseline performance was determined by measuring the mean error (bias) and standard deviation of error (imprecision) in measurements of non-anatomical objects. Application performance was determined by comparing the error in measuring the dimensions of aneurysm models with and without the VTMOM. Application efficacy was determined by comparing the error in selecting the appropriate aneurysm clip size with and without the VTMOM. Baseline performance indicated a bias of 0.3 mm and an imprecision of 0.6 mm. Application bias was 3.8 mm and imprecision was 2.8 mm for aneurysm diameter. The VTMOM did not improve aneurysm clip size selection accuracy. The VTMOM is a potentially accurate tool for use under the operating microscope. However, its performance when measuring anatomical objects is highly dependent on complex visual features of the object surfaces. Copyright 2000 Wiley-Liss, Inc.

  13. Multiple Positive Solutions in the Second Order Autonomous Nonlinear Boundary Value Problems

    NASA Astrophysics Data System (ADS)

    Atslega, Svetlana; Sadyrbaev, Felix

    2009-09-01

    We construct second order autonomous equations with an arbitrarily large number of positive solutions satisfying homogeneous Dirichlet boundary conditions. The phase plane approach and bifurcation of solutions are the main tools.
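
    Positive solutions of such autonomous Dirichlet problems are commonly computed by shooting: integrate u'' = f(u), u(0) = 0, u'(0) = s and adjust the slope s until u(1) = 0. A minimal SciPy sketch for one illustrative nonlinearity (not one of the equations constructed in the paper):

        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        def f(u):                        # illustrative autonomous right-hand side
            return -10.0 * u * (1.0 - u)

        def shoot(s):
            """Value of u(1) when u'' = f(u), u(0) = 0, u'(0) = s."""
            sol = solve_ivp(lambda t, y: [y[1], f(y[0])], (0.0, 1.0), [0.0, s],
                            rtol=1e-8, atol=1e-10)
            return sol.y[0, -1]

        # Bracket a sign change of u(1; s) and enforce the Dirichlet condition u(1) = 0.
        s_star = brentq(shoot, 0.1, 1.7)
        print("shooting slope u'(0) =", round(s_star, 4))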

  14. Variational Problems with Long-Range Interaction

    NASA Astrophysics Data System (ADS)

    Soave, Nicola; Tavares, Hugo; Terracini, Susanna; Zilio, Alessandro

    2018-06-01

    We consider a class of variational problems for densities that repel each other at a distance. Typical examples are given by the Dirichlet functional $D(u) = \sum_{i=1}^{k} \int_{\Omega} |\nabla u_i|^{2}$ and by the Rayleigh functional.

  15. High-Reproducibility and High-Accuracy Method for Automated Topic Classification

    NASA Astrophysics Data System (ADS)

    Lancichinetti, Andrea; Sirer, M. Irmak; Wang, Jane X.; Acuna, Daniel; Körding, Konrad; Amaral, Luís A. Nunes

    2015-01-01

    Much of human knowledge sits in large databases of unstructured text. Leveraging this knowledge requires algorithms that extract and record metadata on unstructured text documents. Assigning topics to documents will enable intelligent searching, statistical characterization, and meaningful classification. Latent Dirichlet allocation (LDA) is the state of the art in topic modeling. Here, we perform a systematic theoretical and numerical analysis that demonstrates that current optimization techniques for LDA often yield results that are not accurate in inferring the most suitable model parameters. Adapting approaches from community detection in networks, we propose a new algorithm that displays high reproducibility and high accuracy and also has high computational efficiency. We apply it to a large set of documents in the English Wikipedia and reveal its hierarchical structure.

  16. Stochastic Model for Phonemes Uncovers an Author-Dependency of Their Usage.

    PubMed

    Deng, Weibing; Allahverdyan, Armen E

    2016-01-01

    We study rank-frequency relations for phonemes, the minimal units that still relate to linguistic meaning. We show that these relations can be described by the Dirichlet distribution, a direct analogue of the ideal-gas model in statistical mechanics. This description allows us to demonstrate that the rank-frequency relations for phonemes of a text do depend on its author. The author-dependency effect is not caused by the author's vocabulary (common words used in different texts), and is confirmed by several alternative means. This suggests that it can be directly related to phonemes. These features contrast to rank-frequency relations for words, which are both author and text independent and are governed by Zipf's law.

  17. A Dirichlet process mixture model for automatic (18)F-FDG PET image segmentation: Validation study on phantoms and on lung and esophageal lesions.

    PubMed

    Giri, Maria Grazia; Cavedon, Carlo; Mazzarotto, Renzo; Ferdeghini, Marco

    2016-05-01

    The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10-37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This "calibration curve" was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.
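
    scikit-learn's truncated variational Dirichlet-process Gaussian mixture gives a quick way to see the central idea, namely that the effective number of intensity classes is inferred rather than fixed in advance. It is not the R-based DPM implementation used in the study, and the synthetic uptake values below are placeholders, not real SUVs:

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(0)
        # Synthetic 1-D "uptake" values: a large background population and a smaller,
        # hotter lesion population.
        background = rng.normal(1.0, 0.3, size=2000)
        lesion = rng.normal(5.0, 1.0, size=200)
        intensities = np.concatenate([background, lesion]).reshape(-1, 1)

        # Truncated Dirichlet-process mixture: up to 10 components, unused ones get
        # negligible weight, so the effective number of classes is learned from data.
        dpm = BayesianGaussianMixture(
            n_components=10,
            weight_concentration_prior_type="dirichlet_process",
            max_iter=500,
            random_state=0,
        ).fit(intensities)

        labels = dpm.predict(intensities)
        hot = np.argmax(dpm.means_.ravel())           # component with the highest mean
        print("estimated lesion fraction:", round(float(np.mean(labels == hot)), 3))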

  18. A Dirichlet process mixture model for automatic (18)F-FDG PET image segmentation: Validation study on phantoms and on lung and esophageal lesions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giri, Maria Grazia, E-mail: mariagrazia.giri@ospedaleuniverona.it; Cavedon, Carlo; Mazzarotto, Renzo

    Purpose: The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. Methods: The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10–37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Results: Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This “calibration curve” was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. Conclusions: The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.

  19. FUN-LDA: A Latent Dirichlet Allocation Model for Predicting Tissue-Specific Functional Effects of Noncoding Variation: Methods and Applications.

    PubMed

    Backenroth, Daniel; He, Zihuai; Kiryluk, Krzysztof; Boeva, Valentina; Pethukova, Lynn; Khurana, Ekta; Christiano, Angela; Buxbaum, Joseph D; Ionita-Laza, Iuliana

    2018-05-03

    We describe a method based on a latent Dirichlet allocation model for predicting functional effects of noncoding genetic variants in a cell-type- and/or tissue-specific way (FUN-LDA). Using this unsupervised approach, we predict tissue-specific functional effects for every position in the human genome in 127 different tissues and cell types. We demonstrate the usefulness of our predictions by using several validation experiments. Using eQTL data from several sources, including the GTEx project, Geuvadis project, and TwinsUK cohort, we show that eQTLs in specific tissues tend to be most enriched among the predicted functional variants in relevant tissues in Roadmap. We further show how these integrated functional scores can be used for (1) deriving the most likely cell or tissue type causally implicated for a complex trait by using summary statistics from genome-wide association studies and (2) estimating a tissue-based correlation matrix of various complex traits. We found large enrichment of heritability in functional components of relevant tissues for various complex traits, and FUN-LDA yielded higher enrichment estimates than existing methods. Finally, using experimentally validated functional variants from the literature and variants possibly implicated in disease by previous studies, we rigorously compare FUN-LDA with state-of-the-art functional annotation methods and show that FUN-LDA has better prediction accuracy and higher resolution than these methods. In particular, our results suggest that tissue- and cell-type-specific functional prediction methods tend to have substantially better prediction accuracy than organism-level prediction methods. Scores for each position in the human genome and for each ENCODE and Roadmap tissue are available online (see Web Resources). Copyright © 2018 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  20. Evidential Networks for Fault Tree Analysis with Imprecise Knowledge

    NASA Astrophysics Data System (ADS)

    Yang, Jianping; Huang, Hong-Zhong; Liu, Yu; Li, Yan-Feng

    2012-06-01

    Fault tree analysis (FTA), one of the most powerful tools in reliability engineering, has been widely used to enhance system quality attributes. In most fault tree analyses, precise values are adopted to represent the probabilities of occurrence of events. Due to the lack of sufficient data, or the imprecision of existing data at the early stage of product design, it is often difficult to accurately estimate the failure rates of individual events or their probabilities of occurrence. Therefore, such imprecision and uncertainty need to be taken into account in reliability analysis. In this paper, evidential networks (EN) are employed to quantify and propagate the aforementioned uncertainty and imprecision in fault tree analysis. The detailed processes for converting fault tree (FT) logic gates to EN are described. The figures of the logic gates and the converted equivalent EN, together with the associated truth tables and conditional belief mass tables, are also presented in this work. A new epistemic importance measure is proposed to describe the effect of the degree of ignorance of an event. The fault tree of an aircraft engine damaged by oil filter plugs is presented to demonstrate the proposed method.
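
    The paper propagates imprecise event probabilities through fault-tree gates with evidential networks. As a much simpler stand-in for that machinery (invented event bounds, independence assumed, and none of the belief/plausibility tables of the paper), probability intervals can be pushed through AND and OR gates directly:

      # Simplified interval propagation for fault-tree gates with imprecise probabilities.
      # Not the evidential-network construction of the paper; only interval bounds for
      # independent basic events whose probabilities lie in [lo, hi].

      def and_gate(*events):
          """Top event occurs only if all inputs occur (independence assumed)."""
          lo, hi = 1.0, 1.0
          for l, h in events:
              lo *= l
              hi *= h
          return lo, hi

      def or_gate(*events):
          """Top event occurs if at least one input occurs (independence assumed)."""
          surv_hi, surv_lo = 1.0, 1.0   # bounds on the probability that no input occurs
          for l, h in events:
              surv_hi *= (1.0 - l)
              surv_lo *= (1.0 - h)
          return 1.0 - surv_hi, 1.0 - surv_lo

      # Hypothetical basic events with imprecise occurrence probabilities.
      pump_fails   = (1e-3, 5e-3)
      filter_plugs = (2e-4, 1e-3)
      print("AND gate bounds:", and_gate(pump_fails, filter_plugs))
      print("OR  gate bounds:", or_gate(pump_fails, filter_plugs))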

  1. How to Fully Represent Expert Information about Imprecise Properties in a Computer System – Random Sets, Fuzzy Sets, and Beyond: An Overview

    PubMed Central

    Nguyen, Hung T.; Kreinovich, Vladik

    2014-01-01

    To help computers make better decisions, it is desirable to describe all our knowledge in computer-understandable terms. This is easy for knowledge described in terms of numerical values: we simply store the corresponding numbers in the computer. This is also easy for knowledge about precise (well-defined) properties which are either true or false for each object: we simply store the corresponding “true” and “false” values in the computer. The challenge is how to store information about imprecise properties. In this paper, we overview different ways to fully store the expert information about imprecise properties. We show that in the simplest case, when the only source of imprecision is disagreement between different experts, a natural way to store all the expert information is to use random sets; we also show how fuzzy sets naturally appear in such a random-set representation. We then show how the random-set representation can be extended to the general (“fuzzy”) case when, in addition to disagreements, experts are also unsure whether some objects satisfy certain properties or not. PMID:25386045
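
    A toy numerical illustration of the random-set view described above (the expert answers are invented): when each expert supplies a set, the fraction of experts whose set contains a given value can be read directly as a fuzzy membership degree.

      # Expert disagreement as a random set, read off as a fuzzy membership function.
      experts = [          # each expert's interval of temperatures (deg C) they would call "warm"
          (18, 28), (20, 30), (22, 27), (19, 26),
      ]

      def membership(x, expert_sets):
          """Fraction of experts whose set contains x: a fuzzy membership degree."""
          hits = sum(1 for lo, hi in expert_sets if lo <= x <= hi)
          return hits / len(expert_sets)

      for t in (17, 21, 25, 29):
          print(f"warm({t}) = {membership(t, experts):.2f}")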

  2. Fuzzy Logic as a Tool for Assessing Students' Knowledge and Skills

    ERIC Educational Resources Information Center

    Voskoglou, Michael Gr.

    2013-01-01

    Fuzzy logic, which is based on fuzzy sets theory introduced by Zadeh in 1965, provides a rich and meaningful addition to standard logic. The applications which may be generated from or adapted to fuzzy logic are wide-ranging and provide the opportunity for modeling under conditions which are imprecisely defined. In this article we develop a fuzzy…

  3. The spectrum, radiation conditions and the Fredholm property for the Dirichlet Laplacian in a perforated plane with semi-infinite inclusions

    NASA Astrophysics Data System (ADS)

    Cardone, G.; Durante, T.; Nazarov, S. A.

    2017-07-01

    We consider the spectral Dirichlet problem for the Laplace operator in the plane Ω∘ with a double-periodic perforation, and also in the domain Ω• with a semi-infinite foreign inclusion, so that the Floquet-Bloch technique and the Gelfand transform do not apply directly. We describe waves which are localized near the inclusion and propagate along it. We give a formulation of the problem with radiation conditions that provides a Fredholm operator of index zero. The main conclusion concerns the spectra σ∘ and σ• of the problems in Ω∘ and Ω•; namely, we present a concrete geometry which supports the relation σ∘ ⫋ σ• due to a new non-empty spectral band caused by the semi-infinite inclusion, called an open waveguide in the double-periodic medium.

  4. Unstable Mode Solutions to the Klein-Gordon Equation in Kerr-anti-de Sitter Spacetimes

    NASA Astrophysics Data System (ADS)

    Dold, Dominic

    2017-03-01

    For any cosmological constant Λ = -3/ℓ² < 0 and any α < 9/4, we find a Kerr-AdS spacetime (M, g_KAdS), in which the Klein-Gordon equation □_{g_KAdS} ψ + (α/ℓ²)ψ = 0 has an exponentially growing mode solution satisfying a Dirichlet boundary condition at infinity. The spacetime violates the Hawking-Reall bound r₊² > |a|ℓ. We obtain an analogous result for Neumann boundary conditions if 5/4 < α < 9/4. Moreover, in the Dirichlet case, one can prove that, for any Kerr-AdS spacetime violating the Hawking-Reall bound, there exists an open family of masses α such that the corresponding Klein-Gordon equation permits exponentially growing mode solutions. Our result adopts methods of Shlapentokh-Rothman developed in (Commun. Math. Phys. 329:859-891, 2014) and provides the first rigorous construction of a superradiant instability for negative cosmological constant.

  5. Stereochemistry of silicon in oxygen-containing compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serezhkin, V. N., E-mail: Serezhkin@samsu.ru; Urusov, V. S.

    2017-01-15

    Specific stereochemical features of silicon in oxygen-containing compounds, including hybrid silicates with all oxygen atoms of SiOₙ groups (n = 4, 5, or 6) entering into the composition of organic anions or molecules, are described by characteristics of Voronoi-Dirichlet polyhedra. It is found that in rutile-like stishovite and post-stishovite phases with the structures similar to those of CaCl₂, α-PbO₂, or pyrite FeS₂, the volume of Voronoi-Dirichlet polyhedra of silicon and oxygen atoms decreases linearly with pressure increasing to 268 GPa. Based on these results, the possibility of formation of new post-stishovite phases is shown, namely, the fluorite-like structure (transition predicted at ~400 GPa) and a body-centered cubic lattice with statistical arrangement of silicon and oxygen atoms (~900 GPa).

  6. Semiautomated model building for RNA crystallography using a directed rotameric approach.

    PubMed

    Keating, Kevin S; Pyle, Anna Marie

    2010-05-04

    Structured RNA molecules play essential roles in a variety of cellular processes; however, crystallographic studies of such RNA molecules present a large number of challenges. One notable complication arises from the low resolutions typical of RNA crystallography, which results in electron density maps that are imprecise and difficult to interpret. This problem is exacerbated by the lack of computational tools for RNA modeling, as many of the techniques commonly used in protein crystallography have no equivalents for RNA structure. This leads to difficulty and errors in the model building process, particularly in modeling of the RNA backbone, which is highly error prone due to the large number of variable torsion angles per nucleotide. To address this, we have developed a method for accurately building the RNA backbone into maps of intermediate or low resolution. This method is semiautomated, as it requires a crystallographer to first locate phosphates and bases in the electron density map. After this initial trace of the molecule, however, an accurate backbone structure can be built without further user intervention. To accomplish this, backbone conformers are first predicted using RNA pseudotorsions and the base-phosphate perpendicular distance. Detailed backbone coordinates are then calculated to conform both to the predicted conformer and to the previously located phosphates and bases. This technique is shown to produce accurate backbone structure even when starting from imprecise phosphate and base coordinates. A program implementing this methodology is currently available, and a plugin for the Coot model building program is under development.

  7. Analytical performance specifications for changes in assay bias (Δbias) for data with logarithmic distributions as assessed by effects on reference change values.

    PubMed

    Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György

    2016-11-01

    Background The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficients of variation for analytical imprecision (CV_A): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for reference change value, with the combination of Δbias and CV_A based on log-Gaussian distributions of CV_I as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results Analytical performance specifications for reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion The traditional and simple-to-apply model used to generate analytical performance specifications for reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CV_I and CV_A, is generally useful.
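
    For orientation, the sketch below contrasts the classical Gaussian reference change value with a log-normal variant; the z-value and CVs are invented, and the formulas follow the commonly cited expressions rather than necessarily reproducing the paper's own derivation.

      # Reference change values (RCV) under Gaussian vs. log-normal assumptions (illustrative only).
      import math

      z = 1.96          # two-sided 95% significance
      cv_a = 0.05       # analytical imprecision, as a fraction
      cv_i = 0.20       # within-subject biological variation, as a fraction

      # Classical symmetric RCV assuming Gaussian distributions.
      rcv_gauss = z * math.sqrt(2.0) * math.sqrt(cv_a**2 + cv_i**2)

      # Log-normal variant: work with the SD of ln(x), which gives asymmetric limits.
      sigma_ln = math.sqrt(math.log(1 + cv_a**2) + math.log(1 + cv_i**2))
      rcv_up   = math.exp(+z * math.sqrt(2.0) * sigma_ln) - 1
      rcv_down = math.exp(-z * math.sqrt(2.0) * sigma_ln) - 1

      print(f"Gaussian RCV:   +/-{rcv_gauss:.1%}")
      print(f"Log-normal RCV: {rcv_up:+.1%} / {rcv_down:+.1%}")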

  8. Validation of an "Intelligent Mouthguard" Single Event Head Impact Dosimeter.

    PubMed

    Bartsch, Adam; Samorezov, Sergey; Benzel, Edward; Miele, Vincent; Brett, Daniel

    2014-11-01

    Dating to Colonel John Paul Stapp MD in 1975, scientists have desired to measure live human head impacts with accuracy and precision. But no instrument exists to accurately and precisely quantify single head impact events. Our goal is to develop a practical single event head impact dosimeter known as the "Intelligent Mouthguard" and quantify its performance on the benchtop, in vitro and in vivo. In the Intelligent Mouthguard hardware, limited gyroscope bandwidth requires an algorithm-based correction as a function of impact duration. After we apply the gyroscope correction algorithm, Intelligent Mouthguard results at the time of the CG linear acceleration peak correlate to the Reference Hybrid III within our tested range of pulse durations and impact acceleration profiles in American football and boxing in vitro tests. American football: IMG = 1.00 REF - 1.1 g, R² = 0.99; maximum time-of-peak XYZ component imprecision 3.6 g and 370 rad/s²; maximum time-of-peak azimuth and elevation imprecision 4.8° and 2.9°; maximum average XYZ component temporal imprecision 3.3 g and 390 rad/s². Boxing: IMG = 1.00 REF - 0.9 g, R² = 0.99, R² = 0.98; maximum time-of-peak XYZ component imprecision 3.9 g and 390 rad/s²; maximum time-of-peak azimuth and elevation imprecision 2.9° and 2.1°; average XYZ component temporal imprecision 4.0 g and 440 rad/s². In vivo Intelligent Mouthguard true positive head impacts from American football players and amateur boxers have temporal characteristics (first harmonic frequency from 35 Hz to 79 Hz) within our tested benchtop (first harmonic frequency < 180 Hz) and in vitro (first harmonic frequency < 100 Hz) ranges. Our conclusions apply only to situations where the rigid body assumption is valid, sensor-skull coupling is maintained and the ranges of tested parameters and harmonics fall within the boundaries of harmonics validated in vitro. For these situations, the Intelligent Mouthguard qualifies as a single event dosimeter in American football and boxing.

  9. A theoretical model for investigating the effect of vacuum fluctuations on the electromechanical stability of nanotweezers

    NASA Astrophysics Data System (ADS)

    Farrokhabadi, A.; Mokhtari, J.; Koochi, A.; Abadyan, M.

    2015-06-01

    In this paper, the impact of the Casimir attraction on the electromechanical stability of nanowire-fabricated nanotweezers is investigated using a theoretical continuum mechanics model. The Dirichlet mode is considered, and an asymptotic solution based on a path integral approach is applied to account for the effect of vacuum fluctuations in the model. The Euler-Bernoulli beam theory is employed to derive the nonlinear governing equation of the nanotweezers. The governing equations are solved by three different approaches, i.e. the modified variational iteration method, the generalized differential quadrature method, and a lumped parameter model. Various perspectives of the problem, including the comparison with the van der Waals force regime, the variation of instability parameters and the effects of geometry, are addressed in the present paper. The proposed approach is beneficial for the precise determination of the electrostatic response of nanotweezers in the presence of the Casimir force.

  10. Recognition of a person named entity from the text written in a natural language

    NASA Astrophysics Data System (ADS)

    Dolbin, A. V.; Rozaliev, V. L.; Orlova, Y. A.

    2017-01-01

    This work is devoted to the semantic analysis of texts written in a natural language. The main goal of the research was to compare latent Dirichlet allocation and latent semantic analysis for identifying elements of human appearance in text. The completeness of information retrieval was chosen as the efficiency criterion for comparing the methods. However, choosing only one method was insufficient to achieve high recognition rates. Thus, additional methods were used for finding references to the personality in the text. All these methods are based on the created information model, which represents a person's appearance.

  11. Casimir interaction between spheres in ( D + 1)-dimensional Minkowski spacetime

    NASA Astrophysics Data System (ADS)

    Teo, L. P.

    2014-05-01

    We consider the Casimir interaction between two spheres in (D + 1)-dimensional Minkowski spacetime due to the vacuum fluctuations of scalar fields. We consider combinations of Dirichlet and Neumann boundary conditions. The TGTG formula of the Casimir interaction energy is derived. The computations of the T matrices of the two spheres are straightforward. To compute the two G matrices, known as translation matrices, which relate the hyper-spherical waves in two spherical coordinate frames that differ by a translation, we generalize the operator approach employed in [39]. The result is expressed in terms of an integral over Gegenbauer polynomials. In contrast to the D = 3 case, we do not re-express the integral in terms of 3j-symbols and hyper-spherical waves, which, in principle, can be done but does not simplify the formula. Using our expression for the Casimir interaction energy, we derive the large separation and small separation asymptotic expansions of the Casimir interaction energy. In the large separation regime, we find that the Casimir interaction energy is of order L^(-2D+3), L^(-2D+1) and L^(-2D-1), respectively, for Dirichlet-Dirichlet, Dirichlet-Neumann and Neumann-Neumann boundary conditions, where L is the center-to-center distance of the two spheres. In the small separation regime, we confirm that the leading term of the Casimir interaction agrees with the proximity force approximation, which is of order , where d is the distance between the two spheres. Another main result of this work is the analytic computation of the next-to-leading order term in the small separation asymptotic expansion. This term is computed using careful order analysis as well as a perturbation method. In the case where the radius of one of the spheres goes to infinity, we find that the results agree with the ones we derive for the sphere-plate configuration. When D = 3, we also recover previously known results. We find that when D is large, the ratio of the next-to-leading order term to the leading order term is linear in D, indicating a larger correction at higher dimensions. The methodologies employed in this work and the results obtained can be used to study the one-loop effective action of the system of two spherical objects in the universe.

  12. National surveys on internal quality control for blood gas analysis and related electrolytes in clinical laboratories of China.

    PubMed

    Duan, Min; Wang, Wei; Zhao, Haijian; Zhang, Chuanbao; He, Falin; Zhong, Kun; Yuan, Shuai; Wang, Zhiguo

    2018-05-01

    Internal quality control (IQC) is essential for precision evaluation and continuous quality improvement. This study aims to investigate the IQC status of blood gas analysis (BGA) in clinical laboratories of China from 2014 to 2017. IQC information on BGA (including pH, pCO2, pO2, Na+, K+, Ca2+, Cl-) was submitted by external quality assessment (EQA) participant laboratories and collected through the Clinet-EQA reporting system in March from 2014 to 2017. First, current CVs were compared among different years and measurement systems. Then, percentages of laboratories meeting five allowable imprecision specifications for each analyte were calculated. Finally, laboratories were divided into different groups based on control rules and frequency to compare their variation trend. The current CVs of BGA were significantly decreasing from 2014 to 2017. pH and pCO2 had the highest pass rates when compared with the minimum imprecision specification, whereas pO2, Na+, K+, Ca2+ and Cl- had the highest pass rates when the 1/3 TEa imprecision specification was applied. The pass rates of pH, pO2, Na+, K+, Ca2+ and Cl- were significantly increasing during the 4 years. The comparisons of current CVs among different measurement systems showed that the precision performance of different analytes among different measurement systems had no regular distribution from 2014 to 2017. The analysis of IQC practice indicated great progress and improvement among different years. The imprecision performance of BGA has improved from 2014 to 2017, but the status of imprecision performance in China remains unsatisfactory. Therefore, further investigation and continuous improvement measures should be taken.

  13. Moving standard deviation and moving sum of outliers as quality tools for monitoring analytical precision.

    PubMed

    Liu, Jiakai; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping

    2018-02-01

    An increase in analytical imprecision (expressed as CV_a) can introduce additional variability (i.e. noise) to the patient results, which poses a challenge to the optimal management of patients. Relatively little work has been done to address the need for continuous monitoring of analytical imprecision. Through numerical simulations, we describe the use of moving standard deviation (movSD) and a recently described moving sum of outlier (movSO) patient results as means for detecting increased analytical imprecision, and compare their performances against internal quality control (QC) and the average of normal (AoN) approaches. The power of detecting an increase in CV_a is suboptimal under routine internal QC procedures. The AoN technique almost always had the highest average number of patient results affected before error detection (ANPed), indicating that it had generally the worst capability for detecting an increased CV_a. On the other hand, the movSD and movSO approaches were able to detect an increased CV_a at significantly lower ANPed, particularly for measurands that displayed a relatively small ratio of biological variation to CV_a. CONCLUSION: The movSD and movSO approaches are effective in detecting an increase in CV_a for high-risk measurands with small biological variation. Their performance is relatively poor when the biological variation is large. However, the clinical risks of an increase in analytical imprecision are attenuated for these measurands, as an increased analytical imprecision will only add marginally to the total variation and is less likely to impact clinical care. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
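
    A bare-bones illustration of the moving standard deviation idea (window size, control limit and simulated data are placeholders, not the optimized movSD/movSO settings studied in the paper):

      # Minimal moving standard deviation (movSD) monitor on simulated patient results.
      import numpy as np

      rng = np.random.default_rng(1)
      stable  = rng.normal(5.0, 0.25, 300)   # in-control patient results
      drifted = rng.normal(5.0, 0.50, 100)   # analytical imprecision doubles
      results = np.concatenate([stable, drifted])

      window, limit = 50, 0.40               # hypothetical settings
      for i in range(window, len(results)):
          mov_sd = results[i - window:i].std(ddof=1)
          if mov_sd > limit:
              print(f"alarm at result {i}: movSD = {mov_sd:.2f}")
              break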

  14. A Dirichlet process model for classifying and forecasting epidemic curves

    PubMed Central

    2014-01-01

    Background A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. Methods The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997–2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). Results We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods’ performance was comparable. Conclusions Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial. PMID:24405642
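
    The Dirichlet process prior at the heart of such a model can be visualized with the stick-breaking construction: mixture weights are generated without fixing the number of components in advance. The concentration value and truncation level below are arbitrary illustrative choices, not settings from the study.

      # Stick-breaking sketch of a Dirichlet process prior over mixture weights.
      import numpy as np

      def stick_breaking(alpha, truncation, rng):
          betas = rng.beta(1.0, alpha, size=truncation)            # v_k ~ Beta(1, alpha)
          remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
          return betas * remaining                                 # w_k = v_k * prod_{j<k}(1 - v_j)

      rng = np.random.default_rng(0)
      weights = stick_breaking(alpha=2.0, truncation=20, rng=rng)
      print(np.round(weights, 3), "sum =", round(float(weights.sum()), 3))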

  15. Verification of examination procedures in clinical laboratory for imprecision, trueness and diagnostic accuracy according to ISO 15189:2012: a pragmatic approach.

    PubMed

    Antonelli, Giorgia; Padoan, Andrea; Aita, Ada; Sciacovelli, Laura; Plebani, Mario

    2017-08-28

    Background The International Standard ISO 15189 is recognized as a valuable guide in ensuring high quality clinical laboratory services and promoting the harmonization of accreditation programmes in laboratory medicine. Examination procedures must be verified in order to guarantee that their performance characteristics are congruent with the intended scope of the test. The aim of the present study was to propose a practice model for implementing procedures employed for the verification of validated examination procedures already used for at least 2 years in our laboratory, in agreement with the ISO 15189 requirement at Section 5.5.1.2. Methods In order to identify the operative procedure to be used, approved documents were identified, together with the definition of performance characteristics to be evaluated for the different methods; the examination procedures used in the laboratory were analyzed and checked against the performance specifications reported by manufacturers. Then, operative flow charts were identified to compare the laboratory performance characteristics with those declared by manufacturers. Results The choice of performance characteristics for verification was based on approved documents used as guidance and on the specific purpose of the tests undertaken, considering: imprecision and trueness for quantitative methods; diagnostic accuracy for qualitative methods; imprecision together with diagnostic accuracy for semi-quantitative methods. Conclusions The described approach, balancing technological possibilities, risks and costs and assuring compliance with the fundamental component of result accuracy, appears promising as an easily applicable and flexible procedure helping laboratories to comply with the ISO 15189 requirements.

  16. Selecting Statistical Quality Control Procedures for Limiting the Impact of Increases in Analytical Random Error on Patient Safety.

    PubMed

    Yago, Martín

    2017-05-01

    QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1_ks and X̄/χ² rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error with the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1_ks rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X̄/χ² rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.
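
    As a minimal illustration of the kind of rule discussed above (the target, SD, control results and rejection limit are invented, and the paper's nomograms for choosing k and s are not reproduced), a 1_ks rule simply rejects the run if any of the k control results falls more than s SDs from the target:

      # Toy 1_ks QC rule check (here k = 3 controls, s = 3 SDs, i.e. a 1_3s rule).
      import numpy as np

      def one_ks_reject(controls, target, sd, s):
          z = np.abs((np.asarray(controls) - target) / sd)
          return bool(np.any(z > s))

      target, sd = 100.0, 2.0                # assigned mean and stable-condition SD
      run = [101.3, 98.2, 106.9]             # control results for one QC event
      print("reject run:", one_ks_reject(run, target, sd, s=3))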

  17. An approach for estimating measurement uncertainty in medical laboratories using data from long-term quality control and external quality assessment schemes.

    PubMed

    Padoan, Andrea; Antonelli, Giorgia; Aita, Ada; Sciacovelli, Laura; Plebani, Mario

    2017-10-26

    The present study was prompted by the ISO 15189 requirements that medical laboratories should estimate measurement uncertainty (MU). The method used to estimate MU included: a) the identification of quantitative tests, b) the classification of tests in relation to their clinical purpose, and c) the identification of criteria to estimate the different MU components. Imprecision was estimated using long-term internal quality control (IQC) results of the year 2016, while external quality assessment scheme (EQAs) results obtained in the period 2015-2016 were used to estimate bias and bias uncertainty. A total of 263 measurement procedures (MPs) were analyzed. On the basis of test purpose, in 51 MPs imprecision only was used to estimate MU; among the remaining MPs, the bias component was not estimable for 22 MPs because EQAs results did not provide reliable statistics. For a total of 28 MPs, two or more MU values were calculated on the basis of analyte concentration levels. Overall, results showed that the uncertainty of bias is a minor factor contributing to MU, the bias component being the most relevant contributor for all the studied sample matrices. The model chosen for MU estimation allowed us to derive a standardized approach for bias calculation, with respect to the fitness-for-purpose of test results. Measurement uncertainty estimation could readily be implemented in medical laboratories as a useful tool in monitoring the analytical quality of test results, since it is calculated using a combination of the long-term IQC imprecision results and the bias derived from EQAs results.
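
    The sketch below combines a long-term IQC imprecision estimate with an EQA-based bias component in the common top-down (Nordtest-style) way; the numbers are invented and the exact formula adopted in the study may differ.

      # Top-down measurement uncertainty estimate from IQC and EQA data (illustrative values).
      import math

      cv_iqc = 3.0                    # long-term IQC imprecision, % (within-laboratory reproducibility)
      eqa_biases = [1.2, -0.8, 2.0]   # % deviations from EQA target values
      u_cref = 0.5                    # uncertainty of the EQA assigned values, %

      rms_bias = math.sqrt(sum(b**2 for b in eqa_biases) / len(eqa_biases))
      u_bias = math.sqrt(rms_bias**2 + u_cref**2)
      u_combined = math.sqrt(cv_iqc**2 + u_bias**2)
      print(f"u(bias) = {u_bias:.2f}%, combined u = {u_combined:.2f}%, expanded U (k=2) = {2*u_combined:.2f}%")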

  18. Increase in dance imprecision with decreasing foraging distance in the honey bee Apis mellifera L. is partly explained by physical constraints.

    PubMed

    Beekman, Madeleine; Doyen, Laurent; Oldroyd, Benjamin P

    2005-12-01

    Honey bee foragers communicate the direction and distance of both food sources and new nest sites to nest mates by means of a symbolic dance language. Interestingly, the precision with which dancers transfer directional information is negatively correlated with the distance to the advertised food source. The 'tuned-error' hypothesis suggests that colonies benefit from this imprecision as it spreads recruits out over a patch of constant size irrespective of the distance to the advertised site. An alternative to the tuned-error hypothesis is that dancers are physically incapable of dancing with great precision for nearby sources. Here we revisit the tuned-error hypothesis by studying the change in dance precision with increasing foraging distance over relatively short distances while controlling for environmental influences. We show that bees indeed increase their dance precision with increasing foraging distance. However, we also show that dances performed by swarm-scouts for a nearby (30 m) nest site, where there could be no benefit to imprecision, carry either no directional information or only limited directional information. This result suggests that imprecision in dance communication is caused primarily by physical constraints in the ability of dancers to turn around quickly enough when the advertised site is nearby.

  19. Learning topic models by belief propagation.

    PubMed

    Zeng, Jia; Cheung, William K; Liu, Jiming

    2013-05-01

    Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model for probabilistic topic modeling, which attracts worldwide interest and touches on many important applications in text mining, computer vision and computational biology. This paper represents the collapsed LDA as a factor graph, which enables the classic loopy belief propagation (BP) algorithm for approximate inference and parameter estimation. Although two commonly used approximate inference methods, such as variational Bayes (VB) and collapsed Gibbs sampling (GS), have gained great success in learning LDA, the proposed BP is competitive in both speed and accuracy, as validated by encouraging experimental results on four large-scale document datasets. Furthermore, the BP algorithm has the potential to become a generic scheme for learning variants of LDA-based topic models in the collapsed space. To this end, we show how to learn two typical variants of LDA-based topic models, such as author-topic models (ATM) and relational topic models (RTM), using BP based on the factor graph representations.

  20. Boundary conditions in Chebyshev and Legendre methods

    NASA Technical Reports Server (NTRS)

    Canuto, C.

    1984-01-01

    Two different ways of treating non-Dirichlet boundary conditions in Chebyshev and Legendre collocation methods are discussed for second order differential problems. An error analysis is provided. The effect of preconditioning the corresponding spectral operators by finite difference matrices is also investigated.

  1. Simple diffusion can support the pitchfork, the flip bifurcations, and the chaos

    NASA Astrophysics Data System (ADS)

    Meng, Lili; Li, Xinfu; Zhang, Guang

    2017-12-01

    In this paper, a discrete rational fraction population model with Dirichlet boundary conditions is considered. Using the discrete maximum principle and the sub- and super-solution method, necessary and sufficient conditions for the uniqueness and existence of positive steady state solutions are obtained. In addition, the dynamical behavior of a special two-patch metapopulation model is investigated by using the bifurcation method, the center manifold theory, bifurcation diagrams and the largest Lyapunov exponent. The results show that pitchfork and flip bifurcations, as well as chaos, occur. Clearly, these phenomena are caused by the simple diffusion. The theoretical analysis of chaos is very important; unfortunately, no such results are available in this direction. However, some open problems are given.

  2. Investigation occurrences of turing pattern in Schnakenberg and Gierer-Meinhardt equation

    NASA Astrophysics Data System (ADS)

    Nurahmi, Annisa Fitri; Putra, Prama Setia; Nuraini, Nuning

    2018-03-01

    Several types of animals display unusual, varied patterns on their skin, which are influenced by the animal's skin pigmentation system. In 1950, Alan Turing formulated a mathematical theory of morphogenesis in which the model can give rise to a spatial pattern, the so-called Turing pattern. This research discusses the identification of Turing models that can produce animal skin patterns. Investigations were conducted on two types of equations: Schnakenberg (1979) and Gierer-Meinhardt (1972). Parameters were explored to produce Turing patterns for both equations. The numerical simulations were performed using homogeneous Neumann and homogeneous Dirichlet boundary conditions. The investigation of the Schnakenberg equation yielded poison dart frog (Andinobates dorisswansonae) and ladybird (Coccinellidae septempunctata) patterns, while a fish skin pattern was produced by the Gierer-Meinhardt equation.
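
    For readers who want to reproduce the qualitative behaviour, the sketch below integrates the Schnakenberg system u_t = Du*Lap(u) + a - u + u^2*v, v_t = Dv*Lap(v) + b - u^2*v with an explicit finite-difference scheme and zero-flux (homogeneous Neumann) boundaries. The parameter values are typical textbook choices for Turing patterns, not necessarily those used in the paper.

      # Bare-bones Schnakenberg reaction-diffusion simulation (illustrative parameters).
      import numpy as np

      def laplacian(f, dx):
          fp = np.pad(f, 1, mode="edge")            # zero-flux (Neumann) boundaries
          return (fp[:-2, 1:-1] + fp[2:, 1:-1] +
                  fp[1:-1, :-2] + fp[1:-1, 2:] - 4.0 * f) / dx**2

      a, b, Du, Dv = 0.1, 0.9, 1.0, 10.0            # Turing-unstable textbook parameters
      N, L = 64, 50.0
      dx, dt, steps = L / (N - 1), 0.01, 20000      # longer runs sharpen the pattern

      rng = np.random.default_rng(0)
      u = (a + b) + 0.01 * rng.standard_normal((N, N))          # perturbed steady state
      v = b / (a + b) ** 2 + 0.01 * rng.standard_normal((N, N))

      for _ in range(steps):
          uvv = u * u * v
          u += dt * (Du * laplacian(u, dx) + a - u + uvv)
          v += dt * (Dv * laplacian(v, dx) + b - uvv)

      print("u range after relaxation:", round(float(u.min()), 2), "to", round(float(u.max()), 2))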

  3. Discontinuous Galerkin Methods for Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Collis, S. Scott

    2002-01-01

    A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.

  4. Two-scale homogenization to determine effective parameters of thin metallic-structured films

    PubMed Central

    Marigo, Jean-Jacques

    2016-01-01

    We present a homogenization method based on matched asymptotic expansion technique to derive effective transmission conditions of thin structured films. The method leads unambiguously to effective parameters of the interface which define jump conditions or boundary conditions at an equivalent zero thickness interface. The homogenized interface model is presented in the context of electromagnetic waves for metallic inclusions associated with Neumann or Dirichlet boundary conditions for transverse electric or transverse magnetic wave polarization. By comparison with full-wave simulations, the model is shown to be valid for thin interfaces up to thicknesses close to the wavelength. We also compare our effective conditions with the two-sided impedance conditions obtained in transmission line theory and to the so-called generalized sheet transition conditions. PMID:27616916

  5. Knowledge-based probabilistic representations of branching ratios in chemical networks: The case of dissociative recombinations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plessis, Sylvain; Carrasco, Nathalie; Pernot, Pascal

    Experimental data about branching ratios for the products of dissociative recombination of polyatomic ions are presently the unique information source available to modelers of natural or laboratory chemical plasmas. Yet, because of limitations in the measurement techniques, data for many ions are incomplete. In particular, the repartition of hydrogen atoms among the fragments of hydrocarbons ions is often not available. A consequence is that proper implementation of dissociative recombination processes in chemical models is difficult, and many models ignore invaluable data. We propose a novel probabilistic approach based on Dirichlet-type distributions, enabling modelers to fully account for the available information. As an application, we consider the production rate of radicals through dissociative recombination in an ionospheric chemistry model of Titan, the largest moon of Saturn. We show how the complete scheme of dissociative recombination products derived with our method dramatically affects these rates in comparison with the simplistic H-loss mechanism implemented by default in all recent models.

  6. Knowledge-based probabilistic representations of branching ratios in chemical networks: the case of dissociative recombinations.

    PubMed

    Plessis, Sylvain; Carrasco, Nathalie; Pernot, Pascal

    2010-10-07

    Experimental data about branching ratios for the products of dissociative recombination of polyatomic ions are presently the unique information source available to modelers of natural or laboratory chemical plasmas. Yet, because of limitations in the measurement techniques, data for many ions are incomplete. In particular, the repartition of hydrogen atoms among the fragments of hydrocarbons ions is often not available. A consequence is that proper implementation of dissociative recombination processes in chemical models is difficult, and many models ignore invaluable data. We propose a novel probabilistic approach based on Dirichlet-type distributions, enabling modelers to fully account for the available information. As an application, we consider the production rate of radicals through dissociative recombination in an ionospheric chemistry model of Titan, the largest moon of Saturn. We show how the complete scheme of dissociative recombination products derived with our method dramatically affects these rates in comparison with the simplistic H-loss mechanism implemented by default in all recent models.
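
    The core idea of the two records above is to treat incompletely measured branching ratios as a Dirichlet-distributed random vector and propagate samples into the chemistry. A toy version (with invented product channels, pseudo-counts and total rate constant, not the Titan model data) looks like this:

      # Dirichlet sampling of imprecisely known branching ratios, propagated to channel rates.
      import numpy as np

      rng = np.random.default_rng(0)
      channels = ["CH3 + H2", "CH2 + H + H2", "CH4 + H"]   # hypothetical product channels
      alpha = np.array([4.0, 2.0, 1.0])                    # pseudo-counts encoding partial knowledge
      k_total = 3.0e-7                                     # invented total recombination rate constant

      samples = rng.dirichlet(alpha, size=10000)           # plausible branching-ratio vectors
      k_channels = k_total * samples                       # per-channel rate constants
      mean = k_channels.mean(axis=0)
      lo, hi = np.percentile(k_channels, [2.5, 97.5], axis=0)
      for name, m, l, h in zip(channels, mean, lo, hi):
          print(f"{name:14s} k = {m:.2e}  [{l:.2e}, {h:.2e}]")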

  7. [Research on the Application of Fuzzy Logic to Systems Analysis and Control

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Research conducted with the support of NASA Grant NCC2-275 has focused mainly on the development of fuzzy logic and soft computing methodologies and their applications to systems analysis and control, with emphasis on problem areas which are of relevance to NASA's missions. One of the principal results of our research has been the development of a new methodology called Computing with Words (CW). Basically, in CW words drawn from a natural language are employed in place of numbers for computing and reasoning. There are two major imperatives for computing with words. First, computing with words is a necessity when the available information is too imprecise to justify the use of numbers, and second, when there is a tolerance for imprecision which can be exploited to achieve tractability, robustness, low solution cost, and better rapport with reality. Exploitation of the tolerance for imprecision is an issue of central importance in CW.

  8. Excluding joint probabilities from quantum theory

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Danageozian, Arshag

    2018-03-01

    Quantum theory does not provide a unique definition for the joint probability of two noncommuting observables, which is the next important question after the Born's probability for a single observable. Instead, various definitions were suggested, e.g., via quasiprobabilities or via hidden-variable theories. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are noncontextual and are consistent with all constraints expected from a quantum probability. We study two noncommuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies for any quantum state and is consistent with imprecise probabilities. This contrasts with theorems by Bell and Kochen-Specker that exclude joint probabilities for more than two noncommuting observables, in Hilbert space with dimension larger than two. If measurement contexts are included into the definition, joint probabilities are not excluded anymore, but they are still constrained by imprecise probabilities.

  9. Creative brains: designing in the real world†

    PubMed Central

    Goel, Vinod

    2014-01-01

    The process of designing artifacts is a creative activity. It is proposed that, at the cognitive level, one key to understanding design creativity is to understand the array of symbol systems designers utilize. These symbol systems range from being vague, imprecise, abstract, ambiguous, and indeterminate (like conceptual sketches), to being very precise, concrete, unambiguous, and determinate (like contract documents). The former types of symbol systems support associative processes that facilitate lateral (or divergent) transformations that broaden the problem space, while the latter types of symbol systems support inference processes facilitating vertical (or convergent) transformations that deepen the problem space. The process of artifact design requires the judicious application of both lateral and vertical transformations. This leads to a dual mechanism model of design problem-solving comprising an associative engine and an inference engine. It is further claimed that this dual mechanism model is supported by an interesting hemispheric dissociation in human prefrontal cortex. The associative engine and neural structures that support imprecise, ambiguous, abstract, indeterminate representations are lateralized in the right prefrontal cortex, while the inference engine and neural structures that support precise, unambiguous, determinate representations are lateralized in the left prefrontal cortex. At the brain level, successful design of artifacts requires a delicate balance between the two hemispheres of prefrontal cortex. PMID:24817846

  10. Demography of the Pacific walrus (Odobenus rosmarus divergens): 1974-2006

    USGS Publications Warehouse

    Taylor, Rebecca L.; Udevitz, Mark S.

    2015-01-01

    Global climate change may fundamentally alter population dynamics of many species for which baseline population parameter estimates are imprecise or lacking. Historically, the Pacific walrus is thought to have been limited by harvest, but it may become limited by global warming-induced reductions in sea ice. Loss of sea ice, on which walruses rest between foraging bouts, may reduce access to food, thus lowering vital rates. Rigorous walrus survival rate estimates do not exist, and other population parameter estimates are out of date or have well-documented bias and imprecision. To provide useful population parameter estimates we developed a Bayesian, hidden process demographic model of walrus population dynamics from 1974 through 2006 that combined annual age-specific harvest estimates with five population size estimates, six standing age structure estimates, and two reproductive rate estimates. Median density independent natural survival was high for juveniles (0.97) and adults (0.99), and annual density dependent vital rates rose from 0.06 to 0.11 for reproduction, 0.31 to 0.59 for survival of neonatal calves, and 0.39 to 0.85 for survival of older calves, concomitant with a population decline. This integrated population model provides a baseline for estimating changing population dynamics resulting from changing harvests or sea ice.

  11. A numerical technique for linear elliptic partial differential equations in polygonal domains.

    PubMed

    Hashemzadeh, P; Fokas, A S; Smitheman, S A

    2015-03-08

    Integral representations for the solution of linear elliptic partial differential equations (PDEs) can be obtained using Green's theorem. However, these representations involve both the Dirichlet and the Neumann values on the boundary, and for a well-posed boundary-value problem (BVP) one of these functions is unknown. A new transform method for solving BVPs for linear and integrable nonlinear PDEs, usually referred to as the unified transform (or the Fokas transform), was introduced by the second author in the late Nineties. For linear elliptic PDEs, this method can be considered as the analogue of the Green's function approach, but now it is formulated in the complex Fourier plane instead of the physical plane. It employs two global relations, also formulated in the Fourier plane, which couple the Dirichlet and the Neumann boundary values. These relations can be used to characterize the unknown boundary values in terms of the given boundary data, yielding an elegant approach for determining the Dirichlet-to-Neumann map. The numerical implementation of the unified transform can be considered as the counterpart in the Fourier plane of the well-known boundary integral method, which is formulated in the physical plane. For this implementation, one must choose (i) a suitable basis for expanding the unknown functions and (ii) an appropriate set of complex values, which we refer to as collocation points, at which to evaluate the global relations. Here, by employing a variety of examples, we present simple guidelines on how the above choices can be made. Furthermore, we provide concrete rules for choosing the collocation points so that the condition number of the matrix of the associated linear system remains low.

  12. A multiscale model for reinforced concrete with macroscopic variation of reinforcement slip

    NASA Astrophysics Data System (ADS)

    Sciegaj, Adam; Larsson, Fredrik; Lundgren, Karin; Nilenius, Filip; Runesson, Kenneth

    2018-06-01

    A single-scale model for reinforced concrete, comprising the plain concrete continuum, reinforcement bars and the bond between them, is used as a basis for deriving a two-scale model. The large-scale problem, representing the "effective" reinforced concrete solid, is enriched by an effective reinforcement slip variable. The subscale problem on a Representative Volume Element (RVE) is defined by Dirichlet boundary conditions. The response of the RVEs of different sizes was investigated by means of pull-out tests. The resulting two-scale formulation was used in an FE^2 analysis of a deep beam. Load-deflection relations, crack widths, and strain fields were compared to those obtained from a single-scale analysis. Incorporating the independent macroscopic reinforcement slip variable resulted in a more pronounced localisation of the effective strain field. This produced a more accurate estimation of the crack widths than the two-scale formulation neglecting the effective reinforcement slip variable.

  13. Bayesian Ensemble Trees (BET) for Clustering and Prediction in Heterogeneous Data

    PubMed Central

    Duan, Leo L.; Clancy, John P.; Szczesniak, Rhonda D.

    2016-01-01

    We propose a novel “tree-averaging” model that utilizes the ensemble of classification and regression trees (CART). Each constituent tree is estimated with a subset of similar data. We treat this grouping of subsets as Bayesian Ensemble Trees (BET) and model them as a Dirichlet process. We show that BET determines the optimal number of trees by adapting to the data heterogeneity. Compared with the other ensemble methods, BET requires much fewer trees and shows equivalent prediction accuracy using weighted averaging. Moreover, each tree in BET provides variable selection criterion and interpretation for each subset. We developed an efficient estimating procedure with improved estimation strategies in both CART and mixture models. We demonstrate these advantages of BET with simulations and illustrate the approach with a real-world data example involving regression of lung function measurements obtained from patients with cystic fibrosis. Supplemental materials are available online. PMID:27524872

  14. DUTIR at TREC 2009: Chemical IR Track

    DTIC Science & Technology

    2009-11-01

    We set the Dirichlet prior empirically at 1,500 as recommended in [2]. For example, Topic 15 “Betaines for peripheral arterial disease” is ... converted into the following Indri query: #combine(betaines for peripheral arterial disease), which produces results rank-equivalent to a simple query
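
    The "Dirichlet prior" of 1,500 mentioned here is the smoothing parameter mu of Dirichlet-smoothed query-likelihood retrieval. As a generic reminder of that scoring rule (toy term counts, not the TREC chemical collection), each query term contributes log((c(w,d) + mu*p(w|C)) / (|d| + mu)) to a document's score:

      # Generic Dirichlet-smoothed query-likelihood scoring with mu = 1500 (toy counts).
      import math

      mu = 1500.0
      collection = {"betaines": 50, "peripheral": 400, "arterial": 900, "disease": 5000, "the": 90000}
      coll_size = sum(collection.values())

      def score(query_terms, doc_counts):
          doc_len = sum(doc_counts.values())
          s = 0.0
          for w in query_terms:
              p_coll = collection.get(w, 0.5) / coll_size          # crude background fallback
              p_w = (doc_counts.get(w, 0) + mu * p_coll) / (doc_len + mu)
              s += math.log(p_w)
          return s

      doc = {"betaines": 3, "arterial": 2, "disease": 4, "the": 20}
      print(score(["betaines", "peripheral", "arterial", "disease"], doc))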

  15. Modifications to holographic entanglement entropy in warped CFT

    NASA Astrophysics Data System (ADS)

    Song, Wei; Wen, Qiang; Xu, Jianfei

    2017-02-01

    In [1] it was observed that asymptotic boundary conditions play an important role in the study of holographic entanglement beyond AdS/CFT. In particular, the Ryu-Takayanagi proposal must be modified for warped AdS3 (WAdS3) with Dirichlet boundary conditions. In this paper, we consider AdS3 and WAdS3 with Dirichlet-Neumann boundary conditions. The conjectured holographic duals are warped conformal field theories (WCFTs), featuring a Virasoro-Kac-Moody algebra. We provide a holographic calculation of the entanglement entropy and Rényi entropy using AdS3/WCFT and WAdS3/WCFT dualities. Our bulk results are consistent with the WCFT results derived by Castro-Hofman-Iqbal using the Rindler method. Comparing with [1], we explicitly show that the holographic entanglement entropy is indeed affected by boundary conditions. Both results differ from the Ryu-Takayanagi proposal, indicating new relations between spacetime geometry and quantum entanglement for holographic dualities beyond AdS/CFT.

  16. Repeated Red-Black ordering

    NASA Astrophysics Data System (ADS)

    Ciarlet, P.

    1994-09-01

    Hereafter, we describe and analyze, from both a theoretical and a numerical point of view, an iterative method for efficiently solving symmetric elliptic problems with possibly discontinuous coefficients. In the following, we use the Preconditioned Conjugate Gradient method to solve the symmetric positive definite linear systems which arise from the finite element discretization of the problems. We focus our interest on sparse and efficient preconditioners. In order to define the preconditioners, we perform two steps: first we reorder the unknowns and then we carry out a (modified) incomplete factorization of the original matrix. We study numerically and theoretically two preconditioners, the second preconditioner corresponding to the one investigated by Brand and Heinemann [2]. We prove convergence results about the Poisson equation with either Dirichlet or periodic boundary conditions. For a mesh size h, Brand proved that the condition number of the preconditioned system is bounded by O(h^(-1/2)) for Dirichlet boundary conditions. By slightly modifying the preconditioning process, we prove that the condition number is bounded by O(h^(-1/3)).

  17. Poisson Coordinates.

    PubMed

    Li, Xian-Ying; Hu, Shi-Min

    2013-02-01

    Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.

  18. Positivity and Almost Positivity of Biharmonic Green's Functions under Dirichlet Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Grunau, Hans-Christoph; Robert, Frédéric

    2010-03-01

    In general, for higher order elliptic equations and boundary value problems like the biharmonic equation and the linear clamped plate boundary value problem, neither a maximum principle nor a comparison principle or, equivalently, a positivity preserving property is available. The problem is rather involved since the clamped boundary conditions prevent the boundary value problem from being reasonably written as a system of second order boundary value problems. It is shown that, on the other hand, for bounded smooth domains Ω ⊂ ℝⁿ, the negative part of the corresponding Green’s function is “small” when compared with its singular positive part, provided n ≥ 3. Moreover, the biharmonic Green’s function in balls B ⊂ ℝⁿ under Dirichlet (that is, clamped) boundary conditions is known explicitly and is positive. It has been known for some time that positivity is preserved under small regular perturbations of the domain, if n = 2. In the present paper, such a stability result is proved for n ≥ 3.

  19. Exact harmonic solutions to Guyer-Krumhansl-type equation and application to heat transport in thin films

    NASA Astrophysics Data System (ADS)

    Zhukovsky, K.; Oskolkov, D.

    2018-03-01

    A system of hyperbolic-type inhomogeneous differential equations (DE) is considered for non-Fourier heat transfer in thin films. Exact harmonic solutions to Guyer-Krumhansl-type heat equation and to the system of inhomogeneous DE are obtained in Cauchy- and Dirichlet-type conditions. The contribution of the ballistic-type heat transport, of the Cattaneo heat waves and of the Fourier heat diffusion is discussed and compared with each other in various conditions. The application of the study to the ballistic heat transport in thin films is performed. Rapid evolution of the ballistic quasi-temperature component in low-dimensional systems is elucidated and compared with slow evolution of its diffusive counterpart. The effect of the ballistic quasi-temperature component on the evolution of the complete quasi-temperature is explored. In this context, the influence of the Knudsen number and of Cauchy- and Dirichlet-type conditions on the evolution of the temperature distribution is explored. The comparative analysis of the obtained solutions is performed.

  20. Evaluation of the technical performance of novel holotranscobalamin (holoTC) assays in a multicenter European demonstration project.

    PubMed

    Morkbak, Anne L; Heimdal, Randi M; Emmens, Kathleen; Molloy, Anne; Hvas, Anne-Mette; Schneede, Joern; Clarke, Robert; Scott, John M; Ueland, Per M; Nexo, Ebba

    2005-01-01

    A commercially available holotranscobalamin (holo-TC) radioimmunoassay (RIA) (Axis-Shield, Dundee, Scotland) was evaluated in four laboratories and compared with a holoTC ELISA run in one laboratory. The performance of the holoTC RIA assay was comparable in three of the four participating laboratories. The results from these three laboratories, involving at least 20 initial runs of "low", "medium" and "high" serum-based controls (mean holoTC concentrations 34, 60 and 110 pmol/L, respectively) yielded an intra-laboratory imprecision of 6-10%. No systematic inter-laboratory deviations were observed on runs involving 72 patient samples (holoTC concentration range 10-160 pmol/L). A fourth laboratory demonstrated higher assay imprecision for control samples and systematic deviation of results for the patient samples. Measurement of holoTC by ELISA showed an imprecision of 4-5%, and slightly higher mean values for the controls (mean holoTC concentrations 40, 70 and 114 pmol/L, respectively). Comparable results were obtained for the patient samples. The long-term intra-laboratory imprecision was 12% for the holoTC RIA and 6% for the ELISA. In conclusion, it would be prudent to check the calibration and precision prior to starting to use these holoTC assays in research or clinical practice. The results obtained using the holoTC RIA were similar to those obtained using the holoTC ELISA assay.

  1. Evaluation of the Olympus AU-510 analyser.

    PubMed

    Farré, C; Velasco, J; Ramón, F

    1991-01-01

    The selective multitest Olympus AU-510 analyser was evaluated according to the recommendations of the Comision de Instrumentacion de la Sociedad Española de Quimica Clinica and the European Committee for Clinical Laboratory Standards. The evaluation was carried out in two stages: an examination of the analytical units and then an evaluation in routine work conditions. The operational characteristics of the system were also studied. The first stage included a photometric study: depending on the absorbance, the inaccuracy varies between +0.5% and -0.6% at 405 nm and from -5.6% to 10.6% at 340 nm; the imprecision ranges between -0.22% and 0.56% at 405 nm and between 0.09% and 2.74% at 340 nm. Linearity was acceptable, apart from a very low absorbance for NADH at 340 nm, and the imprecision of the serum sample pipetter was satisfactory. Twelve serum analytes were studied under routine conditions: glucose, urea, urate, cholesterol, triglycerides, total bilirubin, creatinine, phosphate, iron, aspartate aminotransferase, alanine aminotransferase and gamma-glutamyl transferase. The within-run imprecision (CV%) ranged from 0.67% for phosphate to 2.89% for iron, and the between-run imprecision from 0.97% for total bilirubin to 7.06% for iron. There was no carry-over in a study of the serum sample pipetter. Carry-over studies with the reagent and sample pipetters show some cross-contamination in the iron assay.

  2. Theoretical aspect of suitable spatial boundary condition specified for adjoint model on limited area

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Wu, Rongsheng

    2001-12-01

    Theoretical argumentation for the so-called suitable spatial condition is conducted with the aid of a homotopy framework to demonstrate that the proposed boundary condition guarantees that over-specification of the boundary conditions resulting from an adjoint model on a limited area is no longer an issue, while preserving well-posedness and the optimal character of the boundary setting. The ill-posedness of over-specified spatial boundary conditions is, in a sense, inevitable for an adjoint model, since data assimilation processes have to adapt to prescribed observations that are typically over-specified at the spatial boundaries of the modeling domain. From the viewpoint of pragmatic implementation, the theoretical framework of our proposed condition for spatial boundaries can be reduced to a hybrid formulation of a nudging filter, a radiation condition taking account of ambient forcing, and a Dirichlet-type boundary condition compatible with the observations prescribed in the data assimilation procedure. All of these treatments are, no doubt, very familiar to mesoscale modelers.

  3. A systematic review of the asymmetric inheritance of cellular organelles in eukaryotes: A critique of basic science validity and imprecision

    PubMed Central

    Collins, Anne; Ross, Janine

    2017-01-01

    We performed a systematic review to identify all original publications describing the asymmetric inheritance of cellular organelles in normal animal eukaryotic cells and to critique the validity and imprecision of the evidence. Searches were performed in Embase, MEDLINE and Pubmed up to November 2015. Screening of titles, abstracts and full papers was performed by two independent reviewers. Data extraction and validity were performed by one reviewer and checked by a second reviewer. Study quality was assessed using the SYRCLE risk of bias tool for animal studies, and by developing validity tools for the experimental model, organelle markers and imprecision. A narrative data synthesis was performed. We identified 31 studies (34 publications) of the asymmetric inheritance of organelles after mitotic or meiotic division. Studies for the asymmetric inheritance of centrosomes (n = 9), endosomes (n = 6), P granules (n = 4), the midbody (n = 3), mitochondria (n = 3), proteasomes (n = 2), spectrosomes (n = 2), cilia (n = 2) and endoplasmic reticulum (n = 2) were identified. Asymmetry was defined and quantified by variable methods. Assessment of the statistical reliability of the results indicated only two studies (7%) were judged to have low concern, the majority of studies (77%) were 'unclear' and five (16%) were judged to have 'high concerns'; the main reasons were low technical repeats (<10). Assessment of model validity indicated that the majority of studies (61%) were judged to be valid, ten studies (32%) were unclear and two studies (7%) were judged to have 'high concerns'; both described 'stem cells' without providing experimental evidence to confirm this (pluripotency and self-renewal). Assessment of marker validity indicated that no studies had low concern, most studies were unclear (96.5%), indicating there were insufficient details to judge if the markers were appropriate. One study had high concern for marker validity due to the contradictory results of two markers for the same organelle. For most studies the validity and imprecision of results could not be confirmed. In particular, data were limited due to a lack of reporting of interassay variability, sample size calculations, controls and functional validation of organelle markers. An evaluation of 16 systematic reviews containing cell assays found that only 50% reported adherence to PRISMA or ARRIVE reporting guidelines and 38% reported a formal risk of bias assessment. 44% of the reviews did not consider how relevant or valid the models were to the research question. 75% of the reviews did not consider how valid the markers were. 69% of reviews did not consider the impact of the statistical reliability of the results. Future systematic reviews in basic or preclinical research should ensure the rigorous reporting of the statistical reliability of the results in addition to the validity of the methods. Increased awareness of the importance of reporting guidelines and validation tools is needed for the scientific community. PMID:28562636

  4. Mechanisms for the target patterns formation in a stochastic bistable excitable medium

    NASA Astrophysics Data System (ADS)

    Verisokin, Andrey Yu.; Verveyko, Darya V.; Postnov, Dmitry E.

    2018-04-01

    We study the features of formation and evolution of the spatiotemporal chaotic regime generated by autonomous pacemakers in excitable deterministic and stochastic bistable active media, using the example of the FitzHugh-Nagumo biological neuron model under discrete medium conditions. The following possible mechanisms for the formation of autonomous pacemakers have been studied: 1) a temporal external force applied to a small region of the medium, 2) the geometry of the solution region (the medium contains regions with Dirichlet or Neumann boundaries). In our work we explore the conditions for the emergence of pacemakers inducing target patterns in a stochastic bistable excitable system and propose an algorithm for their analysis.
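
    A minimal sketch of the kind of setup described, a discrete 1-D FitzHugh-Nagumo medium with Dirichlet boundaries and an external force applied to a small region acting as a pacemaker (parameter values are illustrative assumptions, not those used by the authors):

    ```python
    import numpy as np

    # Illustrative FitzHugh-Nagumo parameters (assumed, not taken from the paper)
    N, dx, dt, steps = 200, 1.0, 0.02, 20000
    D, eps, beta, gamma = 1.0, 0.08, 0.7, 0.8

    v = np.full(N, -1.2)            # fast (activator) variable, started near the rest state
    w = np.full(N, -0.6)            # slow (recovery) variable
    I_ext = np.zeros(N)
    I_ext[95:105] = 0.5             # external force on a small region -> autonomous pacemaker

    for _ in range(steps):
        lap = np.zeros(N)
        lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2   # discrete Laplacian
        dv = v - v**3 / 3.0 - w + I_ext + D * lap
        dw = eps * (v + beta - gamma * w)
        v += dt * dv
        w += dt * dw
        v[0] = v[-1] = -1.2         # Dirichlet boundaries pinned at the rest value
    ```

    Replacing the pinned boundary values with zero-flux copies of the neighbouring nodes would turn the same loop into the Neumann variant mentioned in the abstract.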

  5. Impulsive synchronization of stochastic reaction-diffusion neural networks with mixed time delays.

    PubMed

    Sheng, Yin; Zeng, Zhigang

    2018-07-01

    This paper discusses impulsive synchronization of stochastic reaction-diffusion neural networks with Dirichlet boundary conditions and hybrid time delays. By virtue of inequality techniques, theories of stochastic analysis, linear matrix inequalities, and the contradiction method, sufficient criteria are proposed to ensure exponential synchronization of the addressed stochastic reaction-diffusion neural networks with mixed time delays via a designed impulsive controller. Compared with some recent studies, the neural network models herein are more general, some restrictions are relaxed, and the obtained conditions enhance and generalize some published ones. Finally, two numerical simulations are performed to substantiate the validity and merits of the developed theoretical analysis. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Hamiltonian models for the propagation of irrotational surface gravity waves over a variable bottom

    NASA Astrophysics Data System (ADS)

    Compelli, A.; Ivanov, R.; Todorov, M.

    2017-12-01

    A single incompressible, inviscid, irrotational fluid medium bounded by a free surface and varying bottom is considered. The Hamiltonian of the system is expressed in terms of the so-called Dirichlet-Neumann operators. The equations for the surface waves are presented in Hamiltonian form. Specific scaling of the variables is selected which leads to approximations of Boussinesq and Korteweg-de Vries (KdV) types, taking into account the effect of the slowly varying bottom. The arising KdV equation with variable coefficients is studied numerically when the initial condition is in the form of the one-soliton solution for the initial depth. This article is part of the theme issue 'Nonlinear water waves'.
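
    For orientation, the constant-coefficient KdV equation and the one-soliton profile used as the initial condition take the standard form below; the variable-coefficient equation studied in the paper modifies the coefficients through the slowly varying depth:

    ```latex
    \[
    u_t + 6\,u\,u_x + u_{xxx} = 0,
    \qquad
    u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}\,\bigl(x - c\,t - x_0\bigr)\right).
    \]
    ```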

  7. A ligand prediction tool based on modeling and reasoning with imprecise probabilistic knowledge.

    PubMed

    Liu, Weiru; Yue, Anbu; Timson, David J

    2010-04-01

    Ligand prediction has been driven by a fundamental desire to understand more about how biomolecules recognize their ligands and by the commercial imperative to develop new drugs. Most of the currently available software systems are very complex and time-consuming to use. Therefore, developing simple and efficient tools to perform initial screening of interesting compounds is an appealing idea. In this paper, we introduce our tool for very rapid screening for likely ligands (either substrates or inhibitors) based on reasoning with imprecise probabilistic knowledge elicited from past experiments. Probabilistic knowledge is input to the system via a user-friendly interface showing a base compound structure. A prediction of whether a particular compound is a substrate is queried against the acquired probabilistic knowledge base, and a probability is returned as an indication of the prediction. This tool will be particularly useful in situations where a number of similar compounds have been screened experimentally, but information is not available for all possible members of that group of compounds. We use two case studies to demonstrate how to use the tool. 2009 Elsevier Ireland Ltd. All rights reserved.

  8. Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data

    NASA Astrophysics Data System (ADS)

    Glüsenkamp, Thorsten

    2018-06-01

    Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function FD, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average Rn with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
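
    As a simple illustration of the marginalization idea (the special case of k unweighted Monte Carlo events with a flat prior on the true MC expectation, not the generalized weighted expressions derived in the paper), integrating the Poisson likelihood over the unknown expectation gives a negative-binomial form:

    ```latex
    \[
    P(n \mid k) \;=\; \int_0^\infty \frac{e^{-w\lambda}\,(w\lambda)^n}{n!}\;
    \frac{\lambda^{k}\,e^{-\lambda}}{k!}\,\mathrm{d}\lambda
    \;=\; \binom{n+k}{n}\,\frac{w^{\,n}}{(1+w)^{\,n+k+1}},
    \]
    ```

    where w is the ratio of data livetime to simulated livetime; as k grows with k·w held fixed, this reduces to the ordinary Poisson likelihood, in line with the infinite-statistics limit mentioned above.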

  9. Distinguishing Antimicrobial Models with Different Resistance Mechanisms via Population Pharmacodynamic Modeling

    PubMed Central

    Jacobs, Matthieu; Grégoire, Nicolas; Couet, William; Bulitta, Jurgen B.

    2016-01-01

    Semi-mechanistic pharmacokinetic-pharmacodynamic (PK-PD) modeling is increasingly used for antimicrobial drug development and optimization of dosage regimens, but systematic simulation-estimation studies to distinguish between competing PD models are lacking. This study compared the ability of static and dynamic in vitro infection models to distinguish between models with different resistance mechanisms and support accurate and precise parameter estimation. Monte Carlo simulations (MCS) were performed for models with one susceptible bacterial population without (M1) or with a resting stage (M2), a one population model with adaptive resistance (M5), models with pre-existing susceptible and resistant populations without (M3) or with (M4) inter-conversion, and a model with two pre-existing populations with adaptive resistance (M6). For each model, 200 datasets of the total bacterial population were simulated over 24h using static antibiotic concentrations (256-fold concentration range) or over 48h under dynamic conditions (dosing every 12h; elimination half-life: 1h). Twelve-hundred random datasets (each containing 20 curves for static or four curves for dynamic conditions) were generated by bootstrapping. Each dataset was estimated by all six models via population PD modeling to compare bias and precision. For M1 and M3, most parameter estimates were unbiased (<10%) and had good imprecision (<30%). However, parameters for adaptive resistance and inter-conversion for M2, M4, M5 and M6 had poor bias and large imprecision under static and dynamic conditions. For datasets that only contained viable counts of the total population, common statistical criteria and diagnostic plots did not support sound identification of the true resistance mechanism. Therefore, it seems advisable to quantify resistant bacteria and characterize their MICs and resistance mechanisms to support extended simulations and translate from in vitro experiments to animal infection models and ultimately patients. PMID:26967893

  10. Segmental analysis of amphetamines in hair using a sensitive UHPLC-MS/MS method.

    PubMed

    Jakobsson, Gerd; Kronstrand, Robert

    2014-06-01

    A sensitive and robust ultra high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine and 3,4-methylenedioxy methamphetamine in hair samples. Segmented hair (10 mg) was incubated in 2M sodium hydroxide (80°C, 10 min) before liquid-liquid extraction with isooctane followed by centrifugation and evaporation of the organic phase to dryness. The residue was reconstituted in methanol:formate buffer pH 3 (20:80). The total run time was 4 min and after optimization of UHPLC-MS/MS-parameters validation included selectivity, matrix effects, recovery, process efficiency, calibration model and range, lower limit of quantification, precision and bias. The calibration curve ranged from 0.02 to 12.5 ng/mg, and the recovery was between 62 and 83%. During validation the bias was less than ±7% and the imprecision was less than 5% for all analytes. In routine analysis, fortified control samples demonstrated an imprecision <13% and control samples made from authentic hair demonstrated an imprecision <26%. The method was applied to samples from a controlled study of amphetamine intake as well as forensic hair samples previously analyzed with an ultra high performance liquid chromatography time of flight mass spectrometry (UHPLC-TOF-MS) screening method. The proposed method was suitable for quantification of these drugs in forensic cases including violent crimes, autopsy cases, drug testing and re-granting of driving licences. This study also demonstrated that if hair samples are divided into several short segments, the time point for intake of a small dose of amphetamine can be estimated, which might be useful when drug facilitated crimes are investigated. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Bayesian hierarchical functional data analysis via contaminated informative priors.

    PubMed

    Scarpa, Bruno; Dunson, David B

    2009-09-01

    A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.

  12. Dimension Reduction for the Landau-de Gennes Model in Planar Nematic Thin Films

    NASA Astrophysics Data System (ADS)

    Golovaty, Dmitry; Montero, José Alberto; Sternberg, Peter

    2015-12-01

    We use the method of Γ -convergence to study the behavior of the Landau-de Gennes model for a nematic liquid crystalline film in the limit of vanishing thickness. In this asymptotic regime, surface energy plays a greater role, and we take particular care in understanding its influence on the structure of the minimizers of the derived two-dimensional energy. We assume general weak anchoring conditions on the top and the bottom surfaces of the film and the strong Dirichlet boundary conditions on the lateral boundary of the film. The constants in the weak anchoring conditions are chosen so as to enforce that a surface-energy-minimizing nematic Q-tensor has the normal to the film as one of its eigenvectors. We establish a general convergence result and then discuss the limiting problem in several parameter regimes.

  13. Mathematical and computational aspects of nonuniform frictional slip modeling

    NASA Astrophysics Data System (ADS)

    Gorbatikh, Larissa

    2004-07-01

    A mechanics-based model of non-uniform frictional sliding is studied from the mathematical/computational analysis point of view. This problem is of key importance for a number of applications (particularly geomechanical ones), where material interfaces undergo partial frictional sliding under compression and shear. We show that the problem reduces to Dirichlet's problem for monotonic loading and to Riemann's problem for cyclic loading. The problem may look like a traditional crack interaction problem; however, it is complicated by the fact that the locations of the n sliding intervals are not known. They are to be determined from the condition on the stress intensity factors, K_II = 0 at the ends of the sliding zones. Computationally, this reduces to solving a system of 2n coupled non-linear algebraic equations involving singular integrals with unknown limits of integration.

  14. Bounded Partial Sums?

    ERIC Educational Resources Information Center

    Brilleslyper, Michael A.; Wolverton, Robert H.

    2008-01-01

    In this article we consider an example suitable for investigation in many mid and upper level undergraduate mathematics courses. Fourier series provide an excellent example of the differences between uniform and non-uniform convergence. We use Dirichlet's test to investigate the convergence of the Fourier series for a simple periodic saw tooth…
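
    Dirichlet's test, the tool the article applies to the sawtooth series, can be stated briefly (standard statement, included for reference):

    ```latex
    \[
    \text{If the partial sums satisfy } \Bigl|\sum_{n=1}^{N} a_n\Bigr| \le M \text{ for all } N
    \ \text{and}\ b_n \searrow 0,\ \text{then}\ \sum_{n=1}^{\infty} a_n b_n \text{ converges.}
    \]
    ```

    For the sawtooth Fourier series, the sum of sin(nx)/n, one takes a_n = sin(nx), whose partial sums are bounded for every x that is not a multiple of 2π, and b_n = 1/n.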

  15. UTOPIAN: user-driven topic modeling based on interactive nonnegative matrix factorization.

    PubMed

    Choo, Jaegul; Lee, Changhyun; Reddy, Chandan K; Park, Haesun

    2013-12-01

    Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complexity of the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpora such as the InfoVis/VAST paper data set and product review data sets.
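
    A minimal sketch of the plain NMF building block using scikit-learn (this is standard, non-interactive NMF topic extraction, not the semi-supervised UTOPIAN system itself; load_corpus is a hypothetical stand-in for reading the documents):

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF

    docs = load_corpus()                 # hypothetical helper: any list of raw document strings

    vec = TfidfVectorizer(max_df=0.95, min_df=2, stop_words="english")
    X = vec.fit_transform(docs)          # nonnegative document-term matrix

    nmf = NMF(n_components=10, init="nndsvd", random_state=0)  # X ~ W @ H with W, H >= 0
    W = nmf.fit_transform(X)             # document-topic weights
    H = nmf.components_                  # topic-term weights

    terms = vec.get_feature_names_out()
    for k, topic in enumerate(H):        # show the ten strongest terms of each topic
        top = topic.argsort()[-10:][::-1]
        print(f"topic {k}:", ", ".join(terms[i] for i in top))
    ```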

  16. Imprecise (fuzzy) information in geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardossy, A.; Bogardi, I.; Kelly, W.E.

    1988-05-01

    A methodology based on fuzzy set theory for the utilization of imprecise data in geostatistics is presented. A common problem preventing a broader use of geostatistics has been the insufficient amount of accurate measurement data. In certain cases, additional but uncertain (soft) information is available and can be encoded as subjective probabilities, and then the soft kriging method can be applied (Journel, 1986). In other cases, a fuzzy encoding of soft information may be more realistic and simplify the numerical calculations. Imprecise (fuzzy) spatial information on the possible variogram is integrated into a single variogram which is used in a fuzzy kriging procedure. The overall uncertainty of prediction is represented by the estimation variance and the calculated membership function for each kriged point. The methodology is applied to the permeability prediction of a soil liner for hazardous waste containment. The available number of hard measurement data (20) was not enough for a classical geostatistical analysis. An additional 20 soft data made it possible to prepare kriged contour maps using the fuzzy geostatistical procedure.

  17. Quasi-equilibrium theory for the distribution of rare alleles in a subdivided population: justification and implications.

    PubMed

    Burr, T L

    2000-05-01

    This paper examines a quasi-equilibrium theory of rare alleles for subdivided populations that follow an island-model version of the Wright-Fisher model of evolution. All mutations are assumed to create new alleles. We present four results: (1) conditions for the theory to apply are formally established using properties of the moments of the binomial distribution; (2) approximations currently in the literature can be replaced with exact results that are in better agreement with our simulations; (3) a modified maximum likelihood estimator of migration rate exhibits the same good performance on island-model data or on data simulated from the multinomial mixed with the Dirichlet distribution, and (4) a connection between the rare-allele method and the Ewens Sampling Formula for the infinite-allele mutation model is made. This introduces a new and simpler proof for the expected number of alleles implied by the Ewens Sampling Formula. Copyright 2000 Academic Press.
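
    The expected number of distinct alleles referred to in result (4) has a familiar closed form under the Ewens Sampling Formula (standard expression, quoted for context rather than from the paper):

    ```latex
    \[
    \mathbb{E}[K_n] \;=\; \sum_{i=0}^{n-1} \frac{\theta}{\theta + i},
    \]
    ```

    where n is the sample size and θ is the scaled mutation rate of the infinite-allele model.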

  18. Infinite hidden conditional random fields for human behavior analysis.

    PubMed

    Bousmalis, Konstantinos; Zafeiriou, Stefanos; Morency, Louis-Philippe; Pantic, Maja

    2013-01-01

    Hidden conditional random fields (HCRFs) are discriminative latent variable models that have been shown to successfully learn the hidden structure of a given classification problem (provided an appropriate validation of the number of hidden states). In this brief, we present the infinite HCRF (iHCRF), which is a nonparametric model based on hierarchical Dirichlet processes and is capable of automatically learning the optimal number of hidden states for a classification task. We show how we learn the model hyperparameters with an effective Markov-chain Monte Carlo sampling technique, and we explain the process that underlies our iHCRF model with the Restaurant Franchise Rating Agencies analogy. We show that the iHCRF is able to converge to a correct number of represented hidden states, and outperforms the best finite HCRFs, chosen via cross-validation, for the difficult tasks of recognizing instances of agreement, disagreement, and pain. Moreover, the iHCRF manages to achieve this performance in significantly less total training, validation, and testing time.

  19. Using Bayesian Nonparametric Hidden Semi-Markov Models to Disentangle Affect Processes during Marital Interaction

    PubMed Central

    Griffin, William A.; Li, Xun

    2016-01-01

    Sequential affect dynamics generated during the interaction of intimate dyads, such as married couples, are associated with a cascade of effects—some good and some bad—on each partner, close family members, and other social contacts. Although the effects are well documented, the probabilistic structures associated with micro-social processes connected to the varied outcomes remain enigmatic. Using extant data, we developed a method of classifying and subsequently generating couple dynamics using a Hierarchical Dirichlet Process Hidden semi-Markov Model (HDP-HSMM). Our findings indicate that several key aspects of existing models of marital interaction are inadequate: affect state emissions and their durations, along with the expected variability differences between distressed and nondistressed couples, are present but highly nuanced; and most surprisingly, heterogeneity among highly satisfied couples necessitates that they be divided into subgroups. We review how this unsupervised learning technique generates plausible dyadic sequences that are sensitive to relationship quality and provide a natural mechanism for computational models of behavioral and affective micro-social processes. PMID:27187319

  20. Robin Gravity

    NASA Astrophysics Data System (ADS)

    Krishnan, Chethan; Maheshwari, Shubham; Bala Subramanian, P. N.

    2017-08-01

    We write down a Robin boundary term for general relativity. The construction relies on the Neumann result of arXiv:1605.01603 in an essential way. This is unlike in mechanics and (polynomial) field theory, where two formulations of the Robin problem exist: one with Dirichlet as the natural limiting case, and another with Neumann.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manjunath, Naren; Samajdar, Rhine; Jain, Sudhir R., E-mail: srjain@barc.gov.in

    Recently, the nodal domain counts of planar, integrable billiards with Dirichlet boundary conditions were shown to satisfy certain difference equations in Samajdar and Jain (2014). The exact solutions of these equations give the number of domains explicitly. For complete generality, we demonstrate this novel formulation for three additional separable systems and thus extend the statement to all integrable billiards.

  2. A weighted anisotropic variant of the Caffarelli-Kohn-Nirenberg inequality and applications

    NASA Astrophysics Data System (ADS)

    Bahrouni, Anouar; Rădulescu, Vicenţiu D.; Repovš, Dušan D.

    2018-04-01

    We present a weighted version of the Caffarelli-Kohn-Nirenberg inequality in the framework of variable exponents. The combination of this inequality with a variant of the fountain theorem, yields the existence of infinitely many solutions for a class of non-homogeneous problems with Dirichlet boundary condition.

  3. The use of MACSYMA for solving elliptic boundary value problems

    NASA Technical Reports Server (NTRS)

    Thejll, Peter; Gilbert, Robert P.

    1990-01-01

    A boundary method is presented for the solution of elliptic boundary value problems. An approach based on the use of complete systems of solutions is emphasized. The discussion is limited to the Dirichlet problem, even though the present method can possibly be adapted to treat other boundary value problems.

  4. Solution of a Nonlinear Heat Conduction Equation for a Curvilinear Region with Dirichlet Conditions by the Fast-Expansion Method

    NASA Astrophysics Data System (ADS)

    Chernyshov, A. D.

    2018-05-01

    The analytical solution of the nonlinear heat conduction problem for a curvilinear region is obtained with the use of the fast-expansion method together with the method of extension of boundaries and pointwise technique of computing Fourier coefficients.

  5. Pig Data and Bayesian Inference on Multinomial Probabilities

    ERIC Educational Resources Information Center

    Kern, John C.

    2006-01-01

    Bayesian inference on multinomial probabilities is conducted based on data collected from the game Pass the Pigs[R]. Prior information on these probabilities is readily available from the instruction manual, and is easily incorporated in a Dirichlet prior. Posterior analysis of the scoring probabilities quantifies the discrepancy between empirical…
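
    A minimal sketch of the conjugate update behind such an analysis (the outcome labels, prior pseudo-counts, and observed rolls below are illustrative assumptions, not values taken from the game's instruction manual):

    ```python
    import numpy as np

    outcomes = ["side", "razorback", "trotter", "snouter", "leaning jowler"]   # hypothetical labels
    alpha_prior = np.array([35.0, 30.0, 20.0, 10.0, 5.0])   # assumed Dirichlet prior pseudo-counts
    counts      = np.array([60,   45,   25,    8,    2])    # hypothetical observed rolls

    alpha_post = alpha_prior + counts            # Dirichlet-multinomial conjugacy: add the counts
    post_mean  = alpha_post / alpha_post.sum()   # posterior mean of each scoring probability

    draws = np.random.default_rng(0).dirichlet(alpha_post, size=10000)   # posterior samples
    ci = np.percentile(draws, [2.5, 97.5], axis=0)                       # 95% credible intervals
    for name, m, lo, hi in zip(outcomes, post_mean, ci[0], ci[1]):
        print(f"{name:15s} mean={m:.3f}  95% CI=({lo:.3f}, {hi:.3f})")
    ```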

  6. Comment Data Mining to Estimate Student Performance Considering Consecutive Lessons

    ERIC Educational Resources Information Center

    Sorour, Shaymaa E.; Goda, Kazumasa; Mine, Tsunenori

    2017-01-01

    The purpose of this study is to examine different formats of comment data to predict student performance. Having students write comment data after every lesson can reflect students' learning attitudes, tendencies and learning activities involved with the lesson. In this research, Latent Dirichlet Allocation (LDA) and Probabilistic Latent Semantic…

  7. A fuzzy Bayesian approach to flood frequency estimation with imprecise historical information

    PubMed Central

    Kiss, Andrea; Viglione, Alberto; Viertl, Reinhard; Blöschl, Günter

    2016-01-01

    This paper presents a novel framework that links imprecision (through a fuzzy approach) and stochastic uncertainty (through a Bayesian approach) in estimating flood probabilities from historical flood information and systematic flood discharge data. The method exploits the linguistic characteristics of historical source material to construct membership functions, which may be wider or narrower, depending on the vagueness of the statements. The membership functions are either included in the prior distribution or the likelihood function to obtain a fuzzy version of the flood frequency curve. The viability of the approach is demonstrated by three case studies that differ in terms of their hydromorphological conditions (from an Alpine river with bedrock profile to a flat lowland river with extensive flood plains) and historical source material (including narratives, town and county meeting protocols, flood marks and damage accounts). The case studies are presented in order of increasing fuzziness (the Rhine at Basel, Switzerland; the Werra at Meiningen, Germany; and the Tisza at Szeged, Hungary). Incorporating imprecise historical information is found to reduce the range between the 5% and 95% Bayesian credibility bounds of the 100 year floods by 45% and 61% for the Rhine and Werra case studies, respectively. The strengths and limitations of the framework are discussed relative to alternative (non-fuzzy) methods. The fuzzy Bayesian inference framework provides a flexible methodology that fits the imprecise nature of linguistic information on historical floods as available in historical written documentation. PMID:27840456

  8. A fuzzy Bayesian approach to flood frequency estimation with imprecise historical information

    NASA Astrophysics Data System (ADS)

    Salinas, José Luis; Kiss, Andrea; Viglione, Alberto; Viertl, Reinhard; Blöschl, Günter

    2016-09-01

    This paper presents a novel framework that links imprecision (through a fuzzy approach) and stochastic uncertainty (through a Bayesian approach) in estimating flood probabilities from historical flood information and systematic flood discharge data. The method exploits the linguistic characteristics of historical source material to construct membership functions, which may be wider or narrower, depending on the vagueness of the statements. The membership functions are either included in the prior distribution or the likelihood function to obtain a fuzzy version of the flood frequency curve. The viability of the approach is demonstrated by three case studies that differ in terms of their hydromorphological conditions (from an Alpine river with bedrock profile to a flat lowland river with extensive flood plains) and historical source material (including narratives, town and county meeting protocols, flood marks and damage accounts). The case studies are presented in order of increasing fuzziness (the Rhine at Basel, Switzerland; the Werra at Meiningen, Germany; and the Tisza at Szeged, Hungary). Incorporating imprecise historical information is found to reduce the range between the 5% and 95% Bayesian credibility bounds of the 100 year floods by 45% and 61% for the Rhine and Werra case studies, respectively. The strengths and limitations of the framework are discussed relative to alternative (non-fuzzy) methods. The fuzzy Bayesian inference framework provides a flexible methodology that fits the imprecise nature of linguistic information on historical floods as available in historical written documentation.

  9. Characterization and Validation of the LT-SYS Copper Assay on a Roche Cobas 8000 c502 Analyzer.

    PubMed

    Kraus, F Bernhard; Mischereit, Marlies; Eller, Christoph; Ludwig-Kraus, Beatrice

    2017-02-01

    Validation of the LT-SYS quantitative in vitro copper assay on a Roche Cobas 8000 c502 analyzer and comparison with a BIOMED assay on a Roche Cobas Mira analyzer. Imprecision and bias were quantified at different concentration levels (serum and plasma) over a 20-day period. Linearity was assessed covering a range from 4.08 µmol/L to 33.8 µmol/L. Limit of blank (LoB) and limit of detection (LoD) were established based on a total of 120 blank and low-level samples. The method comparison was based on 58 plasma samples. Within-run imprecision ranged from 0.7% to 1.2% and within-laboratory imprecision from 1.4% to 3.3%. Relative bias for the 2 serum pools with known target values was less than 2.5%. The assay did not deviate from linearity over the tested measuring range. LoB and LoD were 0.12 µmol/L and 0.23 µmol/L, respectively. The method comparison revealed an average deviation of 11.5% (2.016 µmol/L), and the linear regression fit was y = 1.464 + 0.795x. The LT-SYS copper assay characterized in this study showed a fully acceptable performance with good degrees of imprecision and bias, no deviation from linearity in the relevant measuring range, and very low LoB and LoD. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. A fuzzy Bayesian approach to flood frequency estimation with imprecise historical information.

    PubMed

    Salinas, José Luis; Kiss, Andrea; Viglione, Alberto; Viertl, Reinhard; Blöschl, Günter

    2016-09-01

    This paper presents a novel framework that links imprecision (through a fuzzy approach) and stochastic uncertainty (through a Bayesian approach) in estimating flood probabilities from historical flood information and systematic flood discharge data. The method exploits the linguistic characteristics of historical source material to construct membership functions, which may be wider or narrower, depending on the vagueness of the statements. The membership functions are either included in the prior distribution or the likelihood function to obtain a fuzzy version of the flood frequency curve. The viability of the approach is demonstrated by three case studies that differ in terms of their hydromorphological conditions (from an Alpine river with bedrock profile to a flat lowland river with extensive flood plains) and historical source material (including narratives, town and county meeting protocols, flood marks and damage accounts). The case studies are presented in order of increasing fuzziness (the Rhine at Basel, Switzerland; the Werra at Meiningen, Germany; and the Tisza at Szeged, Hungary). Incorporating imprecise historical information is found to reduce the range between the 5% and 95% Bayesian credibility bounds of the 100 year floods by 45% and 61% for the Rhine and Werra case studies, respectively. The strengths and limitations of the framework are discussed relative to alternative (non-fuzzy) methods. The fuzzy Bayesian inference framework provides a flexible methodology that fits the imprecise nature of linguistic information on historical floods as available in historical written documentation.
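
    A minimal sketch of the core ingredient, a triangular membership function encoding an imprecise historical statement about a flood peak, which can then down-weight the likelihood of candidate discharges (all numbers and the Gumbel parameters are illustrative assumptions, not the authors' case-study values):

    ```python
    import numpy as np
    from scipy.stats import gumbel_r

    def triangular_membership(x, lo, mode, hi):
        """Membership in [0, 1] for a vague statement such as 'the peak was roughly `mode` m3/s'."""
        x = np.asarray(x, dtype=float)
        left = np.clip((x - lo) / (mode - lo), 0.0, 1.0)
        right = np.clip((hi - x) / (hi - mode), 0.0, 1.0)
        return np.minimum(left, right)

    # Hypothetical account: "clearly above 800, most likely near 1200, certainly below 2000" (m3/s)
    q = np.linspace(500.0, 2500.0, 401)
    mu = triangular_membership(q, lo=800.0, mode=1200.0, hi=2000.0)

    # One simple use: a fuzzy-weighted (pseudo-)likelihood contribution of a Gumbel flood model
    loc, scale = 900.0, 250.0                     # assumed Gumbel parameters, for illustration only
    weighted = mu * gumbel_r.pdf(q, loc=loc, scale=scale)
    print("pseudo-likelihood contribution:", weighted.sum() * (q[1] - q[0]))
    ```

    In the paper's framework the membership function enters either the prior or the likelihood; the sketch above only shows how a vague verbal statement becomes a weighting over discharge values.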

  11. 3D Numerical Simulation of the Wave and Current Loads on a Truss Foundation of the Offshore Wind Turbine During the Extreme Typhoon Event

    NASA Astrophysics Data System (ADS)

    Lin, C. W.; Wu, T. R.; Chuang, M. H.; Tsai, Y. L.

    2015-12-01

    The wind in the Taiwan Strait is strong and stable, which offers an opportunity to build offshore wind farms. However, frequent typhoons and strong ocean currents require more attention to the wave force and local scour around the foundations of the turbine piles. In this paper, we introduce an in-house, multi-phase CFD model, Splash3D, for solving the flow field with breaking waves, strong turbulence, and scour phenomena. Splash3D solves the Navier-Stokes equations with Large-Eddy Simulation (LES) for the fluid domain, and uses the volume of fluid (VOF) method with piecewise linear interface reconstruction (PLIC) to describe the breaking free surface. The waves were generated inside the computational domain by an internal wave maker with a mass-source function. This function is designed to adequately simulate the wave conditions under observed extreme events based on the JONSWAP spectrum and the dispersion relationship. A Dirichlet velocity boundary condition is assigned at the upstream boundary to induce the ocean current. At the downstream face, the sponge-layer method combined with a pressure Dirichlet boundary condition is specified for dissipating waves and conducting the current out of the domain. Numerical pressure gauges are uniformly set on the structure surface to obtain the force distribution on the structure. As for the local scour around the foundation, we developed a Discontinuous Bi-viscous Model (DBM) for the development of the scour hole. Model validations are presented as well. The force distribution under the observed irregular wave conditions was extracted by the irregular-surface force extraction (ISFE) method, which provides a fast and elegant way to integrate the force acting on the surface of an irregular structure. From the simulation results, we found that the total force is mainly induced by the impinging waves, and that the force from the ocean current is about two orders of magnitude smaller than the wave force. We also found that the dynamic pressure, the wave height, and the projection area of the structure are the main factors determining the total force. Detailed results and discussion are presented as well.

  12. The Analytical Quality of Point-of-Care Testing in the ‘QAAMS’ Model for Diabetes Management in Australian Aboriginal Medical Services

    PubMed Central

    Shephard, Mark DS; Gill, Janice P

    2006-01-01

    Type 2 diabetes mellitus and its major complication, renal disease, represent one of the most significant contemporary health problems facing Australia's Indigenous Aboriginal People. The Australian Government-funded Quality Assurance for Aboriginal Medical Services Program (QAAMS) provides a framework by which on-site point-of-care testing (POCT) for haemoglobin A1c (HbA1c) and now urine albumin:creatinine ratio (ACR) can be performed to facilitate better diabetes management in Aboriginal medical services. This paper provides updated evidence for the analytical quality of POCT in the QAAMS Program. The median imprecision for point-of-care (POC) HbA1c and urine ACR quality assurance (QA) testing has continually improved over the past six and a half years, stabilising at approximately 3% for both analytes and proving analytically sound in Aboriginal hands. For HbA1c, there was no statistical difference between the imprecision achieved by QAAMS and laboratory users of the Bayer DCA 2000 since the QAAMS program commenced (QAAMS CV 3.6% ± 0.52, laboratory CV 3.4% ± 0.42; p = 0.21, paired t-test). The Western Pacific Island of Tonga recently joined the QAAMS HbA1c Program, indicating that the QAAMS model can also be applied internationally in other settings where the prevalence of diabetes is high. PMID:17581642

  13. Statechart-based design controllers for FPGA partial reconfiguration

    NASA Astrophysics Data System (ADS)

    Łabiak, Grzegorz; Wegrzyn, Marek; Rosado Muñoz, Alfredo

    2015-09-01

    Statechart diagrams and UML techniques can be a vital part of early conceptual modeling. At present there is not much support in hardware design methodologies for the reconfiguration features of reprogrammable devices. The authors try to bridge the gap between an imprecise UML model and a formal HDL description. The key concept in the authors' proposal is to describe the behavior of the digital controller by statechart diagrams and to map some parts of the behavior into reprogrammable logic by means of a group of states which forms a sequential automaton. The whole process is illustrated by an example with experimental results.

  14. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Hamiltonian models for the propagation of irrotational surface gravity waves over a variable bottom.

    PubMed

    Compelli, A; Ivanov, R; Todorov, M

    2018-01-28

    A single incompressible, inviscid, irrotational fluid medium bounded by a free surface and varying bottom is considered. The Hamiltonian of the system is expressed in terms of the so-called Dirichlet-Neumann operators. The equations for the surface waves are presented in Hamiltonian form. Specific scaling of the variables is selected which leads to approximations of Boussinesq and Korteweg-de Vries (KdV) types, taking into account the effect of the slowly varying bottom. The arising KdV equation with variable coefficients is studied numerically when the initial condition is in the form of the one-soliton solution for the initial depth. This article is part of the theme issue 'Nonlinear water waves'. © 2017 The Author(s).

  16. Detection of dominant flow and abnormal events in surveillance video

    NASA Astrophysics Data System (ADS)

    Kwak, Sooyeong; Byun, Hyeran

    2011-02-01

    We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method, a kind of feature-based approach, so it does not detect moving objects individually. The proposed algorithm identifies dominant flow without individual object tracking using a latent Dirichlet allocation model in crowded environments. It can also automatically detect and localize an abnormally moving object in real-life video. Performance tests are carried out on several real-life databases, and their results show that the proposed algorithm can efficiently detect abnormally moving objects in real time. The proposed algorithm can be applied to any situation in which abnormal directions or abnormal speeds are detected regardless of direction.

  17. Topic Model for Graph Mining.

    PubMed

    Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Luo, Xiangfeng

    2015-12-01

    Graph mining has been a popular research area because of its numerous application scenarios. Many unstructured and structured data can be represented as graphs, such as documents, chemical molecular structures, and images. However, an issue with current research on graphs is that it cannot adequately discover the topics hidden in graph-structured data, which can be beneficial for both the unsupervised learning and supervised learning of the graphs. Although topic models have proved to be very successful in discovering latent topics, the standard topic models cannot be directly applied to graph-structured data due to the "bag-of-words" assumption. In this paper, an innovative graph topic model (GTM) is proposed to address this issue, which uses Bernoulli distributions to model the edges between nodes in a graph. It can, therefore, make the edges in a graph contribute to latent topic discovery and further improve the accuracy of the supervised and unsupervised learning of graphs. The experimental results on two different types of graph datasets show that the proposed GTM outperforms latent Dirichlet allocation on classification by using the unveiled topics of these two models to represent graphs.

  18. Infrared length scale and extrapolations for the no-core shell model

    DOE PAGES

    Wendt, K. A.; Forssén, C.; Papenbrock, T.; ...

    2015-06-03

    In this paper, we precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the A-body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of A nucleons in the NCSM space to that of A nucleons in a 3(A-1)-dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for 6Li. We apply our result and perform accurate IR extrapolations for bound states of 4He, 6He, 6Li, and 7Li. Finally, we also attempt to extrapolate NCSM results for 10B and 16O with bare interactions from chiral effective field theory over tens of MeV.

  19. Nonparametric Bayesian models for a spatial covariance.

    PubMed

    Reich, Brian J; Fuentes, Montserrat

    2012-01-01

    A crucial step in the analysis of spatial data is to estimate the spatial correlation function that determines the relationship between a spatial process at two locations. The standard approach to selecting the appropriate correlation function is to use prior knowledge or exploratory analysis, such as a variogram analysis, to select the correct parametric correlation function. Rather than selecting a particular parametric correlation function, we treat the covariance function as an unknown function to be estimated from the data. We propose a flexible prior for the correlation function to provide robustness to the choice of correlation function. We specify the prior for the correlation function using spectral methods and the Dirichlet process prior, which is a common prior for an unknown distribution function. Our model does not require Gaussian data or spatial locations on a regular grid. The approach is demonstrated using a simulation study as well as an analysis of California air pollution data.
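
    A minimal sketch of a truncated stick-breaking draw from a Dirichlet process, the "prior for an unknown distribution function" mentioned above (a generic construction, not the authors' spectral specification of the spatial covariance):

    ```python
    import numpy as np

    def stick_breaking_draw(alpha, base_sampler, truncation=100, rng=None):
        """Draw a discrete random distribution G ~ DP(alpha, G0), truncated at `truncation` atoms."""
        rng = np.random.default_rng(rng)
        betas = rng.beta(1.0, alpha, size=truncation)            # stick-breaking proportions
        remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
        weights = betas * remaining                               # atom weights (remainder is negligible)
        atoms = base_sampler(truncation, rng)                     # atom locations drawn from G0
        return atoms, weights

    # Example: base measure G0 = N(0, 1), concentration alpha = 2
    atoms, weights = stick_breaking_draw(2.0, lambda n, rng: rng.normal(0.0, 1.0, size=n), rng=0)
    print("largest atoms:", sorted(zip(weights, atoms), reverse=True)[:5])
    ```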

  20. Recommender system based on scarce information mining.

    PubMed

    Lu, Wei; Chung, Fu-Lai; Lai, Kunfeng; Zhang, Liang

    2017-09-01

    Guessing what a user may like is now a typical interface for video recommendation. Nowadays, highly popular user-generated content sites provide various sources of information, such as tags, for recommendation tasks. Motivated by a real-world online video recommendation problem, this work targets the long-tail phenomenon of user behavior and the sparsity of item features. A personalized compound recommendation framework for online video recommendation called the Dirichlet mixture probit model for information scarcity (DPIS) is hence proposed. Assuming that each clicking sample is generated from a representation of user preferences, DPIS models the sample-level topic proportions as a multinomial item vector, and utilizes topical clustering on the user part for recommendation through a probit classifier. As demonstrated by the real-world application, the proposed DPIS achieves better performance than traditional methods in accuracy, perplexity, and diversity of coverage. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. The Riemann-Hilbert approach to the Helmholtz equation in a quarter-plane: Neumann, Robin and Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Its, Alexander; Its, Elizabeth

    2018-04-01

    We revisit the Helmholtz equation in a quarter-plane in the framework of the Riemann-Hilbert approach to linear boundary value problems suggested in late 1990s by A. Fokas. We show the role of the Sommerfeld radiation condition in Fokas' scheme.

  2. Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers

    ERIC Educational Resources Information Center

    Anaya, Leticia H.

    2011-01-01

    In the Information Age, a proliferation of unstructured text electronic documents exists. Processing these documents by humans is a daunting task as humans have limited cognitive abilities for processing large volumes of documents that can often be extremely lengthy. To address this problem, text data computer algorithms are being developed.…

  3. Vectorized multigrid Poisson solver for the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Barkai, D.; Brandt, M. A.

    1984-01-01

    The full multigrid (FMG) method is applied to the two-dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
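
    A minimal sketch of the basic building block only, weighted-Jacobi relaxation for the 2-D Poisson problem with homogeneous Dirichlet boundaries on the unit square (the paper's subject is the fully vectorized multigrid cycle built on top of such smoothers, which is not reproduced here):

    ```python
    import numpy as np

    def jacobi_poisson(f, h, sweeps=5000, omega=0.8):
        """Relax -(u_xx + u_yy) = f on a uniform grid, with u = 0 on the boundary."""
        u = np.zeros_like(f)
        for _ in range(sweeps):
            u_new = u.copy()
            u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                        u[1:-1, 2:] + u[1:-1, :-2] + h * h * f[1:-1, 1:-1])
            u = (1.0 - omega) * u + omega * u_new   # weighted (damped) Jacobi step
        return u

    n = 65                                  # grid points per side, including the boundary
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    f = 2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)   # exact solution sin(pi x) sin(pi y)
    u = jacobi_poisson(f, h)
    print("max error:", np.abs(u - np.sin(np.pi * X) * np.sin(np.pi * Y)).max())
    ```

    Plain relaxation damps smooth error modes very slowly; the coarse-grid correction in the FMG cycle is what removes them cheaply, and that is the part the paper vectorizes for the CYBER 205.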

  4. ANALYTICAL SOLUTIONS OF THE ATMOSPHERIC DIFFUSION EQUATION WITH MULTIPLE SOURCES AND HEIGHT-DEPENDENT WIND SPEED AND EDDY DIFFUSIVITIES. (R825689C072)

    EPA Science Inventory

    Abstract

    Three-dimensional analytical solutions of the atmospheric diffusion equation with multiple sources and height-dependent wind speed and eddy diffusivities are derived in a systematic fashion. For homogeneous Neumann (total reflection), Dirichlet (total adsorpti...

  5. ANALYTICAL SOLUTIONS OF THE ATMOSPHERIC DIFFUSION EQUATION WITH MULTIPLE SOURCES AND HEIGHT-DEPENDENT WIND SPEED AND EDDY DIFFUSIVITIES. (R825689C048)

    EPA Science Inventory

    Abstract

    Three-dimensional analytical solutions of the atmospheric diffusion equation with multiple sources and height-dependent wind speed and eddy diffusivities are derived in a systematic fashion. For homogeneous Neumann (total reflection), Dirichlet (total adsorpti...

  6. Nonparametric Bayesian predictive distributions for future order statistics

    Treesearch

    Richard A. Johnson; James W. Evans; David W. Green

    1999-01-01

    We derive the predictive distribution for a specified order statistic, determined from a future random sample, under a Dirichlet process prior. Two variants of the approach are treated and some limiting cases studied. A practical application to monitoring the strength of lumber is discussed including choices of prior expectation and comparisons made to a Bayesian...

  7. Inverse scattering for an exterior Dirichlet problem

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.

    1981-01-01

    Scattering due to a metallic cylinder which is in the field of a wire carrying a periodic current is considered. The location and shape of the cylinder are obtained from a far field measurement between the wire and the cylinder. The same analysis is applicable in acoustics in the situation where the cylinder is a soft-wall body and the wire is a line source. The associated direct problem in this situation is an exterior Dirichlet problem for the Helmholtz equation in two dimensions. An improved low frequency estimate for the solution of this problem using integral equation methods is presented. The far field measurements are related to the solutions of boundary integral equations in the low frequency situation. These solutions are expressed in terms of a mapping function which maps the exterior of the unknown curve onto the exterior of a unit disk. The coefficients of the Laurent expansion of the conformal transformation are related to the far field coefficients. The first far field coefficient leads to the calculation of the distance between the source and the cylinder.

  8. Interactions Between Mathematics and Physics: The History of the Concept of Function—Teaching with and About Nature of Mathematics

    NASA Astrophysics Data System (ADS)

    Kjeldsen, Tinne Hoff; Lützen, Jesper

    2015-07-01

    In this paper, we discuss the history of the concept of function and emphasize in particular how problems in physics have led to essential changes in its definition and application in mathematical practices. Euler defined a function as an analytic expression, whereas Dirichlet defined it as a variable that depends in an arbitrary manner on another variable. The change was required when mathematicians discovered that analytic expressions were not sufficient to represent physical phenomena such as the vibration of a string (Euler) and heat conduction (Fourier and Dirichlet). The introduction of generalized functions or distributions is shown to stem partly from the development of new theories of physics such as electrical engineering and quantum mechanics that led to the use of improper functions such as the delta function that demanded a proper foundation. We argue that the development of student understanding of mathematics and its nature is enhanced by embedding mathematical concepts and theories, within an explicit-reflective framework, into a rich historical context emphasizing its interaction with other disciplines such as physics. Students recognize and become engaged with meta-discursive rules governing mathematics. Mathematics teachers can thereby teach inquiry in mathematics as it occurs in the sciences, as mathematical practice aimed at obtaining new mathematical knowledge. We illustrate such a historical teaching and learning of mathematics within an explicit and reflective framework by two examples of student-directed, problem-oriented project work following the Roskilde Model, in which the connection to physics is explicit and provides a learning space where the nature of mathematics and mathematical practices are linked to natural science.

  9. Efficient fuzzy Bayesian inference algorithms for incorporating expert knowledge in parameter estimation

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad

    2016-05-01

    Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision inherently embedded in expert-provided information. In order to solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', which is the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows uncertainty and imprecision to be modeled distinctly, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes extremely large and computationally infeasible. In this paper, a novel approach to accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert elicitation methodology is developed and applied to the real-world test case in order to provide a road map for the use of fuzzy Bayesian inference in groundwater modeling applications.
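
    The screening idea can be illustrated with an ordinary (non-fuzzy) Bayesian toy problem: a cheap surrogate of an expensive forward model is used to discard parameter values with negligible approximate posterior, so the expensive model is evaluated only where it matters. The sketch below is a minimal illustration of that strategy, assuming a hypothetical one-parameter model and Gaussian noise; it is not the authors' fuzzy Bayesian algorithm.

    ```python
    # Sketch: surrogate-screened Bayesian inference (generic illustration, not the
    # paper's fuzzy Bayesian algorithm). A cheap polynomial surrogate of an
    # expensive forward model is used to discard parameter values whose approximate
    # posterior is negligible, so the expensive model runs only where it matters.
    import numpy as np

    def expensive_model(theta):
        # Stand-in for a costly groundwater simulation (hypothetical).
        return np.sin(theta) + 0.1 * theta**2

    y_obs, sigma = 0.8, 0.1                      # observed data and noise level
    theta_grid = np.linspace(-3.0, 3.0, 401)     # candidate parameter values

    # Fit a cheap surrogate from a handful of expensive runs.
    design = np.linspace(-3.0, 3.0, 9)
    surrogate = np.poly1d(np.polyfit(design, [expensive_model(t) for t in design], deg=5))

    def log_post(pred):
        return -0.5 * ((y_obs - pred) / sigma) ** 2   # flat prior assumed

    approx = np.array([log_post(surrogate(t)) for t in theta_grid])
    keep = approx > approx.max() - 10.0               # screening threshold (~e^-10)

    exact = np.full_like(theta_grid, -np.inf)
    exact[keep] = [log_post(expensive_model(t)) for t in theta_grid[keep]]
    print(f"expensive runs: {keep.sum()} of {theta_grid.size}")
    print("screened MAP estimate:", theta_grid[np.argmax(exact)])
    ```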

  10. Differential Topic Models.

    PubMed

    Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan

    2015-02-01

    In applications we may want to compare different document collections: they could share content but also have different and unique aspects in particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities. For this we use hierarchical Bayesian nonparametric models. Moreover, we found it was important to properly model power-law phenomena in topic-word distributions, and thus we used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge, such as vocabulary variations in different collections, into the model. To deal with the non-conjugacy between the model prior and the likelihood in the TPYP, we propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC-based algorithm achieves a dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections.
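
    As background to this modeling choice, the sketch below contrasts the standard two-parameter (Pitman-Yor) Chinese restaurant process with the Dirichlet process (discount zero): a positive discount yields the power-law growth in the number of clusters that motivates using Pitman-Yor for topic-word distributions. This is the generic construction only, not the transformed Pitman-Yor process or the paper's sampler.

    ```python
    # Sketch: the two-parameter (Pitman-Yor) Chinese restaurant process. With
    # discount d > 0 the number of clusters grows like a power of n, whereas
    # d = 0 recovers the Dirichlet process with much slower (logarithmic) growth.
    import numpy as np

    def pitman_yor_crp(n, discount, strength, rng):
        counts = []                               # customers per table (cluster sizes)
        for _ in range(n):
            k = len(counts)
            probs = np.array([c - discount for c in counts] + [strength + discount * k])
            probs /= probs.sum()
            choice = rng.choice(k + 1, p=probs)
            if choice == k:
                counts.append(1)                  # open a new table
            else:
                counts[choice] += 1
        return counts

    rng = np.random.default_rng(1)
    print("Pitman-Yor (d=0.5) clusters:", len(pitman_yor_crp(10000, 0.5, 1.0, rng)))
    print("Dirichlet process (d=0)    :", len(pitman_yor_crp(10000, 0.0, 1.0, rng)))
    ```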

  11. PFA toolbox: a MATLAB tool for Metabolic Flux Analysis.

    PubMed

    Morales, Yeimy; Bosque, Gabriel; Vehí, Josep; Picó, Jesús; Llaneras, Francisco

    2016-07-11

    Metabolic Flux Analysis (MFA) is a methodology that has been successfully applied to estimate metabolic fluxes in living cells. However, traditional frameworks based on this approach have some limitations, particularly when measurements are scarce and imprecise, which is very common in industrial environments. The PFA Toolbox can be used to address those scenarios. Here we present the PFA (Possibilistic Flux Analysis) Toolbox for MATLAB, which simplifies the use of Interval and Possibilistic Metabolic Flux Analysis. The main features of the PFA Toolbox are the following: (a) it provides reliable MFA estimations in scenarios where only a few fluxes can be measured or those available are imprecise; (b) it provides tools to easily plot the results as interval estimates or flux distributions; (c) it is composed of simple functions that MATLAB users can apply in flexible ways; (d) it includes a Graphical User Interface (GUI), which provides a visual representation of the measurements and their uncertainty; and (e) it can use stoichiometric models in COBRA format. In addition, the PFA Toolbox includes a User's Guide with a thorough description of its functions and several examples. The PFA Toolbox for MATLAB is a freely available Toolbox that is able to perform Interval and Possibilistic MFA estimations.
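
    The interval flavour of MFA can be sketched outside MATLAB as well: the tightest bounds on each flux consistent with steady-state stoichiometry and interval-valued measurements are obtained by solving a pair of linear programs per flux. The toy network and bounds below are hypothetical, and this Python sketch mirrors only the idea, not the PFA Toolbox implementation.

    ```python
    # Sketch of interval MFA: for each flux, minimize and maximize it subject to
    # the steady-state constraint S v = 0 and interval measurements on some fluxes.
    # Toy network: uptake v0 (-> A), conversion v1 (A -> B), excretion v2 (B ->).
    import numpy as np
    from scipy.optimize import linprog

    S = np.array([[1.0, -1.0,  0.0],    # mass balance on A
                  [0.0,  1.0, -1.0]])   # mass balance on B
    bounds = [(0.0, 10.0)] * 3          # non-negative fluxes, generic upper bound
    bounds[0] = (4.0, 6.0)              # imprecise measurement: v0 in [4, 6]

    for j in range(3):
        lo = linprog(c=np.eye(3)[j],  A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        hi = linprog(c=-np.eye(3)[j], A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print(f"v{j}: [{lo.fun:.2f}, {-hi.fun:.2f}]")
    ```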

  12. Behavior coordination of mobile robotics using supervisory control of fuzzy discrete event systems.

    PubMed

    Jayasiri, Awantha; Mann, George K I; Gosine, Raymond G

    2011-10-01

    In order to incorporate the uncertainty and imprecision present in real-world event-driven asynchronous systems, fuzzy discrete event systems (FDESs) have been proposed as an extension to crisp discrete event systems (DESs). In this paper, first, we propose an extension to the supervisory control theory of FDES by redefining fuzzy controllable and uncontrollable events. The proposed supervisor is capable of enabling feasible uncontrollable and controllable events with different possibilities. Then, the extended supervisory control framework of FDES is employed to model and control several navigational tasks of a mobile robot using the behavior-based approach. The robot has limited sensory capabilities, and the navigations have been performed in several unmodeled environments. The reactive and deliberative behaviors of the mobile robotic system are weighted through fuzzy uncontrollable and controllable events, respectively. By employing the proposed supervisory controller, a command-fusion-type behavior coordination is achieved. The observability of fuzzy events is incorporated to represent sensory imprecision. As a systematic analysis of the system, a fuzzy-state-based controllability measure is introduced. The approach is implemented in both simulation and real-time experiments. A performance evaluation is performed to quantitatively estimate the validity of the proposed approach over its counterparts.

  13. Uncertainty, imprecision, and the precautionary principle in climate change assessment.

    PubMed

    Borsuk, M E; Tomassini, L

    2005-01-01

    Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
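
    A minimal numerical illustration of the decision rule: with a finite set of candidate probability vectors standing in for a class of measures, the expected cost of each policy becomes an interval, and the precautionary choice minimizes the upper endpoint. The policies, costs, and probabilities below are invented for illustration.

    ```python
    # Sketch: lower/upper expected cost over a finite set of probability measures
    # (a credal set), and the "minimum upper expected cost" choice suggested by
    # the precautionary principle. All numbers are hypothetical.
    import numpy as np

    # Costs (arbitrary units) of low/medium/high warming under two emission policies.
    costs = {"aggressive cuts": np.array([3.0, 3.5, 4.0]),
             "business as usual": np.array([1.0, 4.0, 9.0])}
    # A set of plausible probability vectors rather than a single precise one.
    credal_set = [np.array([0.5, 0.3, 0.2]),
                  np.array([0.3, 0.4, 0.3]),
                  np.array([0.2, 0.3, 0.5])]

    for policy, c in costs.items():
        expectations = [p @ c for p in credal_set]
        print(f"{policy:18s} expected cost in [{min(expectations):.2f}, {max(expectations):.2f}]")

    # Decision rule: pick the policy whose *upper* expected cost is smallest.
    best = min(costs, key=lambda k: max(p @ costs[k] for p in credal_set))
    print("minimum upper expected cost policy:", best)
    ```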

  14. The impact of the rate prior on Bayesian estimation of divergence times with multiple Loci.

    PubMed

    Dos Reis, Mario; Zhu, Tianqi; Yang, Ziheng

    2014-07-01

    Bayesian methods provide a powerful way to estimate species divergence times by combining information from molecular sequences with information from the fossil record. With the explosive increase of genomic data, divergence time estimation increasingly uses data of multiple loci (genes or site partitions). Widely used computer programs to estimate divergence times use independent and identically distributed (i.i.d.) priors on the substitution rates for different loci. The i.i.d. prior is problematic. As the number of loci (L) increases, the prior variance of the average rate across all loci goes to zero at the rate 1/L. As a consequence, the rate prior dominates posterior time estimates when many loci are analyzed, and if the rate prior is misspecified, the estimated divergence times will converge to wrong values with very narrow credibility intervals. Here we develop a new prior on the locus rates based on the Dirichlet distribution that corrects the problematic behavior of the i.i.d. prior. We use computer simulation and real data analysis to highlight the differences between the old and new priors. For a dataset of six primate species, we show that with the old i.i.d. prior, if the prior rate is too high (or too low), the estimated divergence times are too young (or too old), outside the bounds imposed by the fossil calibrations. In contrast, with the new Dirichlet prior, posterior time estimates are insensitive to the rate prior and are compatible with the fossil calibrations. We re-analyze a phylogenomic data set of 36 mammal species and show that using many fossil calibrations can alleviate the adverse impact of a misspecified rate prior to some extent. We recommend the use of the new Dirichlet prior in Bayesian divergence time estimation. [Bayesian inference, divergence time, relaxed clock, rate prior, partition analysis.]. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
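
    The problematic 1/L behaviour is easy to reproduce by simulation. The sketch below compares the prior spread of the average rate across loci under an i.i.d. gamma prior with a Dirichlet-style construction in which a single overall rate is partitioned among loci; the parameterization is schematic and not necessarily the one used in any particular dating program.

    ```python
    # Sketch: under the i.i.d. gamma prior the standard deviation of the mean rate
    # across L loci shrinks like 1/sqrt(L), so the prior becomes overly informative
    # for large L; a Dirichlet construction keeps the prior on the average rate fixed.
    import numpy as np

    rng = np.random.default_rng(2)
    shape, scale = 2.0, 0.5            # gamma prior with mean 1.0 (rate units arbitrary)
    alpha = 2.0                        # Dirichlet concentration per locus

    for L in (2, 20, 200):
        # i.i.d. prior: each locus rate drawn independently.
        iid_mean = rng.gamma(shape, scale, size=(20000, L)).mean(axis=1)
        # Dirichlet prior: one overall mean rate, partitioned among the L loci.
        mu = rng.gamma(shape, scale, size=20000)
        dir_rates = rng.dirichlet([alpha] * L, size=20000) * L * mu[:, None]
        print(f"L={L:4d}  sd of mean rate: iid={iid_mean.std():.3f}  "
              f"dirichlet={dir_rates.mean(axis=1).std():.3f}")
    ```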

  15. Atmospheric effect in three-space scenario for the Stokes-Helmert method of geoid determination

    NASA Astrophysics Data System (ADS)

    Yang, H.; Tenzer, R.; Vanicek, P.; Santos, M.

    2004-05-01

    According to the Stokes-Helmert method for geoid determination by Vanicek and Martinec (1994) and Vanicek et al. (1999), the Helmert gravity anomalies are computed at the earth surface. To formulate the fundamental formula of physical geodesy, Helmert's gravity anomalies are then downward continued from the earth surface onto the geoid. This procedure, i.e., the inverse Dirichlet boundary value problem, is realized by solving the Poisson integral equation. The above-mentioned "classical" approach can be modified so that the inverse Dirichlet boundary value problem is solved in the No Topography (NT) space (Vanicek et al., 2004) instead of in the Helmert (H) space. This technique has been introduced by Vanicek et al. (2003) and was used by Tenzer and Vanicek (2003) for the determination of the geoid in the region of the Canadian Rocky Mountains. According to this new approach, the gravity anomalies referred to the earth surface are first transformed into the NT-space. This transformation is realized by subtracting the gravitational attraction of topographical and atmospheric masses from the gravity anomalies at the earth surface. Since the NT-anomalies are harmonic above the geoid, the Dirichlet boundary value problem is solved in the NT-space instead of the Helmert space according to the standard formulation. After being obtained on the geoid, the NT-anomalies are transformed into the H-space to minimize the indirect effect on the geoidal heights. This step, i.e., the transformation from NT-space to H-space, is realized by adding the gravitational attraction of condensed topographical and condensed atmospheric masses to the NT-anomalies at the geoid. The effects of the atmosphere in the standard Stokes-Helmert method were intensively investigated by Sjöberg (1998 and 1999) and Novák (2000). In this presentation, the effect of the atmosphere in the three-space scenario for the Stokes-Helmert method is discussed and numerical results over Canada are shown. Key words: Atmosphere - Geoid - Gravity

  16. Exploiting Language Models to Classify Events from Twitter

    PubMed Central

    Vo, Duc-Thuan; Hai, Vo Thuan; Ock, Cheol-Young

    2015-01-01

    Classifying events is challenging in Twitter because tweet texts contain a large amount of temporal data with a lot of noise and various kinds of topics. In this paper, we propose a method to classify events from Twitter. We first find the distinguishing terms between tweets in events and measure their similarities with language models such as ConceptNet and a latent Dirichlet allocation method for selectional preferences (LDA-SP), which have been widely studied on large text corpora in computational linguistics. The relationships among terms in tweets are discovered by checking them under each model. We then propose a method to compute the similarity between tweets based on tweet features, including common terms and the relationships among their distinguishing terms. This makes it explicit and convenient to apply k-nearest-neighbor techniques for classification. Experiments on the Edinburgh Twitter Corpus show that our method achieves competitive results for classifying events. PMID:26451139

  17. Mapping annotations with textual evidence using an scLDA model.

    PubMed

    Jin, Bo; Chen, Vicky; Chen, Lujia; Lu, Xinghua

    2011-01-01

    Most of the knowledge regarding genes and proteins is stored in biomedical literature as free text. Extracting information from complex biomedical texts demands techniques capable of inferring biological concepts from local text regions and mapping them to controlled vocabularies. To this end, we present a sentence-based correspondence latent Dirichlet allocation (scLDA) model which, when trained with a corpus of PubMed documents with known GO annotations, performs the following tasks: 1) learning major biological concepts from the corpus, 2) inferring the biological concepts existing within text regions (sentences), and 3) identifying the text regions in a document that provide evidence for the observed annotations. When applied to new gene-related documents, a trained scLDA model is capable of predicting GO annotations and identifying text regions as textual evidence supporting the predicted annotations. This study uses GO annotation data as a testbed; the approach can be generalized to other annotated data, such as MeSH and MEDLINE documents.

  18. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.

  19. Complex temporal topic evolution modelling using the Kullback-Leibler divergence and the Bhattacharyya distance.

    PubMed

    Andrei, Victor; Arandjelović, Ognjen

    2016-12-01

    The rapidly expanding corpus of medical research literature presents major challenges in the understanding of previous work, the extraction of maximum information from collected data, and the identification of promising research directions. We present a case for the use of advanced machine learning techniques as an aid in this task and introduce a novel methodology that is shown to be capable of extracting meaningful information from large longitudinal corpora and of tracking complex temporal changes within them. Our framework is based on (i) the discretization of time into epochs, (ii) epoch-wise topic discovery using a hierarchical Dirichlet process-based model, and (iii) a temporal similarity graph which allows for the modelling of complex topic changes. More specifically, this is the first work that discusses and distinguishes between two groups of particularly challenging topic evolution phenomena: topic splitting and speciation, and topic convergence and merging, in addition to the more widely recognized emergence, disappearance, and gradual evolution. The proposed framework is evaluated on a public medical literature corpus.
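
    The two dissimilarity measures named in the title are straightforward to compute for discrete topic-word distributions, as in the sketch below; the example distributions are made up and the epsilon smoothing is one common way to handle zero probabilities.

    ```python
    # Sketch: KL divergence (asymmetric) and Bhattacharyya distance (symmetric)
    # between two discrete topic-word distributions.
    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        p = np.asarray(p, float) + eps
        q = np.asarray(q, float) + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))

    def bhattacharyya_distance(p, q):
        p = np.asarray(p, float) / np.sum(p)
        q = np.asarray(q, float) / np.sum(q)
        return float(-np.log(np.sum(np.sqrt(p * q))))

    topic_a = [0.50, 0.30, 0.15, 0.05]   # word probabilities for topic A (hypothetical)
    topic_b = [0.45, 0.35, 0.10, 0.10]
    print("KL(A||B) =", round(kl_divergence(topic_a, topic_b), 4))
    print("Bhattacharyya(A, B) =", round(bhattacharyya_distance(topic_a, topic_b), 4))
    ```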

  20. Spectral decompositions of multiple time series: a Bayesian non-parametric approach.

    PubMed

    Macaro, Christian; Prado, Raquel

    2014-01-01

    We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated to the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.

  1. Skin Effect Modeling in Conductors of Arbitrary Shape Through a Surface Admittance Operator and the Contour Integral Method

    NASA Astrophysics Data System (ADS)

    Patel, Utkarsh R.; Triverio, Piero

    2016-09-01

    An accurate modeling of skin effect inside conductors is of central importance for solving transmission line and scattering problems. This paper presents a surface-based formulation to model skin effect in conductors of arbitrary cross section and compute the per-unit-length impedance of a multiconductor transmission line. The proposed formulation is based on the Dirichlet-Neumann operator that relates the longitudinal electric field to the tangential magnetic field on the boundary of a conductor. We demonstrate how the surface operator can be obtained through the contour integral method for conductors of arbitrary shape. The proposed algorithm is simple to implement, efficient, and can handle arbitrary cross sections, which is a main advantage over the existing approach based on eigenfunctions, which is available only for canonical conductor shapes. The versatility of the method is illustrated through a diverse set of examples, which includes transmission lines with trapezoidal, curved, and V-shaped conductors. Numerical results demonstrate the accuracy, versatility, and efficiency of the proposed technique.

  2. The MUSIC algorithm for impedance tomography of small inclusions from discrete data

    NASA Astrophysics Data System (ADS)

    Lechleiter, A.

    2015-09-01

    We consider a point-electrode model for electrical impedance tomography and show that current-to-voltage measurements from finitely many electrodes are sufficient to characterize the positions of a finite number of point-like inclusions. More precisely, we consider an asymptotic expansion, with respect to the size of the small inclusions, of the relative Neumann-to-Dirichlet operator in the framework of the point electrode model. This operator is naturally finite-dimensional and models difference measurements, by finitely many small electrodes, of the electric potential with and without the small inclusions. Moreover, its leading-order term explicitly characterizes the centers of the small inclusions if the (finite) number of point electrodes is large enough. This characterization is based on finite-dimensional test vectors and leads naturally to a MUSIC algorithm for imaging the inclusion centers. We show both the feasibility and limitations of this imaging technique via two-dimensional numerical experiments, considering in particular the influence of the number of point electrodes on the algorithm's images.
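
    The finite-dimensional MUSIC idea can be sketched with synthetic data: the measurement matrix is (nearly) low rank, its range is spanned by the signatures of the true inclusions, and the indicator 1/||P_noise g_z|| peaks where a candidate's test vector lies in that signal subspace. The toy below uses random test vectors rather than the paper's point-electrode model.

    ```python
    # Sketch: the finite-dimensional MUSIC indicator. Build a (nearly) rank-2
    # "measurement" matrix from the signatures of two true inclusions, estimate the
    # signal subspace from its SVD, and flag candidates whose test vectors are
    # almost orthogonal to the noise subspace. Synthetic toy data only.
    import numpy as np

    rng = np.random.default_rng(3)
    n_electrodes, n_candidates = 16, 200
    # Hypothetical test vectors g_z for a grid of candidate inclusion centers z.
    G = rng.standard_normal((n_electrodes, n_candidates))
    true_idx = [40, 150]                          # indices of the true inclusion centers
    M = sum(np.outer(G[:, j], G[:, j]) for j in true_idx)        # rank-2 "data"
    M += 1e-6 * rng.standard_normal(M.shape)                      # measurement noise

    U, s, _ = np.linalg.svd(M)
    k = int(np.sum(s > 1e-3 * s[0]))              # estimated signal-subspace dimension
    P_noise = np.eye(n_electrodes) - U[:, :k] @ U[:, :k].T

    indicator = 1.0 / np.linalg.norm(P_noise @ G, axis=0)
    print("estimated rank:", k)
    print("top indicator peaks at candidates:", np.argsort(indicator)[-2:])
    ```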

  3. An Empirical Study of Re-sampling Techniques as a Method for Improving Error Estimates in Split-plot Designs

    DTIC Science & Technology

    2010-03-01

    Designs without sufficient replications often lead to models that lack precision in error estimation and thus to imprecision in corresponding conclusions. This work develops, examines, and tests methodologies for analyzing test results from split-plot designs. In particular, it determines the applicability of re-sampling techniques as a method for improving error estimates in such designs.

  4. A neurite quality index and machine vision software for improved quantification of neurodegeneration.

    PubMed

    Romero, Peggy; Miller, Ted; Garakani, Arman

    2009-12-01

    Current methods to assess neurodegeneration in dorsal root ganglion cultures as a model for neurodegenerative diseases are imprecise and time-consuming. Here we describe two new methods to quantify neuroprotection in these cultures. The neurite quality index (NQI) builds upon earlier manual methods, incorporating additional morphological events to increase sensitivity for the detection of early degeneration events. Neurosight is a machine vision-based method that recapitulates many of the strengths of NQI while enabling high-throughput screening applications with decreased costs.

  5. Identifying biological concepts from a protein-related corpus with a probabilistic topic model

    PubMed Central

    Zheng, Bin; McLean, David C; Lu, Xinghua

    2006-01-01

    Background Biomedical literature, e.g., MEDLINE, contains a wealth of knowledge regarding functions of proteins. Major recurring biological concepts within such text corpora represent the domains of this body of knowledge. The goal of this research is to identify the major biological topics/concepts from a corpus of protein-related MEDLINE titles and abstracts by applying a probabilistic topic model. Results The latent Dirichlet allocation (LDA) model was applied to the corpus. Based on Bayesian model selection, 300 major topics were extracted from the corpus. The majority of identified topics/concepts were found to be semantically coherent, and most represented biological objects or concepts. The identified topics/concepts were further mapped to the controlled vocabulary of Gene Ontology (GO) terms based on mutual information. Conclusion The major and recurring biological concepts within a collection of MEDLINE documents can be extracted by the LDA model. The identified topics/concepts provide a parsimonious and semantically enriched representation of the texts in a semantic space with reduced dimensionality and can be used to index text. PMID:16466569
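
    For readers unfamiliar with LDA, the sketch below fits a plain LDA model to a tiny invented corpus with scikit-learn and lists the top words per topic; the corpus and the number of topics are illustrative only (the study selected 300 topics on MEDLINE abstracts via Bayesian model selection).

    ```python
    # Sketch: LDA topic extraction on a toy corpus with scikit-learn.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "protein kinase phosphorylation signal transduction",
        "kinase inhibitor binds protein domain",
        "gene transcription factor promoter binding",
        "promoter regulation gene expression transcription",
        "membrane transport channel ion flux",
        "ion channel gating membrane potential",
    ]
    vectorizer = CountVectorizer().fit(docs)
    counts = vectorizer.transform(docs)
    lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

    terms = vectorizer.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = topic.argsort()[::-1][:4]           # four highest-weight words per topic
        print(f"topic {k}: " + ", ".join(terms[i] for i in top))
    ```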

  6. Multiscale modeling of electroosmotic flow: Effects of discrete ion, enhanced viscosity, and surface friction

    NASA Astrophysics Data System (ADS)

    Bhadauria, Ravi; Aluru, N. R.

    2017-05-01

    We propose an isothermal, one-dimensional, electroosmotic flow model for slit-shaped nanochannels. Nanoscale confinement effects are embedded into the transport model by incorporating the spatially varying solvent and ion concentration profiles that correspond to the electrochemical potential of mean force. The local viscosity is dependent on the solvent local density and is modeled using the local average density method. Excess contributions to the local viscosity are included using the Onsager-Fuoss expression that is dependent on the local ionic strength. A Dirichlet-type boundary condition is provided in the form of the slip velocity that is dependent on the macroscopic interfacial friction. This solvent-surface specific interfacial friction is estimated using a dynamical generalized Langevin equation based framework. The electroosmotic flow of Na+ and Cl- as single counterions and NaCl salt solvated in Extended Simple Point Charge (SPC/E) water confined between graphene and silicon slit-shaped nanochannels are considered as examples. The proposed model yields a good quantitative agreement with the solvent velocity profiles obtained from the non-equilibrium molecular dynamics simulations.

  7. Cardiovascular disease testing on the Dimension Vista system: biomarkers of acute coronary syndromes.

    PubMed

    Kelley, Walter E; Lockwood, Christina M; Cervelli, Denise R; Sterner, Jamie; Scott, Mitchell G; Duh, Show-Hong; Christenson, Robert H

    2009-09-01

    Performance characteristics of the LOCI cTnI, CK-MB, MYO, NTproBNP and hsCRP methods on the Dimension Vista System were evaluated. Imprecision (following CLSI EP05-A2 guidelines), limit of quantitation (cTnI), limit of blank, linearity on dilution, serum versus plasma matrix studies (cTnI), and method comparison studies were conducted. Method imprecision values of 1.8 to 9.7% (cTnI), 1.8 to 5.7% (CK-MB), 2.1 to 2.2% (MYO), 1.6 to 3.3% (NTproBNP), and 3.5 to 4.2% (hsCRP) were demonstrated. The manufacturer's claimed imprecision, detection limits and upper measurement limits were met. The limit of quantitation was 0.040 ng/mL for the cTnI assay. Agreement of serum and plasma values for cTnI (r=0.99) was shown. Method comparison study results were acceptable. The Dimension Vista cTnI, CK-MB, MYO, NTproBNP, and hsCRP methods demonstrate acceptable performance characteristics for use as an aid in the diagnosis and risk assessment of patients presenting with suspected acute coronary syndromes.

  8. Characterizing Twitter Discussions About HPV Vaccines Using Topic Modeling and Community Detection.

    PubMed

    Surian, Didi; Nguyen, Dat Quoc; Kennedy, Georgina; Johnson, Mark; Coiera, Enrico; Dunn, Adam G

    2016-08-29

    In public health surveillance, measuring how information enters and spreads through online communities may help us understand geographical variation in decision making associated with poor health outcomes. Our aim was to evaluate the use of community structure and topic modeling methods as a process for characterizing the clustering of opinions about human papillomavirus (HPV) vaccines on Twitter. The study examined Twitter posts (tweets) collected between October 2013 and October 2015 about HPV vaccines. We tested Latent Dirichlet Allocation and Dirichlet Multinomial Mixture (DMM) models for inferring topics associated with tweets, and community agglomeration (Louvain) and the encoding of random walks (Infomap) methods to detect community structure of the users from their social connections. We examined the alignment between community structure and topics using several common clustering alignment measures and introduced a statistical measure of alignment based on the concentration of specific topics within a small number of communities. Visualizations of the topics and the alignment between topics and communities are presented to support the interpretation of the results in context of public health communication and identification of communities at risk of rejecting the safety and efficacy of HPV vaccines. We analyzed 285,417 Twitter posts (tweets) about HPV vaccines from 101,519 users connected by 4,387,524 social connections. Examining the alignment between the community structure and the topics of tweets, the results indicated that the Louvain community detection algorithm together with DMM produced consistently higher alignment values and that alignments were generally higher when the number of topics was lower. After applying the Louvain method and DMM with 30 topics and grouping semantically similar topics in a hierarchy, we characterized 163,148 (57.16%) tweets as evidence and advocacy, and 6244 (2.19%) tweets describing personal experiences. Among the 4548 users who posted experiential tweets, 3449 users (75.84%) were found in communities where the majority of tweets were about evidence and advocacy. The use of community detection in concert with topic modeling appears to be a useful way to characterize Twitter communities for the purpose of opinion surveillance in public health applications. Our approach may help identify online communities at risk of being influenced by negative opinions about public health interventions such as HPV vaccines.
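
    One common way to quantify how well topic assignments align with network communities is a clustering-agreement score over the same set of users, for example normalized mutual information as sketched below; the paper additionally introduces its own concentration-based alignment statistic, and the labels here are made up.

    ```python
    # Sketch: alignment between community labels and dominant-topic labels for the
    # same users, measured with normalized mutual information.
    from sklearn.metrics import normalized_mutual_info_score

    # One label per user: the community it belongs to, and its dominant topic.
    community_labels = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
    dominant_topics  = [5, 5, 3, 3, 3, 3, 7, 7, 7, 5]
    print("alignment (NMI):",
          round(normalized_mutual_info_score(community_labels, dominant_topics), 3))
    ```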

  9. Characterizing Twitter Discussions About HPV Vaccines Using Topic Modeling and Community Detection

    PubMed Central

    Nguyen, Dat Quoc; Kennedy, Georgina; Johnson, Mark; Coiera, Enrico; Dunn, Adam G

    2016-01-01

    Background In public health surveillance, measuring how information enters and spreads through online communities may help us understand geographical variation in decision making associated with poor health outcomes. Objective Our aim was to evaluate the use of community structure and topic modeling methods as a process for characterizing the clustering of opinions about human papillomavirus (HPV) vaccines on Twitter. Methods The study examined Twitter posts (tweets) collected between October 2013 and October 2015 about HPV vaccines. We tested Latent Dirichlet Allocation and Dirichlet Multinomial Mixture (DMM) models for inferring topics associated with tweets, and community agglomeration (Louvain) and the encoding of random walks (Infomap) methods to detect community structure of the users from their social connections. We examined the alignment between community structure and topics using several common clustering alignment measures and introduced a statistical measure of alignment based on the concentration of specific topics within a small number of communities. Visualizations of the topics and the alignment between topics and communities are presented to support the interpretation of the results in context of public health communication and identification of communities at risk of rejecting the safety and efficacy of HPV vaccines. Results We analyzed 285,417 Twitter posts (tweets) about HPV vaccines from 101,519 users connected by 4,387,524 social connections. Examining the alignment between the community structure and the topics of tweets, the results indicated that the Louvain community detection algorithm together with DMM produced consistently higher alignment values and that alignments were generally higher when the number of topics was lower. After applying the Louvain method and DMM with 30 topics and grouping semantically similar topics in a hierarchy, we characterized 163,148 (57.16%) tweets as evidence and advocacy, and 6244 (2.19%) tweets describing personal experiences. Among the 4548 users who posted experiential tweets, 3449 users (75.84%) were found in communities where the majority of tweets were about evidence and advocacy. Conclusions The use of community detection in concert with topic modeling appears to be a useful way to characterize Twitter communities for the purpose of opinion surveillance in public health applications. Our approach may help identify online communities at risk of being influenced by negative opinions about public health interventions such as HPV vaccines. PMID:27573910

  10. Parallel computers - Estimate errors caused by imprecise data

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne

    1991-01-01

    A new approach to the problem of estimating errors caused by imprecise data is proposed in the context of software engineering. A software device is used to produce an ideal solution to the problem, in which the computer is capable of computing the errors of arbitrary programs. The software engineering aspect of this problem is to describe a device for computing the error estimates in software terms and then to provide precise numbers with error estimates to the user. The feasibility of a program capable of computing both some quantity and its error estimate over the range of possible measurement errors is demonstrated.
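
    The simplest concrete version of propagating bounded measurement errors through a computation is interval arithmetic, where each operation is evaluated on lower and upper bounds so the result encloses every value reachable from the imprecise inputs. The sketch below is a generic illustration, not the software device described in the abstract.

    ```python
    # Sketch: interval arithmetic for propagating bounded measurement errors.
    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)
        def __mul__(self, other):
            products = [self.lo * other.lo, self.lo * other.hi,
                        self.hi * other.lo, self.hi * other.hi]
            return Interval(min(products), max(products))
        def __repr__(self):
            return f"[{self.lo:.4g}, {self.hi:.4g}]"

    length = Interval(2.0 - 0.01, 2.0 + 0.01)    # measured value with +/-0.01 error
    width = Interval(3.0 - 0.02, 3.0 + 0.02)
    print("area with guaranteed error bounds:", length * width)
    ```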

  11. Local recovery of the compressional and shear speeds from the hyperbolic DN map

    NASA Astrophysics Data System (ADS)

    Stefanov, Plamen; Uhlmann, Gunther; Vasy, Andras

    2018-01-01

    We study the isotropic elastic wave equation in a bounded domain with boundary. We show that local knowledge of the Dirichlet-to-Neumann map determines uniquely the speed of the p-wave locally if there is a strictly convex foliation with respect to it, and similarly for the s-wave speed.

  12. Quantum field between moving mirrors: A three dimensional example

    NASA Technical Reports Server (NTRS)

    Hacyan, S.; Jauregui, Roco; Villarreal, Carlos

    1995-01-01

    The scalar quantum field between uniformly moving plates in three-dimensional space is studied. Field equations for Dirichlet boundary conditions are solved exactly. Comparison of the resulting wavefunctions with their instantaneous static counterparts is performed via Bogolubov coefficients. Unlike the one-dimensional problem, 'particle' creation as well as squeezing may occur. The time-dependent Casimir energy is also evaluated.

  13. Fuzzy logic, neural networks, and soft computing

    NASA Technical Reports Server (NTRS)

    Zadeh, Lotfi A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial intelligence. In the years ahead, this may well become a widely held position.

  14. Identifying synonymy between relational phrases using word embeddings.

    PubMed

    Nguyen, Nhung T H; Miwa, Makoto; Tsuruoka, Yoshimasa; Tojo, Satoshi

    2015-08-01

    Many text mining applications in the biomedical domain benefit from automatic clustering of relational phrases into synonymous groups, since it alleviates the problem of spurious mismatches caused by the diversity of natural language expressions. Most of the previous work that has addressed this task of synonymy resolution uses similarity metrics between relational phrases based on textual strings or dependency paths, which, for the most part, ignore the context around the relations. To overcome this shortcoming, we employ a word embedding technique to encode relational phrases. We then apply the k-means algorithm on top of the distributional representations to cluster the phrases. Our experimental results show that this approach outperforms state-of-the-art statistical models including latent Dirichlet allocation and Markov logic networks. Copyright © 2015 Elsevier Inc. All rights reserved.
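
    The clustering step itself is simple once phrase vectors are available: k-means groups nearby vectors, and synonymous relational phrases ideally land in the same cluster. The sketch below uses invented two-dimensional vectors in place of corpus-trained embeddings.

    ```python
    # Sketch: k-means clustering of relational phrases via their vector
    # representations. The 2-D "embeddings" here are invented for illustration;
    # a real system would use vectors learned from a large corpus.
    import numpy as np
    from sklearn.cluster import KMeans

    phrases = ["is activated by", "is induced by", "is stimulated by",
               "inhibits", "suppresses", "blocks"]
    vectors = np.array([[0.90, 0.10], [0.85, 0.15], [0.88, 0.05],
                        [0.10, 0.90], [0.12, 0.95], [0.05, 0.88]])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
    for cluster in range(2):
        members = [p for p, l in zip(phrases, labels) if l == cluster]
        print(f"cluster {cluster}: {members}")
    ```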

  15. An Eigenvalue Analysis of finite-difference approximations for hyperbolic IBVPs

    NASA Technical Reports Server (NTRS)

    Warming, Robert F.; Beam, Richard M.

    1989-01-01

    The eigenvalue spectrum associated with a linear finite-difference approximation plays a crucial role in the stability analysis and in the actual computational performance of the discrete approximation. The eigenvalue spectrum associated with the Lax-Wendroff scheme applied to a model hyperbolic equation was investigated. For an initial-boundary-value problem (IBVP) on a finite domain, the eigenvalue or normal mode analysis is analytically intractable. A study of auxiliary problems (Dirichlet and quarter-plane) leads to asymptotic estimates of the eigenvalue spectrum and to an identification of individual modes as either benign or unstable. The asymptotic analysis establishes an intuitive as well as quantitative connection between the algebraic tests in the theory of Gustafsson, Kreiss, and Sundstrom and Lax-Richtmyer L2 stability on a finite domain.

  16. Robust boundary treatment for open-channel flows in divergence-free incompressible SPH

    NASA Astrophysics Data System (ADS)

    Pahar, Gourabananda; Dhar, Anirban

    2017-03-01

    A robust Incompressible Smoothed Particle Hydrodynamics (ISPH) framework is developed to simulate specified inflow and outflow boundary conditions for open-channel flow. Being purely divergence-free, the framework offers a smooth and structured pressure distribution. An implicit treatment of the pressure Poisson equation and the Dirichlet boundary condition is applied on the free surface to minimize error in velocity divergence. Beyond the inflow and outflow thresholds, multiple layers of dummy particles are created according to the specified boundary condition. The inflow boundary acts as a soluble wave-maker. Fluid particles beyond the outflow threshold are removed and replaced with dummy particles with the specified boundary velocity. The framework is validated against different cases of open-channel flow with different boundary conditions. The model can efficiently capture flow evolution and vortex generation for random geometry and variable boundary conditions.

  17. Discrete cosine and sine transforms generalized to honeycomb lattice

    NASA Astrophysics Data System (ADS)

    Hrivnák, Jiří; Motlochová, Lenka

    2018-06-01

    The discrete cosine and sine transforms are generalized to a triangular fragment of the honeycomb lattice. The honeycomb point sets are constructed by subtracting the root lattice from the weight lattice points of the crystallographic root system A2. The two-variable orbit functions of the Weyl group of A2, discretized simultaneously on the weight and root lattices, induce a novel parametric family of extended Weyl orbit functions. The periodicity and von Neumann and Dirichlet boundary properties of the extended Weyl orbit functions are detailed. Three types of discrete complex Fourier-Weyl transforms and real-valued Hartley-Weyl transforms are described. Unitary transform matrices and interpolating behavior of the discrete transforms are exemplified. Consequences of the developed discrete transforms for transversal eigenvibrations of the mechanical graphene model are discussed.

  18. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.

  19. On two mathematical problems of canonical quantization. IV

    NASA Astrophysics Data System (ADS)

    Kirillov, A. I.

    1992-11-01

    A method for solving the problem of reconstructing a measure beginning with its logarithmic derivative is presented. The method completes that of solving the stochastic differential equation via Dirichlet forms proposed by S. Albeverio and M. Rockner. As a result, one obtains the mathematical apparatus for stochastic quantization. The apparatus is applied to prove the existence of the Feynman-Kac measure of the sine-Gordon and λφ^{2n}/(1 + K²φ^{2n}) models. A synthesis of both mathematical problems of canonical quantization is obtained in the form of a second-order martingale problem for vacuum noise. It is shown that in stochastic mechanics the martingale problem is an analog of Newton's second law and enables us to find Nelson's stochastic trajectories without determining the wave functions.

  20. Determination of serum calcium levels by 42Ca isotope dilution inductively coupled plasma mass spectrometry.

    PubMed

    Han, Bingqing; Ge, Menglei; Zhao, Haijian; Yan, Ying; Zeng, Jie; Zhang, Tianjiao; Zhou, Weiyan; Zhang, Jiangtao; Wang, Jing; Zhang, Chuanbao

    2017-11-27

    Serum calcium level is an important clinical index that reflects pathophysiological states. However, detection accuracy in laboratory tests is not ideal; as such, a high accuracy method is needed. We developed a reference method for measuring serum calcium levels by isotope dilution inductively coupled plasma mass spectrometry (ID ICP-MS), using 42Ca as the enriched isotope. Serum was digested with 69% ultrapure nitric acid and diluted to a suitable concentration. The 44Ca/42Ca ratio was detected in H2 mode; spike concentration was calibrated by reverse IDMS using standard reference material (SRM) 3109a, and sample concentration was measured by a bracketing procedure. We compared the performance of ID ICP-MS with those of three other reference methods in China using the same serum and aqueous samples. The relative expanded uncertainty of the sample concentration was 0.414% (k=2). The range of repeatability (within-run imprecision), intermediate imprecision (between-run imprecision), and intra-laboratory imprecision were 0.12%-0.19%, 0.07%-0.09%, and 0.16%-0.17%, respectively, for two of the serum samples. SRM909bI, SRM909bII, SRM909c, and GBW09152 were found to be within the certified value interval, with mean relative bias values of 0.29%, -0.02%, 0.10%, and -0.19%, respectively. The range of recovery was 99.87%-100.37%. Results obtained by ID ICP-MS showed a better accuracy than and were highly correlated with those of other reference methods. ID ICP-MS is a simple and accurate candidate reference method for serum calcium measurement and can be used to establish and improve serum calcium reference system in China.

  1. Evaluation of a next generation direct whole blood enzymatic assay for hemoglobin A1c on the ARCHITECT c8000 chemistry system.

    PubMed

    Teodoro-Morrison, Tracy; Janssen, Marcel J W; Mols, Jasper; Hendrickx, Ben H E; Velmans, Mathieu H; Lotz, Johannes; Lackner, Karl; Lennartz, Lieselotte; Armbruster, David; Maine, Gregory; Yip, Paul M

    2015-01-01

    The utility of HbA1c for the diagnosis of type 2 diabetes requires an accurate, precise and robust test measurement system. Currently, immunoassay and HPLC are the most popular methods for HbA1c quantification, noting however the limitations associated with some platforms, such as imprecision or interference from common hemoglobin variants. Abbott Diagnostics has introduced a fully automated direct enzymatic method for the quantification of HbA1c from whole blood on the ARCHITECT chemistry system. Here we completed a method evaluation of the ARCHITECT HbA1c enzymatic assay for imprecision, accuracy, method comparison, interference from hemoglobin variants and specimen stability. This was completed at three independent clinical laboratories in North America and Europe. The total imprecision ranged from 0.5% to 2.2% CV with low and high level control materials. Around the diagnostic cut-off of 48 mmol/mol, the total imprecision was 0.6% CV. Mean bias using reference samples from IFCC and CAP ranged from -1.1 to 1.0 mmol/mol. The enzymatic assay also showed excellent agreement with HPLC methods, with slopes of 1.01 and correlation coefficients ranging from 0.984 to 0.996 compared to Menarini Adams HA-8160, Bio-Rad Variant II and Variant II Turbo instruments. Finally, no significant effect was observed for erythrocyte sedimentation or interference from common hemoglobin variants in patient samples containing heterozygous HbS, HbC, HbD, HbE, and up to 10% HbF. The ARCHITECT enzymatic assay for HbA1c is a robust and fully automated method that meets the performance requirements to support the diagnosis of type 2 diabetes.

  2. Importance of the efficiency of double-stranded DNA formation in cDNA synthesis for the imprecision of microarray expression analysis.

    PubMed

    Thormar, Hans G; Gudmundsson, Bjarki; Eiriksdottir, Freyja; Kil, Siyoen; Gunnarsson, Gudmundur H; Magnusson, Magnus Karl; Hsu, Jason C; Jonsson, Jon J

    2013-04-01

    The causes of imprecision in microarray expression analysis are poorly understood, limiting the use of this technology in molecular diagnostics. Two-dimensional strandness-dependent electrophoresis (2D-SDE) separates nucleic acid molecules on the basis of length and strandness, i.e., double-stranded DNA (dsDNA), single-stranded DNA (ssDNA), and RNA·DNA hybrids. We used 2D-SDE to measure the efficiency of cDNA synthesis and its importance for the imprecision of an in vitro transcription-based microarray expression analysis. The relative amount of double-stranded cDNA formed in replicate experiments that used the same RNA sample template was highly variable, ranging between 0% and 72% of the total DNA. Microarray experiments showed an inverse relationship between the difference between sample pairs in probe variance and the relative amount of dsDNA. Approximately 15% of probes showed between-sample variation (P < 0.05) when the dsDNA percentage was between 12% and 35%. In contrast, only 3% of probes showed between-sample variation when the dsDNA percentage was 69% and 72%. Replication experiments of the 35% dsDNA and 72% dsDNA samples were used to separate sample variation from probe replication variation. The estimated SD of the sample-to-sample variation and of the probe replicates was lower in 72% dsDNA samples than in 35% dsDNA samples. Variation in the relative amount of double-stranded cDNA synthesized can be an important component of the imprecision in T7 RNA polymerase-based microarray expression analysis. © 2013 American Association for Clinical Chemistry

  3. Hybrid generative-discriminative human action recognition by combining spatiotemporal words with supervised topic models

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Wang, Cheng; Wang, Boliang

    2011-02-01

    We present a hybrid generative-discriminative learning method for human action recognition from video sequences. Our model combines a bag-of-words component with supervised latent topic models. A video sequence is represented as a collection of spatiotemporal words by extracting space-time interest points and describing these points using both shape and motion cues. The supervised latent Dirichlet allocation (sLDA) topic model, which employs discriminative learning using labeled data under a generative framework, is introduced to discover the latent topic structure that is most relevant to action categorization. The proposed algorithm retains most of the desirable properties of generative learning while increasing the classification performance through a discriminative setting. It has also been extended to exploit both labeled data and unlabeled data to learn human actions under a unified framework. We test our algorithm on three challenging data sets: the KTH human motion data set, the Weizmann human action data set, and a ballet data set. Our results are either comparable to or significantly better than previously published results on these data sets and reflect the promise of hybrid generative-discriminative learning approaches.

  4. Leveraging constraints and biotelemetry data to pinpoint repetitively used spatial features

    USGS Publications Warehouse

    Brost, Brian M.; Hooten, Mevin B.; Small, Robert J.

    2016-01-01

    Satellite telemetry devices collect valuable information concerning the sites visited by animals, including the location of central places like dens, nests, rookeries, or haul‐outs. Existing methods for estimating the location of central places from telemetry data require user‐specified thresholds and ignore common nuances like measurement error. We present a fully model‐based approach for locating central places from telemetry data that accounts for multiple sources of uncertainty and uses all of the available locational data. Our general framework consists of an observation model to account for large telemetry measurement error and animal movement, and a highly flexible mixture model specified using a Dirichlet process to identify the location of central places. We also quantify temporal patterns in central place use by incorporating ancillary behavioral data into the model; however, our framework is also suitable when no such behavioral data exist. We apply the model to a simulated data set as proof of concept. We then illustrate our framework by analyzing an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that exhibits fidelity to terrestrial haul‐out sites.
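
    A truncated Dirichlet process mixture can be fit with off-the-shelf tools as a rough stand-in for the haul-out model, although the paper's framework additionally handles telemetry measurement error and movement. The sketch below uses scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior on simulated location fixes; the central places and noise level are made up.

    ```python
    # Sketch: a truncated Dirichlet process mixture on 2-D location fixes, used to
    # recover a small number of repeatedly used "central places". Simulated data only.
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(4)
    haulouts = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])     # true central places
    fixes = np.vstack([c + 0.3 * rng.standard_normal((150, 2)) for c in haulouts])

    dpm = BayesianGaussianMixture(
        n_components=10,                                  # truncation level
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full", max_iter=500, random_state=0,
    ).fit(fixes)

    active = dpm.weights_ > 0.05                          # components that survive pruning
    print("estimated central places:")
    print(np.round(dpm.means_[active], 2))
    ```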

  5. Concurrent multiscale modeling of microstructural effects on localization behavior in finite deformation solid mechanics

    DOE PAGES

    Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro; ...

    2017-11-06

    The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. In order to resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. Here, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.
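
    The Dirichlet-coupled Schwarz alternating method is easiest to see in one dimension: two overlapping subdomains are solved in turn, each taking a Dirichlet value at its interior end from the other subdomain's latest iterate. The sketch below applies this to -u'' = 1 on (0, 1) with homogeneous boundary conditions; it illustrates only the coupling strategy, not the crystal-plasticity models of the paper.

    ```python
    # Sketch: Schwarz alternating method for -u'' = 1 on (0, 1), u(0) = u(1) = 0,
    # with two overlapping subdomains exchanging Dirichlet values at their ends.
    import numpy as np

    n = 101                                    # global grid points on [0, 1]
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    u = np.zeros(n)                            # current global iterate
    left, right = np.arange(0, 61), np.arange(40, n)   # overlapping index sets

    def solve_poisson(idx, left_bc, right_bc):
        # Second-order finite-difference solve of -u'' = 1 on one subdomain.
        m = idx.size
        A = (np.diag(np.full(m - 2, 2.0)) - np.diag(np.ones(m - 3), 1)
             - np.diag(np.ones(m - 3), -1)) / h**2
        b = np.ones(m - 2)                     # right-hand side f = 1
        b[0] += left_bc / h**2
        b[-1] += right_bc / h**2
        return np.concatenate(([left_bc], np.linalg.solve(A, b), [right_bc]))

    for _ in range(20):                        # Schwarz iterations
        u[left] = solve_poisson(left, 0.0, u[left[-1]])     # BC taken from right subdomain
        u[right] = solve_poisson(right, u[right[0]], 0.0)   # BC taken from left subdomain

    exact = 0.5 * x * (1.0 - x)
    print("max error after 20 Schwarz sweeps:", float(np.abs(u - exact).max()))
    ```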

  6. Estimating the Term Structure With a Semiparametric Bayesian Hierarchical Model: An Application to Corporate Bonds.

    PubMed

    Cruz-Marcelo, Alejandro; Ensor, Katherine B; Rosner, Gary L

    2011-06-01

    The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material.

  7. Estimating the Term Structure With a Semiparametric Bayesian Hierarchical Model: An Application to Corporate Bonds

    PubMed Central

    Cruz-Marcelo, Alejandro; Ensor, Katherine B.; Rosner, Gary L.

    2011-01-01

    The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material. PMID:21765566

  8. A tree-parenchyma coupled model for lung ventilation simulation.

    PubMed

    Pozin, Nicolas; Montesantos, Spyridon; Katz, Ira; Pichelin, Marine; Vignon-Clementel, Irene; Grandmont, Céline

    2017-11-01

    In this article, we develop a lung ventilation model. The parenchyma is described as an elastic homogenized media. It is irrigated by a space-filling dyadic resistive pipe network, which represents the tracheobronchial tree. In this model, the tree and the parenchyma are strongly coupled. The tree induces an extra viscous term in the system constitutive relation, which leads, in the finite element framework, to a full matrix. We consider an efficient algorithm that takes advantage of the tree structure to enable a fast matrix-vector product computation. This framework can be used to model both free and mechanically induced respiration, in health and disease. Patient-specific lung geometries acquired from computed tomography scans are considered. Realistic Dirichlet boundary conditions can be deduced from surface registration on computed tomography images. The model is compared to a more classical exit compartment approach. Results illustrate the coupling between the tree and the parenchyma, at global and regional levels, and how conditions for the purely 0D model can be inferred. Different types of boundary conditions are tested, including a nonlinear Robin model of the surrounding lung structures. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Concurrent multiscale modeling of microstructural effects on localization behavior in finite deformation solid mechanics

    NASA Astrophysics Data System (ADS)

    Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro; Lim, Hojun; Littlewood, David J.

    2018-02-01

    The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. To resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. In this study, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.

  10. Concurrent multiscale modeling of microstructural effects on localization behavior in finite deformation solid mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro

    The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. In order to resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. Here, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.
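
    Both versions of this record rely on the Schwarz alternating method: each subdomain is solved separately and solution values are passed to the neighbor as Dirichlet data. The sketch below illustrates that coupling on a deliberately simple 1D Poisson problem with two overlapping finite-difference subdomains; it is not the crystal-plasticity/J2 setup of the paper, and the subdomain split and load are arbitrary choices.

```python
import numpy as np

# Schwarz alternating method for -u'' = f on (0, 1), u(0) = u(1) = 0,
# split into two overlapping subdomains [0, 0.6] and [0.4, 1].
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)                      # constant load; exact solution u = x(1-x)/2

def solve_dirichlet(f_loc, ua, ub, h):
    """Solve -u'' = f on a subinterval with Dirichlet values ua, ub (finite differences)."""
    m = len(f_loc)
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f_loc.copy()
    b[0] += ua / h**2
    b[-1] += ub / h**2
    return np.linalg.solve(A, b)

i_left_end = 60     # subdomain 1 covers indices 0..60   (x <= 0.6)
i_right_start = 40  # subdomain 2 covers indices 40..100 (x >= 0.4)

u = np.zeros(n)
for it in range(30):
    # Subdomain 1: Dirichlet value at its right interface taken from the current global iterate.
    u[1:i_left_end] = solve_dirichlet(f[1:i_left_end], 0.0, u[i_left_end], h)
    # Subdomain 2: Dirichlet value at its left interface taken from the just-updated iterate.
    u[i_right_start + 1:n - 1] = solve_dirichlet(f[i_right_start + 1:n - 1], u[i_right_start], 0.0, h)

exact = 0.5 * x * (1.0 - x)
print("max error after Schwarz iterations:", np.max(np.abs(u - exact)))
```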

  11. Rotationally symmetric viscous gas flows

    NASA Astrophysics Data System (ADS)

    Weigant, W.; Plotnikov, P. I.

    2017-03-01

    The Dirichlet boundary value problem for the Navier-Stokes equations of a barotropic viscous compressible fluid is considered. The flow region and the data of the problem are assumed to be invariant under rotations about a fixed axis. The existence of rotationally symmetric weak solutions for all adiabatic exponents from the interval (γ*,∞) with a critical exponent γ* < 4/3 is proved.

  12. Thermoelectric DC conductivities in hyperscaling violating Lifshitz theories

    NASA Astrophysics Data System (ADS)

    Cremonini, Sera; Cvetič, Mirjam; Papadimitriou, Ioannis

    2018-04-01

    We analytically compute the thermoelectric conductivities at zero frequency (DC) in the holographic dual of a four dimensional Einstein-Maxwell-Axion-Dilaton theory that admits a class of asymptotically hyperscaling violating Lifshitz backgrounds with a dynamical exponent z and hyperscaling violating parameter θ. We show that the heat current in the dual Lifshitz theory involves the energy flux, which is an irrelevant operator for z > 1. The linearized fluctuations relevant for computing the thermoelectric conductivities turn on a source for this irrelevant operator, leading to several novel and non-trivial aspects in the holographic renormalization procedure and the identification of the physical observables in the dual theory. Moreover, imposing Dirichlet or Neumann boundary conditions on the spatial components of one of the two Maxwell fields present leads to different thermoelectric conductivities. Dirichlet boundary conditions reproduce the thermoelectric DC conductivities obtained from the near horizon analysis of Donos and Gauntlett, while Neumann boundary conditions result in a new set of DC conductivities. We make preliminary analytical estimates for the temperature behavior of the thermoelectric matrix in appropriate regions of parameter space. In particular, at large temperatures we find that the only case which could lead to a linear resistivity ρ ∼ T corresponds to z = 4/3.

  13. Synthesis and X-ray Crystallography of [Mg(H2O)6][AnO2(C2H5COO)3]2 (An = U, Np, or Pu).

    PubMed

    Serezhkin, Viktor N; Grigoriev, Mikhail S; Abdulmyanov, Aleksey R; Fedoseev, Aleksandr M; Savchenkov, Anton V; Serezhkina, Larisa B

    2016-08-01

    Synthesis and X-ray crystallography of single crystals of [Mg(H2O)6][AnO2(C2H5COO)3]2, where An = U (I), Np (II), or Pu (III), are reported. Compounds I-III are isostructural and crystallize in the trigonal crystal system. The structures of I-III are built of hydrated magnesium cations [Mg(H2O)6](2+) and mononuclear [AnO2(C2H5COO)3](-) complexes, which belong to the AB(01)3 crystallochemical group of uranyl complexes (A = AnO2(2+), B(01) = C2H5COO(-)). Peculiarities of intermolecular interactions in the structures of [Mg(H2O)6][UO2(L)3]2 complexes depending on the carboxylate ion L (acetate, propionate, or n-butyrate) are investigated using the method of molecular Voronoi-Dirichlet polyhedra. Actinide contraction in the series of U(VI)-Np(VI)-Pu(VI) in compounds I-III is reflected in a decrease in the mean An═O bond lengths and in the volume and sphericity degree of Voronoi-Dirichlet polyhedra of An atoms.

  14. Imprecise intron losses are less frequent than precise intron losses but are not rare in plants.

    PubMed

    Ma, Ming-Yue; Zhu, Tao; Li, Xue-Nan; Lan, Xin-Ran; Liu, Heng-Yuan; Yang, Yu-Fei; Niu, Deng-Ke

    2015-05-27

    In this study, we identified 19 intron losses, including 11 precise intron losses (PILs), six imprecise intron losses (IILs), one de-exonization, and one exon deletion in tomato and potato, and 17 IILs in Arabidopsis thaliana. Comparative analysis of related genomes confirmed that all of the IILs have been fixed during evolution. Consistent with previous studies, our results indicate that PILs are a major type of intron loss. However, at least in plants, IILs are unlikely to be as rare as previously reported. This article was reviewed by Jun Yu and Zhang Zhang. For complete reviews, see the Reviewers' Reports section.

  15. Adaptive hybrid optimal quantum control for imprecisely characterized systems.

    PubMed

    Egger, D J; Wilhelm, F K

    2014-06-20

    Optimal quantum control theory carries a huge promise for quantum technology. Its experimental application, however, is often hindered by imprecise knowledge of the input variables, the quantum system's parameters. We show how to overcome this by adaptive hybrid optimal control, using a protocol named Ad-HOC. This protocol combines open- and closed-loop optimal control by first performing a gradient search towards a near-optimal control pulse and then an experimental fidelity estimation with a gradient-free method. For typical settings in solid-state quantum information processing, adaptive hybrid optimal control enhances gate fidelities by an order of magnitude, making optimal control theory applicable and useful.
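
    The protocol summarized above chains an open-loop, gradient-based design against an imprecisely characterized model with a closed-loop, gradient-free refinement against the real system. The following sketch mimics that two-stage structure on a toy infidelity landscape; the objective function, parameter values, and the choice of BFGS followed by Nelder-Mead are illustrative assumptions, not the Ad-HOC implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy "gate infidelity" as a function of two control parameters theta.
# The true system parameters differ slightly from the model used for the open-loop design.
TRUE_PARAMS = np.array([1.03, 0.48])    # what the experiment actually is (unknown to the model)
MODEL_PARAMS = np.array([1.00, 0.50])   # imprecise characterization used for open-loop design

def infidelity(theta, params):
    # Infidelity is zero when the controls match the system parameters.
    return np.sum((theta - params) ** 2)

# Step 1 (open loop): gradient-based optimization against the imprecise model.
open_loop = minimize(infidelity, x0=np.zeros(2), args=(MODEL_PARAMS,), method="BFGS")

# Step 2 (closed loop): gradient-free refinement, treating each evaluation as an
# experimental fidelity estimate on the true system.
closed_loop = minimize(infidelity, x0=open_loop.x, args=(TRUE_PARAMS,), method="Nelder-Mead")

print("infidelity after open-loop design  :", infidelity(open_loop.x, TRUE_PARAMS))
print("infidelity after closed-loop tuning:", infidelity(closed_loop.x, TRUE_PARAMS))
```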

  16. The proposed terminology 'A(1c)-derived average glucose' is inherently imprecise and should not be adopted.

    PubMed

    Bloomgarden, Z T; Inzucchi, S E; Karnieli, E; Le Roith, D

    2008-07-01

    The proposed use of a more precise standard for glycated (A(1c)) and non-glycated haemoglobin would lead to an A(1c) value, when expressed as a percentage, that is lower than that currently in use. One approach advocated to address the potential confusion that would ensue is to replace 'HbA(1c)' with a new term, 'A(1c)-derived average glucose.' We review evidence from several sources suggesting that A(1c) is, in fact, inherently imprecise as a measure of average glucose, so that the proposed terminology should not be adopted.

  17. Reducing preference reversals: The role of preference imprecision and nontransparent methods.

    PubMed

    Pinto-Prades, José Luis; Sánchez-Martínez, Fernando Ignacio; Abellán-Perpiñán, José María; Martínez-Pérez, Jorge E

    2018-05-16

    Preferences elicited with matching and choice usually diverge (as characterised by preference reversals), violating a basic rationality requirement, namely, procedure invariance. We report the results of an experiment that shows that preference reversals between matching (Standard Gamble in our case) and choice are reduced when the matching task is conducted using nontransparent methods. Our results suggest that techniques based on nontransparent methods are less influenced by biases (i.e., compatibility effects) than transparent methods. We also observe that imprecision of preferences influences the degree of preference reversals. The preference reversal phenomenon is less strong in subjects with more precise preferences. Copyright © 2018 John Wiley & Sons, Ltd.

  18. Improving the estimation of flavonoid intake for study of health outcomes

    PubMed Central

    Dwyer, Johanna T.; Jacques, Paul F.; McCullough, Marjorie L.

    2015-01-01

    Imprecision in estimating intakes of non-nutrient bioactive compounds such as flavonoids is a challenge in epidemiologic studies of health outcomes. The sources of this imprecision, using flavonoids as an example, include the variability of bioactive compounds in foods due to differences in growing conditions and processing, the challenges in laboratory quantification of flavonoids in foods, the incompleteness of flavonoid food composition tables, and the lack of adequate dietary assessment instruments. Steps to improve databases of bioactive compounds and to increase the accuracy and precision of the estimation of bioactive compound intakes in studies of health benefits and outcomes are suggested. PMID:26084477

  19. Do new concepts for deriving permissible limits for analytical imprecision and bias have any advantages over existing consensus?

    PubMed

    Petersen, Per Hyltoft; Sandberg, Sverre; Fraser, Callum G

    2011-04-01

    The Stockholm conference held in 1999 on "Strategies to set global analytical quality specifications (AQS) in laboratory medicine" reached a consensus and advocated the ubiquitous application of a hierarchical structure of approaches to setting AQS. This approach has been widely used over the last decade, although several issues remain unanswered. A number of new suggestions have been recently proposed for setting AQS. One of these recommendations is described by Haeckel and Wosniok in this issue of Clinical Chemistry and Laboratory Medicine. Their concept is to estimate the increase in false-positive results using conventional population-based reference intervals, the delta false-positive rate due to analytical imprecision and bias, and relate the results directly to the current analytical quality attained. Thus, the actual estimates in the laboratory for imprecision and bias are compared to the AQS. These values are classified in a ranking system according to the closeness to the AQS, and this combination is the new idea of the proposal. Other new ideas have been proposed recently. We wait, with great interest, as should others, to see if these newer approaches become widely used and worthy of incorporation into the hierarchy.

  20. Detecting Hotspot Information Using Multi-Attribute Based Topic Model

    PubMed Central

    Wang, Jing; Li, Li; Tan, Feng; Zhu, Ying; Feng, Weisi

    2015-01-01

    Microblogging as a kind of social network has become more and more important in our daily lives. Enormous amounts of information are produced and shared on a daily basis. Detecting hot topics in the mountains of information can help people get to the essential information more quickly. However, due to short and sparse features, a large number of meaningless tweets, and other characteristics of microblogs, traditional topic detection methods are often ineffective in detecting hot topics. In this paper, we propose a new topic model named multi-attribute latent Dirichlet allocation (MA-LDA), in which the time and hashtag attributes of microblogs are incorporated into the LDA model. By introducing the time attribute, the MA-LDA model can decide whether a word should appear in hot topics or not. Meanwhile, compared with the traditional LDA model, applying the hashtag attribute in the MA-LDA model gives the core words an artificially high ranking in the results, meaning that the expressiveness of the outcomes can be improved. Empirical evaluations on real data sets demonstrate that our method is able to detect hot topics more accurately and efficiently compared with several baselines. Our method provides strong evidence of the importance of the temporal factor in extracting hot topics. PMID:26496635
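
    MA-LDA extends latent Dirichlet allocation with time and hashtag attributes; those extensions are specific to the paper and are not reproduced here. For orientation only, the sketch below is a plain collapsed Gibbs sampler for standard LDA on a toy word-id corpus, the baseline that MA-LDA modifies. Vocabulary, hyperparameters, and corpus are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def lda_gibbs(docs, n_topics, n_words, alpha=0.1, beta=0.01, n_iter=200):
    """Collapsed Gibbs sampling for plain LDA; docs is a list of word-id lists."""
    n_dk = np.zeros((len(docs), n_topics))          # document-topic counts
    n_kw = np.zeros((n_topics, n_words))            # topic-word counts
    n_k = np.zeros(n_topics)                        # topic totals
    z = [np.array([rng.integers(n_topics) for _ in d]) for d in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
                # Conditional topic distribution for this token given all other assignments.
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + n_words * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    return n_dk, n_kw

# Toy corpus over a vocabulary of 6 word ids; two latent themes (words 0-2 vs 3-5).
docs = [[0, 1, 2, 0, 1], [1, 2, 0, 2], [3, 4, 5, 3], [4, 5, 3, 4, 5], [0, 1, 4, 5]]
n_dk, n_kw = lda_gibbs(docs, n_topics=2, n_words=6)
print("topic-word counts:\n", n_kw)
```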

  1. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  2. Traffic Behavior Recognition Using the Pachinko Allocation Model

    PubMed Central

    Huynh-The, Thien; Banos, Oresti; Le, Ba-Vui; Bui, Dinh-Mao; Yoon, Yongik; Lee, Sungyoung

    2015-01-01

    CCTV-based behavior recognition systems have gained considerable attention in recent years in the transportation surveillance domain for identifying unusual patterns, such as traffic jams, accidents, dangerous driving and other abnormal behaviors. In this paper, a novel approach for traffic behavior modeling is presented for video-based road surveillance. The proposed system combines the pachinko allocation model (PAM) and support vector machine (SVM) for a hierarchical representation and identification of traffic behavior. A background subtraction technique using Gaussian mixture models (GMMs) and an object tracking mechanism based on Kalman filters are utilized to firstly construct the object trajectories. Then, the sparse features comprising the locations and directions of the moving objects are modeled by PAM into traffic topics, namely activities and behaviors. As a key innovation, PAM captures not only the correlation among the activities, but also among the behaviors based on the arbitrary directed acyclic graph (DAG). The SVM classifier is then utilized on top to train and recognize the traffic activity and behavior. The proposed model shows more flexibility and greater expressive power than the commonly-used latent Dirichlet allocation (LDA) approach, leading to a higher recognition accuracy in the behavior classification. PMID:26151213

  3. Development and Utilization of Regional Oceanic Modeling System (ROMS). Delicacy, Imprecision, and Uncertainty of Oceanic Simulations: An Investigation with the Regional Oceanic Modeling System (ROMS). Submesoscale Flows and Mixing in the Ocean Surface Layer Using the Regional Oceanic Modeling System (ROMS). Eddy Effects in General Circulation, Spanning Mean Currents, Mesoscale Eddies, and Topographic Generation, including Submesoscale Nests

    DTIC Science & Technology

    2012-09-30

    unbalanced motions is likely to occur. Due to a rapidly expanding set of investigations on oceanic flows at submesoscales, it is increasingly clear...Uchiyama, E. M. Lane, J. M. Restrepo, & J. C. McWilliams, 2011: A vortex force analysis of the interaction of rip currents and gravity waves. J. Geophys...particular topographic features, the torque is pervasively positive (cyclonic) along the Stream, in opposition to the anticyclonic wind curl in the

  4. Durability reliability analysis for corroding concrete structures under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Hao

    2018-02-01

    This paper presents a durability reliability analysis of reinforced concrete structures subject to the action of marine chloride. The focus is to provide insight into the role of epistemic uncertainties on durability reliability. The corrosion model involves a number of variables whose probabilistic characteristics cannot be fully determined due to the limited availability of supporting data. All sources of uncertainty, both aleatory and epistemic, should be included in the reliability analysis. Two methods are available to formulate the epistemic uncertainty: the imprecise probability-based method and the purely probabilistic method in which the epistemic uncertainties are modeled as random variables. The paper illustrates how the epistemic uncertainties are modeled and propagated in the two methods, and shows how epistemic uncertainties govern the durability reliability.
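
    The abstract contrasts an imprecise-probability treatment of epistemic uncertainty (yielding bounds) with a purely probabilistic one (yielding a single number). The sketch below makes that contrast concrete on a toy load-versus-resistance limit state rather than the paper's chloride-corrosion model; the epistemic interval, the distributions, and all numbers are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000

# Toy limit state: failure when aleatory load L exceeds resistance R.
# R depends on an epistemic parameter c that is only known to lie in [0.9, 1.1].
def p_failure(c, n=N):
    R = c * rng.lognormal(mean=np.log(10.0), sigma=0.10, size=n)   # resistance
    L = rng.gumbel(loc=7.0, scale=0.8, size=n)                     # load
    return np.mean(L > R)

# (1) Imprecise-probability treatment: scan the epistemic interval, report bounds.
cs = np.linspace(0.9, 1.1, 11)
pfs = [p_failure(c) for c in cs]
print("failure probability bounds:", min(pfs), "to", max(pfs))

# (2) Purely probabilistic treatment: model c itself as a random variable.
c_samples = rng.uniform(0.9, 1.1, size=N)
R = c_samples * rng.lognormal(mean=np.log(10.0), sigma=0.10, size=N)
L = rng.gumbel(loc=7.0, scale=0.8, size=N)
print("single failure probability with random c:", np.mean(L > R))
```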

  5. Fuzzy Petri nets to model vision system decisions within a flexible manufacturing system

    NASA Astrophysics Data System (ADS)

    Hanna, Moheb M.; Buck, A. A.; Smith, R.

    1994-10-01

    The paper presents a Petri net approach to modelling, monitoring and control of the behavior of an FMS cell. The FMS cell described comprises a pick and place robot, vision system, CNC-milling machine and 3 conveyors. The work illustrates how the block diagrams in a hierarchical structure can be used to describe events at different levels of abstraction. It focuses on Fuzzy Petri nets (Fuzzy logic with Petri nets) including an artificial neural network (Fuzzy Neural Petri nets) to model and control vision system decisions and robot sequences within an FMS cell. This methodology can be used as a graphical modelling tool to monitor and control the imprecise, vague and uncertain situations, and determine the quality of the output product of an FMS cell.

  6. Estimating potency for the Emax-model without attaining maximal effects.

    PubMed

    Schoemaker, R C; van Gerven, J M; Cohen, A F

    1998-10-01

    The most widely applied model relating drug concentrations to effects is the Emax model. In practice, concentration-effect relationships often deviate from a simple linear relationship but without reaching a clear maximum because a further increase in concentration might be associated with unacceptable or distorting side effects. The parameters for the Emax model can only be estimated with reasonable precision if the curve shows signs of reaching a maximum; otherwise both EC50 and Emax estimates may be extremely imprecise. This paper provides a solution by introducing a new parameter (S0) equal to Emax/EC50 that can be used to characterize potency adequately even if there are no signs of a clear maximum. Simulations are presented to investigate the nature of the new parameter and published examples are used as illustration.
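
    A small worked sketch of the point made above: when the observed concentrations stay far below EC50, the individual Emax and EC50 estimates are poorly identified, but their ratio S0 = Emax/EC50 (the initial slope of the curve) remains well determined. The simulated data, noise level, and true parameter values are hypothetical; this is not the paper's simulation study.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def emax_model(c, emax, ec50):
    return emax * c / (ec50 + c)

# True parameters; concentrations stay far below EC50, so no plateau is observed.
EMAX_TRUE, EC50_TRUE = 100.0, 50.0
S0_TRUE = EMAX_TRUE / EC50_TRUE
conc = np.linspace(0.5, 10.0, 12)                      # well below EC50
effect = emax_model(conc, EMAX_TRUE, EC50_TRUE) + rng.normal(0, 0.5, conc.size)

# Fit the full Emax model: the individual parameters are poorly identified here,
# which shows up as very large (possibly unbounded) standard errors.
popt, pcov = curve_fit(emax_model, conc, effect, p0=[50.0, 20.0], maxfev=10000)
emax_hat, ec50_hat = popt
perr = np.sqrt(np.diag(pcov))
print("Emax estimate:", round(emax_hat, 1), " EC50 estimate:", round(ec50_hat, 1))
print("standard errors:", np.round(perr, 1))

# The ratio S0 = Emax/EC50 (initial slope of the curve) remains well determined.
print("S0 true:", S0_TRUE, " S0 from Emax fit:", round(emax_hat / ec50_hat, 3))
```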

  7. Empirical study of fuzzy compatibility measures and aggregation operators

    NASA Astrophysics Data System (ADS)

    Cross, Valerie V.; Sudkamp, Thomas A.

    1992-02-01

    Two fundamental requirements for the generation of support using incomplete and imprecise information are the ability to measure the compatibility of discriminatory information with domain knowledge and the ability to fuse information obtained from disparate sources. A generic architecture utilizing the generalized fuzzy relational database model has been developed to empirically investigate the support generation capabilities of various compatibility measures and aggregation operators. This paper examines the effectiveness of combinations of compatibility measures from the set-theoretic, geometric distance, and logic- based classes paired with t-norm and generalized mean families of aggregation operators.
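
    As a concrete illustration of the pairing described above, the sketch below computes a set-theoretic and a geometric-distance compatibility measure for two discrete fuzzy sets and then fuses them with t-norms and a generalized mean. The membership grades and the particular measures shown are illustrative choices, not the paper's experimental design.

```python
import numpy as np

# Two fuzzy sets over the same discrete domain (membership grades).
observed = np.array([0.2, 0.8, 1.0, 0.4, 0.0])
reference = np.array([0.1, 0.9, 0.7, 0.5, 0.2])

# Set-theoretic compatibility: |A ∩ B| / |A ∪ B| with min/max as intersection/union.
compat_set = np.sum(np.minimum(observed, reference)) / np.sum(np.maximum(observed, reference))

# Geometric-distance compatibility: 1 - normalized Euclidean distance.
compat_dist = 1.0 - np.linalg.norm(observed - reference) / np.sqrt(len(observed))

# Aggregation of the two pieces of evidence.
scores = np.array([compat_set, compat_dist])
t_norm_min = scores.min()                         # minimum t-norm
t_norm_prod = scores.prod()                       # product t-norm

def generalized_mean(x, p):
    return (np.mean(x ** p)) ** (1.0 / p)

print("compatibilities:", np.round(scores, 3))
print("min t-norm:", round(t_norm_min, 3), " product t-norm:", round(t_norm_prod, 3))
print("generalized mean (p=2):", round(generalized_mean(scores, 2.0), 3))
```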

  8. A Deep and Autoregressive Approach for Topic Modeling of Multimodal Data.

    PubMed

    Zheng, Yin; Zhang, Yu-Jin; Larochelle, Hugo

    2016-06-01

    Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.

  9. An Integrated Approach of Fuzzy Linguistic Preference Based AHP and Fuzzy COPRAS for Machine Tool Evaluation.

    PubMed

    Nguyen, Huu-Tho; Md Dawal, Siti Zawiah; Nukman, Yusoff; Aoyama, Hideki; Case, Keith

    2015-01-01

    Globalization of business and competitiveness in manufacturing has forced companies to improve their manufacturing facilities to respond to market requirements. Machine tool evaluation involves an essential decision using imprecise and vague information, and plays a major role to improve the productivity and flexibility in manufacturing. The aim of this study is to present an integrated approach for decision-making in machine tool selection. This paper is focused on the integration of a consistent fuzzy AHP (Analytic Hierarchy Process) and a fuzzy COmplex PRoportional ASsessment (COPRAS) for multi-attribute decision-making in selecting the most suitable machine tool. In this method, the fuzzy linguistic reference relation is integrated into AHP to handle the imprecise and vague information, and to simplify the data collection for the pair-wise comparison matrix of the AHP which determines the weights of attributes. The output of the fuzzy AHP is imported into the fuzzy COPRAS method for ranking alternatives through the closeness coefficient. Presentation of the proposed model application is provided by a numerical example based on the collection of data by questionnaire and from the literature. The results highlight the integration of the improved fuzzy AHP and the fuzzy COPRAS as a precise tool and provide effective multi-attribute decision-making for evaluating the machine tool in the uncertain environment.

  10. Commutability of NIST SRM 1955 Homocysteine and Folate in Frozen Human Serum with selected total homocysteine immunoassays and enzymatic assays.

    PubMed

    Nelson, Bryant C; Pfeiffer, Christine M; Zhang, Ming; Duewer, David L; Sharpless, Katherine E; Lippa, Katrice A

    2008-09-01

    The National Institute of Standards and Technology (NIST) has recently developed Standard Reference Material (SRM) 1955 Homocysteine and Folate in Frozen Human Serum with certified values for total homocysteine (tHcy) and 5-methyl-tetrahydrofolic acid. NIST has performed an international, interlaboratory assessment of SRM 1955 commutability; results are reported for tHcy only. Total Hcy was measured in 20 patient sera and in 3 levels of SRM 1955 using 14 immunoassays and/or enzymatic assays. Liquid chromatography/tandem mass spectrometry was utilized as the reference assay. An "errors-in-variables" statistical model was utilized to assess the commutability of SRM 1955. Normalized residuals ranged from -2.65 to 2.19 for SRM 1955. The median interlaboratory/interassay imprecision (CV) was approximately 4% for patient specimens and ranged from approximately 3% to approximately 7% for SRM 1955. The median intra-assay imprecision ranged from approximately 1% to approximately 13%. Orthogonal residuals, as a descriptor of assay accuracy, ranged from 0.29 to 7.71 and from 0.20 to 2.22 for patient specimens and SRM 1955 samples, respectively. The current study suggests that SRM 1955 is commutable with the investigated tHcy assays; however, a broader specimen set needs to be evaluated to completely substantiate this conclusion.

  11. Decoding brain activity using a large-scale probabilistic functional-anatomical atlas of human cognition

    PubMed Central

    Jones, Michael N.

    2017-01-01

    A central goal of cognitive neuroscience is to decode human brain activity—that is, to infer mental processes from observed patterns of whole-brain activation. Previous decoding efforts have focused on classifying brain activity into a small set of discrete cognitive states. To attain maximal utility, a decoding framework must be open-ended, systematic, and context-sensitive—that is, capable of interpreting numerous brain states, presented in arbitrary combinations, in light of prior information. Here we take steps towards this objective by introducing a probabilistic decoding framework based on a novel topic model—Generalized Correspondence Latent Dirichlet Allocation—that learns latent topics from a database of over 11,000 published fMRI studies. The model produces highly interpretable, spatially-circumscribed topics that enable flexible decoding of whole-brain images. Importantly, the Bayesian nature of the model allows one to “seed” decoder priors with arbitrary images and text—enabling researchers, for the first time, to generate quantitative, context-sensitive interpretations of whole-brain patterns of brain activity. PMID:29059185

  12. Estimation of population trajectories from count data

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1997-01-01

    Monitoring of changes in animal population size is rarely possible through complete censuses; frequently, the only feasible means of monitoring changes in population size is to use counts of animals obtained by skilled observers as indices to abundance. Analysis of changes in population size can be severely biased if factors related to the acquisition of data are not adequately controlled for. In particular we identify two types of observer effects: these correspond to baseline differences in observer competence, and to changes through time in the ability of individual observers. We present a family of models for count data in which the first of these observer effects is treated as a nuisance parameter. Conditioning on totals of negative binomial counts yields a Dirichlet compound multinomial vector for each observer. Quasi-likelihood is used to estimate parameters related to population trajectory and other parameters of interest; model selection is carried out on the basis of Akaike's information criterion. An example is presented using data on Wood thrush from the North American Breeding Bird Survey.
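
    The key distributional ingredient above is the Dirichlet compound multinomial obtained by conditioning an observer's negative binomial counts on their total. The sketch below evaluates that log-likelihood for one observer's counts under a Dirichlet parameter proportional to a smooth trajectory, with the overall scale playing the role of a dispersion parameter. The counts and trajectory are hypothetical; this is not the quasi-likelihood trend estimation of the paper.

```python
import numpy as np
from scipy.special import gammaln

def dirichlet_multinomial_logpmf(counts, alpha):
    """Log-pmf of the Dirichlet compound multinomial for one observer's yearly counts."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    a0 = alpha.sum()
    return (gammaln(n + 1) - gammaln(counts + 1).sum()
            + gammaln(a0) - gammaln(n + a0)
            + gammaln(counts + alpha).sum() - gammaln(alpha).sum())

# Hypothetical yearly counts of Wood Thrush for one observer over 5 survey years.
counts = np.array([12, 15, 9, 14, 18])

# A Dirichlet parameter proportional to a smooth population trajectory: multiplying
# all alphas by a constant changes the dispersion but not the expected proportions.
trajectory = np.array([1.00, 1.05, 1.10, 1.16, 1.22])   # e.g. ~5% growth per year
for scale in (1.0, 10.0, 100.0):
    ll = dirichlet_multinomial_logpmf(counts, scale * trajectory)
    print(f"scale {scale:6.1f}  log-likelihood {ll:8.3f}")
```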

  13. How and how much does RAD-seq bias genetic diversity estimates?

    PubMed

    Cariou, Marie; Duret, Laurent; Charlat, Sylvain

    2016-11-08

    RAD-seq is a powerful tool, increasingly used in population genomics. However, earlier studies have raised red flags regarding possible biases associated with this technique. In particular, polymorphism on restriction sites results in preferential sampling of closely related haplotypes, so that RAD data tends to underestimate genetic diversity. Here we (1) clarify the theoretical basis of this bias, highlighting the potential confounding effects of population structure and selection, (2) confront predictions with real data from in silico digestion of full genomes and (3) provide a proof of concept toward an ABC-based correction of the RAD-seq bias. Under a neutral and panmictic model, we confirm the previously established relationship between the true polymorphism and its RAD-based estimation, showing a more pronounced bias when polymorphism is high. Using more elaborate models, we show that selection, resulting in heterogeneous levels of polymorphism along the genome, exacerbates the bias and leads to a more pronounced underestimation. On the contrary, spatial genetic structure tends to reduce the bias. We confront the neutral and panmictic model with "ideal" empirical data (in silico RAD-sequencing) using full genomes from natural populations of the fruit fly Drosophila melanogaster and the fungus Schizophyllum commune, harbouring respectively moderate and high genetic diversity. In D. melanogaster, predictions fit the model, but the small difference between the true and RAD polymorphism makes this comparison insensitive to deviations from the model. In the highly polymorphic fungus, the model captures a large part of the bias but makes inaccurate predictions. Accordingly, ABC corrections based on this model improve the estimations, albeit with some imprecisions. The RAD-seq underestimation of genetic diversity associated with polymorphism in restriction sites becomes more pronounced when polymorphism is high. In practice, this means that in many systems where polymorphism does not exceed 2 %, the bias is of minor importance in the face of other sources of uncertainty, such as heterogeneous base composition or technical artefacts. The neutral panmictic model provides a practical means to correct the bias through ABC, albeit with some imprecisions. More elaborate ABC methods might integrate additional parameters, such as population structure and selection, but their opposite effects could hinder accurate corrections.

  14. Water Flow in Karst Aquifer Considering Dynamically Variable Saturation Conduit

    NASA Astrophysics Data System (ADS)

    Tan, Chaoqun; Hu, Bill X.

    2017-04-01

    The karst system is generally conceptualized as a dual-porosity system, characterized by a low-conductivity, high-storage continuum matrix and high-conductivity, quick-flow conduit networks. So far, a common numerical model for simulating flow in karst aquifers is MODFLOW2005-CFP, which was released by USGS in 2008. However, the steady-state approach for conduit flow in CFP is physically impractical when simulating very dynamic hydraulics with a variably saturated conduit. We therefore adopt the method proposed by Reimann et al. (2011) to improve the current model, in which the Saint-Venant equations are used to model the flow in the conduit. Considering that in our study area in Southwest China the conduit is very large and varies along the flow path, and that the Dirichlet boundary varies with rainfall, we further investigate the influence of conduit diameter and outflow boundary on the numerical model. We also analyze the hydraulic process over multiple precipitation events. We find that the numerical model corresponds well with CFP for a saturated conduit, and that it depicts the interaction between matrix and conduit during very dynamic hydraulics well compared with CFP.

  15. Multiclass Data Segmentation using Diffuse Interface Methods on Graphs

    DTIC Science & Technology

    2014-01-01

    37] that performs interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph...continuous setting carry over to the discrete graph representation. For general data segmentation, Bresson et al. in [8], present rigorous convergence

  16. Sine-Gordon type field in spacetime of arbitrary dimension. II: Stochastic quantization

    NASA Astrophysics Data System (ADS)

    Kirillov, A. I.

    1995-11-01

    Using the theory of Dirichlet forms, we prove the existence of a distribution-valued diffusion process such that the Nelson measure of a field with a bounded interaction density is its invariant probability measure. A Langevin equation in mathematically correct form is formulated which is satisfied by the process. The drift term of the equation is interpreted as a renormalized Euclidean current operator.

  17. On the Effective Construction of Compactly Supported Wavelets Satisfying Homogenous Boundary Conditions on the Interval

    NASA Technical Reports Server (NTRS)

    Chiavassa, G.; Liandrat, J.

    1996-01-01

    We construct compactly supported wavelet bases satisfying homogeneous boundary conditions on the interval (0,1). The maximum features of multiresolution analysis on the line are retained, including polynomial approximation and tree algorithms. The case of H₀¹(0, 1) is detailed, and numerical values, required for the implementation, are provided for the Neumann and Dirichlet boundary conditions.

  18. Interactions between Mathematics and Physics: The History of the Concept of Function--Teaching with and about Nature of Mathematics

    ERIC Educational Resources Information Center

    Kjeldsen, Tinne Hoff; Lützen, Jesper

    2015-01-01

    In this paper, we discuss the history of the concept of function and emphasize in particular how problems in physics have led to essential changes in its definition and application in mathematical practices. Euler defined a function as an analytic expression, whereas Dirichlet defined it as a variable that depends in an arbitrary manner on another…

  19. The accurate solution of Poisson's equation by expansion in Chebyshev polynomials

    NASA Technical Reports Server (NTRS)

    Haidvogel, D. B.; Zang, T.

    1979-01-01

    A Chebyshev expansion technique is applied to Poisson's equation on a square with homogeneous Dirichlet boundary conditions. The spectral equations are solved in two ways - by alternating direction and by matrix diagonalization methods. Solutions are sought to both oscillatory and mildly singular problems. The accuracy and efficiency of the Chebyshev approach compare favorably with those of standard second- and fourth-order finite-difference methods.
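
    The record above uses Chebyshev expansions with ADI and matrix-diagonalization solvers in 2D; those are not reproduced here. As a minimal, related illustration, the sketch below solves a 1D Poisson problem with homogeneous Dirichlet conditions by Chebyshev collocation (differentiation matrix) and compares against a known exact solution, showing the spectral accuracy the abstract alludes to. The collocation formulation and problem data are illustrative choices.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Chebyshev-Gauss-Lobatto points on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # rows of a differentiation matrix sum to zero
    return D, x

# Solve u'' = f on (-1, 1) with u(±1) = 0, f chosen so that u = sin(pi x) exactly.
N = 24
D, x = cheb(N)
D2 = D @ D
f = -np.pi**2 * np.sin(np.pi * x)

# Impose homogeneous Dirichlet conditions by restricting to interior points.
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(D2[1:N, 1:N], f[1:N])

print("max error vs exact solution:", np.max(np.abs(u - np.sin(np.pi * x))))
```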

  20. The tunneling effect for a class of difference operators

    NASA Astrophysics Data System (ADS)

    Klein, Markus; Rosenberger, Elke

    We analyze a general class of self-adjoint difference operators H𝜀 = T𝜀 + V𝜀 on ℓ²((𝜀ℤ)^d), where V𝜀 is a multi-well potential and 𝜀 is a small parameter. We give a coherent review of our results on tunneling up to new sharp results on the level of complete asymptotic expansions (see [30-35]). Our emphasis is on general ideas and strategy, possibly of interest for a broader range of readers, and less on detailed mathematical proofs. The wells are decoupled by introducing certain Dirichlet operators on regions containing only one potential well. Then the eigenvalue problem for the Hamiltonian H𝜀 is treated as a small perturbation of these comparison problems. After constructing a Finslerian distance d induced by H𝜀, we show that Dirichlet eigenfunctions decay exponentially with a rate controlled by this distance to the well. It follows with microlocal techniques that the first n eigenvalues of H𝜀 converge to the first n eigenvalues of the direct sum of harmonic oscillators on ℝ^d located at several wells. In a neighborhood of one well, we construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low-lying eigenvalues of H𝜀. These are obtained from eigenfunctions or quasimodes for the operator H𝜀, acting on L²(ℝ^d), via restriction to the lattice (𝜀ℤ)^d. Tunneling is then described by a certain interaction matrix, similar to the analysis for the Schrödinger operator (see [22]), the remainder is exponentially small and roughly quadratic compared with the interaction matrix. We give weighted ℓ²-estimates for the difference of eigenfunctions of Dirichlet-operators in neighborhoods of the different wells and the associated WKB-expansions at the wells. In the last step, we derive full asymptotic expansions for interactions between two "wells" (minima) of the potential energy, in particular for the discrete tunneling effect. Here we essentially use analysis on phase space, complexified in the momentum variable. These results are as sharp as the classical results for the Schrödinger operator in [22].

  1. Nonparametric Hierarchical Bayesian Model for Functional Brain Parcellation

    PubMed Central

    Lashkari, Danial; Sridharan, Ramesh; Vul, Edward; Hsieh, Po-Jang; Kanwisher, Nancy; Golland, Polina

    2011-01-01

    We develop a method for unsupervised analysis of functional brain images that learns group-level patterns of functional response. Our algorithm is based on a generative model that comprises two main layers. At the lower level, we express the functional brain response to each stimulus as a binary activation variable. At the next level, we define a prior over the sets of activation variables in all subjects. We use a Hierarchical Dirichlet Process as the prior in order to simultaneously learn the patterns of response that are shared across the group, and to estimate the number of these patterns supported by data. Inference based on this model enables automatic discovery and characterization of salient and consistent patterns in functional signals. We apply our method to data from a study that explores the response of the visual cortex to a collection of images. The discovered profiles of activation correspond to selectivity to a number of image categories such as faces, bodies, and scenes. More generally, our results appear superior to the results of alternative data-driven methods in capturing the category structure in the space of stimuli. PMID:21841977

  2. Spreading and vanishing in a West Nile virus model with expanding fronts

    NASA Astrophysics Data System (ADS)

    Tarboush, Abdelrazig K.; Lin, ZhiGui; Zhang, MengYun

    2017-05-01

    In this paper, we study a simplified version of a West Nile virus model discussed by Lewis et al. [28], which was considered as a first approximation for the spatial spread of WNv. The basic reproduction number R_0 for the non-spatial epidemic model is defined, and a threshold parameter R_0^D for the corresponding problem with a null Dirichlet boundary condition is introduced. We consider a free boundary problem with a coupled system, which describes the diffusion of birds by a PDE and the movement of mosquitoes by an ODE. The risk index R_0^F(t) associated with the disease in the spatial setting is presented. Sufficient conditions for the WNv to vanish or to spread are given. The asymptotic behavior of the solution to the system when spreading occurs is considered. It is shown that the initial number of infected populations, the diffusion rate of birds, and the length of the initial habitat have important impacts on the vanishing or spreading of the virus. Numerical simulations are presented to illustrate the analytical results.

  3. An Optimization-based Atomistic-to-Continuum Coupling Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  4. Birkhoff Normal Form for Some Nonlinear PDEs

    NASA Astrophysics Data System (ADS)

    Bambusi, Dario

    We consider the problem of extending to PDEs the Birkhoff normal form theorem on Hamiltonian systems close to nonresonant elliptic equilibria. As a model problem we take the nonlinear wave equation with Dirichlet boundary conditions on [0,π]; g is an analytic skew-symmetric function which vanishes for u=0 and is periodic with period 2π in the x variable. We prove, under a nonresonance condition which is fulfilled for most g's, that for any integer M there exists a canonical transformation that puts the Hamiltonian in Birkhoff normal form up to a remainder of order M. The canonical transformation is well defined in a neighbourhood of the origin of a Sobolev-type phase space of sufficiently high order. Some dynamical consequences are obtained. The technique of proof is applicable to quite general semilinear equations in one space dimension.

  5. Some new results for the one-loop mass correction to the compactified λϕ⁴ theory

    NASA Astrophysics Data System (ADS)

    Fucci, Guglielmo; Kirsten, Klaus

    2018-03-01

    In this work, we consider the one-loop effective action of a self-interacting λϕ⁴ field propagating in a D dimensional Euclidean space endowed with d ≤ D compact dimensions. The main purpose of this paper is to compute the corrections to the mass of the field due to the presence of the compactified dimensions. Although the results of the one-loop correction to the mass of a λϕ⁴ field are very well known for compactified toroidal spaces, where the field obeys periodic boundary conditions, similar results do not appear to be readily available for cases in which the scalar field is subject to Dirichlet and Neumann boundary conditions. We apply the results of the one-loop mass correction to the study of the critical temperature in Ginzburg-Landau models.

  6. Analysis of the Westland Data Set

    NASA Technical Reports Server (NTRS)

    Wen, Fang; Willett, Peter; Deb, Somnath

    2001-01-01

    The "Westland" set of empirical accelerometer helicopter data with seeded and labeled faults is analyzed with the aim of condition monitoring. The autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in a relatively few measurements; and it has also been found that augmentation of these by harmonic and other parameters call improve classification significantly. Several techniques have been explored, among these restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers and decision trees. A problem with these approaches, and in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior oil training data and is thus able to quantify probability of error in all exact manner, such that features may be discarded or coarsened appropriately.

  7. Microblogging as an extension of science reporting.

    PubMed

    Büchi, Moritz

    2017-11-01

    Mass media have long provided general publics with science news. New media such as Twitter have entered this system and provide an additional platform for the dissemination of science information. Based on automated collection and analysis of >900 news articles and 70,000 tweets, this study explores the online communication of current science news. Topic modeling (latent Dirichlet allocation) was used to extract five broad themes of science reporting: space missions, the US government shutdown, cancer research, Nobel Prizes, and climate change. Using content and network analysis, Twitter was found to extend public science communication by providing additional voices and contextualizations of science issues. It serves a recommender role by linking to web resources, connecting users, and directing users' attention. This article suggests that microblogging adds a new and relevant layer to the public communication of science.

  8. Frequency analysis of uncertain structures using imprecise probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modares, Mehdi; Bergerson, Joshua

    2015-01-01

    Two new methods for finite element based frequency analysis of a structure with uncertainty are developed. An imprecise probability formulation based on enveloping p-boxes is used to quantify the uncertainty present in the mechanical characteristics of the structure. For each element, independent variations are considered. Using the two developed methods, P-box Frequency Analysis (PFA) and Interval Monte-Carlo Frequency Analysis (IMFA), sharp bounds on natural circular frequencies at different probability levels are obtained. These methods establish a framework for handling incomplete information in structural dynamics. Numerical example problems are presented that illustrate the capabilities of the new methods along with discussions on their computational efficiency.
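
    In the spirit of the interval Monte-Carlo idea mentioned above, the sketch below bounds the natural circular frequencies of a 2-DOF spring-mass system whose stiffnesses are only known as intervals, by sampling within the intervals and solving the generalized eigenvalue problem. The structure, interval widths, and uniform sampling are illustrative assumptions, not the paper's p-box formulation.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)

# 2-DOF shear-frame model: masses are known, stiffnesses only known as intervals.
m1, m2 = 1000.0, 1000.0                              # kg
k1_int, k2_int = (0.9e6, 1.1e6), (1.8e6, 2.2e6)      # N/m, interval bounds

M = np.diag([m1, m2])

def natural_frequencies(k1, k2):
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    evals = eigh(K, M, eigvals_only=True)            # generalized eigenvalue problem
    return np.sqrt(evals)                            # natural circular frequencies (rad/s)

samples = np.array([natural_frequencies(rng.uniform(*k1_int), rng.uniform(*k2_int))
                    for _ in range(5000)])

print("bounds on first frequency  (rad/s):", samples[:, 0].min(), samples[:, 0].max())
print("bounds on second frequency (rad/s):", samples[:, 1].min(), samples[:, 1].max())
```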

  9. Using Transmural Regularization and Dynamic Modeling for Non-Invasive Cardiac Potential Imaging of Endocardial Pacing with Imprecise Thoracic Geometry

    PubMed Central

    Erem, Burak; Coll-Font, Jaume; Orellana, Ramon Martinez; Štóvíček, Petr; Brooks, Dana H.

    2014-01-01

    Cardiac electrical imaging from body surface potential measurements is increasingly being seen as a technology with the potential for use in the clinic, for example for pre-procedure planning or during-treatment guidance for ventricular arrhythmia ablation procedures. However several important impediments to widespread adoption of this technology remain to be effectively overcome. Here we address two of these impediments: the difficulty of reconstructing electric potentials on the inner (endocardial) as well as outer (epicardial) surfaces of the ventricles, and the need for full anatomical imaging of the subject’s thorax to build an accurate subject-specific geometry. We introduce two new features in our reconstruction algorithm: a non-linear low-order dynamic parameterization derived from the measured body surface signals, and a technique to jointly regularize both surfaces. With these methodological innovations in combination, it is possible to reconstruct endocardial activation from clinically acquired measurements with an imprecise thorax geometry. In particular we test the method using body surface potentials acquired from three subjects during clinical procedures where the subjects’ hearts were paced on their endocardia using a catheter device. Our geometric models were constructed using a set of CT scans limited in axial extent to the immediate region near the heart. The catheter system provides a reference location to which we compare our results. We compare our estimates of pacing site localization, in terms of both accuracy and stability, to those reported in a recent clinical publication [1], where a full set of CT scans were available and only epicardial potentials were reconstructed. PMID:24595345

  10. Multimedia models of human exposure to industrial emissions through food products: Identifying the critical pathways

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKone, T.E.; Maddalena, R.L.; Dowdy, D.L.

    1995-12-31

    The magnitude of potential human exposure through foods can be and has been assessed through food-product samples. Reducing these exposures requires not only measuring the exposure-media (food) concentrations but also defining the processes that link various contaminant sources to food. The authors present here a general approach for quantifying human exposure to contaminants in home-grown foods using four factors: ambient environmental concentrations in air and soil; bioconcentration factors between air, soil, leaves, and roots; translocations from leaves and roots to edible-plant parts; and human activity patterns associated with consumption of home-grown food products. The authors combined these factors in a general model and used sensitivity analyses to assess the important contributions to the imprecision of exposure estimates. Despite large uncertainties about human activities, they found the major source of uncertainty (variance) in exposure estimates is attributable to imprecision in quantifying bioconcentration. This is especially evident for persistent and fat-soluble compounds, such as PCBs, PAHs, dioxins, and furans. The evaluation of analytical and theoretical methods used to characterize bioconcentration factors reveals that molecular connectivity indices (MCIs) based on molecular structure are better predictors of bioconcentration in plants than are the solubility factors currently in use. The authors describe ongoing laboratory experiments in exposure chambers with garden foods such as leafy vegetables and root crops. They then assess the ability of these methods to better define chemical mass transfers among the components of the home-garden system: air, soil-organic phases, soil solution, plant roots, and plant leaves.

  11. Optimizing decentralized production-distribution planning problem in a multi-period supply chain network under uncertainty

    NASA Astrophysics Data System (ADS)

    Nourifar, Raheleh; Mahdavi, Iraj; Mahdavi-Amiri, Nezam; Paydar, Mohammad Mahdi

    2017-09-01

    Decentralized supply chain management is found to be significantly relevant in today's competitive markets. Production and distribution planning is posed as an important optimization problem in supply chain networks. Here, we propose a multi-period decentralized supply chain network model with uncertainty. The imprecision related to uncertain parameters, such as demand and the price of the final product, is represented with stochastic and fuzzy numbers. We provide a mathematical formulation of the problem as a bi-level mixed integer linear programming model. Owing to the problem's complexity, a solution structure is developed that incorporates a novel heuristic algorithm based on the Kth-best algorithm, a fuzzy approach, and a chance-constraint approach. Ultimately, a numerical example is constructed and worked through to demonstrate the applicability of the optimization model. A sensitivity analysis is also performed.

  12. Probabilistic interpretation of Peelle's pertinent puzzle and its resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, Kenneth M.; Kawano, T.; Talou, P.

    2004-01-01

    Peelle's Pertinent Puzzle (PPP) states a seemingly plausible set of measurements with their covariance matrix, which produce an implausible answer. To answer the PPP question, we describe a reasonable experimental situation that is consistent with the PPP solution. The confusion surrounding the PPP arises in part because of its imprecise statement, which permits a variety of interpretations and resulting answers, some of which seem implausible. We emphasize the importance of basing the analysis on an unambiguous probabilistic model that reflects the experimental situation. We present several different models of how the measurements quoted in the PPP problem could be obtained, and interpret their solution in terms of a detailed probabilistic analysis. We suggest a probabilistic approach to handling uncertainties about which model to use.

  13. Probabilistic Interpretation of Peelle's Pertinent Puzzle and its Resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, Kenneth M.; Kawano, Toshihiko; Talou, Patrick

    2005-05-24

    Peelle's Pertinent Puzzle (PPP) states a seemingly plausible set of measurements with their covariance matrix, which produce an implausible answer. To answer the PPP question, we describe a reasonable experimental situation that is consistent with the PPP solution. The confusion surrounding the PPP arises in part because of its imprecise statement, which permits a variety of interpretations and resulting answers, some of which seem implausible. We emphasize the importance of basing the analysis on an unambiguous probabilistic model that reflects the experimental situation. We present several different models of how the measurements quoted in the PPP problem could be obtained, and interpret their solution in terms of a detailed probabilistic analysis. We suggest a probabilistic approach to handling uncertainties about which model to use.
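
    The two records above do not restate the puzzle's numbers, but in its commonly quoted form PPP consists of two measurements of the same quantity, 1.5 and 1.0, each with an independent 10% statistical uncertainty and a fully correlated 20% normalization uncertainty (both relative to the measured values); generalized least squares then yields roughly 0.88 ± 0.22, below both data points. The sketch below reproduces that arithmetic; the numbers come from the standard statement of the puzzle, not from these records.

```python
import numpy as np

# Commonly quoted statement of Peelle's Pertinent Puzzle: two measurements of the same
# quantity with independent 10% statistical and fully correlated 20% normalization
# uncertainties, both taken relative to the measured values.
m = np.array([1.5, 1.0])
stat = 0.10 * m                       # independent components
norm = 0.20 * m                       # fully correlated components
C = np.diag(stat**2) + np.outer(norm, norm)

# Generalized least-squares estimate of the common mean.
ones = np.ones_like(m)
Cinv = np.linalg.inv(C)
mu = (ones @ Cinv @ m) / (ones @ Cinv @ ones)
sigma = np.sqrt(1.0 / (ones @ Cinv @ ones))

print(f"GLS estimate: {mu:.2f} +/- {sigma:.2f}")   # ~0.88 +/- 0.22, below both data points
```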

  14. Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot

    PubMed Central

    Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki

    2018-01-01

    In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. The results support our theoretical outcomes. PMID:29872389

  15. Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot.

    PubMed

    Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki

    2018-01-01

    In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback-Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. The results support our theoretical outcomes.
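
    The abstract above reduces action selection to maximization of a submodular, non-decreasing information gain, for which the lazy greedy algorithm carries a performance guarantee. As a hedged sketch (generic lazy greedy, not the authors' code), the following selects a budgeted set of actions for any such set function; the toy coverage-style gain and action names are placeholders for the MHDP information gain.

      import heapq
      import itertools

      def lazy_greedy(actions, gain, budget):
          """Greedily select up to `budget` actions for a submodular, non-decreasing
          set function `gain`, re-evaluating marginal gains only when a candidate
          reaches the top of the priority queue (the "lazy" trick)."""
          selected, base = [], 0.0
          tie = itertools.count()                # tie-breaker so the heap never compares actions
          heap = [(-gain([a]), next(tie), a) for a in actions]
          heapq.heapify(heap)
          while heap and len(selected) < budget:
              _, _, a = heapq.heappop(heap)
              fresh = gain(selected + [a]) - base          # up-to-date marginal gain
              if not heap or fresh >= -heap[0][0]:         # still at least as good as the runner-up?
                  selected.append(a)
                  base += fresh
              else:
                  heapq.heappush(heap, (-fresh, next(tie), a))
          return selected

      # Toy usage with a coverage-style gain standing in for the MHDP information gain:
      sets = {"look": {1, 2}, "grasp": {2, 3, 4}, "shake": {4, 5}, "tap": {1}}
      cover_gain = lambda chosen: float(len(set().union(*[sets[a] for a in chosen])))
      print(lazy_greedy(list(sets), cover_gain, budget=2))   # -> ['grasp', 'shake'] here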

  16. Synthesis and crystal structure analysis of uranyl triple acetates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klepov, Vladislav V., E-mail: vladislavklepov@gmail.com; Department of Chemistry, Samara National Research University, 443086 Samara; Serezhkina, Larisa B.

    2016-12-15

    Single crystals of the triple acetates NaR[UO₂(CH₃COO)₃]₃·6H₂O (R = Mg, Co, Ni, Zn), well-known for their use as reagents for sodium determination, were grown from aqueous solutions and their structural and spectroscopic properties were studied. Crystal structures of the mentioned phases are based upon (Na[UO₂(CH₃COO)₃]₃)²⁻ clusters and [R(H₂O)₆]²⁺ aqua-complexes. The cooling of a single crystal of NaMg[UO₂(CH₃COO)₃]₃·6H₂O from 300 to 100 K leads to a phase transition from a trigonal to a monoclinic crystal system. Intermolecular interactions between the structural units and their mutual packing were studied and compared from the point of view of the stereoatomic model of crystal structures based on Voronoi-Dirichlet tessellation. Using this method we compared the crystal structures of the triple acetates with Na[UO₂(CH₃COO)₃] and [R(H₂O)₆][UO₂(CH₃COO)₃]₂ and proposed reasons for the stability of the triple acetates. Infrared and Raman spectra were collected and their bands were assigned. - Graphical abstract: Single crystals of uranium-based triple acetates, analytical reagents for sodium determination, were synthesized and structurally, spectroscopically and topologically characterized. The structures were compared with the structures of compounds from the preceding families [M(H₂O)₆][UO₂(CH₃COO)₃]₂ (M = Mg, Co, Ni, Zn) and Na[UO₂(CH₃COO)₃]. Analysis was performed with the method of molecular Voronoi-Dirichlet polyhedra to reveal a large contribution of hydrogen bonds to the intermolecular interactions, which may be a reason for the low solubility of the studied complexes.

  17. Near-Field Integration of a SiN Nanobeam and a SiO2 Microcavity for Heisenberg-Limited Displacement Sensing

    NASA Astrophysics Data System (ADS)

    Schilling, R.; Schütz, H.; Ghadimi, A. H.; Sudhir, V.; Wilson, D. J.; Kippenberg, T. J.

    2016-05-01

    Placing a nanomechanical object in the evanescent near field of a high-Q optical microcavity gives access to strong gradient forces and quantum-limited displacement readout, offering an attractive platform for both precision sensing technology and basic quantum optics research. Robustly implementing this platform is challenging, however, as it requires integrating optically smooth surfaces separated by ≲ λ/10. Here we describe an exceptionally high-cooperativity, single-chip optonanomechanical transducer based on a high-stress Si3N4 nanobeam monolithically integrated into the evanescent near field of a SiO2 microdisk cavity. Employing a vertical integration technique based on planarized sacrificial layers, we realize beam-disk gaps as little as 25 nm while maintaining a mechanical Q·f > 10^12 Hz and an intrinsic optical Q ~ 10^7. The combination of low loss, small gap, and parallel-plane geometry results in radio-frequency flexural modes with vacuum optomechanical coupling rates of 100 kHz, single-photon cooperativities in excess of unity, and large zero-point frequency (displacement) noise amplitudes of 10 kHz (fm)/√Hz. In conjunction with the high power-handling capacity of SiO2 and low extraneous substrate noise, the transducer performs particularly well as a sensor, with recent deployment in a 4-K cryostat realizing a displacement imprecision 40 dB below that at the standard quantum limit (SQL) and an imprecision-backaction product <5 ℏ [Wilson et al., Nature (London) 524, 325 (2015)]. In this report, we provide a comprehensive description of device design, fabrication, and characterization, with an emphasis on extending Heisenberg-limited readout to room temperature. Towards this end, we describe a room-temperature experiment in which a displacement imprecision 32 dB below that at the SQL and an imprecision-backaction product <60 ℏ is achieved. Our results extend the outlook for measurement-based quantum control of nanomechanical oscillators and suggest an alternative platform for functionally integrated "hybrid" quantum optomechanics.

  18. Thundercloud electrification models in atmospheric electricity and meteorology

    NASA Technical Reports Server (NTRS)

    Parker, L. W.

    1980-01-01

    A survey is presented of presently-available theoretical models. The models are classified into three main groups: (1) convection models, (2) precipitation models, and (3) general models. The strengths and weaknesses of the models, their dimensionalities and degrees of sophistication, the nature of their inputs and outputs, and the various specific charging mechanisms treated by them, are considered. In results obtained to date, the convection models predict no significant electrification enhancement based on conductivity gradients and convection alone, with the assumed air circulation patterns. Results of the precipitation models show that the initial electrification can occur rapidly and stably through noninductive collision mechanisms involving ice, and breakdown-strength electric fields can relatively easily be achieved subsequently through the collisional-inductive mechanism. A critical difficulty of the collision mechanisms is imprecise knowledge of relaxation times versus contact times, which can easily lead to overestimates of electrification. The general model results tend to support those of the precipitation models in emphasizing the high potential effectiveness of the collisional-inductive mechanism.

  19. Multiclass Data Segmentation Using Diffuse Interface Methods on Graphs

    DTIC Science & Technology

    2014-01-01

    interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph ... Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph-based energy function that ... over to the discrete graph representation. For general data segmentation, Bresson et al. in [8] present rigorous convergence results for two algorithms

  20. Ages of Records in Random Walks

    NASA Astrophysics Data System (ADS)

    Szabó, Réka; Vető, Bálint

    2016-12-01

    We consider random walks with continuous and symmetric step distributions. We prove universal asymptotics for the average proportion of the age of the kth longest lasting record for k = 1, 2, … and for the probability that the record of the kth longest age is broken at step n. Due to the relation to the Chinese restaurant process, the ranked sequence of proportions of ages converges to the Poisson-Dirichlet distribution.
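
    As a hedged illustration (an assumption-laden Monte Carlo experiment, not the paper's derivation), the sketch below estimates the average proportion of time occupied by the longest-lasting record age of a symmetric random walk with continuous steps. The step distribution, walk length, and number of repetitions are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(0)

      def longest_record_age_fraction(n_steps):
          """Fraction of time spent in the longest-lasting (upper) record of one walk."""
          walk = np.cumsum(rng.standard_normal(n_steps))      # continuous, symmetric steps
          record_times = [0]
          running_max = walk[0]
          for t in range(1, n_steps):
              if walk[t] > running_max:                        # a new record is set
                  running_max = walk[t]
                  record_times.append(t)
          ages = np.diff(record_times + [n_steps])             # how long each record lasted
          return ages.max() / n_steps

      fractions = [longest_record_age_fraction(20_000) for _ in range(200)]
      print(f"estimated mean proportion of the longest record age: {np.mean(fractions):.3f}")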

  1. Multispike solutions for the Brezis-Nirenberg problem in dimension three

    NASA Astrophysics Data System (ADS)

    Musso, Monica; Salazar, Dora

    2018-06-01

    We consider the problem Δu + λu + u^5 = 0, u > 0, in a smooth bounded domain Ω in R^3, under zero Dirichlet boundary conditions. We obtain solutions to this problem exhibiting multiple bubbling behavior at k different points of the domain as λ tends to a special positive value λ_0, which we characterize in terms of the Green's function of −Δ − λ.

  2. Visibility of quantum graph spectrum from the vertices

    NASA Astrophysics Data System (ADS)

    Kühn, Christian; Rohleder, Jonathan

    2018-03-01

    We investigate the relation between the eigenvalues of the Laplacian with Kirchhoff vertex conditions on a finite metric graph and a corresponding Titchmarsh-Weyl function (a parameter-dependent Neumann-to-Dirichlet map). We give a complete description of all real resonances, including multiplicities, in terms of the edge lengths and the connectivity of the graph, and apply it to characterize all eigenvalues which are visible for the Titchmarsh-Weyl function.

  3. A nonlinear ordinary differential equation associated with the quantum sojourn time

    NASA Astrophysics Data System (ADS)

    Benguria, Rafael D.; Duclos, Pierre; Fernández, Claudio; Sing-Long, Carlos

    2010-11-01

    We study a nonlinear ordinary differential equation on the half-line, with the Dirichlet boundary condition at the origin. This equation arises when studying the local maxima of the sojourn time for a free quantum particle whose states belong to an adequate subspace of the unit sphere of the corresponding Hilbert space. We establish several results concerning the existence and asymptotic behavior of the solutions.

  4. Locating Temporal Functional Dynamics of Visual Short-Term Memory Binding using Graph Modular Dirichlet Energy

    NASA Astrophysics Data System (ADS)

    Smith, Keith; Ricaud, Benjamin; Shahid, Nauman; Rhodes, Stephen; Starr, John M.; Ibáñez, Augustin; Parra, Mario A.; Escudero, Javier; Vandergheynst, Pierre

    2017-02-01

    Visual short-term memory binding tasks are a promising early marker for Alzheimer’s disease (AD). To uncover functional deficits of AD in these tasks it is meaningful to first study unimpaired brain function. Electroencephalogram recordings were obtained from encoding and maintenance periods of tasks performed by healthy young volunteers. We probe the task’s transient physiological underpinnings by contrasting shape only (Shape) and shape-colour binding (Bind) conditions, displayed in the left and right sides of the screen, separately. Particularly, we introduce and implement a novel technique named Modular Dirichlet Energy (MDE) which allows robust and flexible analysis of the functional network with unprecedented temporal precision. We find that connectivity in the Bind condition is less integrated with the global network than in the Shape condition in occipital and frontal modules during the encoding period of the right screen condition. Using MDE we are able to discern driving effects in the occipital module between 100-140 ms, coinciding with the P100 visually evoked potential, followed by a driving effect in the frontal module between 140-180 ms, suggesting that the differences found constitute an information processing difference between these modules. This provides temporally precise information over a heterogeneous population in promising tasks for the detection of AD.

  5. Laplace-Beltrami Eigenvalues and Topological Features of Eigenfunctions for Statistical Shape Analysis

    PubMed Central

    Reuter, Martin; Wolter, Franz-Erich; Shenton, Martha; Niethammer, Marc

    2009-01-01

    This paper proposes the use of the surface based Laplace-Beltrami and the volumetric Laplace eigenvalues and -functions as shape descriptors for the comparison and analysis of shapes. These spectral measures are isometry invariant and therefore allow for shape comparisons with minimal shape pre-processing. In particular, no registration, mapping, or remeshing is necessary. The discriminatory power of the 2D surface and 3D solid methods is demonstrated on a population of female caudate nuclei (a subcortical gray matter structure of the brain, involved in memory function, emotion processing, and learning) of normal control subjects and of subjects with schizotypal personality disorder. The behavior and properties of the Laplace-Beltrami eigenvalues and -functions are discussed extensively for both the Dirichlet and Neumann boundary conditions, showing advantages of the Neumann vs. the Dirichlet spectra in 3D. Furthermore, topological analyses employing the Morse-Smale complex (on the surfaces) and the Reeb graph (in the solids) are performed on selected eigenfunctions, yielding shape descriptors that are capable of localizing geometric properties and detecting shape differences by indirectly registering topological features such as critical points, level sets and integral lines of the gradient field across subjects. The use of these topological features of the Laplace-Beltrami eigenfunctions in 2D and 3D for statistical shape analysis is novel. PMID:20161035

  6. Exactly solvable model of the two-dimensional electrical double layer.

    PubMed

    Samaj, L; Bajnok, Z

    2005-12-01

    We consider equilibrium statistical mechanics of a simplified model for the ideal conductor electrode in an interface contact with a classical semi-infinite electrolyte, modeled by the two-dimensional Coulomb gas of pointlike unit charges in the stability-against-collapse regime of reduced inverse temperatures 0 ≤ β < 2. If there is a potential difference between the bulk interior of the electrolyte and the grounded electrode, the electrolyte region close to the electrode (known as the electrical double layer) carries some nonzero surface charge density. The model is mappable onto an integrable semi-infinite sine-Gordon theory with Dirichlet boundary conditions. The exact form-factor and boundary state information gained from the mapping provide asymptotic forms of the charge and number density profiles of electrolyte particles at large distances from the interface. The result for the asymptotic behavior of the induced electric potential, related to the charge density via the Poisson equation, confirms the validity of the concept of renormalized charge and the corresponding saturation hypothesis. It is documented on the nonperturbative result for the asymptotic density profile at a strictly nonzero β that the Debye-Hückel β → 0 limit is a delicate issue.

  7. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency compared with using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of the clock offset and skew, which allows the time synchronization to be achieved. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
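
    The Dirichlet process mixture used above lets the number of mixture components adapt to the data. As a minimal, hedged sketch of that ingredient alone (not the authors' synchronization filter), the following draws samples from a truncated stick-breaking DP mixture of Gaussians, such as one might use to model non-deterministic delays; the concentration value, base-measure parameters, and truncation level are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      def sample_dp_mixture(n, alpha=1.0, truncation=50):
          """Draw n samples from a (truncated) stick-breaking DP mixture of Gaussians."""
          # Stick-breaking weights: v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j)
          v = rng.beta(1.0, alpha, size=truncation)
          w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
          w /= w.sum()                                   # renormalize the truncated weights
          # Component parameters drawn from an assumed (illustrative) base measure.
          means = rng.normal(0.0, 2.0, size=truncation)
          stds = rng.gamma(2.0, 0.5, size=truncation)
          z = rng.choice(truncation, size=n, p=w)        # component assignments
          return rng.normal(means[z], stds[z])

      delays = sample_dp_mixture(1000)
      print(delays[:5])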

  8. Uncertainty and Risk Management in Cyber Situational Awareness

    NASA Astrophysics Data System (ADS)

    Li, Jason; Ou, Xinming; Rajagopalan, Raj

    Handling cyber threats unavoidably needs to deal with both uncertain and imprecise information. What we can observe as potential malicious activities can seldom give us 100% confidence on important questions we care about, e.g. what machines are compromised and what damage has been incurred. In security planning, we need information on how likely it is that a vulnerability can lead to a successful compromise, to better balance security against functionality, performance, and ease of use. This information is at best qualitative and is often vague and imprecise. In cyber situational awareness, we have to rely on such imperfect information to detect real attacks and to prevent an attack from happening through appropriate risk management. This chapter surveys existing technologies in handling uncertainty and risk management in cyber situational awareness.

  9. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the error amount modelled for CO, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.
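
    As a hedged, self-contained sketch of the idea (not the study's code or data), the simulation below adds multiplicative error to a "true" exposure series in the two pure forms and reproduces the familiar pattern: classical error attenuates a log-linear health-effect slope, while pure Berkson error leaves it roughly unbiased. The slope, error size, series length, and use of statsmodels for the Poisson fit are illustrative assumptions.

      import numpy as np
      import statsmodels.api as sm   # assumed available for the Poisson GLM fit

      rng = np.random.default_rng(42)
      n, beta_true, sigma_err = 20000, 0.4, 0.5

      def poisson_slope(log_x_obs, counts):
          """Slope of a Poisson log-linear model of counts on the observed exposure."""
          X = sm.add_constant(log_x_obs)
          return sm.GLM(counts, X, family=sm.families.Poisson()).fit().params[1]

      # Classical error: observed = true + noise  -> slope is attenuated.
      log_x_true = rng.normal(0.0, 1.0, n)
      y = rng.poisson(np.exp(beta_true * log_x_true))
      log_x_classical = log_x_true + rng.normal(0.0, sigma_err, n)

      # Berkson error: true = observed + noise  -> slope is roughly unbiased.
      log_x_berkson = rng.normal(0.0, 1.0, n)
      y_berkson = rng.poisson(np.exp(beta_true * (log_x_berkson + rng.normal(0.0, sigma_err, n))))

      print("true slope            :", beta_true)
      print("classical-error slope :", round(poisson_slope(log_x_classical, y), 3))
      print("berkson-error slope   :", round(poisson_slope(log_x_berkson, y_berkson), 3))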

  10. A neural model of hierarchical reinforcement learning.

    PubMed

    Rasmussen, Daniel; Voelker, Aaron; Eliasmith, Chris

    2017-01-01

    We develop a novel, biologically detailed neural model of reinforcement learning (RL) processes in the brain. This model incorporates a broad range of biological features that pose challenges to neural RL, such as temporally extended action sequences, continuous environments involving unknown time delays, and noisy/imprecise computations. Most significantly, we expand the model into the realm of hierarchical reinforcement learning (HRL), which divides the RL process into a hierarchy of actions at different levels of abstraction. Here we implement all the major components of HRL in a neural model that captures a variety of known anatomical and physiological properties of the brain. We demonstrate the performance of the model in a range of different environments, in order to emphasize the aim of understanding the brain's general reinforcement learning ability. These results show that the model compares well to previous modelling work and demonstrates improved performance as a result of its hierarchical ability. We also show that the model's behaviour is consistent with available data on human hierarchical RL, and generate several novel predictions.

  11. An Integrated Approach of Fuzzy Linguistic Preference Based AHP and Fuzzy COPRAS for Machine Tool Evaluation

    PubMed Central

    Nguyen, Huu-Tho; Md Dawal, Siti Zawiah; Nukman, Yusoff; Aoyama, Hideki; Case, Keith

    2015-01-01

    Globalization of business and competitiveness in manufacturing has forced companies to improve their manufacturing facilities to respond to market requirements. Machine tool evaluation involves an essential decision using imprecise and vague information, and plays a major role in improving productivity and flexibility in manufacturing. The aim of this study is to present an integrated approach for decision-making in machine tool selection. This paper is focused on the integration of a consistent fuzzy AHP (Analytic Hierarchy Process) and a fuzzy COmplex PRoportional ASsessment (COPRAS) for multi-attribute decision-making in selecting the most suitable machine tool. In this method, the fuzzy linguistic reference relation is integrated into AHP to handle the imprecise and vague information, and to simplify the data collection for the pair-wise comparison matrix of the AHP which determines the weights of attributes. The output of the fuzzy AHP is imported into the fuzzy COPRAS method for ranking alternatives through the closeness coefficient. The application of the proposed model is demonstrated by a numerical example based on data collected by questionnaire and from the literature. The results highlight the integration of the improved fuzzy AHP and the fuzzy COPRAS as a precise tool and provide effective multi-attribute decision-making for evaluating machine tools in an uncertain environment. PMID:26368541

  12. Proceedings of the Seventh International Symposium on Methodologies for Intelligent Systems (Poster Session)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harber, K.S.

    1993-05-01

    This report contains the following papers: Implications in vivid logic; a self-learning Bayesian expert system; a natural language generation system for a heterogeneous distributed database system; "competence-switching" managed by intelligent systems; strategy acquisition by an artificial neural network: Experiments in learning to play a stochastic game; viewpoints and selective inheritance in object-oriented modeling; multivariate discretization of continuous attributes for machine learning; utilization of the case-based reasoning method to resolve dynamic problems; formalization of an ontology of ceramic science in CLASSIC; linguistic tools for intelligent systems; an application of rough sets in knowledge synthesis; and a relational model for imprecise queries. These papers have been indexed separately.

  13. Proceedings of the Seventh International Symposium on Methodologies for Intelligent Systems (Poster Session)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harber, K.S.

    1993-05-01

    This report contains the following papers: Implications in vivid logic; a self-learning Bayesian expert system; a natural language generation system for a heterogeneous distributed database system; "competence-switching" managed by intelligent systems; strategy acquisition by an artificial neural network: Experiments in learning to play a stochastic game; viewpoints and selective inheritance in object-oriented modeling; multivariate discretization of continuous attributes for machine learning; utilization of the case-based reasoning method to resolve dynamic problems; formalization of an ontology of ceramic science in CLASSIC; linguistic tools for intelligent systems; an application of rough sets in knowledge synthesis; and a relational model for imprecise queries. These papers have been indexed separately.

  14. Variability in perceived satisfaction of reservoir management objectives

    USGS Publications Warehouse

    Owen, W.J.; Gates, T.K.; Flug, M.

    1997-01-01

    Fuzzy set theory provides a useful model to address imprecision in interpreting linguistically described objectives for reservoir management. Fuzzy membership functions can be used to represent degrees of objective satisfaction for different values of management variables. However, lack of background information, differing experiences and qualifications, and complex interactions of influencing factors can contribute to significant variability among membership functions derived from surveys of multiple experts. In the present study, probabilistic membership functions are used to model variability in experts' perceptions of satisfaction of objectives for hydropower generation, fish habitat, kayaking, rafting, and scenery preservation on the Green River through operations of Flaming Gorge Dam. Degree of variability in experts' perceptions differed among objectives but resulted in substantial uncertainty in estimation of optimal reservoir releases.

  15. Asymptotic analysis of the narrow escape problem in dendritic spine shaped domain: three dimensions

    NASA Astrophysics Data System (ADS)

    Li, Xiaofei; Lee, Hyundae; Wang, Yuliang

    2017-08-01

    This paper deals with the three-dimensional narrow escape problem in a dendritic spine shaped domain, which is composed of a relatively big head and a thin neck. The narrow escape problem is to compute the mean first passage time of Brownian particles traveling from inside the head to the end of the neck. The original model is to solve a mixed Dirichlet-Neumann boundary value problem for the Poisson equation in the composite domain, and is computationally challenging. In this paper we seek to transfer the original problem to a mixed Robin-Neumann boundary value problem by dropping the thin neck part, and rigorously derive the asymptotic expansion of the mean first passage time with high order terms. This study is a nontrivial three-dimensional generalization of the work in Li (2014 J. Phys. A: Math. Theor. 47 505202), where a two-dimensional analogue domain is considered.

  16. A local crack-tracking strategy to model three-dimensional crack propagation with embedded methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annavarapu, Chandrasekhar; Settgast, Randolph R.; Vitali, Efrem

    We develop a local, implicit crack tracking approach to propagate embedded failure surfaces in three dimensions. We build on the global crack-tracking strategy of Oliver et al. (Int. J. Numer. Anal. Meth. Geomech., 2004; 28:609–632) that tracks all potential failure surfaces in a problem at once by solving a Laplace equation with anisotropic conductivity. We discuss important modifications to this algorithm with a particular emphasis on the effect of the Dirichlet boundary conditions for the Laplace equation on the resultant crack path. Algorithmic and implementational details of the proposed method are provided. Finally, several three-dimensional benchmark problems are studied and results are compared with the available literature. The results indicate that the proposed method addresses pathological cases, exhibits better behavior in the presence of closely interacting fractures, and provides a viable strategy to robustly evolve embedded failure surfaces in 3D.

  17. A local crack-tracking strategy to model three-dimensional crack propagation with embedded methods

    DOE PAGES

    Annavarapu, Chandrasekhar; Settgast, Randolph R.; Vitali, Efrem; ...

    2016-09-29

    We develop a local, implicit crack tracking approach to propagate embedded failure surfaces in three dimensions. We build on the global crack-tracking strategy of Oliver et al. (Int. J. Numer. Anal. Meth. Geomech., 2004; 28:609–632) that tracks all potential failure surfaces in a problem at once by solving a Laplace equation with anisotropic conductivity. We discuss important modifications to this algorithm with a particular emphasis on the effect of the Dirichlet boundary conditions for the Laplace equation on the resultant crack path. Algorithmic and implementational details of the proposed method are provided. Finally, several three-dimensional benchmark problems are studied and results are compared with the available literature. The results indicate that the proposed method addresses pathological cases, exhibits better behavior in the presence of closely interacting fractures, and provides a viable strategy to robustly evolve embedded failure surfaces in 3D.

  18. Research of the multimodal brain-tumor segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Yisu; Chen, Wufan

    2015-12-01

    It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. A new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smooth constraint is proposed in this study. Besides the segmentation of single modal brain tumor images, we developed the algorithm to segment multimodal brain tumor images using the magnetic resonance (MR) multimodal features and to obtain the active tumor and edema at the same time. The proposed algorithm is evaluated and compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance.

  19. Behavior Based Social Dimensions Extraction for Multi-Label Classification

    PubMed Central

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849
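
    As a hedged sketch of the LDA ingredient described above (not the authors' pipeline or data), the snippet below treats each node's observed connection behavior as a short "document", fits LDA with scikit-learn, and uses the resulting topic proportions as low-dimensional social features for a downstream classifier. The toy behavior strings, labels, and dimension choices are assumptions.

      import numpy as np
      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression

      # Toy "behavior documents": for each node, the communities its connections touch.
      node_behaviors = [
          "sports sports music",
          "music music film",
          "sports film film",
          "music film sports sports",
      ]
      labels = [0, 1, 0, 1]          # toy node labels for a downstream classifier

      counts = CountVectorizer().fit_transform(node_behaviors)
      lda = LatentDirichletAllocation(n_components=2, random_state=0)
      social_dims = lda.fit_transform(counts)       # topic proportions = latent social dimensions

      clf = LogisticRegression().fit(social_dims, labels)
      print("social dimensions:\n", np.round(social_dims, 2))
      print("predicted labels :", clf.predict(social_dims))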

  20. Isomorphism of dimer configurations and spanning trees on finite square lattices

    NASA Astrophysics Data System (ADS)

    Brankov, J. G.

    1995-09-01

    One-to-one mappings of the close-packed dimer configurations on a finite square lattice with free boundaries L onto the spanning trees of a related graph (or two-graph) G are found. The graph (two-graph) G can be constructed from L by: (1) deleting all the vertices of L with arbitrarily fixed parity of the row and column numbers; (2) suppressing all the vertices of degree 2 except those of degree 2 in L; (3) merging all the vertices of degree 1 into a single vertex g. The matrix Kirchhoff theorem reduces the enumeration problem for the spanning trees on G to the eigenvalue problem for the discrete Laplacian on the square lattice L′ = G∖g with mixed Dirichlet-Neumann boundary conditions in at least one direction. That fact explains some of the unusual finite-size properties of the dimer model.
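
    The record above relies on the Kirchhoff matrix-tree theorem to turn dimer enumeration into spanning-tree counting. As a minimal, hedged illustration of that theorem alone (not of the dimer mapping itself), the sketch below counts the spanning trees of a small square-lattice graph from a cofactor of its Laplacian; the grid size is an arbitrary assumption.

      import numpy as np
      from itertools import product

      def grid_laplacian(rows, cols):
          """Graph Laplacian of a rows x cols square-lattice graph with free boundaries."""
          nodes = list(product(range(rows), range(cols)))
          index = {v: i for i, v in enumerate(nodes)}
          L = np.zeros((len(nodes), len(nodes)))
          for (r, c) in nodes:
              for (dr, dc) in ((0, 1), (1, 0)):            # right and down neighbours
                  if r + dr < rows and c + dc < cols:
                      i, j = index[(r, c)], index[(r + dr, c + dc)]
                      L[i, i] += 1; L[j, j] += 1
                      L[i, j] -= 1; L[j, i] -= 1
          return L

      # Matrix-tree theorem: the number of spanning trees equals any cofactor of L,
      # i.e. the determinant of L with one row and the matching column deleted.
      L = grid_laplacian(3, 3)
      n_trees = round(np.linalg.det(L[1:, 1:]))
      print(n_trees)   # 192 spanning trees for the 3x3 grid graph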

  1. Significant figures.

    PubMed

    Badrick, Tony; Hickman, Peter E

    2008-08-01

    * For consistency of reporting the same number of significant figures should be used for results and reference intervals. * The choice of the reporting interval should be based on analytical imprecision (measurement uncertainty).

  2. Estimation of locomotion speed and directions changes to control a vehicle using neural signals from the motor cortex of rat.

    PubMed

    Fukayama, Osamu; Taniguchi, Noriyuki; Suzuki, Takafumi; Mabuchi, Kunihiko

    2006-01-01

    We have developed a brain-machine interface (BMI) in the form of a small vehicle, which we call the RatCar. In this system, we implanted wire electrodes in the motor cortices of a rat's brain to continuously record neural signals. We applied a linear model to estimate the locomotion state (e.g., speed and direction) of a rat using a weighted summation model for the neural firing rates. With this information, we then determined the approximate movement of the rat. Although the estimation is still imprecise, results suggest that our model is able to control the system to some degree. In this paper, we give an overview of our system and describe the methods used, which include continuous neural recording, a spike detection and discrimination algorithm, and a locomotion estimation model that minimizes the squared error of the locomotion speed and changes in direction.

  3. Spatial Uncertainty Modeling of Fuzzy Information in Images for Pattern Classification

    PubMed Central

    Pham, Tuan D.

    2014-01-01

    The modeling of the spatial distribution of image properties is important for many pattern recognition problems in science and engineering. Mathematical methods are needed to quantify the variability of this spatial distribution based on which a decision of classification can be made in an optimal sense. However, image properties are often subject to uncertainty due to both incomplete and imprecise information. This paper presents an integrated approach for estimating the spatial uncertainty of vagueness in images using the theory of geostatistics and the calculus of probability measures of fuzzy events. Such a model for the quantification of spatial uncertainty is utilized as a new image feature extraction method, based on which classifiers can be trained to perform the task of pattern recognition. Applications of the proposed algorithm to the classification of various types of image data suggest the usefulness of the proposed uncertainty modeling technique for texture feature extraction. PMID:25157744

  4. UTD at TREC 2014: Query Expansion for Clinical Decision Support

    DTIC Science & Technology

    2014-11-01

    Description: A 62-year-old man sees a neurologist for progressive memory loss and jerking movements of the lower extremities. Neurologic examination confirms ... infiltration. Summary: 62-year-old man with progressive memory loss and involuntary leg movements. Brain MRI reveals cortical atrophy, and cortical ... latent topics produced by the Latent Dirichlet Allocation (LDA) on the TREC-CDS corpus of scientific articles. The position of words “loss” and “memory

  5. Nondestructive Testing and Target Identification

    DTIC Science & Technology

    2016-12-21

    Dirichlet obstacle coated by a thin layer of non-absorbing media, IMA J. Appl. Math., 80, 1063-1098 (2015). Abstract: We consider the transmission ... F. Cakoni, I. De Teresa, H. Haddar and P. Monk, Nondestructive testing of the delaminated interface between two materials, SIAM J. Appl. Math., 76 ... then they form a discrete set. 22. F. Cakoni, D. Colton, S. Meng and P. Monk, Steklov eigenvalues in inverse scattering, SIAM J. Appl. Math. 76, 1737

  6. Single-grid spectral collocation for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Canuto, Claudio; Maday, Yvon; Metivet, Brigitte

    1988-01-01

    The aim of the paper is to study a collocation spectral method to approximate the Navier-Stokes equations: only one grid is used, which is built from the nodes of a Gauss-Lobatto quadrature formula, either of Legendre or of Chebyshev type. The convergence is proven for the Stokes problem provided with inhomogeneous Dirichlet conditions, then thoroughly analyzed for the Navier-Stokes equations. The practical implementation algorithm is presented, together with numerical results.

  7. Flexible link functions in nonparametric binary regression with Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Lin, Lizhen; Dey, Dipak K

    2016-09-01

    In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction in the future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. © 2015, The International Biometric Society.

  8. Flexible Link Functions in Nonparametric Binary Regression with Gaussian Process Priors

    PubMed Central

    Li, Dan; Lin, Lizhen; Dey, Dipak K.

    2015-01-01

    Summary In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction in the future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. PMID:26686333

  9. Fusion of multi-tracer PET images for dose painting.

    PubMed

    Lelandais, Benoît; Ruan, Su; Denœux, Thierry; Vera, Pierre; Gardin, Isabelle

    2014-10-01

    PET imaging with FluoroDesoxyGlucose (FDG) tracer is clinically used for the definition of Biological Target Volumes (BTVs) for radiotherapy. Recently, new tracers, such as FLuoroThymidine (FLT) or FluoroMisonidazol (FMiso), have been proposed. They provide complementary information for the definition of BTVs. Our work is to fuse multi-tracer PET images to obtain a good BTV definition and to help the radiation oncologist in dose painting. Due to the noise and the partial volume effect leading, respectively, to the presence of uncertainty and imprecision in PET images, the segmentation and the fusion of PET images are difficult. In this paper, a framework based on Belief Function Theory (BFT) is proposed for the segmentation of BTV from multi-tracer PET images. The first step is based on an extension of the Evidential C-Means (ECM) algorithm, taking advantage of neighboring voxels for dealing with uncertainty and imprecision in each mono-tracer PET image. Then, imprecision and uncertainty are, respectively, reduced using prior knowledge related to defects in the acquisition system and neighborhood information. Finally, a multi-tracer PET image fusion is performed. The results are represented by a set of parametric maps that provide important information for dose painting. The performances are evaluated on PET phantoms and patient data with lung cancer. Quantitative results show good performance of our method compared with other methods. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Plasma-equivalent glucose at the point-of-care: evaluation of Roche Accu-Chek Inform and Abbott Precision PCx glucose meters.

    PubMed

    Ghys, Timothy; Goedhuys, Wim; Spincemaille, Katrien; Gorus, Frans; Gerlo, Erik

    2007-01-01

    Glucose testing at the bedside has become an integral part of the management strategy in diabetes and of the careful maintenance of normoglycemia in all patients in intensive care units. We evaluated two point-of-care glucometers for the determination of plasma-equivalent blood glucose. The Precision PCx and the Accu-Chek Inform glucometers were evaluated. Imprecision and bias relative to the Vitros 950 system were determined using protocols of the Clinical Laboratory Standards Institute (CLSI). The effects of low, normal, and high hematocrit levels were investigated. Interference by maltose was also studied. Within-run precision for both instruments ranged from 2-5%. Total imprecision was less than 5% except for the Accu-Chek Inform at the low level (2.9 mmol/L). Both instruments correlated well with the comparison instrument and showed excellent recovery and linearity. Both systems reported at least 95% of their values within zone A of the Clarke Error Grid, and both fulfilled the CLSI quality criteria. The more stringent goals of the American Diabetes Association, however, were not reached. Both systems showed negative bias at high hematocrit levels. Maltose interfered with the glucose measurements on the Accu-Chek Inform but not on the Precision PCx. Both systems showed satisfactory imprecision and were reliable in reporting plasma-equivalent glucose concentrations. The most stringent performance goals were however not met.

  11. Optimal solution for travelling salesman problem using heuristic shortest path algorithm with imprecise arc length

    NASA Astrophysics Data System (ADS)

    Bakar, Sumarni Abu; Ibrahim, Milbah

    2017-08-01

    The shortest path problem is a popular problem in graph theory. It is about finding a path with minimum length between a specified pair of vertices. In any network the weight of each edge is usually represented in the form of a crisp real number, and subsequently the weight is used in the calculation of the shortest path problem using deterministic algorithms. However, due to failure, uncertainty is always encountered in practice, whereby the weight of an edge of the network is uncertain and imprecise. In this paper, a modified algorithm which utilizes a heuristic shortest path method and a fuzzy approach is proposed for solving a network with imprecise arc lengths. Here, interval numbers and triangular fuzzy numbers are considered for representing the arc lengths of the network. The modified algorithm is then applied to a specific example of the Travelling Salesman Problem (TSP). The total shortest distance obtained from this algorithm is then compared with the total distance obtained from the traditional nearest neighbour heuristic algorithm. The results show that the modified algorithm produces a sequence of visited cities similar to that of the traditional approach while also yielding a smaller total distance. Hence, this research could contribute to the enrichment of methods used in solving the TSP.
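
    As a hedged sketch of one way such imprecise arc lengths can be handled (an assumption on our part, since the abstract does not spell out its ranking rule), the snippet below runs the nearest-neighbour heuristic on a small distance table whose entries are intervals, comparing candidate arcs by interval midpoint and reporting the tour length as an interval. The city names and distances are invented.

      # Each arc length is an interval (lo, hi) expressing imprecision in the distance.
      distances = {
          ("A", "B"): (2, 4), ("A", "C"): (5, 7), ("A", "D"): (1, 3),
          ("B", "C"): (4, 6), ("B", "D"): (6, 8), ("C", "D"): (3, 5),
      }

      def arc(u, v):
          return distances.get((u, v)) or distances[(v, u)]

      def midpoint(interval):
          lo, hi = interval
          return (lo + hi) / 2.0

      def nearest_neighbour_tour(cities, start):
          """Nearest-neighbour heuristic ranking candidate arcs by interval midpoint."""
          tour, remaining = [start], set(cities) - {start}
          total_lo = total_hi = 0.0
          while remaining:
              nxt = min(remaining, key=lambda c: midpoint(arc(tour[-1], c)))
              lo, hi = arc(tour[-1], nxt)
              total_lo, total_hi = total_lo + lo, total_hi + hi
              tour.append(nxt); remaining.discard(nxt)
          lo, hi = arc(tour[-1], start)                    # close the tour
          return tour + [start], (total_lo + lo, total_hi + hi)

      tour, length = nearest_neighbour_tour(["A", "B", "C", "D"], start="A")
      print(tour, "total length interval:", length)   # ['A', 'D', 'C', 'B', 'A'] (10.0, 18.0)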

  12. Building flexible real-time systems using the Flex language

    NASA Technical Reports Server (NTRS)

    Kenny, Kevin B.; Lin, Kwei-Jay

    1991-01-01

    The design and implementation of a real-time programming language called Flex, which is a derivative of C++, are presented. It is shown how different types of timing requirements might be expressed and enforced in Flex, how they might be fulfilled in a flexible way using different program models, and how the programming environment can help in making binding and scheduling decisions. The timing constraint primitives in Flex are easy to use yet powerful enough to define both independent and relative timing constraints. Program models like imprecise computation and performance polymorphism can carry out flexible real-time programs. In addition, programmers can use a performance measurement tool that produces statistically correct timing models to predict the expected execution time of a program and to help make binding decisions. A real-time programming environment is also presented.

  13. Formal specification and design techniques for wireless sensor and actuator networks.

    PubMed

    Martínez, Diego; González, Apolinar; Blanes, Francisco; Aquino, Raúl; Simo, José; Crespo, Alfons

    2011-01-01

    A current trend in the development and implementation of industrial applications is to use wireless networks to communicate the system nodes, mainly to increase application flexibility, reliability and portability, as well as to reduce the implementation cost. However, the nondeterministic and concurrent behavior of distributed systems makes their analysis and design complex, often resulting in less than satisfactory performance in simulation and test bed scenarios, which is caused by using imprecise models to analyze, validate and design these systems. Moreover, there are some simulation platforms that do not support these models. This paper presents a design and validation method for Wireless Sensor and Actuator Networks (WSAN) which is supported on a minimal set of wireless components represented in Colored Petri Nets (CPN). In summary, the model presented allows users to verify the design properties and structural behavior of the system.

  14. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    NASA Astrophysics Data System (ADS)

    Khan, Tanvir R.; Perlinger, Judith A.

    2017-10-01

    Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.
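
    As a hedged, generic sketch of the Monte Carlo uncertainty step described above (not the Z01, PZ10, KS12, ZH14, or ZS14 parameterizations, which are not reproduced here), the snippet below perturbs the inputs of a placeholder deposition-velocity function with assumed imprecision ranges and reports a normalized uncertainty of the output.

      import numpy as np

      rng = np.random.default_rng(7)

      def vd_placeholder(u_star, d_p, rh):
          """Stand-in deposition-velocity formula (illustrative only, not a published scheme)."""
          return 0.01 * u_star * (1.0 + 0.5 * np.log1p(d_p)) * (1.0 + 0.2 * rh)

      n = 10_000
      # Assumed imprecision in the inputs: friction velocity, particle diameter, relative humidity.
      u_star = rng.normal(0.35, 0.05, n)        # m/s
      d_p = rng.lognormal(np.log(0.5), 0.3, n)  # micrometres
      rh = rng.uniform(0.4, 0.9, n)             # fraction

      vd = vd_placeholder(u_star, d_p, rh)
      normalized_uncertainty = vd.std() / vd.mean()
      print(f"mean Vd = {vd.mean():.4f} m/s, normalized uncertainty = {100 * normalized_uncertainty:.1f}%")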

  15. Probability bounds analysis for nonlinear population ecology models.

    PubMed

    Enszer, Joshua A; Andrei Măceș, D; Stadtherr, Mark A

    2015-09-01

    Mathematical models in population ecology often involve parameters that are empirically determined and inherently uncertain, with probability distributions for the uncertainties not known precisely. Propagating such imprecise uncertainties rigorously through a model to determine their effect on model outputs can be a challenging problem. We illustrate here a method for the direct propagation of uncertainties represented by probability bounds though nonlinear, continuous-time, dynamic models in population ecology. This makes it possible to determine rigorous bounds on the probability that some specified outcome for a population is achieved, which can be a core problem in ecosystem modeling for risk assessment and management. Results can be obtained at a computational cost that is considerably less than that required by statistical sampling methods such as Monte Carlo analysis. The method is demonstrated using three example systems, with focus on a model of an experimental aquatic food web subject to the effects of contamination by ionic liquids, a new class of potentially important industrial chemicals. Copyright © 2015. Published by Elsevier Inc.

  16. Spectral Learning for Supervised Topic Models.

    PubMed

    Ren, Yong; Wang, Yining; Zhu, Jun

    2018-03-01

    Supervised topic models simultaneously model the latent topic structure of large collections of documents and a response variable associated with each document. Existing inference methods are based on variational approximation or Monte Carlo sampling, which often suffers from the local minimum defect. Spectral methods have been applied to learn unsupervised topic models, such as latent Dirichlet allocation (LDA), with provable guarantees. This paper investigates the possibility of applying spectral methods to recover the parameters of supervised LDA (sLDA). We first present a two-stage spectral method, which recovers the parameters of LDA followed by a power update method to recover the regression model parameters. Then, we further present a single-phase spectral algorithm to jointly recover the topic distribution matrix as well as the regression weights. Our spectral algorithms are provably correct and computationally efficient. We prove a sample complexity bound for each algorithm and subsequently derive a sufficient condition for the identifiability of sLDA. Thorough experiments on synthetic and real-world datasets verify the theory and demonstrate the practical effectiveness of the spectral algorithms. In fact, our results on a large-scale review rating dataset demonstrate that our single-phase spectral algorithm alone gets comparable or even better performance than state-of-the-art methods, while previous work on spectral methods has rarely reported such promising performance.

  17. Heuristic Bayesian segmentation for discovery of coexpressed genes within genomic regions.

    PubMed

    Pehkonen, Petri; Wong, Garry; Törönen, Petri

    2010-01-01

    Segmentation aims to separate homogeneous areas from the sequential data, and plays a central role in data mining. It has applications ranging from finance to molecular biology, where bioinformatics tasks such as genome data analysis are active application fields. In this paper, we present a novel application of segmentation in locating genomic regions with coexpressed genes. We aim at automated discovery of such regions without requirement for user-given parameters. In order to perform the segmentation within a reasonable time, we use heuristics. Most of the heuristic segmentation algorithms require some decision on the number of segments. This is usually accomplished by using asymptotic model selection methods like the Bayesian information criterion. Such methods are based on some simplification, which can limit their usage. In this paper, we propose a Bayesian model selection to choose the most proper result from heuristic segmentation. Our Bayesian model presents a simple prior for the segmentation solutions with various segment numbers and a modified Dirichlet prior for modeling multinomial data. We show with various artificial data sets in our benchmark system that our model selection criterion has the best overall performance. The application of our method in yeast cell-cycle gene expression data reveals potential active and passive regions of the genome.
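
    A key quantity in the Bayesian model selection described above is the marginal likelihood of the counts within a segment under a Dirichlet prior. As a hedged sketch (the formula is the standard Dirichlet-multinomial evidence; the paper's segmentation heuristics and modified prior are not reproduced), the snippet below scores two candidate segmentations of a toy categorical sequence; the sequence, breakpoint, and symmetric prior value are assumptions.

      import numpy as np
      from scipy.special import gammaln

      def segment_log_evidence(counts, alpha=1.0):
          """Log marginal likelihood of multinomial counts under a symmetric Dirichlet(alpha) prior."""
          counts = np.asarray(counts, dtype=float)
          k, n = counts.size, counts.sum()
          return (gammaln(k * alpha) - gammaln(k * alpha + n)
                  + np.sum(gammaln(counts + alpha)) - k * gammaln(alpha))

      def segmentation_log_evidence(sequence, breakpoints, n_symbols=2):
          """Sum of per-segment evidences for a segmentation given by breakpoint indices."""
          bounds = [0] + list(breakpoints) + [len(sequence)]
          total = 0.0
          for a, b in zip(bounds[:-1], bounds[1:]):
              counts = np.bincount(sequence[a:b], minlength=n_symbols)
              total += segment_log_evidence(counts)
          return total

      seq = np.array([0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1])   # toy "expression state" sequence
      print("one segment :", round(segmentation_log_evidence(seq, []), 2))
      print("break at 6  :", round(segmentation_log_evidence(seq, [6]), 2))
      # The segmentation with a break at index 6 scores higher for this toy sequence.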

  18. Bayesian clustering of DNA sequences using Markov chains and a stochastic partition model.

    PubMed

    Jääskinen, Väinö; Parkkinen, Ville; Cheng, Lu; Corander, Jukka

    2014-02-01

    In many biological applications it is necessary to cluster DNA sequences into groups that represent underlying organismal units, such as named species or genera. In metagenomics this grouping needs typically to be achieved on the basis of relatively short sequences which contain different types of errors, making the use of a statistical modeling approach desirable. Here we introduce a novel method for this purpose by developing a stochastic partition model that clusters Markov chains of a given order. The model is based on a Dirichlet process prior and we use conjugate priors for the Markov chain parameters which enables an analytical expression for comparing the marginal likelihoods of any two partitions. To find a good candidate for the posterior mode in the partition space, we use a hybrid computational approach which combines the EM-algorithm with a greedy search. This is demonstrated to be faster and yield highly accurate results compared to earlier suggested clustering methods for the metagenomics application. Our model is fairly generic and could also be used for clustering of other types of sequence data for which Markov chains provide a reasonable way to compress information, as illustrated by experiments on shotgun sequence type data from an Escherichia coli strain.

  19. Movement planning reflects skill level and age changes in toddlers

    PubMed Central

    Chen, Yu-ping; Keen, Rachel; Rosander, Kerstin; von Hofsten, Claes

    2010-01-01

    Kinematic measures of children’s reaching were found to reflect stable differences in skill level for planning for future actions. Thirty-five toddlers (18–21 months) were engaged in building block towers (precise task) and in placing blocks into an open container (imprecise task). Sixteen children were re-tested on the same tasks a year later. Longer deceleration as the hand approached the block for pickup was found in the tower task compared to the imprecise task, indicating planning for the second movement. More skillful toddlers who could build high towers had a longer deceleration phase when placing blocks on the tower than toddlers who built low towers. Kinematic differences between the groups remained a year later when all children could build high towers. PMID:21077868

  20. Fluctuation-induced forces in confined ideal and imperfect Bose gases

    NASA Astrophysics Data System (ADS)

    Diehl, H. W.; Rutkevich, Sergei B.

    2017-06-01

    Fluctuation-induced ("Casimir") forces caused by thermal and quantum fluctuations are investigated for ideal and imperfect Bose gases confined to d -dimensional films of size ∞d -1×D under periodic (P), antiperiodic (A), Dirichlet-Dirichlet (DD), Neumann-Neumann (NN), and Robin (R) boundary conditions (BCs). The full scaling functions ΥdBC(xλ=D /λth ,xξ=D /ξ ) of the residual reduced grand potential per area φres,dBC(T ,μ ,D ) =D-(d -1 )ΥdBC(xλ,xξ) are determined for the ideal gas case with these BCs, where λth and ξ are the thermal de Broglie wavelength and the bulk correlation length, respectively. The associated limiting scaling functions ΘdBC(xξ) ≡ΥdBC(∞ ,xξ) describing the critical behavior at the bulk condensation transition are shown to agree with those previously determined from a massive free O (2 ) theory for BC=P,A,DD,DN,NN . For d =3 , they are expressed in closed analytical form in terms of polylogarithms. The analogous scaling functions ΥdBC(xλ,xξ,c1D ,c2D ) and ΘdR(xξ,c1D ,c2D ) under the RBCs (∂z-c1) ϕ |z=0=(∂z+c2) ϕ | z =D=0 with c1≥0 and c2≥0 are also determined. The corresponding scaling functions Υ∞,d P(xλ,xξ) and Θ∞,d P(xξ) for the imperfect Bose gas are shown to agree with those of the interacting Bose gas with n internal degrees of freedom in the limit n →∞ . Hence, for d =3 , Θ∞,d P(xξ) is known exactly in closed analytic form. To account for the breakdown of translation invariance in the direction perpendicular to the boundary planes implied by free BCs such as DDBCs, a modified imperfect Bose gas model is introduced that corresponds to the limit n →∞ of this interacting Bose gas. Numerically and analytically exact results for the scaling function Θ∞,3 DD(xξ) therefore follow from those of the O (2 n ) ϕ4 model for n →∞ .

  1. Fluctuation-induced forces in confined ideal and imperfect Bose gases.

    PubMed

    Diehl, H W; Rutkevich, Sergei B

    2017-06-01

    Fluctuation-induced ("Casimir") forces caused by thermal and quantum fluctuations are investigated for ideal and imperfect Bose gases confined to d-dimensional films of size ∞^{d-1}×D under periodic (P), antiperiodic (A), Dirichlet-Dirichlet (DD), Neumann-Neumann (NN), and Robin (R) boundary conditions (BCs). The full scaling functions Υ_{d}^{BC}(x_{λ}=D/λ_{th},x_{ξ}=D/ξ) of the residual reduced grand potential per area φ_{res,d}^{BC}(T,μ,D)=D^{-(d-1)}Υ_{d}^{BC}(x_{λ},x_{ξ}) are determined for the ideal gas case with these BCs, where λ_{th} and ξ are the thermal de Broglie wavelength and the bulk correlation length, respectively. The associated limiting scaling functions Θ_{d}^{BC}(x_{ξ})≡Υ_{d}^{BC}(∞,x_{ξ}) describing the critical behavior at the bulk condensation transition are shown to agree with those previously determined from a massive free O(2) theory for BC=P,A,DD,DN,NN. For d=3, they are expressed in closed analytical form in terms of polylogarithms. The analogous scaling functions Υ_{d}^{BC}(x_{λ},x_{ξ},c_{1}D,c_{2}D) and Θ_{d}^{R}(x_{ξ},c_{1}D,c_{2}D) under the RBCs (∂_{z}-c_{1})ϕ|_{z=0}=(∂_{z}+c_{2})ϕ|_{z=D}=0 with c_{1}≥0 and c_{2}≥0 are also determined. The corresponding scaling functions Υ_{∞,d}^{P}(x_{λ},x_{ξ}) and Θ_{∞,d}^{P}(x_{ξ}) for the imperfect Bose gas are shown to agree with those of the interacting Bose gas with n internal degrees of freedom in the limit n→∞. Hence, for d=3, Θ_{∞,d}^{P}(x_{ξ}) is known exactly in closed analytic form. To account for the breakdown of translation invariance in the direction perpendicular to the boundary planes implied by free BCs such as DDBCs, a modified imperfect Bose gas model is introduced that corresponds to the limit n→∞ of this interacting Bose gas. Numerically and analytically exact results for the scaling function Θ_{∞,3}^{DD}(x_{ξ}) therefore follow from those of the O(2n)ϕ^{4} model for n→∞.

  2. A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2010-08-01

    A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007; and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained altogether for any n for d=1,2,4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q>0. Given that the total walk length is equal to 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q=1. Simple analytical expressions are obtained for any d≥2 and n≥2 for the endpoint distributions of two families of walks whose q are integers or half-integers which depend solely on d. These endpoint distributions have a simple geometrical interpretation. Expressed for a two-step planar walk with q=1, it means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection on the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks, with a uniform distribution of the endpoint in the inside of a ball, are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ^3, two of two steps and two of three steps, and one walk of two steps in ℝ^4. Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law, are finally discussed. Examples of unconstrained random walks, whose step lengths are gamma distributed, are more particularly considered.
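
    Such a walk is easy to simulate for intuition: draw the step lengths from a Dirichlet distribution with all parameters equal to q (so the total length is 1) and the directions uniformly on the circle. The snippet below generates planar two-step walks with q = 1; it is only a numerical companion to the closed-form results above.

    ```python
    # Simulate endpoints of n-step planar Pearson-Dirichlet random walks:
    # step lengths ~ Dirichlet(q, ..., q) (total length 1), directions uniform.
    import numpy as np

    def pearson_dirichlet_endpoints(n_steps=2, q=1.0, n_walks=10000, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        lengths = rng.dirichlet([q] * n_steps, size=n_walks)   # (n_walks, n_steps)
        angles = rng.uniform(0.0, 2 * np.pi, size=(n_walks, n_steps))
        x = (lengths * np.cos(angles)).sum(axis=1)
        y = (lengths * np.sin(angles)).sum(axis=1)
        return np.column_stack([x, y])

    ends = pearson_dirichlet_endpoints(n_steps=2, q=1.0)
    print("mean squared endpoint distance:", (ends ** 2).sum(axis=1).mean())
    ```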

  3. Clinical progress of human papillomavirus genotypes and their persistent infection in subjects with atypical squamous cells of undetermined significance cytology: Statistical and latent Dirichlet allocation analysis

    PubMed Central

    Kim, Yee Suk; Lee, Sungin; Zong, Nansu; Kahng, Jimin

    2017-01-01

    The present study aimed to investigate differences in prognosis based on human papillomavirus (HPV) infection, persistent infection and genotype variations for patients exhibiting atypical squamous cells of undetermined significance (ASCUS) in their initial Papanicolaou (PAP) test results. A latent Dirichlet allocation (LDA)-based tool was developed that may offer a facilitated means of communication to be employed during patient-doctor consultations. The present study assessed 491 patients (139 HPV-positive and 352 HPV-negative cases) with a PAP test result of ASCUS with a follow-up period ≥2 years. Patients underwent PAP and HPV DNA chip tests between January 2006 and January 2009. The HPV-positive subjects were followed up with at least 2 instances of PAP and HPV DNA chip tests. The most common genotypes observed were HPV-16 (25.9%, 36/139), HPV-52 (14.4%, 20/139), HPV-58 (13.7%, 19/139), HPV-56 (11.5%, 16/139), HPV-51 (9.4%, 13/139) and HPV-18 (8.6%, 12/139). A total of 33.3% (12/36) patients positive for HPV-16 had cervical intraepithelial neoplasia (CIN)2 or a worse result, which was significantly higher than the prevalence of CIN2 of 1.8% (8/455) in patients negative for HPV-16 (P<0.001), while no significant association was identified for other genotypes in terms of genotype and clinical progress. There was a significant association between clearance and good prognosis (P<0.001). Persistent infection was higher in patients aged ≥51 years (38.7%) than in those aged ≤50 years (20.4%; P=0.036). Progression from persistent infection to CIN2 or worse (19/34, 55.9%) was higher than clearance (0/105, 0.0%; P<0.001). In the LDA analysis, using symmetric Dirichlet priors α=0.1 and β=0.01, and clusters (k)=5 or 10 provided the most meaningful groupings. Statistical and LDA analyses produced consistent results regarding the association between persistent infection of HPV-16, old age and long infection period with a clinical progression of CIN2 or worse. Therefore, LDA results may be presented as explanatory evidence during time-constrained patient-doctor consultations in order to deliver information regarding the patient's status. PMID:28587376

  4. Marginally specified priors for non-parametric Bayesian estimation

    PubMed Central

    Kessler, David C.; Hoff, Peter D.; Dunson, David B.

    2014-01-01

    Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813

  5. Mining Adverse Events of Dietary Supplements from Product Labels by Topic Modeling.

    PubMed

    Wang, Yefeng; Gunashekar, Divya R; Adam, Terrence J; Zhang, Rui

    2017-01-01

    The adverse events of the dietary supplements should be subject to scrutiny due to their growing clinical application and consumption among U.S. adults. An effective method for mining and grouping the adverse events of the dietary supplements is to evaluate product labeling for the rapidly increasing number of new products available in the market. In this study, the adverse events information was extracted from the product labels stored in the Dietary Supplement Label Database (DSLD) and analyzed by topic modeling techniques, specifically Latent Dirichlet Allocation (LDA). Among the 50 topics generated by LDA, eight topics were manually evaluated, with topic relatedness ranging from 58.8% to 100% on the product level, and 57.1% to 100% on the ingredient level. Five out of these eight topics were coherent groupings of the dietary supplements based on their adverse events. The results demonstrated that LDA is able to group supplements with similar adverse events based on the dietary supplement labels. Such information can be potentially used by consumers to more safely use dietary supplements.
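
    A rough sketch of the workflow described above, with a few made-up adverse-event phrases standing in for the DSLD label text: fit an LDA model to word counts and print the highest-weight words per topic. The phrases, topic count and vectorizer settings are placeholders, not the study's data or configuration.

    ```python
    # Minimal LDA topic-modeling sketch over (made-up) adverse-event phrases.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    labels = [
        "may cause nausea and stomach upset",
        "dizziness headache and nausea reported",
        "skin rash and itching in rare cases",
        "rash hives and allergic reaction possible",
    ]
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(labels)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    terms = vec.get_feature_names_out()
    for t, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[-3:][::-1]]
        print(f"topic {t}: {top}")
    ```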

  6. Organizing Books and Authors by Multilayer SOM.

    PubMed

    Zhang, Haijun; Chow, Tommy W S; Wu, Q M Jonathan

    2016-12-01

    This paper introduces a new framework for the organization of electronic books (e-books) and their corresponding authors using a multilayer self-organizing map (MLSOM). An author is modeled by a rich tree-structured representation, and an MLSOM-based system is used as an efficient solution to the organizational problem of structured data. The tree-structured representation formulates author features in a hierarchy of author biography, books, pages, and paragraphs. To efficiently tackle the tree-structured representation, we used an MLSOM algorithm that serves as a clustering technique to handle e-books and their corresponding authors. A book and author recommender system is then implemented using the proposed framework. The effectiveness of our approach was examined in a large-scale data set containing 3868 authors along with the 10500 e-books that they wrote. We also provided visualization results of MLSOM for revealing the relevance patterns hidden from presented author clusters. The experimental results corroborate that the proposed method outperforms other content-based models (e.g., rate adapting poisson, latent Dirichlet allocation, probabilistic latent semantic indexing, and so on) and offers a promising solution to book recommendation, author recommendation, and visualization.

  7. Evolutionary dynamics of language systems

    PubMed Central

    Wu, Chieh-Hsi; Hua, Xia; Dunn, Michael; Levinson, Stephen C.; Gray, Russell D.

    2017-01-01

    Understanding how and why language subsystems differ in their evolutionary dynamics is a fundamental question for historical and comparative linguistics. One key dynamic is the rate of language change. While it is commonly thought that the rapid rate of change hampers the reconstruction of deep language relationships beyond 6,000–10,000 y, there are suggestions that grammatical structures might retain more signal over time than other subsystems, such as basic vocabulary. In this study, we use a Dirichlet process mixture model to infer the rates of change in lexical and grammatical data from 81 Austronesian languages. We show that, on average, most grammatical features actually change faster than items of basic vocabulary. The grammatical data show less schismogenesis, higher rates of homoplasy, and more bursts of contact-induced change than the basic vocabulary data. However, there is a core of grammatical and lexical features that are highly stable. These findings suggest that different subsystems of language have differing dynamics and that careful, nuanced models of language change will be needed to extract deeper signal from the noise of parallel evolution, areal readaptation, and contact. PMID:29073028
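
    The Dirichlet process mixture used here can be sketched generically through its stick-breaking construction: mixture weights come from Beta(1, α) breaks and each component carries its own rate parameter. The snippet below illustrates that construction only; it is not the authors' phylogenetic model, and the gamma base distribution for the per-component rates is an arbitrary choice.

    ```python
    # Truncated stick-breaking construction of a Dirichlet process:
    # weights from Beta(1, alpha) breaks, atoms drawn from a base distribution.
    import numpy as np

    def stick_breaking(alpha=1.0, n_atoms=50, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        betas = rng.beta(1.0, alpha, size=n_atoms)
        remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
        weights = betas * remaining
        atoms = rng.gamma(shape=2.0, scale=1.0, size=n_atoms)  # e.g. per-cluster rates
        return weights, atoms

    w, atoms = stick_breaking(alpha=2.0)
    print("weight mass in the first 10 atoms:", w[:10].sum())
    ```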

  8. Mining Adverse Events of Dietary Supplements from Product Labels by Topic Modeling

    PubMed Central

    Wang, Yefeng; Gunashekar, Divya R.; Adam, Terrence J.; Zhang, Rui

    2018-01-01

    The adverse events of the dietary supplements should be subject to scrutiny due to their growing clinical application and consumption among U.S. adults. An effective method for mining and grouping the adverse events of the dietary supplements is to evaluate product labeling for the rapidly increasing number of new products available in the market. In this study, the adverse events information was extracted from the product labels stored in the Dietary Supplement Label Database (DSLD) and analyzed by topic modeling techniques, specifically Latent Dirichlet Allocation (LDA). Among the 50 topics generated by LDA, eight topics were manually evaluated, with topic relatedness ranging from 58.8% to 100% on the product level, and 57.1% to 100% on the ingredient level. Five out of these eight topics were coherent groupings of the dietary supplements based on their adverse events. The results demonstrated that LDA is able to group supplements with similar adverse events based on the dietary supplement labels. Such information can be potentially used by consumers to more safely use dietary supplements. PMID:29295169

  9. Two-Dimensional Model for Reactive-Sorption Columns of Cylindrical Geometry: Analytical Solutions and Moment Analysis.

    PubMed

    Khan, Farman U; Qamar, Shamsul

    2017-05-01

    A set of analytical solutions are presented for a model describing the transport of a solute in a fixed-bed reactor of cylindrical geometry subjected to the first (Dirichlet) and third (Danckwerts) type inlet boundary conditions. Linear sorption kinetic process and first-order decay are considered. Cylindrical geometry allows the use of large columns to investigate dispersion, adsorption/desorption and reaction kinetic mechanisms. The finite Hankel and Laplace transform techniques are adopted to solve the model equations. For further analysis, statistical temporal moments are derived from the Laplace-transformed solutions. The developed analytical solutions are compared with the numerical solutions of high-resolution finite volume scheme. Different case studies are presented and discussed for a series of numerical values corresponding to a wide range of mass transfer and reaction kinetics. A good agreement was observed in the analytical and numerical concentration profiles and moments. The developed solutions are efficient tools for analyzing numerical algorithms, sensitivity analysis and simultaneous determination of the longitudinal and transverse dispersion coefficients from a laboratory-scale radial column experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. A Stationary One-Equation Turbulent Model with Applications in Porous Media

    NASA Astrophysics Data System (ADS)

    de Oliveira, H. B.; Paiva, A.

    2018-06-01

    A one-equation turbulent model is studied in this work in the steady-state and with homogeneous Dirichlet boundary conditions. The considered problem generalizes two distinct approaches that are being used with success in the applications to model different flows through porous media. The novelty of the problem relies on the consideration of the classical Navier-Stokes equations with a feedback forces field, whose presence in the momentum equation will affect the equation for the turbulent kinetic energy (TKE) with a new term that is known as the production and represents the rate at which TKE is transferred from the mean flow to the turbulence. By assuming suitable growth conditions on the feedback forces field and on the function that describes the rate of dissipation of the TKE, as well as on the production term, we will prove the existence of the velocity field and of the TKE. The proof of their uniqueness is made by assuming monotonicity conditions on the feedback forces field and on the turbulent dissipation function, together with a condition of Lipschitz continuity on the production term. The existence of a unique pressure, will follow by the application of a standard version of de Rham's lemma.

  11. Analyzing research trends on drug safety using topic modeling.

    PubMed

    Zou, Chen

    2018-06-01

    Published drug safety data has evolved in the past decade due to scientific and technological advances in the relevant research fields. Considering that a vast amount of scientific literature has been published in this area, it is not easy to identify the key information. Topic modeling has emerged as a powerful tool to extract meaningful information from a large volume of unstructured texts. Areas covered: We analyzed the titles and abstracts of 4347 articles in four journals dedicated to drug safety from 2007 to 2016. We applied Latent Dirichlet allocation (LDA) model to extract 50 main topics, and conducted trend analysis to explore the temporal popularity of these topics over years. Expert Opinion/Commentary: We found that 'benefit-risk assessment and communication', 'diabetes' and 'biologic therapy for autoimmune diseases' are the top 3 most published topics. The topics relevant to the use of electronic health records/observational data for safety surveillance are becoming increasingly popular over time. Meanwhile, there is a slight decrease in research on signal detection based on spontaneous reporting, although spontaneous reporting still plays an important role in benefit-risk assessment. The topics related to medical conditions and treatment showed highly dynamic patterns over time.

  12. Diversifying customer review rankings.

    PubMed

    Krestel, Ralf; Dokoohaki, Nima

    2015-06-01

    E-commerce Web sites owe much of their popularity to consumer reviews accompanying product descriptions. On-line customers spend hours and hours going through heaps of textual reviews to decide which products to buy. At the same time, each popular product has thousands of user-generated reviews, making it impossible for a buyer to read everything. Current approaches to display reviews to users or recommend an individual review for a product are based on the recency or helpfulness of each review. In this paper, we present a framework to rank product reviews by optimizing the coverage of the ranking with respect to sentiment or aspects, or by summarizing all reviews with the top-K reviews in the ranking. To accomplish this, we make use of the assigned star rating for a product as an indicator for a review's sentiment polarity and compare bag-of-words (language model) with topic models (latent Dirichlet allocation) as a means to represent aspects. Our evaluation on manually annotated review data from a commercial review Web site demonstrates the effectiveness of our approach, outperforming plain recency ranking by 30% and obtaining best results by combining language and topic model representations. Copyright © 2015 Elsevier Ltd. All rights reserved.
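
    The coverage objective can be made concrete with a tiny greedy selection: given per-review aspect indicators (or topic weights), repeatedly add the review that covers the most aspect mass not yet covered by the ranking. This is a generic sketch of coverage-based ranking, not the paper's exact formulation; the review vectors are invented.

    ```python
    # Greedy selection of top-K reviews maximizing coverage of aspects.
    import numpy as np

    def greedy_coverage_ranking(aspect_weights, k=2):
        covered = np.zeros(aspect_weights.shape[1])
        chosen, candidates = [], set(range(len(aspect_weights)))
        for _ in range(k):
            gains = {i: np.maximum(covered, aspect_weights[i]).sum() - covered.sum()
                     for i in candidates}
            best = max(gains, key=gains.get)
            covered = np.maximum(covered, aspect_weights[best])
            chosen.append(best)
            candidates.remove(best)
        return chosen

    reviews = np.array([[1, 1, 1, 0],   # binary aspect coverage per review
                        [1, 0, 0, 0],
                        [0, 0, 1, 1],
                        [0, 1, 1, 0]], dtype=float)
    print(greedy_coverage_ranking(reviews, k=2))   # [0, 2]
    ```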

  13. a New Model for Fuzzy Personalized Route Planning Using Fuzzy Linguistic Preference Relation

    NASA Astrophysics Data System (ADS)

    Nadi, S.; Houshyaripour, A. H.

    2017-09-01

    This paper proposes a new model for personalized route planning under uncertain conditions. Personalized routing involves different sources of uncertainty, which can arise from users' ambiguity about their preferences, imprecise criteria values, and the modelling process. The proposed model uses Fuzzy Linguistic Preference Relation Analytical Hierarchical Process (FLPRAHP) to analyse users' preferences under uncertainty. Routing is a multi-criteria task, especially in transportation networks, where users wish to optimize their routes based on different criteria. However, because of the lack of knowledge about the preferences of different users and the uncertainties in the criteria values, we propose a new personalized fuzzy routing method based on fuzzy ranking using the center of gravity. The model employs the FLPRAHP method to aggregate uncertain criteria values with respect to uncertain user preferences while improving consistency with the fewest possible comparisons. An illustrative example presents the effectiveness and capability of the proposed model to calculate the best personalized route under fuzziness and uncertainty.

  14. Consequences of land-cover misclassification in models of impervious surface

    USGS Publications Warehouse

    McMahon, G.

    2007-01-01

    Model estimates of impervious area as a function of land-cover area may be biased and imprecise because of errors in the land-cover classification. This investigation of the effects of land-cover misclassification on impervious surface models that use National Land Cover Data (NLCD) evaluates the consequences of adjusting land-cover within a watershed to reflect uncertainty assessment information. Model validation results indicate that using error-matrix information to adjust land-cover values used in impervious surface models does not substantially improve impervious surface predictions. Validation results indicate that the resolution of the land-cover data (Level I and Level II) is more important in predicting impervious surface accurately than whether the land-cover data have been adjusted using information in the error matrix. Level I NLCD, adjusted for land-cover misclassification, is preferable to the other land-cover options for use in models of impervious surface. This result is tied to the lower classification error rates for the Level I NLCD. © 2007 American Society for Photogrammetry and Remote Sensing.

  15. Global Binary Optimization on Graphs for Classification of High Dimensional Data

    DTIC Science & Technology

    2014-09-01

    Buades et al. in [10] introduce a new non-local means algorithm for image denoising and compare it to some of the best methods. In [28], Grady describes a random walk algorithm for image segmentation using the solution to a Dirichlet problem. Elmoataz et al. present generalizations of the ... graph Laplacian [19] for image denoising and manifold smoothing. Couprie et al. in [16] propose a parameterized graph-based energy function that unifies ...

  16. Implementation of Nonhomogeneous Dirichlet Boundary Conditions in the p- Version of the Finite Element Method

    DTIC Science & Technology

    1988-09-01

    Ivo Babuska (Institute for Physical Science and Technology, University of Maryland, College Park, MD 20742) and B. Guo (Engineering Mechanics Research Corporation, Troy). Research partially supported by the National Science Foundation under Grant DMS-85-16191.

  17. Lifshits Tails for Randomly Twisted Quantum Waveguides

    NASA Astrophysics Data System (ADS)

    Kirsch, Werner; Krejčiřík, David; Raikov, Georgi

    2018-03-01

    We consider the Dirichlet Laplacian H_γ on a 3D twisted waveguide with random Anderson-type twisting γ. We introduce the integrated density of states N_γ for the operator H_γ, and investigate the Lifshits tails of N_γ, i.e. the asymptotic behavior of N_γ(E) as E ↓ inf supp dN_γ. In particular, we study the dependence of the Lifshits exponent on the decay rate of the single-site twisting at infinity.

  18. Evaluation of the path integral for flow through random porous media

    NASA Astrophysics Data System (ADS)

    Westbroek, Marise J. E.; Coche, Gil-Arnaud; King, Peter R.; Vvedensky, Dimitri D.

    2018-04-01

    We present a path integral formulation of Darcy's equation in one dimension with random permeability described by a correlated multivariate lognormal distribution. This path integral is evaluated with the Markov chain Monte Carlo method to obtain pressure distributions, which are shown to agree with the solutions of the corresponding stochastic differential equation for Dirichlet and Neumann boundary conditions. The extension of our approach to flow through random media in two and three dimensions is discussed.

  19. Nonlocal Reformulations of Water and Internal Waves and Asymptotic Reductions

    NASA Astrophysics Data System (ADS)

    Ablowitz, Mark J.

    2009-09-01

    Nonlocal reformulations of the classical equations of water waves and two ideal fluids separated by a free interface, bounded above by either a rigid lid or a free surface, are obtained. The kinematic equations may be written in terms of integral equations with a free parameter. By expressing the pressure, or Bernoulli, equation in terms of the surface/interface variables, a closed system is obtained. An advantage of this formulation, referred to as the nonlocal spectral (NSP) formulation, is that the vertical component is eliminated, thus reducing the dimensionality and fixing the domain in which the equations are posed. The NSP equations and the Dirichlet-Neumann operators associated with the water wave or two-fluid equations can be related to each other and the Dirichlet-Neumann series can be obtained from the NSP equations. Important asymptotic reductions obtained from the two-fluid nonlocal system include the generalizations of the Benney-Luke and Kadomtsev-Petviashvili (KP) equations, referred to as intermediate-long wave (ILW) generalizations. These 2+1 dimensional equations possess lump type solutions. In the water wave problem high-order asymptotic series are obtained for two and three dimensional gravity-capillary solitary waves. In two dimensions, the first term in the asymptotic series is the well-known hyperbolic secant squared solution of the KdV equation; in three dimensions, the first term is the rational lump solution of the KP equation.

  20. Results from 15 years of quality surveillance for a National Indigenous Point-of-Care Testing Program for diabetes.

    PubMed

    Shephard, Mark; Shephard, Anne; McAteer, Bridgit; Regnier, Tamika; Barancek, Kristina

    2017-12-01

    Diabetes is a major health problem for Australia's Aboriginal and Torres Strait Islander peoples. Point-of-care testing for haemoglobin A1c (HbA1c) has been the cornerstone of a long-standing program (QAAMS) to manage glycaemic control in Indigenous people with diabetes and recently, to diagnose diabetes. The QAAMS quality management framework includes monthly testing of quality control (QC) and external quality assurance (EQA) samples. Key performance indicators of quality include imprecision (coefficient of variation [CV%]) and percentage acceptable results. This paper reports on the past 15 years of quality testing in QAAMS and examines the performance of HbA1c POC testing at the 6.5% cut-off recommended for diagnosis. The total number of HbA1c EQA results submitted from 2002 to 2016 was 29,093. The median imprecision for EQA testing by QAAMS device operators averaged 2.81% (SD 0.50; range 2.2 to 3.9%) from 2002 to 2016 and 2.44% (SD 0.22; range 2.2 to 2.9%) from 2009 to 2016. No significant difference was observed between the median imprecision achieved in QAAMS and by Australasian laboratories from 2002 to 2016 (p=0.05; two-tailed paired t-test) and from 2009 to 2016 (p=0.17; two-tailed paired t-test). For QC testing from 2009 to 2016, imprecision averaged 2.5% and 3.0% for the two levels of QC tested. Percentage acceptable results averaged 90% for QA testing from 2002 to 2016 and 96% for QC testing from 2009 to 2016. The DCA Vantage was able to measure a patient and an EQA sample with an HbA1c value close to 6.5% both accurately and precisely. HbA1c POC testing in QAAMS has remained analytically sound, matched the quality achieved by Australasian laboratories and met profession-derived analytical goals for 15 years. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
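
    The two indicators used in the program are simple to compute from a batch of EQA results: imprecision as the coefficient of variation (SD/mean × 100) and the percentage of results within an allowable limit of the target. The values and the ±6% acceptance limit below are made up for illustration and are not the QAAMS specifications.

    ```python
    # Quality indicators from a set of EQA results: imprecision (CV%) and the
    # percentage of results within an (illustrative) allowable limit of target.
    import numpy as np

    results = np.array([6.4, 6.6, 6.5, 6.3, 6.7, 6.5])  # made-up HbA1c EQA results (%)
    target = 6.5                                         # assigned target value
    limit = 0.06                                         # hypothetical +/- 6% limit

    cv_percent = results.std(ddof=1) / results.mean() * 100
    acceptable = np.mean(np.abs(results - target) / target <= limit) * 100
    print(f"CV = {cv_percent:.1f}%, acceptable results = {acceptable:.0f}%")
    ```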

  1. Evaluation of the Dacos 3.0 analyser.

    PubMed

    Pons, J F; Alumá, A; Antoja, F; Biosca, C; Alsina, M J; Galimany, R

    1990-01-01

    The selective multitest Coulter Dacos 3.0 analyser was evaluated according to the guidelines of the Comisión de Instrumentación de la Sociedad Española de Química Clínica and of the European Committee for Clinical Laboratory Standards. The evaluation was performed in four steps: examination of the analytical units; evaluation of routine working; study of interferences; and assessment of practicability. The evaluation included a photometric study. The inaccuracy is acceptable at 340 nm and 420 nm, and the imprecision at absorbances from 0.05 to 2.00 ranged from 0.06 to 0.28% at 340 nm and from 0.06 to 0.08% at 420 nm. The linearity showed some dispersion at low absorbance for PNP at 420 nm, and the drift was negligible. The imprecision of the pipette delivery system, the temperature control system and the washing system was satisfactory. Under routine work conditions, seven analytical methods were studied: glucose, creatinine, iron, total protein, AST, ALP and calcium. Within-run imprecision ranged, at low concentrations, from 0.9% (CV) for glucose to 7.6% (CV) for iron; at medium concentrations, from 0.7% (CV) for total protein to 5.2% (CV) for creatinine; and at high concentrations, from 0.6% (CV) for glucose to 3.9% (CV) for ALP. Between-run imprecision at low concentrations ranged from 1.4% (CV) for glucose to 15.1% (CV) for iron; at medium concentrations it ranged from 1.2% (CV) for protein to 6.7% (CV) for iron; and at high concentrations from 1.2% (CV) for AST to 5.7% (CV) for iron. No contamination was found in the sample carry-over study. Some contamination was found in the reagent carry-over study (total protein due to iron and calcium reagents). Relative inaccuracy is good for all the constituents assayed. Only LDH (high and low levels) and urate (low level) showed weak and negative interference caused by turbidity, and gamma-GT (high level), amylase, bilirubin and ALP (two levels) showed a negative interference caused by haemolysis.

  2. Evidence Theory Based Uncertainty Quantification in Radiological Risk due to Accidental Release of Radioactivity from a Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Ingale, S. V.; Datta, D.

    2010-10-01

    The consequence of an accidental release of radioactivity from a nuclear power plant is assessed in terms of exposure or dose to members of the public. Assessment of risk is routed through this dose computation. Dose computation basically depends on the basic dose assessment model and exposure pathways. One of the exposure pathways is the ingestion of contaminated food. The aim of the present paper is to compute the uncertainty associated with the risk to members of the public due to the ingestion of contaminated food. Since the governing parameters of the ingestion dose assessment model are imprecise, we apply evidence theory to compute bounds on the risk. The uncertainty is addressed by the belief and plausibility fuzzy measures.
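
    The belief/plausibility bound can be illustrated with a small Dempster-Shafer computation: given mass assigned to interval-valued focal elements of an imprecise parameter, the belief of a query interval sums the masses of focal elements contained in it, while the plausibility sums the masses of those that merely intersect it. The focal elements and masses below are hypothetical and unrelated to the paper's dose model.

    ```python
    # Belief and plausibility of a query interval from interval-valued focal
    # elements (Dempster-Shafer evidence theory).
    def belief_plausibility(focal_masses, query):
        lo, hi = query
        bel = sum(m for (a, b), m in focal_masses.items() if a >= lo and b <= hi)
        pl = sum(m for (a, b), m in focal_masses.items() if b >= lo and a <= hi)
        return bel, pl

    # Hypothetical masses on intervals of an uncertain model parameter.
    focal = {(0.1, 0.3): 0.5, (0.2, 0.6): 0.3, (0.5, 0.9): 0.2}
    print(belief_plausibility(focal, (0.0, 0.4)))   # belief <= plausibility
    ```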

  3. Mathematical modeling of PDC bit drilling process based on a single-cutter mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wojtanowicz, A.K.; Kuru, E.

    1993-12-01

    An analytical development of a new mechanistic drilling model for polycrystalline diamond compact (PDC) bits is presented. The derivation accounts for static balance of forces acting on a single PDC cutter and is based on assumed similarity between bit and cutter. The model is fully explicit with physical meanings given to all constants and functions. Three equations constitute the mathematical model: torque, drilling rate, and bit life. The equations comprise the cutter's geometry, rock properties, drilling parameters, and four empirical constants. The constants are used to match the model to a PDC drilling process. Also presented are qualitative and predictive verifications of the model. Qualitative verification shows that the model's response to drilling process variables is similar to the behavior of full-size PDC bits. However, accuracy of the model's predictions of PDC bit performance is limited primarily by imprecision of bit-dull evaluation. The verification study is based upon the reported laboratory drilling and field drilling tests as well as field data collected by the authors.

  4. Multicenter Evaluation of Cystatin C Measurement after Assay Standardization.

    PubMed

    Bargnoux, Anne-Sophie; Piéroni, Laurence; Cristol, Jean-Paul; Kuster, Nils; Delanaye, Pierre; Carlier, Marie-Christine; Fellahi, Soraya; Boutten, Anne; Lombard, Christine; González-Antuña, Ana; Delatour, Vincent; Cavalier, Etienne

    2017-04-01

    Since 2010, a certified reference material ERM-DA471/IFCC has been available for cystatin C (CysC). This study aimed to assess the sources of uncertainty in results for clinical samples measured using standardized assays. This evaluation was performed in 2015 and involved 7 clinical laboratories located in France and Belgium. CysC was measured in a panel of 4 serum pools using 8 automated assays and a candidate isotope dilution mass spectrometry reference measurement procedure. Sources of uncertainty (imprecision and bias) were evaluated to calculate the relative expanded combined uncertainty for each CysC assay. Uncertainty was judged against the performance specifications derived from the biological variation model. Only Siemens reagents on the Siemens systems and, to a lesser extent, DiaSys reagents on the Cobas system, provided results that met the minimum performance criterion calculated according to the intraindividual and interindividual biological variations. Although the imprecision was acceptable for almost all assays, an increase in the bias with concentration was observed for Gentian reagents, and unacceptably high biases were observed for Abbott and Roche reagents on their own systems. This comprehensive picture of the market situation since the release of ERM-DA471/IFCC shows that bias remains the major component of the combined uncertainty because of possible problems associated with the implementation of traceability. Although some manufacturers have clearly improved their calibration protocols relative to ERM-DA471, most of them failed to meet the criteria for acceptable CysC measurements. © 2016 American Association for Clinical Chemistry.

  5. 3DFEMWATER: A three-dimensional finite element model of water flow through saturated-unsaturated media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, G.T.

    1987-08-01

    The 3DFEMWATER model is designed to treat heterogeneous and anisotropic media consisting of as many geologic formations as desired, consider both distributed and point sources/sinks that are spatially and temporally dependent, accept the prescribed initial conditions or obtain them by simulating a steady state version of the system under consideration, deal with a transient head distributed over the Dirichlet boundary, handle time-dependent fluxes due to pressure gradient varying along the Neumann boundary, treat time-dependent total fluxes distributed over the Cauchy boundary, automatically determine variable boundary conditions of evaporation, infiltration, or seepage on the soil-air interface, include the off-diagonal hydraulic conductivity components in the modified Richards equation for dealing with cases when the coordinate system does not coincide with the principal directions of the hydraulic conductivity tensor, give three options for estimating the nonlinear matrix, include two options (successive subregion block iterations and successive point interactions) for solving the linearized matrix equations, automatically reset time step size when boundary conditions or source/sinks change abruptly, and check the mass balance computation over the entire region for every time step. The model is verified with analytical solutions or other numerical models for three examples.

  6. Formal Specification and Design Techniques for Wireless Sensor and Actuator Networks

    PubMed Central

    Martínez, Diego; González, Apolinar; Blanes, Francisco; Aquino, Raúl; Simo, José; Crespo, Alfons

    2011-01-01

    A current trend in the development and implementation of industrial applications is to use wireless networks to communicate the system nodes, mainly to increase application flexibility, reliability and portability, as well as to reduce the implementation cost. However, the nondeterministic and concurrent behavior of distributed systems makes their analysis and design complex, often resulting in less than satisfactory performance in simulation and test bed scenarios, which is caused by using imprecise models to analyze, validate and design these systems. Moreover, there are some simulation platforms that do not support these models. This paper presents a design and validation method for Wireless Sensor and Actuator Networks (WSAN) which is supported on a minimal set of wireless components represented in Colored Petri Nets (CPN). In summary, the model presented allows users to verify the design properties and structural behavior of the system. PMID:22344203

  7. Fuzzy Edge Connectivity of Graphical Fuzzy State Space Model in Multi-connected System

    NASA Astrophysics Data System (ADS)

    Harish, Noor Ainy; Ismail, Razidah; Ahmad, Tahir

    2010-11-01

    Structured networks of interacting components illustrate complex structure in a direct or intuitive way. Graph theory provides mathematical modelling for studying interconnections among elements in natural and man-made systems. On the other hand, a directed graph is useful to define and interpret the interconnection structure underlying the dynamics of the interacting subsystems. Fuzzy theory provides important tools for dealing with various aspects of complexity, imprecision and fuzziness of the network structure of a multi-connected system. Initial developments for systems of the Fuzzy State Space Model (FSSM) and a fuzzy algorithm approach were introduced with the purpose of solving inverse problems in multivariable systems. In this paper, the fuzzy algorithm is adapted in order to determine the fuzzy edge connectivity between subsystems, in particular for an interconnected system of the Graphical Representation of FSSM. This new approach will simplify the schematic diagram of interconnections of subsystems in a multi-connected system.

  8. Scheduling optimization of design stream line for production research and development projects

    NASA Astrophysics Data System (ADS)

    Liu, Qinming; Geng, Xiuli; Dong, Ming; Lv, Wenyuan; Ye, Chunming

    2017-05-01

    In a development project, efficient design stream line scheduling is difficult and important owing to large design imprecision and the differences in the skills and skill levels of employees. The relative skill levels of employees are denoted as fuzzy numbers. Multiple execution modes are generated by scheduling different employees for design tasks. An optimization model of a design stream line scheduling problem is proposed with the constraints of multiple executive modes, multi-skilled employees and precedence. The model considers the parallel design of multiple projects, different skills of employees, flexible multi-skilled employees and resource constraints. The objective function is to minimize the duration and tardiness of the project. Moreover, a two-dimensional particle swarm algorithm is used to find the optimal solution. To illustrate the validity of the proposed method, a case is examined in this article, and the results support the feasibility and effectiveness of the proposed model and algorithm.

  9. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  10. Proposing integrated Shannon's entropy-inverse data envelopment analysis methods for resource allocation problem under a fuzzy environment

    NASA Astrophysics Data System (ADS)

    Çakır, Süleyman

    2017-10-01

    In this study, a two-phase methodology for resource allocation problems under a fuzzy environment is proposed. In the first phase, the imprecise Shannon's entropy method and the acceptability index are suggested, for the first time in the literature, to select the input and output variables to be used in the data envelopment analysis (DEA) application. In the second phase, an interval inverse DEA model is executed for resource allocation in the short run. In an effort to exemplify the practicality of the proposed fuzzy model, a real case application has been conducted involving 16 cement firms listed on Borsa Istanbul. The results of the case application indicated that the proposed hybrid model is a viable procedure for handling input-output selection and resource allocation problems under fuzzy conditions. The presented methodology can also lend itself to different applications such as multi-criteria decision-making problems.
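
    The entropy-weighting step of the first phase can be sketched for crisp data: normalize each criterion column, compute its Shannon entropy, and turn the diversification (1 - entropy) into weights; the imprecise version in the paper extends this to fuzzy or interval values. The decision matrix below is invented.

    ```python
    # Shannon's entropy weighting of decision criteria (crisp version):
    # criteria whose values vary more across alternatives receive larger weights.
    import numpy as np

    X = np.array([[8.0, 200.0, 3.0],    # alternatives x criteria (made-up data)
                  [6.0, 180.0, 9.0],
                  [9.0, 220.0, 4.0]])

    P = X / X.sum(axis=0)                              # column-normalized proportions
    entropy = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    diversification = 1.0 - entropy
    weights = diversification / diversification.sum()
    print(weights)                                     # one weight per criterion
    ```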

  11. COMMUNICATING PROBABILISTIC RISK OUTCOMES TO RISK MANAGERS

    EPA Science Inventory

    Increasingly, risk assessors are moving away from simple deterministic assessments to probabilistic approaches that explicitly incorporate ecological variability, measurement imprecision, and lack of knowledge (collectively termed "uncertainty"). While the new methods provide an...

  12. Knowledge representation in fuzzy logic

    NASA Technical Reports Server (NTRS)

    Zadeh, Lotfi A.

    1989-01-01

    The author presents a summary of the basic concepts and techniques underlying the application of fuzzy logic to knowledge representation. He then describes a number of examples relating to its use as a computational system for dealing with uncertainty and imprecision in the context of knowledge, meaning, and inference. It is noted that one of the basic aims of fuzzy logic is to provide a computational framework for knowledge representation and inference in an environment of uncertainty and imprecision. In such environments, fuzzy logic is effective when the solutions need not be precise and/or it is acceptable for a conclusion to have a dispositional rather than categorical validity. The importance of fuzzy logic derives from the fact that there are many real-world applications which fit these conditions, especially in the realm of knowledge-based systems for decision-making and control.

  13. Self-calibration for lensless color microscopy.

    PubMed

    Flasseur, Olivier; Fournier, Corinne; Verrier, Nicolas; Denis, Loïc; Jolivet, Frédéric; Cazier, Anthony; Lépine, Thierry

    2017-05-01

    Lensless color microscopy (also called in-line digital color holography) is a recent quantitative 3D imaging method used in several areas including biomedical imaging and microfluidics. By targeting cost-effective and compact designs, the wavelength of the low-end sources used is known only imprecisely, in particular because of their dependence on temperature and power supply voltage. This imprecision is the source of biases during the reconstruction step. An additional source of error is the crosstalk phenomenon, i.e., the mixture in color sensors of signals originating from different color channels. We propose to use a parametric inverse problem approach to achieve self-calibration of a digital color holographic setup. This process provides an estimation of the central wavelengths and crosstalk. We show that taking the crosstalk phenomenon into account in the reconstruction step improves its accuracy.

  14. Three-quarter views are subjectively good because object orientation is uncertain.

    PubMed

    Niimi, Ryosuke; Yokosawa, Kazuhiko

    2009-04-01

    Because the objects that surround us are three-dimensional, their appearance and our visual perception of them change depending on an object's orientation relative to a viewpoint. One of the most remarkable effects of object orientation is that viewers prefer three-quarter views over others, such as front and back, but the exact source of this preference has not been firmly established. We show that object orientation perception of the three-quarter view is relatively imprecise and that this impreciseness is related to preference for this view. Human vision is largely insensitive to variations among different three-quarter views (e.g., 45 degrees vs. 50 degrees ); therefore, the three-quarter view is perceived as if it corresponds to a wide range of orientations. In other words, it functions as the typical representation of the object.

  15. Transport dissipative particle dynamics model for mesoscopic advection- diffusion-reaction problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, Li; Yazdani, Alireza; Tartakovsky, Alexandre M.

    2015-07-07

    We present a transport dissipative particle dynamics (tDPD) model for simulating mesoscopic problems involving advection-diffusion-reaction (ADR) processes, along with a methodology for implementation of the correct Dirichlet and Neumann boundary conditions in tDPD simulations. tDPD is an extension of the classic DPD framework with extra variables for describing the evolution of concentration fields. The transport of concentration is modeled by a Fickian flux and a random flux between particles, and an analytical formula is proposed to relate the mesoscopic concentration friction to the effective diffusion coefficient. To validate the present tDPD model and the boundary conditions, we perform three tDPD simulations of one-dimensional diffusion with different boundary conditions, and the results show excellent agreement with the theoretical solutions. We also performed two-dimensional simulations of ADR systems and the tDPD simulations agree well with the results obtained by the spectral element method. Finally, we present an application of the tDPD model to the dynamic process of blood coagulation involving 25 reacting species in order to demonstrate the potential of tDPD in simulating biological dynamics at the mesoscale. We find that the tDPD solution of this comprehensive 25-species coagulation model is only twice as computationally expensive as the DPD simulation of the hydrodynamics only, which is a significant advantage over available continuum solvers.

  16. Nonparametric Bayesian inference for mean residual life functions in survival analysis.

    PubMed

    Poynor, Valerie; Kottas, Athanasios

    2018-01-19

    Modeling and inference for survival analysis problems typically revolves around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples. © The Author 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
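
    The mixture representation of the MRL function is easy to check numerically in a simple parametric case: for a two-component gamma mixture, m(t) = ∫_t^∞ S(u) du / S(t). The snippet evaluates this by quadrature for made-up mixture parameters; it is a sanity-check sketch, not the paper's nonparametric posterior computation.

    ```python
    # Mean residual life m(t) = E[T - t | T > t] for a two-component gamma
    # mixture: integral of the survival function over (t, inf) divided by S(t).
    import numpy as np
    from scipy import stats, integrate

    weights = [0.4, 0.6]                      # made-up mixture weights
    components = [stats.gamma(a=2.0, scale=1.0), stats.gamma(a=5.0, scale=2.0)]

    def survival(t):
        return sum(w * c.sf(t) for w, c in zip(weights, components))

    def mrl(t):
        tail, _ = integrate.quad(survival, t, np.inf)
        return tail / survival(t)

    for t in [0.0, 2.0, 5.0]:
        print(t, mrl(t))
    ```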

  17. A three-dimensional finite element model for biomechanical analysis of the hip.

    PubMed

    Chen, Guang-Xing; Yang, Liu; Li, Kai; He, Rui; Yang, Bin; Zhan, Yan; Wang, Zhi-Jun; Yu, Bing-Nin; Jian, Zhe

    2013-11-01

    The objective of this study was to construct a three-dimensional (3D) finite element model of the hip. The images of the hip were obtained from the Chinese visible human dataset. The hip model includes acetabular bone, cartilage, labrum, and bone. The cartilage of the femoral head was constructed using AutoCAD and Solidworks software. The hip model was imported into the ABAQUS analysis system. The contact surface of the hip joint was meshed. To verify the model, the single-leg peak force was loaded, and the contact area of the cartilage and labrum of the hip and the pressure distribution in these structures were observed. The constructed 3D hip model reflected the real hip anatomy. Further, this model exhibited biomechanical behavior similar to previous studies. In conclusion, this 3D finite element hip model avoids the disadvantages of other construction methods, such as imprecision of cartilage construction and the absence of the labrum. Further, it provides basic data critical for accurately modeling normal and abnormal loads, and the effects of abnormal loads on the hip.

  18. Functional level-set derivative for a polymer self consistent field theory Hamiltonian

    NASA Astrophysics Data System (ADS)

    Ouaknin, Gaddiel; Laachi, Nabil; Bochkov, Daniil; Delaney, Kris; Fredrickson, Glenn H.; Gibou, Frederic

    2017-09-01

    We derive functional level-set derivatives for the Hamiltonian arising in self-consistent field theory, which are required to solve free boundary problems in the self-assembly of polymeric systems such as block copolymer melts. In particular, we consider Dirichlet, Neumann and Robin boundary conditions. We provide numerical examples that illustrate how these shape derivatives can be used to find equilibrium and metastable structures of block copolymer melts with a free surface in both two and three spatial dimensions.

  19. Thermodynamic Identities and Symmetry Breaking in Short-Range Spin Glasses

    NASA Astrophysics Data System (ADS)

    Arguin, L.-P.; Newman, C. M.; Stein, D. L.

    2015-10-01

    We present a technique to generate relations connecting pure state weights, overlaps, and correlation functions in short-range spin glasses. These are obtained directly from the unperturbed Hamiltonian and hold for general coupling distributions. All are satisfied in phases with simple thermodynamic structure, such as the droplet-scaling and chaotic pairs pictures. If instead nontrivial mixed-state pictures hold, the relations suggest that replica symmetry is broken as described by a Derrida-Ruelle cascade, with pure state weights distributed as a Poisson-Dirichlet process.

  20. Image Annotation and Topic Extraction Using Super-Word Latent Dirichlet Allocation

    DTIC Science & Technology

    2013-09-01

    an image can be used to improve automated image annotation performance over existing generalized annotators. Second, image annotations can be used ... the other variables. The first ratio in the sampling Equation 2.18 uses word frequency by total words, φ̂_j^(w). The second ratio divides word ... topics by total words in that document, θ̂_j^(d). Both leave out the current assignment of z_i and the results are used to randomly choose a new topic
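
    The two ratios mentioned in the excerpt are the standard collapsed Gibbs sampling update for LDA: a word-given-topic term and a topic-given-document term, each computed with the current token's assignment removed. The function below shows that update in isolation for hypothetical count arrays; the variable names loosely follow the φ̂ and θ̂ notation and are not taken from the report.

    ```python
    # Collapsed Gibbs update for one LDA token: sample a new topic j with
    # probability proportional to
    #   (n_wt[w, j] + beta) / (n_t[j] + W * beta)   # word frequency by topic
    # * (n_dt[d, j] + alpha)                        # topic count in document
    # (the per-document denominator is constant over j and cancels); all counts
    # exclude the token's current assignment z_i.
    import numpy as np

    def resample_topic(w, d, n_wt, n_t, n_dt, alpha, beta, rng):
        W = n_wt.shape[0]
        p = (n_wt[w] + beta) / (n_t + W * beta) * (n_dt[d] + alpha)
        p /= p.sum()
        return rng.choice(len(p), p=p)

    rng = np.random.default_rng(0)
    n_wt = np.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])  # word x topic counts
    n_t = n_wt.sum(axis=0)                                 # tokens per topic
    n_dt = np.array([[2.0, 3.0]])                          # document x topic counts
    print(resample_topic(w=1, d=0, n_wt=n_wt, n_t=n_t, n_dt=n_dt,
                         alpha=0.1, beta=0.01, rng=rng))
    ```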

  1. Time-Bound Analytic Tasks on Large Data Sets Through Dynamic Configuration of Workflows

    DTIC Science & Technology

    2013-11-01

  2. Moving finite elements in 2-D

    NASA Technical Reports Server (NTRS)

    Gelinas, R. J.; Doss, S. K.; Vajk, J. P.; Djomehri, J.; Miller, K.

    1983-01-01

    The mathematical background regarding the moving finite element (MFE) method of Miller and Miller (1981) is discussed, taking into account a general system of partial differential equations (PDE) and the amenability of the MFE method in two dimensions to code modularization and to semiautomatic user-construction of numerous PDE systems for both Dirichlet and zero-Neumann boundary conditions. A description of test problem results is presented, giving attention to aspects of single square wave propagation, and a solution of the heat equation.

  3. On the Boussinesq-Burgers equations driven by dynamic boundary conditions

    NASA Astrophysics Data System (ADS)

    Zhu, Neng; Liu, Zhengrong; Zhao, Kun

    2018-02-01

    We study the qualitative behavior of the Boussinesq-Burgers equations on a finite interval subject to Dirichlet-type dynamic boundary conditions. Assuming H^1 × H^2 initial data which are compatible with the boundary conditions and utilizing energy methods, we show that under appropriate conditions on the dynamic boundary data, there exist unique global-in-time solutions to the initial-boundary value problem, and the solutions converge to the boundary data as time goes to infinity, regardless of the magnitude of the initial data.

  4. Quasi-periodic solutions of nonlinear beam equation with prescribed frequencies

    NASA Astrophysics Data System (ADS)

    Chang, Jing; Gao, Yixian; Li, Yong

    2015-05-01

    Consider the one-dimensional nonlinear beam equation u_tt + u_xxxx + mu + u^3 = 0 under Dirichlet boundary conditions. We show that, for all m > 0 except for a set of small Lebesgue measure, the above equation admits a family of small-amplitude quasi-periodic solutions with n-dimensional Diophantine frequencies. These Diophantine frequencies are small dilations of a prescribed Diophantine vector. The proofs are based on an infinite dimensional Kolmogorov-Arnold-Moser iteration procedure and a partial Birkhoff normal form.

  5. Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Ditkowski, Adi

    1996-01-01

    An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.

  6. Optimal decay rate for the wave equation on a square with constant damping on a strip

    NASA Astrophysics Data System (ADS)

    Stahn, Reinhard

    2017-04-01

    We consider the damped wave equation with Dirichlet boundary conditions on the unit square parametrized by Cartesian coordinates x and y. We assume the damping a to be strictly positive and constant for x<σ and zero for x>σ . We prove the exact t^{-4/3}-decay rate for the energy of classical solutions. Our main result (Theorem 1) answers question (1) of Anantharaman and Léautaud (Anal PDE 7(1):159-214, 2014, Section 2C).

  7. An Expert System for Identification of Minerals in Thin Section.

    ERIC Educational Resources Information Center

    Donahoe, James Louis; And Others

    1989-01-01

    Discusses a computer database which includes optical properties of 142 minerals. Uses fuzzy logic to identify minerals from incomplete and imprecise information. Written in Turbo PASCAL for MS-DOS with 128K. (MVL)

  8. A neural model of hierarchical reinforcement learning

    PubMed Central

    Rasmussen, Daniel; Eliasmith, Chris

    2017-01-01

    We develop a novel, biologically detailed neural model of reinforcement learning (RL) processes in the brain. This model incorporates a broad range of biological features that pose challenges to neural RL, such as temporally extended action sequences, continuous environments involving unknown time delays, and noisy/imprecise computations. Most significantly, we expand the model into the realm of hierarchical reinforcement learning (HRL), which divides the RL process into a hierarchy of actions at different levels of abstraction. Here we implement all the major components of HRL in a neural model that captures a variety of known anatomical and physiological properties of the brain. We demonstrate the performance of the model in a range of different environments, in order to emphasize the aim of understanding the brain’s general reinforcement learning ability. These results show that the model compares well to previous modelling work and demonstrates improved performance as a result of its hierarchical ability. We also show that the model’s behaviour is consistent with available data on human hierarchical RL, and generate several novel predictions. PMID:28683111

  9. Greedy feature selection for glycan chromatography data with the generalized Dirichlet distribution

    PubMed Central

    2013-01-01

    Background Glycoproteins are involved in a diverse range of biochemical and biological processes. Changes in protein glycosylation are believed to occur in many diseases, particularly during cancer initiation and progression. The identification of biomarkers for human disease states is becoming increasingly important, as early detection is key to improving survival and recovery rates. To this end, the serum glycome has been proposed as a potential source of biomarkers for different types of cancers. High-throughput hydrophilic interaction liquid chromatography (HILIC) technology for glycan analysis allows for the detailed quantification of the glycan content in human serum. However, the experimental data from this analysis is compositional by nature. Compositional data are subject to a constant-sum constraint, which restricts the sample space to a simplex. Statistical analysis of glycan chromatography datasets should account for their unusual mathematical properties. As the volume of glycan HILIC data being produced increases, there is a considerable need for a framework to support appropriate statistical analysis. Proposed here is a methodology for feature selection in compositional data. The principal objective is to provide a template for the analysis of glycan chromatography data that may be used to identify potential glycan biomarkers. Results A greedy search algorithm, based on the generalized Dirichlet distribution, is carried out over the feature space to search for the set of “grouping variables” that best discriminate between known group structures in the data, modelling the compositional variables using beta distributions. The algorithm is applied to two glycan chromatography datasets. Statistical classification methods are used to test the ability of the selected features to differentiate between known groups in the data. Two well-known methods are used for comparison: correlation-based feature selection (CFS) and recursive partitioning (rpart). CFS is a feature selection method, while recursive partitioning is a learning tree algorithm that has been used for feature selection in the past. Conclusions The proposed feature selection method performs well for both glycan chromatography datasets. It is computationally slower, but results in a lower misclassification rate and a higher sensitivity rate than both correlation-based feature selection and the classification tree method. PMID:23651459
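
    As a rough illustration of the greedy search strategy described above, the sketch below performs a forward search that adds, at each step, the feature giving the largest cross-validated discrimination score. A generic classifier stands in for the generalized Dirichlet / beta-distribution likelihood criterion actually used in the paper, so this is a simplified analogue rather than the published algorithm.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def greedy_forward_selection(X, y, max_features=10, cv=5):
    """Greedy forward search over the feature space: at each step add the
    single feature that most improves a cross-validated discrimination
    score.  GaussianNB is a placeholder scoring model, not the
    generalized Dirichlet criterion used in the paper."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_features:
        scores = []
        for j in remaining:
            cols = selected + [j]
            s = cross_val_score(GaussianNB(), X[:, cols], y, cv=cv).mean()
            scores.append((s, j))
        s_best, j_best = max(scores)
        if s_best <= best_score:
            break  # no further improvement: stop adding features
        best_score = s_best
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best_score
```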

  10. A Data Envelopment Analysis Model for Selecting Material Handling System Designs

    NASA Astrophysics Data System (ADS)

    Liu, Fuh-Hwa Franklin; Kuo, Wan-Ting

    The material handling system under design is an unmanned job shop with an automated guided vehicle that transports loads among the processing machines. The engineering task is to select design alternatives that are combinations of four design factors: the ratio of production time to transportation time, the mean job arrival rate to the system, the input/output buffer capacities at each processing machine, and the vehicle control strategy. Each design alternative is simulated to collect the upper and lower bounds of five performance indices. We develop a Data Envelopment Analysis (DEA) model to assess the 180 designs using the imprecise data of the five indices. A three-way factorial analysis of the assessment results indicates that buffer capacity, and the interaction between job arrival rate and buffer capacity, significantly affect performance.
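
    For orientation only, the sketch below solves the basic input-oriented CCR multiplier model for a single decision-making unit with precise data; the interval (imprecise) extension developed in the paper is not reproduced. The matrix names and the small lower bound on the weights are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o, eps=1e-6):
    """Input-oriented CCR efficiency of DMU `o` (multiplier form).
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).  This is the textbook
    precise-data DEA model, not the imprecise-data variant of the paper."""
    n, m = X.shape
    _, s = Y.shape
    # decision variables: v (input weights, length m), u (output weights, length s)
    c = np.concatenate([np.zeros(m), -Y[o]])           # maximize u'y_o
    A_eq = np.concatenate([X[o], np.zeros(s)])[None]   # normalize v'x_o = 1
    b_eq = [1.0]
    A_ub = np.hstack([-X, Y])                          # u'y_j - v'x_j <= 0 for all j
    b_ub = np.zeros(n)
    bounds = [(eps, None)] * (m + s)                   # strictly positive weights
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun  # efficiency score in (0, 1]
```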

  11. On the topographic bias and density distribution in modelling the geoid and orthometric heights

    NASA Astrophysics Data System (ADS)

    Sjöberg, Lars E.

    2018-03-01

    It is well known that the success in precise determinations of the gravimetric geoid height (N) and the orthometric height (H) rely on the knowledge of the topographic mass distribution. We show that the residual topographic bias due to an imprecise information on the topographic density is practically the same for N and H, but with opposite signs. This result is demonstrated both for the Helmert orthometric height and for a more precise orthometric height derived by analytical continuation of the external geopotential to the geoid. This result leads to the conclusion that precise gravimetric geoid heights cannot be validated by GNSS-levelling geoid heights in mountainous regions for the errors caused by the incorrect modelling of the topographic mass distribution, because this uncertainty is hidden in the difference between the two geoid estimators.

  12. Debris flows: behavior and hazard assessment

    USGS Publications Warehouse

    Iverson, Richard M.

    2014-01-01

    Debris flows are water-laden masses of soil and fragmented rock that rush down mountainsides, funnel into stream channels, entrain objects in their paths, and form lobate deposits when they spill onto valley floors. Because they have volumetric sediment concentrations that exceed 40 percent, maximum speeds that surpass 10 m/s, and sizes that can range up to ~10^9 m^3, debris flows can denude slopes, bury floodplains, and devastate people and property. Computational models can accurately represent the physics of debris-flow initiation, motion and deposition by simulating evolution of flow mass and momentum while accounting for interactions of debris' solid and fluid constituents. The use of physically based models for hazard forecasting can be limited by imprecise knowledge of initial and boundary conditions and material properties, however. Therefore, empirical methods continue to play an important role in debris-flow hazard assessment.

  13. TDRS orbit determination by radio interferometry

    NASA Technical Reports Server (NTRS)

    Pavloff, Michael S.

    1994-01-01

    In support of a NASA study on the application of radio interferometry to satellite orbit determination, MITRE developed a simulation tool for assessing interferometry tracking accuracy. The Orbit Determination Accuracy Estimator (ODAE) models the general batch maximum likelihood orbit determination algorithms of the Goddard Trajectory Determination System (GTDS) with the group and phase delay measurements from radio interferometry. ODAE models the statistical properties of tracking error sources, including inherent observable imprecision, atmospheric delays, clock offsets, station location uncertainty, and measurement biases, and through Monte Carlo simulation, ODAE calculates the statistical properties of errors in the predicted satellite state vector. This paper presents results from ODAE application to orbit determination of the Tracking and Data Relay Satellite (TDRS) by radio interferometry. Conclusions about optimal ground station locations for interferometric tracking of TDRS are presented, along with a discussion of operational advantages of radio interferometry.
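
    The Monte Carlo error-propagation idea can be sketched generically. The code below is not the ODAE tool itself; it pushes sampled observable noise and a constant measurement bias through a linearized batch least-squares estimator and reports the resulting state-error statistics. The measurement-partials matrix and noise levels are placeholders.

```python
import numpy as np

def monte_carlo_state_error(H, sigma_noise, sigma_bias, n_trials=2000, rng=None):
    """Monte Carlo propagation of measurement errors through a linearized
    batch least-squares estimator (illustrative stand-in for a GTDS-style
    batch estimator).  H is the measurement partials matrix (n_obs x n_state);
    errors combine white observable noise with a constant per-pass bias."""
    rng = np.random.default_rng() if rng is None else rng
    n_obs, n_state = H.shape
    pinv = np.linalg.pinv(H)                 # batch least-squares gain
    errors = np.empty((n_trials, n_state))
    for k in range(n_trials):
        noise = rng.normal(0.0, sigma_noise, size=n_obs)
        bias = rng.normal(0.0, sigma_bias) * np.ones(n_obs)
        errors[k] = pinv @ (noise + bias)    # induced state estimation error
    return errors.mean(axis=0), np.cov(errors, rowvar=False)
```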

  14. Interactive Tooth Separation from Dental Model Using Segmentation Field

    PubMed Central

    2016-01-01

    Tooth segmentation on a dental model is an essential step in computer-aided-design systems for orthodontic virtual treatment planning. However, fast and accurate identification of the cutting boundaries that separate teeth from the dental model remains a challenge, due to the varied geometrical shapes of teeth, complex tooth arrangements, differing dental model qualities, and varying degrees of crowding. Most previous segmentation approaches cannot balance fine segmentation results against simple, time-efficient operating procedures. In this article, we present a novel, effective and efficient framework that achieves tooth segmentation based on a segmentation field, obtained by solving a linear system defined by a discrete Laplace-Beltrami operator with Dirichlet boundary conditions. A set of contour lines is sampled from the smooth scalar field, and candidate cutting boundaries are detected in concave regions with large variations of field data. The sensitivity of the segmentation field to concave seams facilitates effective tooth partition and avoids the need to choose an appropriate curvature threshold value, which is unreliable in some cases. Our tooth segmentation algorithm is robust to low-quality dental models and is effective on dental models with different levels of crowding. Experiments, including segmentation tests on dental models of varying complexity, tests on dental meshes with different modeling resolutions and surface noise, and a comparison with the morphologic skeleton segmentation method, demonstrate the effectiveness of our method. PMID:27532266
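
    The segmentation field described above amounts to solving a discrete Laplace problem with Dirichlet constraints at user-selected seed vertices. The sketch below shows that linear solve for a generic sparse Laplacian; a cotangent-weight Laplace-Beltrami matrix built from the dental mesh would normally play the role of L, and the function and variable names are illustrative.

```python
import numpy as np
import scipy.sparse.linalg as spla

def segmentation_field(L, seeds, seed_values):
    """Solve L x = 0 subject to Dirichlet constraints x[seeds] = seed_values,
    where L is a sparse (CSR) discrete Laplace(-Beltrami) matrix on the mesh
    vertices.  Any symmetric graph Laplacian works for illustration."""
    n = L.shape[0]
    free = np.setdiff1d(np.arange(n), seeds)
    x = np.zeros(n)
    x[seeds] = seed_values
    # Partition the system: L_ff x_f = -L_fs x_s (f = free, s = seed vertices)
    L_ff = L[free][:, free].tocsc()
    L_fs = L[free][:, seeds]
    x[free] = spla.spsolve(L_ff, -(L_fs @ x[seeds]))
    return x
```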

  15. An explicit dissipation-preserving method for Riesz space-fractional nonlinear wave equations in multiple dimensions

    NASA Astrophysics Data System (ADS)

    Macías-Díaz, J. E.

    2018-06-01

    In this work, we investigate numerically a model governed by a multidimensional nonlinear wave equation with damping and fractional diffusion. The governing partial differential equation considers the presence of Riesz space-fractional derivatives of orders in (1, 2], and homogeneous Dirichlet boundary data are imposed on a closed and bounded spatial domain. The model under investigation possesses an energy function which is preserved in the undamped regime. In the damped case, we establish the property of energy dissipation of the model using arguments from functional analysis. Motivated by these results, we propose an explicit finite-difference discretization of our fractional model based on the use of fractional centered differences. Associated to our discrete model, we also propose discretizations of the energy quantities. We establish that the discrete energy is conserved in the undamped regime, and that it dissipates in the damped scenario. Among the most important numerical features of our scheme, we show that the method has a consistency of second order, that it is stable and that it has a quadratic order of convergence. Some one- and two-dimensional simulations are shown in this work to illustrate the fact that the technique is capable of preserving the discrete energy in the undamped regime. For the sake of convenience, we provide a Matlab implementation of our method for the one-dimensional scenario.
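
    For reference, the sketch below assembles the fractional centered-difference approximation of the Riesz derivative (Ortigueira's coefficients) on a uniform grid with homogeneous Dirichlet data and advances it with a naive explicit leapfrog step. It illustrates the spatial discretization only; it is not the dissipation-preserving scheme proposed in the paper, and the damping treatment and function names are assumptions.

```python
import numpy as np
from scipy.special import gamma

def riesz_matrix(alpha, N, h):
    """Matrix of the fractional centered-difference approximation of the
    Riesz derivative of order alpha (1 < alpha <= 2) on N interior nodes
    with spacing h and homogeneous Dirichlet boundary data."""
    k = np.arange(-(N - 1), N)
    g = (-1.0) ** k * gamma(alpha + 1) / (
        gamma(alpha / 2 - k + 1) * gamma(alpha / 2 + k + 1))
    A = np.empty((N, N))
    for i in range(N):
        A[i] = g[(N - 1) - i: (2 * N - 1) - i]   # A[i, j] = g_{j-i} (= g_{i-j})
    return -A / h ** alpha

def leapfrog_step(u_prev, u_curr, A, tau, damping, dV):
    """Naive explicit step for u_tt + damping*u_t = Riesz_alpha(u) - dV(u);
    not the energy-dissipation-preserving scheme of the paper."""
    accel = A @ u_curr - dV(u_curr)
    denom = 1.0 / tau ** 2 + damping / (2.0 * tau)
    rhs = accel + (2.0 * u_curr - u_prev) / tau ** 2 + (damping / (2.0 * tau)) * u_prev
    return rhs / denom
```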

  16. Review and Recommendations for Zero-inflated Count Regression Modeling of Dental Caries Indices in Epidemiological Studies

    PubMed Central

    Stamm, John W.; Long, D. Leann; Kincade, Megan E.

    2012-01-01

    Over the past five to ten years, zero-inflated count regression models have been increasingly applied to the analysis of dental caries indices (e.g., DMFT, dfms). The main reason is the broad decline in children’s caries experience, such that dmf and DMF indices more frequently generate low or even zero counts. This article specifically reviews the application of zero-inflated Poisson and zero-inflated negative binomial regression models to dental caries, with emphasis on the description of the models and the interpretation of fitted model results given the study goals. The review finds that interpretations provided in the published caries research are often imprecise or inadvertently misleading, particularly with respect to failing to discriminate between inference for the class of susceptible persons defined by such models and inference for the sampled population in terms of overall exposure effects. Recommendations are provided to enhance the use as well as the interpretation and reporting of results of count regression models when applied to epidemiological studies of dental caries. PMID:22710271
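
    A minimal example of the model class under review, using the ZeroInflatedPoisson class available in statsmodels, is sketched below on simulated caries-like counts. The covariate, the share of structural zeros, and the coefficient values are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Simulated caries-like counts: a "structurally zero" (non-susceptible)
# class plus Poisson counts for the susceptible class.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                      # hypothetical exposure covariate
susceptible = rng.random(n) > 0.4           # about 40% structural zeros
mu = np.exp(0.5 + 0.6 * x)
y = np.where(susceptible, rng.poisson(mu), 0)

X = sm.add_constant(x)
model = ZeroInflatedPoisson(y, X, exog_infl=sm.add_constant(x), inflation='logit')
fit = model.fit(maxiter=200, disp=False)
print(fit.summary())
# Count-model coefficients describe the susceptible class; the logit
# inflation part models membership in the structural-zero class -- the
# distinction the review urges authors to interpret correctly.
```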

  17. Non-adiabatic holonomic quantum computation in linear system-bath coupling

    PubMed Central

    Sun, Chunfang; Wang, Gangcheng; Wu, Chunfeng; Liu, Haodi; Feng, Xun-Li; Chen, Jing-Ling; Xue, Kang

    2016-01-01

    Non-adiabatic holonomic quantum computation in decoherence-free subspaces protects quantum information from control imprecisions and decoherence. For the non-collective decoherence that each qubit has its own bath, we show the implementations of two non-commutable holonomic single-qubit gates and one holonomic nontrivial two-qubit gate that compose a universal set of non-adiabatic holonomic quantum gates in decoherence-free-subspaces of the decoupling group, with an encoding rate of (N - 2)/N. The proposed scheme is robust against control imprecisions and the non-collective decoherence, and its non-adiabatic property ensures less operation time. We demonstrate that our proposed scheme can be realized by utilizing only two-qubit interactions rather than many-qubit interactions. Our results reduce the complexity of practical implementation of holonomic quantum computation in experiments. We also discuss the physical implementation of our scheme in coupled microcavities. PMID:26846444

  18. Non-adiabatic holonomic quantum computation in linear system-bath coupling.

    PubMed

    Sun, Chunfang; Wang, Gangcheng; Wu, Chunfeng; Liu, Haodi; Feng, Xun-Li; Chen, Jing-Ling; Xue, Kang

    2016-02-05

    Non-adiabatic holonomic quantum computation in decoherence-free subspaces protects quantum information from control imprecisions and decoherence. For the non-collective decoherence that each qubit has its own bath, we show the implementations of two non-commutable holonomic single-qubit gates and one holonomic nontrivial two-qubit gate that compose a universal set of non-adiabatic holonomic quantum gates in decoherence-free-subspaces of the decoupling group, with an encoding rate of (N - 2)/N. The proposed scheme is robust against control imprecisions and the non-collective decoherence, and its non-adiabatic property ensures less operation time. We demonstrate that our proposed scheme can be realized by utilizing only two-qubit interactions rather than many-qubit interactions. Our results reduce the complexity of practical implementation of holonomic quantum computation in experiments. We also discuss the physical implementation of our scheme in coupled microcavities.

  19. A Multipixel Time Series Analysis Method Accounting for Ground Motion, Atmospheric Noise, and Orbital Errors

    NASA Astrophysics Data System (ADS)

    Jolivet, R.; Simons, M.

    2018-02-01

    Interferometric synthetic aperture radar time series methods aim to reconstruct time-dependent ground displacements over large areas from sets of interferograms in order to detect transient, periodic, or small-amplitude deformation. Because of computational limitations, most existing methods consider each pixel independently, ignoring important spatial covariances between observations. We describe a framework to reconstruct time series of ground deformation while considering all pixels simultaneously, allowing us to account for spatial covariances, imprecise orbits, and residual atmospheric perturbations. We describe spatial covariances by an exponential decay function dependent on pixel-to-pixel distance. We approximate the impact of imprecise orbit information and residual long-wavelength atmosphere as a low-order polynomial function. Tests on synthetic data illustrate the importance of incorporating full covariances between pixels in order to avoid biased parameter reconstruction. An example of application to the northern Chilean subduction zone highlights the potential of this method.
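
    The covariance model described above can be sketched for a single interferogram: build the exponential covariance between pixels and estimate a low-order (here planar) ramp by generalized least squares. The correlation length, variance, and function names below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gls_ramp_estimate(coords, phase, sigma=1.0, lam=5.0e3):
    """Generalized least-squares estimate of a planar ramp from one
    interferogram, using an exponential spatial covariance
    C_ij = sigma^2 * exp(-d_ij / lam) between pixels.
    coords: (n, 2) pixel positions in metres, phase: (n,) observations."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = sigma ** 2 * np.exp(-d / lam)
    G = np.column_stack([np.ones(len(phase)), coords[:, 0], coords[:, 1]])
    Ci_G = np.linalg.solve(C, G)
    Ci_y = np.linalg.solve(C, phase)
    m = np.linalg.solve(G.T @ Ci_G, G.T @ Ci_y)   # ramp coefficients
    cov_m = np.linalg.inv(G.T @ Ci_G)             # parameter covariance
    return m, cov_m
```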

  20. ELICIT: An alternative imprecise weight elicitation technique for use in multi-criteria decision analysis for healthcare.

    PubMed

    Diaby, Vakaramoko; Sanogo, Vassiki; Moussa, Kouame Richard

    2016-01-01

    In this paper, the readers are introduced to ELICIT, an imprecise weight elicitation technique for multicriteria decision analysis for healthcare. The application of ELICIT consists of two steps: the rank ordering of evaluation criteria based on decision-makers' (DMs) preferences using the principal component analysis; and the estimation of criteria weights and their descriptive statistics using the variable interdependent analysis and the Monte Carlo method. The application of ELICIT is illustrated with a hypothetical case study involving the elicitation of weights for five criteria used to select the best device for eye surgery. The criteria were ranked from 1-5, based on a strict preference relationship established by the DMs. For each criterion, the deterministic weight was estimated as well as the standard deviation and 95% credibility interval. ELICIT is appropriate in situations where only ordinal DMs' preferences are available to elicit decision criteria weights.
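
    A simplified stand-in for the Monte Carlo step of ELICIT is sketched below: weights are drawn uniformly from the simplex and constrained to respect a strict ranking of five criteria, from which means, standard deviations, and 95% credibility intervals follow. The full ELICIT procedure (PCA-based ranking and variable interdependent analysis) is not reproduced.

```python
import numpy as np

def rank_constrained_weights(n_criteria=5, n_samples=100000, rng=None):
    """Monte Carlo approximation of criterion weights consistent with a strict
    ranking w_1 > w_2 > ... > w_n (criterion ranked 1 is most important).
    Uniform simplex draws are sorted so each sample respects the ranking;
    summary statistics then describe the imprecise weights."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.dirichlet(np.ones(n_criteria), size=n_samples)
    w = -np.sort(-w, axis=1)                 # enforce descending order
    mean = w.mean(axis=0)
    sd = w.std(axis=0)
    ci = np.percentile(w, [2.5, 97.5], axis=0)
    return mean, sd, ci

if __name__ == "__main__":
    mean, sd, ci = rank_constrained_weights()
    for r in range(len(mean)):
        print(f"rank {r + 1}: weight {mean[r]:.3f} +/- {sd[r]:.3f}, "
              f"95% CI {ci[0, r]:.3f}-{ci[1, r]:.3f}")
```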
